CN112330522B - Watermark removal model training method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112330522B
CN112330522B (application CN202011238056.2A)
Authority
CN
China
Prior art keywords
image
watermark
loss value
generator
trained
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011238056.2A
Other languages
Chinese (zh)
Other versions
CN112330522A (en)
Inventor
张少林
宁欣
曾庆亮
许少辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Wave Kingdom Co ltd
Original Assignee
Shenzhen Wave Kingdom Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Wave Kingdom Co ltd filed Critical Shenzhen Wave Kingdom Co ltd
Priority to CN202011238056.2A priority Critical patent/CN112330522B/en
Publication of CN112330522A publication Critical patent/CN112330522A/en
Application granted granted Critical
Publication of CN112330522B publication Critical patent/CN112330522B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/0021 Image watermarking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/002 Image coding using neural networks


Abstract

The application relates to a watermark removal model training method and apparatus, a computer device, and a storage medium. The method comprises the following steps: extracting watermark images and corresponding clean images from a sample image dataset; inputting the extracted images into a watermark removal model to be trained, performing three rounds of style migration to obtain a style migration result, and identifying the target images in the style migration result according to the extracted images to obtain an image identification result; calculating an adversarial loss value, a cycle-consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the image identification result, the watermark images and the clean images; calculating a target loss value corresponding to the watermark removal model to be trained from these loss values; and performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, at which point model training stops and the trained watermark removal model is obtained. The method can improve image quality after watermark removal.

Description

Watermark removal model training method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a watermark removal model training method, apparatus, computer device, and storage medium.
Background
With the development of digital media technology and computer technology, various digital media such as images are spread through the internet, and people can download and use them. In order to protect the copyright of an image, a watermark is often added to the image. Since watermarks can interfere with or corrupt the intrinsic data information of an image to some extent, the watermarks in the image need to be removed in order to better apply the value of the image.
At present, a generative adversarial model can remove the watermark from a watermark image to obtain a corresponding clean image. However, during watermark removal a conventional generative adversarial model may lose original information of the watermark image, so the quality of the resulting clean image is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a watermark removal model training method, apparatus, computer device, and storage medium that can improve the image quality after watermark removal.
A watermark removal model training method, the method comprising:
acquiring a sample image dataset;
Extracting a watermark image from the sample image dataset and a clean image corresponding to the watermark image;
inputting the watermark image and the clean image into a watermark removal model to be trained, performing three rounds of style migration on the watermark image and the clean image through the watermark removal model to be trained to obtain a style migration result, and identifying a target image in the style migration result according to the watermark image and the clean image to obtain an image identification result;
calculating an adversarial loss value, a cycle-consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the image identification result, the watermark image and the clean image;
calculating a target loss value corresponding to the watermark removal model to be trained according to the adversarial loss value, the cycle-consistency loss value and the identity reconstruction loss value;
and performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, stopping model training, and obtaining the trained watermark removal model.
In one embodiment, the inputting the watermark image and the clean image into the watermark removal model to be trained, and performing three rounds of style migration on the watermark image and the clean image through the watermark removal model to be trained to obtain a style migration result includes:
inputting the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained for three rounds of style migration to obtain a watermark-free image corresponding to the watermark image, a watermarked image corresponding to the clean image, a first watermark-added image corresponding to the watermark-free image, a first watermark-removed image corresponding to the watermarked image, a second watermark-added image corresponding to the watermark image, and a second watermark-removed image corresponding to the clean image;
and generating a style migration result according to the watermark-free image, the watermarked image, the first watermark-added image, the first watermark-removed image, the second watermark-added image and the second watermark-removed image.
In one embodiment, the generators of the watermark removal model to be trained include a first generator and a second generator, and the inputting the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained for three rounds of style migration to obtain a watermark-free image corresponding to the watermark image, a watermarked image corresponding to the clean image, a first watermark-added image corresponding to the watermark-free image, a first watermark-removed image corresponding to the watermarked image, a second watermark-added image corresponding to the watermark image, and a second watermark-removed image corresponding to the clean image includes:
inputting the watermark image into the first generator of the watermark removal model to be trained for a first style migration, inputting the clean image into the second generator of the watermark removal model to be trained for the first style migration, outputting a watermark-free image corresponding to the watermark image through the first generator, and outputting a watermarked image corresponding to the clean image through the second generator;
inputting the watermark-free image into the second generator for a second style migration, inputting the watermarked image into the first generator for the second style migration, outputting a first watermark-added image corresponding to the watermark-free image through the second generator, and outputting a first watermark-removed image corresponding to the watermarked image through the first generator;
and inputting the clean image into the first generator for a third style migration, inputting the watermark image into the second generator for the third style migration, outputting a second watermark-removed image corresponding to the clean image through the first generator, and outputting a second watermark-added image corresponding to the watermark image through the second generator.
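The three rounds of style migration above can be sketched in miniature as follows. This is an illustrative stand-in, not the patent's network: images are represented as tagged strings, and `g_remove`/`g_add` are hypothetical stand-ins for the first and second generators.

```python
def g_remove(image):
    """Stand-in for the first generator: strips the watermark 'style' tag."""
    return image.replace("+wm", "")

def g_add(image):
    """Stand-in for the second generator: attaches the watermark 'style' tag."""
    return image if image.endswith("+wm") else image + "+wm"

def three_rounds_of_style_migration(watermark_image, clean_image):
    # Round 1: translate each input into the opposite style.
    watermark_free = g_remove(watermark_image)   # watermark image -> watermark-free image
    watermarked = g_add(clean_image)             # clean image -> watermarked image
    # Round 2: translate the round-1 outputs back (used for cycle consistency).
    first_added = g_add(watermark_free)          # should match the watermark image
    first_removed = g_remove(watermarked)        # should match the clean image
    # Round 3: feed each input to the generator of its own style (identity mapping).
    second_removed = g_remove(clean_image)       # clean image should pass through unchanged
    second_added = g_add(watermark_image)        # watermark image should pass through unchanged
    return {
        "watermark_free": watermark_free, "watermarked": watermarked,
        "first_added": first_added, "first_removed": first_removed,
        "second_added": second_added, "second_removed": second_removed,
    }
```

With ideal generators, rounds 2 and 3 reproduce the original inputs exactly; the cycle-consistency and identity reconstruction losses described below penalize deviations from this behaviour.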
In one embodiment, the style migration result includes a watermark-free image, a watermarked image, a first watermark-added image, a first watermark-removed image, a second watermark-added image and a second watermark-removed image, and calculating the adversarial loss value, the cycle-consistency loss value and the identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the image identification result, the watermark image and the clean image includes:
calculating an adversarial loss value corresponding to the watermark removal model to be trained according to the image identification result;
calculating a cycle-consistency loss value corresponding to the watermark removal model to be trained according to the first watermark-removed image and the first watermark-added image in the style migration result, the watermark image and the clean image;
and calculating an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the second watermark-added image and the second watermark-removed image in the style migration result, the watermark image and the clean image.
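One plausible instantiation of the three loss terms is shown below. The patent does not fix the exact formulas; the L1 distance, the least-squares adversarial form, and the weights are assumptions chosen for illustration.

```python
def l1(a, b):
    """Mean absolute difference between two equal-length pixel vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def adversarial_loss(discriminator_probs):
    """Least-squares generator loss: push D's outputs on generated images toward 1."""
    return sum((p - 1.0) ** 2 for p in discriminator_probs) / len(discriminator_probs)

def cycle_consistency_loss(first_added, watermark_image, first_removed, clean_image):
    """Twice-migrated images should reproduce the original inputs."""
    return l1(first_added, watermark_image) + l1(first_removed, clean_image)

def identity_reconstruction_loss(second_added, watermark_image, second_removed, clean_image):
    """Same-style inputs should pass through their own-style generator unchanged."""
    return l1(second_added, watermark_image) + l1(second_removed, clean_image)

def target_loss(adv, cyc, idt, lambda_cyc=10.0, lambda_idt=5.0):
    """Weighted sum of the three terms; the lambda weights are illustrative."""
    return adv + lambda_cyc * cyc + lambda_idt * idt
```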
In one embodiment, the watermark removal model to be trained includes two sub-networks, each sub-network includes a generator and a discriminator, the generator and the discriminator both include an encoder and share one encoder, and performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, stopping model training, and obtaining the trained watermark removal model includes:
fixing the generation parameters of the decoders in all generators of the watermark removal model to be trained, and adjusting the discrimination parameters of the discriminators in all sub-networks according to the target loss value to obtain an adjusted loss value;
fixing the discrimination parameters of each discriminator of the watermark removal model to be trained, and adjusting the generation parameters of the decoders according to the adjusted loss value, wherein the discrimination parameters include the coding parameters corresponding to the encoder;
and repeating the step of performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, stopping model training, determining a model generator in the watermark removal model to be trained, and storing the model generator together with its current generation parameters to obtain the trained watermark removal model, wherein the current generation parameters include the current encoder parameters corresponding to the encoder in the model generator.
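The alternating schedule above — discriminators adjusted while decoders are frozen, then decoders adjusted while the discriminators (and the shared encoder) are frozen — can be expressed as a skeleton loop. The callbacks are caller-supplied stand-ins, not the patent's actual update rules.

```python
def adversarial_training_loop(update_discriminators, update_decoders,
                              batches, preset_condition_reached):
    """Skeleton of the alternating training schedule: each iteration runs the
    two phases in order and stops once the preset condition is met."""
    steps = 0
    for batch in batches:
        # Phase 1: generation (decoder) parameters fixed; adjust the
        # discriminators against the target loss to obtain an adjusted loss.
        adjusted_loss = update_discriminators(batch)
        # Phase 2: discrimination parameters fixed (these include the shared
        # encoder's coding parameters); adjust the decoders with that loss.
        update_decoders(adjusted_loss)
        steps += 1
        if preset_condition_reached(steps):
            break
    return steps
```

Because the generator and discriminator of each sub-network share one encoder, freezing the discrimination parameters in phase 2 also freezes the shared encoder, which is why only the decoders' generation parameters are updated in that phase.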
An image watermark removal method, the method comprising:
acquiring an image to be processed;
inputting the image to be processed into a generator of a trained watermark removal model, encoding the image to be processed through an encoder in the generator, and outputting image features, wherein the trained watermark removal model is obtained by performing adversarial training on a sample image dataset, and during the adversarial training the target loss value corresponding to the watermark removal model is calculated according to an adversarial loss value, a cycle-consistency loss value and an identity reconstruction loss value; the adversarial loss value, the cycle-consistency loss value and the identity reconstruction loss value are obtained by performing style migration calculations on the sample image dataset;
and performing style migration on the image features through the generator to obtain a clean image corresponding to the image to be processed.
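At inference time, this image watermark removal method reduces to an encode-then-decode pipeline. A minimal sketch with caller-supplied stand-ins for the trained encoder and generator decoder (hypothetical names, not the patent's API):

```python
def remove_watermark(image_to_process, encoder, decoder):
    """Encode the input image into image features, then let the generator's
    decoder perform the style migration that yields the clean image."""
    image_features = encoder(image_to_process)
    clean_image = decoder(image_features)
    return clean_image
```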
A watermark removal model training apparatus, the apparatus comprising:
the style migration module is used for acquiring a sample image dataset; extracting a watermark image and a clean image corresponding to the watermark image from the sample image dataset; and inputting the watermark image and the clean image into a watermark removal model to be trained, performing three rounds of style migration on the watermark image and the clean image through the watermark removal model to obtain a style migration result;
the loss calculation module is used for calculating an adversarial loss value, a cycle-consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model according to the style migration result, the watermark image and the clean image; and calculating a target loss value corresponding to the watermark removal model according to the adversarial loss value, the cycle-consistency loss value and the identity reconstruction loss value;
and the adversarial training module is used for performing adversarial training on the watermark removal model according to the target loss value until a preset condition is reached, stopping model training, and obtaining the trained watermark removal model.
An image watermark removal apparatus, the apparatus comprising:
the image acquisition module is used for acquiring an image to be processed;
the image coding module is used for inputting the image to be processed into an encoder of a trained watermark removal model for encoding and outputting image features, the trained watermark removal model being obtained by performing adversarial training on a sample image dataset, wherein during the adversarial training the target loss value corresponding to the watermark removal model is calculated according to an adversarial loss value, a cycle-consistency loss value and an identity reconstruction loss value;
and the watermark removal module is used for inputting the image features into a generator of the trained watermark removal model for watermark removal, obtaining a clean image corresponding to the image to be processed.
A computer device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to implement the steps of the method embodiments described above.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the various method embodiments described above.
According to the watermark removal model training method and apparatus, the computer device and the storage medium, a sample image dataset is acquired, a watermark image and the clean image corresponding to it are extracted from the sample image dataset, three rounds of style migration are performed on the watermark image and the clean image through the watermark removal model to be trained, and the target image in the style migration result is identified according to the watermark image and the clean image to obtain an image identification result. The sample dataset used for model training only needs one batch of watermark images and one batch of corresponding clean images, without any additional annotation, which reduces the time consumed by manual labeling. An adversarial loss value, a cycle-consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained are calculated according to the style migration result, the image identification result, the watermark image and the clean image; a target loss value corresponding to the watermark removal model to be trained is calculated from these loss values; the model is then adversarially trained according to the target loss value until a preset condition is reached, at which point training stops and the trained watermark removal model is obtained. The adversarial loss value narrows the modal gap between the generated watermark-free and watermarked images and the corresponding real input images, and the cycle-consistency loss value ensures that the image content is unchanged before and after watermark removal.
The identity reconstruction loss value ensures that the color composition of the image is the same before and after watermark removal, so the trained watermark removal model removes the watermark effectively while preserving the original information of the input image, thereby avoiding loss of original information and improving the quality of the clean image output by the watermark removal model.
Drawings
FIG. 1 is a diagram of an application environment for a watermark removal model training method in one embodiment;
FIG. 2 is a flow chart of a method of training a watermark removal model in one embodiment;
FIG. 3 is a flow chart of the step of inputting a watermark image and a clean image into a watermark removal model to be trained, and performing three rounds of style migration on the watermark image and the clean image through the watermark removal model to be trained to obtain a style migration result, in one embodiment;
FIG. 4 is a schematic diagram of the network architecture of any one of the sub-networks in the watermark removal model to be trained in one embodiment;
FIG. 5 is a schematic flow chart of the step of inputting watermark images and clean images into the corresponding generators of the watermark removal model to be trained for three rounds of style migration in one embodiment;
FIG. 6 is a flowchart illustrating the steps for calculating an adversarial loss value, a cycle-consistency loss value, and an identity reconstruction loss value corresponding to a watermark removal model to be trained according to a style migration result, an image identification result, a watermark image, and a clean image in one embodiment;
FIG. 7 is a flowchart of the step of performing adversarial training on a watermark removal model to be trained according to a target loss value until a preset condition is reached, stopping model training, and obtaining a trained watermark removal model in one embodiment;
FIG. 8 is a flow chart of a method of image watermark removal in one embodiment;
FIG. 9 is a block diagram of a watermark removal model training device in one embodiment;
FIG. 10 is a block diagram of an image watermark removal apparatus in one embodiment;
FIG. 11 is an internal block diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The watermark removal model training method provided by the application can be applied to the application environment shown in FIG. 1, in which the terminal 102 communicates with the server 104 via a network. When the watermark removal model needs to be trained, the terminal 102 sends an initial image dataset to the server 104, and the server 104 preprocesses the initial image dataset to obtain a sample image dataset. After the sample image dataset is obtained, the server 104 extracts a watermark image and the clean image corresponding to it from the sample image dataset, inputs the watermark image and the clean image into the watermark removal model to be trained, performs three rounds of style migration on the watermark image and the clean image through the watermark removal model to be trained to obtain a style migration result, and identifies the target image in the style migration result according to the watermark image and the clean image to obtain an image identification result. The server 104 calculates an adversarial loss value, a cycle-consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the watermark image and the clean image, calculates a target loss value corresponding to the watermark removal model to be trained according to the adversarial loss value, the cycle-consistency loss value and the identity reconstruction loss value, performs adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, stops model training, and obtains the trained watermark removal model. The terminal 102 may be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
In one embodiment, as shown in fig. 2, a watermark removal model training method is provided, and the method is applied to the server in fig. 1 for illustration, and includes the following steps:
Step 202, a sample image dataset is acquired.
The sample image dataset refers to a training set for training a watermark removal model, and the sample image dataset may comprise a plurality of sample images, the plurality being two or more. The image categories to which the sample image corresponds may include a watermark image category and a clean image category. The watermark images in the watermark image category are in one-to-one correspondence with the clean images in the clean image category. Wherein the watermark in the watermark image may be a digital watermark. A clean image refers to an image in which no watermark is present in the image.
In one embodiment, the sample image dataset may be obtained from the terminal, may be obtained by preprocessing a pre-stored initial image dataset, or may be obtained by preprocessing an initial image dataset after receiving it from the terminal. The initial image dataset refers to images that have undergone watermarking processing, and includes clean images and the watermark images corresponding to them.
When the server obtains the sample image dataset by preprocessing a pre-stored initial image dataset, the initial image dataset may be pre-constructed by the server and stored on the server. Specifically, the server may obtain a plurality of clean images and add random watermarks to each clean image to obtain the corresponding watermark images, marking the clean images as the clean image category and the watermarked results as the watermark image category, thereby obtaining and storing an initial image dataset. A random watermark means that attributes such as the size, position, color, transparency and number of the watermarks are chosen randomly when they are added; the watermarks within one watermark image may differ from each other, and the watermarks across different watermark images may also differ. Since the watermarks are added randomly, their distribution quantity and positions are uncertain, which yields richer image data and improves the accuracy of the trained watermark removal model, so that in practical use the trained model can remove watermarks cleanly even when the watermarks in a watermark image are densely distributed and highly random.
Further, the server may correlate the clean image with the watermark image corresponding to the clean image after obtaining the watermark image corresponding to each clean image. Specifically, the terminal can add the association identifier in the clean image or any one image of watermark images corresponding to the clean image, so that the association image can be rapidly determined according to the association identifier. The association identifier is used to mark the image in which the association relationship exists, and for example, the association identifier may be an image number, an image name, or the like. For example, the associated identifier corresponding to the clean image may be the image name of the corresponding watermark image. The associated identifier corresponding to the watermark image may be the image name of the corresponding clean image. And the server further obtains an initial image data set according to the image after the association processing, and stores the initial image data set.
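The association step can be illustrated with a small pairing scheme. The dictionary layout and field names below are assumptions made for illustration; the patent only requires that an association identifier (such as the partner image's name) let the associated image be found quickly.

```python
def build_initial_image_dataset(clean_names, watermark_names):
    """Pair each clean image with its watermark image via an association
    identifier (here, the partner image's name)."""
    dataset = {}
    for clean, marked in zip(clean_names, watermark_names):
        dataset[clean] = {"category": "clean", "associated": marked}
        dataset[marked] = {"category": "watermark", "associated": clean}
    return dataset

def find_associated(dataset, image_name):
    """Resolve an image's partner directly from its association identifier."""
    return dataset[image_name]["associated"]
```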
When the server obtains the sample image dataset by preprocessing a pre-stored initial image dataset or by preprocessing an initial image dataset sent by the terminal, the server needs to preprocess the initial image data; the preprocessing may include resizing, cropping, normalization, and the like.
In one embodiment, acquiring the sample image dataset includes: acquiring an initial image dataset; resizing the initial image dataset to obtain an adjusted initial image dataset; randomly cropping the adjusted initial image dataset to obtain a cropped initial image dataset; and normalizing the cropped initial image dataset to obtain the sample image dataset. The initial image dataset may be pre-stored or may be acquired from the terminal. The server resizes the clean images and the watermark images in the initial image dataset, scaling all images to the same size, for example 286 × 286. The resizing may use any of several methods, such as nearest-neighbor interpolation, linear interpolation, or area interpolation. The server then crops the resized initial image dataset, that is, crops each image to a specific size, such as 256 × 256; for example, the cropping may be random cropping. Cropping the resized initial image dataset expands the data and weakens data noise, which can improve the accuracy and stability of the watermark removal model. The server may also normalize the cropped initial image dataset, for example using z-score normalization (zero-mean normalization) or min-max normalization. Normalizing the cropped initial image dataset can improve the accuracy of the subsequent watermark removal model and speed up its convergence.
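The preprocessing steps (resize to 286 × 286, random crop to 256 × 256, normalize) can be sketched on a toy grayscale image held as a nested list; real code would operate on image tensors, so this is purely illustrative.

```python
import random

def random_crop(image, size):
    """Crop a nested-list image to size x size at a random offset (data expansion)."""
    h, w = len(image), len(image[0])
    top = random.randint(0, h - size)
    left = random.randint(0, w - size)
    return [row[left:left + size] for row in image[top:top + size]]

def min_max_normalize(image):
    """Min-max normalization: rescale pixel values into [0, 1]."""
    flat = [p for row in image for p in row]
    lo, hi = min(flat), max(flat)
    return [[(p - lo) / (hi - lo) for p in row] for row in image]
```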
Step 204, extracting the watermark image and the clean image corresponding to the watermark image from the sample image data set.
Step 206, inputting the watermark image and the clean image into the watermark removal model to be trained, performing three rounds of style migration on the watermark image and the clean image through the watermark removal model to be trained to obtain a style migration result, and identifying a target image in the style migration result according to the watermark image and the clean image to obtain an image identification result.
The image categories corresponding to the sample images in the sample image dataset may include watermark image categories and clean image categories. The watermark images in the watermark image category are in one-to-one correspondence with the clean images in the clean image category. The server may extract the watermark image in a watermark image category and the clean image corresponding to the watermark image in a clean image category.
The watermark removal model to be trained refers to the watermark removal model that needs to be trained. It is used to perform style migration on an input image to obtain a style-migrated image. For example, the watermark removal model to be trained may be a model obtained by improving the network structure of a generative adversarial network. Specifically, the watermark removal model to be trained adds a further generator-discriminator pair on top of a generative adversarial network; that is, the watermark removal model to be trained may include two sub-networks, each sub-network may include a generator and a discriminator, and both the generator and the discriminator include an encoder. The roles of the generators in the two sub-networks may differ: one generates images without a watermark and the other generates images carrying a watermark. The purpose of each generator is to make its generated image be judged as real by the discriminator as far as possible, while the aim of the discriminator is to distinguish as correctly as possible whether the image output by the generator is a real image or a generated image.
The target image is the image that needs to be identified in the style migration result. The server may call the watermark removal model to be trained, input the extracted watermark image and clean image into it, and perform three style migrations on the watermark image and the clean image through the model to obtain a style migration result. Style migration refers to converting the image style of an image. For example, performing style migration only once on the watermark image yields a clean image corresponding to the watermark image, i.e. an image obtained by removing the watermark from the watermark image. The style migration result obtained by performing three style migrations on the watermark image and the clean image may include the images obtained after each of the three style migrations, where "three style migrations" means that three style migration processes are performed inside the watermark removal model to be trained.
The image obtained after the first style migration may include a watermark-free image corresponding to the watermark image and a watermarked image corresponding to the clean image; the image obtained after the second style migration may include a first watermark-added image corresponding to the watermark-free image and a first watermark-removed image corresponding to the watermarked image; and the image obtained after the third style migration may include a second watermark-added image corresponding to the watermark image and a second watermark-removed image corresponding to the clean image. The target image in the style migration result is identified through the watermark removal model to be trained, thereby obtaining an image identification result. The target image may include both the watermark-free image and the watermarked image obtained after the first style migration. Identification refers to judging whether the target image is a real image, i.e. judging whether the watermark-free image is the clean image input to the watermark removal model to be trained, and whether the watermarked image is the watermark image input to the watermark removal model to be trained. The image identification result may be the probability that the target image is a real image.
Step 208, calculating an adversarial loss value, a cyclic consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the image identification result, the watermark image and the clean image.
The style migration result may include a watermark-free image corresponding to the watermark image, a watermarked image corresponding to the clean image, a first watermark-added image corresponding to the watermark-free image, a first watermark-removed image corresponding to the watermarked image, a second watermark-added image corresponding to the watermark image, and a second watermark-removed image corresponding to the clean image. The server may calculate a cyclic consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the watermark image and the clean image, and calculate an adversarial loss value corresponding to the watermark removal model to be trained according to the image identification result. The cyclic consistency loss value refers to the cyclic consistency loss of the generators in the watermark removal model to be trained. It can be used to represent the difference between the first watermark-added image and the watermark image, and the difference between the first watermark-removed image and the clean image. Performing adversarial training on the watermark removal model to be trained according to the cyclic consistency loss value ensures that the content of the image, such as identity and expression, is not changed between before and after watermark removal.
The identity reconstruction loss value refers to the identity reconstruction loss of the generators in the watermark removal model to be trained. It can be used to represent the difference between the second watermark-added image and the watermark image, and the difference between the second watermark-removed image and the clean image. Performing adversarial training on the watermark removal model to be trained according to the identity reconstruction loss value ensures that the color composition of the image is the same before and after watermark removal. The adversarial loss value comprises the adversarial loss of the generators and the adversarial loss of the discriminators in the watermark removal model to be trained. It can be used to represent the difference between the watermark-free image and the clean image, and the difference between the watermarked image and the watermark image, and it can remove the pattern gap between the generated watermark-free and watermarked images and their corresponding input images. It also makes training on an unlabeled sample image dataset possible, reducing the time and effort consumed by manual labeling. The larger the difference between the output and the input of the watermark removal model to be trained, the larger these loss values.
Step 210, calculating a target loss value corresponding to the watermark removal model to be trained according to the adversarial loss value, the cyclic consistency loss value and the identity reconstruction loss value.
After the server calculates the adversarial loss value, the cyclic consistency loss value and the identity reconstruction loss value, it may divide them into a generator loss and a discriminator loss, and calculate the target loss value corresponding to the watermark removal model to be trained according to the loss values in the generator loss and in the discriminator loss. The target loss value refers to the final loss of the watermark removal model to be trained, and may include a generator loss value and a discriminator loss value.
Specifically, the server may divide the adversarial loss value, the cyclic consistency loss value and the identity reconstruction loss value into a generator loss and a discriminator loss. The generator loss may include the generator part of the adversarial loss value, the cyclic consistency loss value and the identity reconstruction loss value; the discriminator loss may include the discriminator part of the adversarial loss value. The server performs a weighted operation on each loss value in the generator loss to obtain the generator loss value, and takes the generator loss value and the discriminator loss value as the target loss values corresponding to the watermark removal model to be trained.
Step 212, performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, stopping model training, and obtaining the trained watermark removal model.
The server performs adversarial training on the two pairs of generators and discriminators in the watermark removal model to be trained according to the target loss value. Adversarial training refers to training the generators and the discriminators in opposite directions. Specifically, because the generator and the discriminator in each sub-network both comprise an encoder and share that encoder, the server may perform decoupled training on the watermark removal model to be trained, where decoupled training means that the encoding parameters of the encoder are adjusted only when the discriminator is trained. After each round of training, the model parameters are adjusted once, and iterative training is repeated until the preset condition is reached. The preset condition may be that the loss value of the model is no longer decreasing, or that the loss value of the model is less than a threshold value. At that point, the server may stop model training and save the target generator together with its corresponding generation parameters, obtaining the trained watermark removal model. The target generator is the generator for generating images without a watermark; it comprises an encoder, and its generation parameters include the encoding parameters of that encoder.
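The alternating training and stopping logic described above can be sketched as follows. This is only a hypothetical illustration: `train_generators` and `train_discriminators` stand in for the real parameter-update steps (which the patent does not specify in code), and the threshold and patience values are made up.

```python
# Hypothetical sketch of the alternating (adversarial) training loop with the
# two preset stopping conditions named in the text: the total loss drops below
# a threshold, or the loss stops decreasing for several rounds.

def train_watermark_removal(train_generators, train_discriminators,
                            max_iters=1000, threshold=0.01, patience=5):
    best_loss = float("inf")
    stall = 0
    for step in range(max_iters):
        g_loss = train_generators()        # update both generators
        d_loss = train_discriminators()    # update both discriminators (shared encoder adjusted here)
        total = g_loss + d_loss
        if total < threshold:              # preset condition 1: loss below threshold
            return step + 1, total
        if total < best_loss - 1e-6:       # preset condition 2: loss no longer decreasing
            best_loss = total
            stall = 0
        else:
            stall += 1
            if stall >= patience:
                return step + 1, best_loss
    return max_iters, best_loss

# Toy run with a scripted, shrinking generator loss and a constant discriminator loss.
g_losses = iter([0.3, 0.2, 0.1, 0.004])
steps, final = train_watermark_removal(lambda: next(g_losses), lambda: 0.005)
```

In this toy run the loop terminates on the fourth iteration, when the combined loss first falls under the threshold.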
In this embodiment, a sample image dataset is acquired, a watermark image and a clean image corresponding to the watermark image are extracted from the dataset, three style migrations are performed on the watermark image and the clean image by the watermark removal model to be trained, and the target image in the style migration result is identified according to the watermark image and the clean image to obtain an image identification result. The sample dataset used for model training only needs one batch of watermark images and one batch of corresponding clean images, without any additional labeling, which reduces the time consumed by manual labeling. An adversarial loss value, a cyclic consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model to be trained are calculated according to the style migration result, the image identification result, the watermark image and the clean image; a target loss value is then calculated from these three loss values, and adversarial training is performed on the model according to the target loss value until a preset condition is reached, at which point model training stops and the trained watermark removal model is obtained. The adversarial loss value can remove the pattern gap between the generated watermark-free and watermarked images and their corresponding input images, and the cyclic consistency loss value can ensure that the content of the image is not changed between before and after watermark removal.
The identity reconstruction loss value can ensure that the color composition of the images is the same before and after watermark removal, so that the trained watermark removal model effectively removes the watermark and ensures the original information of the input image, thereby avoiding the loss of the original information and improving the quality of the clean image output by the watermark removal model.
In one embodiment, as shown in fig. 3, the step of inputting the watermark image and the clean image into the watermark removal model to be trained, and performing three style migrations on the watermark image and the clean image through the watermark removal model to be trained to obtain a style migration result, includes:
Step 302, inputting the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained for three style migrations, and obtaining a watermark-free image corresponding to the watermark image, a watermarked image corresponding to the clean image, a first watermark-added image corresponding to the watermark-free image, a first watermark-removed image corresponding to the watermarked image, a second watermark-added image corresponding to the watermark image, and a second watermark-removed image corresponding to the clean image.
Step 304, generating the style migration result according to the watermark-free image, the watermarked image, the first watermark-added image, the first watermark-removed image, the second watermark-added image and the second watermark-removed image.
The watermark removal model to be trained comprises two sub-networks, each sub-network comprises a generator and a discriminator, both the generator and the discriminator comprise an encoder, and within a sub-network the generator and the discriminator share one encoder. Compared with the traditional adversarial network model, in which the generator and the discriminator are each provided with their own encoder, this reduces the model parameters and yields a simpler network structure.
The two sub-networks in the watermark removal model to be trained are independent of each other, and their network structures are identical. Fig. 4 is a schematic diagram of the network structure of either sub-network in the watermark removal model to be trained. The encoder may be a convolutional neural network comprising a convolutional layer and two downsampling layers. The generator may be a combination of a convolutional neural network and a residual block network, where the convolutional neural network is the encoder structure and the residual block network comprises six residual blocks, two upsampling layers and a convolutional layer. The residual block network may be referred to as a first decoder, i.e. the generator network comprises the encoder and a first decoder. The discriminator may be a convolutional neural network comprising two convolutional layers connected to the encoder. The two convolutional layers connected to the encoder may be referred to as a second decoder, i.e. the discriminator comprises the encoder and a second decoder.
In one embodiment, the residual blocks in the generator may take the form of short connections. A short connection means that, for each residual block, the input can skip one or more layers of the block and be combined directly with the block's output. Designing the residual blocks with short connections requires no extra parameters or computational complexity, alleviates the network degradation and gradient vanishing problems of traditional deep learning networks, allows a deep neural network to learn features effectively, and thus enables the watermark removal model to be trained to learn image features effectively.
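The short connection can be illustrated with a minimal sketch. A 1-D convolution stands in for the real 2-D convolutional layers, and the kernels here are illustrative; the point is only that the block's input is added directly to the output of its inner layers, so an all-zero inner path leaves the input unchanged.

```python
import numpy as np

def conv1d_same(x, kernel):
    # 'same'-padded 1-D convolution, so the residual addition is shape-compatible
    pad = len(kernel) // 2
    xp = np.pad(x, pad)
    return np.array([np.dot(xp[i:i + len(kernel)], kernel)
                     for i in range(len(x))])

def residual_block(x, k1, k2):
    h = np.maximum(conv1d_same(x, k1), 0.0)   # conv + ReLU
    h = conv1d_same(h, k2)                    # conv
    return x + h                              # short connection: input skips the inner layers

x = np.array([1.0, 2.0, 3.0, 4.0])
zero_k = np.zeros(3)                          # zero inner path -> block acts as the identity
y = residual_block(x, zero_k, zero_k)

ident_k = np.array([0.0, 1.0, 0.0])           # identity kernel on a positive input
y2 = residual_block(x, ident_k, ident_k)      # inner path reproduces x, so output is 2*x
```

The identity-like behavior under a degenerate inner path is exactly why short connections counteract degradation: the block can always fall back to passing its input through.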
The server invokes the watermark removal model to be trained. Since the model comprises two generators and two discriminators, the roles of the two generators may differ: one generator is used to generate images without a watermark, and the other to generate images carrying a watermark. The server may input the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained, and perform three style migrations on the watermark image and the clean image through the generators. Style migration refers to converting the image style of an image. For example, performing style migration once on the watermark image yields a clean image corresponding to the watermark image; each pass through a generator realizes one style migration. The three style migrations may be achieved by sequential operations between the two generators as well as by cross operations.
Because each generator comprises an encoder and a decoder, within each generator the encoder first performs feature extraction to obtain a corresponding feature vector, which is then input into the decoder for decoding to obtain the corresponding output image, realizing one style migration. For example, in the first style migration, the watermark image and the clean image may be input to the encoders in the respective generators, and the encoders extract features from their input images to obtain the feature vectors output by each encoder. The feature vectors may include a first feature vector corresponding to the watermark image and a second feature vector corresponding to the clean image. The watermark removal model to be trained takes the first feature vector and the second feature vector as the input of the decoder in the corresponding generator, obtaining a watermark-free image corresponding to the watermark image and a watermarked image corresponding to the clean image. The model can then realize the three style migrations through sequential and cross operations between the two generators, obtaining the watermark-free image, the watermarked image, the first watermark-added image, the first watermark-removed image, the second watermark-added image and the second watermark-removed image, and taking these images as the style migration result.
In this embodiment, the watermark image and the clean image are input into the corresponding generators of the watermark removal model to be trained to perform three style migrations, which facilitates the subsequent calculation of the adversarial loss value, the cyclic consistency loss value and the identity reconstruction loss value corresponding to the watermark removal model to be trained.
In one embodiment, as shown in fig. 5, the step of inputting the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained for three style migrations includes:
Step 502, inputting the watermark image into a first generator of the watermark removal model to be trained for the first style migration, inputting the clean image into a second generator of the watermark removal model to be trained for the first style migration, outputting a watermark-free image corresponding to the watermark image through the first generator, and outputting a watermarked image corresponding to the clean image through the second generator.
Step 504, inputting the watermark-free image into the second generator for the second style migration, inputting the watermarked image into the first generator for the second style migration, outputting a first watermark-added image corresponding to the watermark-free image through the second generator, and outputting a first watermark-removed image corresponding to the watermarked image through the first generator.
Step 506, inputting the clean image into the first generator for the third style migration, inputting the watermark image into the second generator for the third style migration, outputting a second watermark-removed image corresponding to the clean image through the first generator, and outputting a second watermark-added image corresponding to the watermark image through the second generator.
The watermark removal model to be trained comprises two generators, a first generator and a second generator. The first generator may be used to generate an image without a watermark and the second generator may be used to generate an image carrying a watermark.
The server may input the watermark image into the first generator of the watermark removal model to be trained and the clean image into the second generator, perform the first style migration on the watermark image through the first generator to obtain the watermark-free image corresponding to the watermark image, and perform the first style migration on the clean image through the second generator to obtain the watermarked image corresponding to the clean image. The watermark-free image and the watermarked image may be used to calculate the adversarial loss value corresponding to the watermark removal model to be trained.
The watermark removal model to be trained inputs the watermark-free image into the second generator and the watermarked image into the first generator, performs the second style migration on the watermark-free image through the second generator to obtain the first watermark-added image corresponding to the watermark-free image, and performs the second style migration on the watermarked image through the first generator to obtain the first watermark-removed image corresponding to the watermarked image. The first watermark-added image and the first watermark-removed image may be used to calculate the cyclic consistency loss value corresponding to the watermark removal model to be trained.
The watermark removal model to be trained may also take the clean image as the input of the first generator and the watermark image as the input of the second generator, perform the third style migration on the clean image through the first generator to obtain the second watermark-removed image corresponding to the clean image, and perform the third style migration on the watermark image through the second generator to obtain the second watermark-added image corresponding to the watermark image. The second watermark-removed image and the second watermark-added image may be used to calculate the identity reconstruction loss value corresponding to the watermark removal model to be trained. The model can thus calculate the cyclic consistency loss by judging the differences between the first watermark-added image and the first watermark-removed image and their corresponding input images, and calculate the identity reconstruction loss by judging the differences between the second watermark-removed image and the second watermark-added image and their corresponding input images.
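The data flow of steps 502-506 can be sketched schematically. In this sketch, images are modelled as `(content, has_watermark)` pairs and the two generators as flag-flipping functions; this illustrates only the routing between the generators, not the actual neural networks.

```python
# Schematic sketch of the three style migrations through the two generators.

def G(img):   # first generator: produces a watermark-free image
    content, _ = img
    return (content, False)

def F(img):   # second generator: produces a watermarked image
    content, _ = img
    return (content, True)

watermark_img = ("photo", True)
clean_img = ("photo", False)

# First style migration
no_wm = G(watermark_img)          # watermark-free image
with_wm = F(clean_img)            # watermarked image
# Second style migration (cross: each first-round output goes to the other generator)
first_added = F(no_wm)            # first watermark-added image
first_removed = G(with_wm)        # first watermark-removed image
# Third style migration (identity direction: each input goes to its own-style generator)
second_removed = G(clean_img)     # second watermark-removed image
second_added = F(watermark_img)   # second watermark-added image

style_migration_result = [no_wm, with_wm, first_added, first_removed,
                          second_added, second_removed]
```

Note how the third migration should ideally reproduce its input (a clean image through the watermark-removing generator stays clean), which is exactly the property the identity reconstruction loss measures.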
In one embodiment, the generators of the watermark removal model to be trained include a first generator and a second generator, and the step of identifying the target image in the style migration result according to the watermark image and the clean image to obtain the image identification result includes: taking the watermark-free image and the watermarked image in the style migration result as target images; and inputting the target images into the corresponding discriminators of the watermark removal model to be trained for identification, obtaining the image identification result. The server inputs the watermark image and the clean image into the watermark removal model to be trained, and the generators in the model perform three style migrations on them; after the style migration result is obtained, the watermark-free image and the watermarked image in the style migration result can be extracted and used as the target images. The watermark-free image is the image output by the first generator, and the watermarked image is the image output by the second generator. The watermark removal model to be trained comprises two discriminators: a first discriminator, which may be connected to the first generator, and a second discriminator, which may be connected to the second generator.
Taking the watermark-free image in the target image as the input of a first discriminator, taking the watermark-containing image in the target image as the input of a second discriminator, recognizing the difference between the watermark-free image and the clean image through the first discriminator, outputting the probability that the watermark-free image is a real image, and taking the output of the first discriminator as a first recognition result corresponding to the watermark-free image. And identifying the difference between the watermarked image and the watermark image through a second discriminator, and outputting the probability that the watermarked image is a real image. And taking the output of the second discriminator as a second identification result corresponding to the watermarked image. Thus, an image recognition result can be obtained according to the first recognition result and the second recognition result.
In this embodiment, the discriminators in the watermark removal model to be trained are used to identify the watermark-free image and the watermarked image in the style migration result, yielding the probability that the watermark-free image is a real image and the probability that the watermarked image is a real image. This allows the server to calculate the adversarial loss value from the discriminator outputs, so the pattern gap between the generated watermark-free and watermarked images and their corresponding input images can be removed.
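The identification step above can be illustrated with a toy scoring function. The sigmoid here is only a stand-in for the real discriminator networks, and the score values are made up; the point is that each discriminator maps its target image to a probability of being real, and the two probabilities together form the image identification result.

```python
import math

def discriminator(score):
    # squash an arbitrary real-valued score into a probability of "real"
    return 1.0 / (1.0 + math.exp(-score))

def recognize(no_wm_score, with_wm_score):
    first_result = discriminator(no_wm_score)    # first discriminator: P(watermark-free image is real)
    second_result = discriminator(with_wm_score) # second discriminator: P(watermarked image is real)
    return {"first": first_result, "second": second_result}

# A watermark-free image the first discriminator finds convincing (positive score),
# and a watermarked image the second discriminator rejects (negative score).
result = recognize(2.0, -1.0)
```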
In one embodiment, as shown in fig. 6, the step of calculating the adversarial loss value, the cyclic consistency loss value and the identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the image identification result, the watermark image and the clean image includes:
Step 602, calculating the adversarial loss value corresponding to the watermark removal model to be trained according to the image identification result.
Step 604, calculating the cyclic consistency loss value corresponding to the watermark removal model to be trained according to the first watermark-removed image, the first watermark-added image, the watermark image and the clean image in the style migration result.
Step 606, calculating an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the second watermark added image, the second watermark removed image, the watermark image and the clean image in the style migration result.
The style migration result includes a watermark-free image, a watermarked image, a first watermark-removed image, a second watermark-added image, and a second watermark-removed image. The image recognition result includes a probability that the watermark-free image is a true image and a probability that the watermarked image is a true image.
After obtaining the style migration result and the image identification result, the server may use the MSE_loss (mean square error) loss function to calculate the adversarial loss value corresponding to the first generator according to the probability, in the image identification result, that the watermark-free image is a real image; the first generator may be denoted by G, and its adversarial loss value by loss_G_A. The MSE_loss function is likewise used to calculate the adversarial loss value corresponding to the second generator according to the probability that the watermarked image is a real image; the second generator may be denoted by F, and its adversarial loss value by loss_G_B. The server may also use the mean absolute error loss function L1_loss to calculate the adversarial loss value corresponding to the first discriminator according to the probability that the watermark-free image is a real image; the first discriminator may be denoted by D_Y, and its adversarial loss value by loss_D_A. The L1_loss function is likewise used to calculate the adversarial loss value corresponding to the second discriminator according to the probability that the watermarked image is a real image; the second discriminator may be denoted by D_X, and its adversarial loss value by loss_D_B. The server may take the sum of loss_G_A and loss_G_B as the generator adversarial loss value of the watermark removal model to be trained, and the sum of loss_D_A and loss_D_B as the discriminator adversarial loss value.
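A numeric sketch of these adversarial losses follows. The MSE-for-generator / L1-for-discriminator pairing follows the text above, but the targets (1 for real, 0 for generated) and all probability values are illustrative assumptions, not values from the patent.

```python
import numpy as np

def mse_loss(pred, target):
    return float(np.mean((pred - target) ** 2))

def l1_loss(pred, target):
    return float(np.mean(np.abs(pred - target)))

p_no_wm = np.array([0.8, 0.9])       # assumed D_Y outputs for generated watermark-free images
p_with_wm = np.array([0.7, 0.6])     # assumed D_X outputs for generated watermarked images
p_real_clean = np.array([0.95, 0.9]) # assumed D_Y outputs for real clean images
p_real_wm = np.array([0.9, 0.85])    # assumed D_X outputs for real watermark images

loss_G_A = mse_loss(p_no_wm, 1.0)    # generator G pushes D_Y toward outputting 1
loss_G_B = mse_loss(p_with_wm, 1.0)  # generator F pushes D_X toward outputting 1
# each discriminator is pushed toward 1 on real images and 0 on generated ones
loss_D_A = l1_loss(p_real_clean, 1.0) + l1_loss(p_no_wm, 0.0)
loss_D_B = l1_loss(p_real_wm, 1.0) + l1_loss(p_with_wm, 0.0)

generator_adv_loss = loss_G_A + loss_G_B        # sum used as the generator adversarial loss
discriminator_adv_loss = loss_D_A + loss_D_B    # sum used as the discriminator adversarial loss
```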
The server extracts the first watermark-added image from the style migration result and calculates the difference between the first watermark-added image and the watermark image using the mean absolute error L1_loss function, obtaining the first cyclic consistency loss value loss_cycle_A of the generators in the watermark removal model to be trained. It then extracts the first watermark-removed image from the style migration result and calculates the difference between the first watermark-removed image and the clean image using the L1_loss function, obtaining the second cyclic consistency loss value loss_cycle_B. The server may add loss_cycle_A and loss_cycle_B to obtain the cyclic consistency loss value corresponding to the watermark removal model to be trained.
The server extracts the second watermark-removed image from the style migration result and calculates the difference between the second watermark-removed image and the clean image using the mean absolute error L1_loss function, obtaining the first identity reconstruction loss value idt_A corresponding to the watermark removal model to be trained. It then extracts the second watermark-added image from the style migration result and calculates the difference between the second watermark-added image and the watermark image using the L1_loss function, obtaining the second identity reconstruction loss value idt_B. The server may add idt_A and idt_B to obtain the identity reconstruction loss value corresponding to the watermark removal model to be trained.
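The cyclic consistency and identity reconstruction losses above can be sketched with tiny 2x2 "images"; all pixel values are made up for illustration. Note how perfect identity reconstruction (the third migration returning its input exactly) drives idt_A and idt_B to zero.

```python
import numpy as np

def l1_loss(a, b):
    # mean absolute error between two images
    return float(np.mean(np.abs(a - b)))

watermark_img = np.array([[0.2, 0.4], [0.6, 0.8]])
clean_img = np.array([[0.1, 0.3], [0.5, 0.7]])
first_added = np.array([[0.2, 0.5], [0.6, 0.8]])     # watermark -> removed -> re-added (one pixel off)
first_removed = np.array([[0.1, 0.3], [0.4, 0.7]])   # clean -> added -> re-removed (one pixel off)
second_added = watermark_img.copy()                  # watermark image through the adding generator
second_removed = clean_img.copy()                    # clean image through the removing generator

loss_cycle_A = l1_loss(first_added, watermark_img)   # first cyclic consistency loss value
loss_cycle_B = l1_loss(first_removed, clean_img)     # second cyclic consistency loss value
cycle_loss = loss_cycle_A + loss_cycle_B

idt_A = l1_loss(second_removed, clean_img)           # first identity reconstruction loss value
idt_B = l1_loss(second_added, watermark_img)         # second identity reconstruction loss value
identity_loss = idt_A + idt_B
```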
In this embodiment, the server calculates the adversarial loss value corresponding to the watermark removal model to be trained according to the image identification result, calculates the cyclic consistency loss value according to the first watermark-removed image, the first watermark-added image, the watermark image and the clean image in the style migration result, and calculates the identity reconstruction loss value according to the second watermark-added image, the second watermark-removed image, the watermark image and the clean image in the style migration result. The losses of the watermark removal model to be trained can thus be calculated quickly and comprehensively.
In one embodiment, the adversarial loss value comprises a generator adversarial loss value and a discriminator adversarial loss value, and calculating the target loss value for the watermark removal model to be trained from the adversarial loss value, the cyclic consistency loss value and the identity reconstruction loss value comprises: acquiring an adversarial weight corresponding to the generator adversarial loss value, a cyclic weight corresponding to the cyclic consistency loss value and an identity weight corresponding to the identity reconstruction loss value, where the adversarial weight, the cyclic weight and the identity weight have a preset relationship; calculating the generator loss value corresponding to the watermark removal model to be trained according to the generator adversarial loss value and the adversarial weight, the cyclic consistency loss value and the cyclic weight, and the identity reconstruction loss value and the identity weight; and taking the generator loss value and the discriminator adversarial loss value as the target loss values corresponding to the watermark removal model to be trained.
Since the adversarial loss value includes the adversarial loss of the generators and the adversarial loss of the discriminators in the watermark removal model to be trained, the server may acquire the adversarial weight corresponding to the generator adversarial loss value, the cyclic weight corresponding to the cyclic consistency loss value, and the identity weight corresponding to the identity reconstruction loss value. The adversarial weight, the cyclic weight and the identity weight have a preset relationship, which may be that they sum to one. The weights may be assigned while training the model. The server then performs a weighted calculation on the generator adversarial loss value, the cyclic consistency loss value and the identity reconstruction loss value to obtain the generator loss value corresponding to the watermark removal model to be trained. The generator loss value may be calculated as follows:
loss_G=w1(loss_G_A+loss_G_B)+w2(loss_cycle_A+loss_cycle_B)+w3(idt_A+idt_B)
Where loss_G represents the generator loss value, w1 represents the adversarial weight, w2 represents the cycle weight, and w3 represents the identity weight. loss_G_A represents the adversarial loss value corresponding to the first generator, loss_G_B represents the adversarial loss value corresponding to the second generator, loss_cycle_A represents the first cycle consistency loss value, loss_cycle_B represents the second cycle consistency loss value, idt_A represents the first identity reconstruction loss value, and idt_B represents the second identity reconstruction loss value.
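The weighted combination above can be sketched as follows (a minimal illustration with plain scalar losses; the default weight values shown are hypothetical, chosen only to satisfy the sum-to-one relationship):

```python
def generator_loss(loss_G_A, loss_G_B,
                   loss_cycle_A, loss_cycle_B,
                   idt_A, idt_B,
                   w1=0.2, w2=0.5, w3=0.3):
    """loss_G = w1*(adversarial) + w2*(cycle consistency) + w3*(identity)."""
    # Preset relationship between the weights: they sum to one.
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9
    return (w1 * (loss_G_A + loss_G_B)
            + w2 * (loss_cycle_A + loss_cycle_B)
            + w3 * (idt_A + idt_B))
```

Note that when every component loss equals 1, the result is exactly 2.0 regardless of the particular weights, precisely because the weights sum to one.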
The discriminator adversarial loss value may be calculated using the following formula:
loss_D=loss_D_A+loss_D_B
Where loss_D represents the discriminator adversarial loss value, loss_D_A represents the adversarial loss value corresponding to the first discriminator, and loss_D_B represents the adversarial loss value corresponding to the second discriminator.
The server may use the generator loss value and the discriminator adversarial loss value of the adversarial loss values as the target loss value corresponding to the watermark removal model to be trained. In this embodiment, the server performs a weighted calculation on each loss value of the generator to obtain the generator loss value, and uses the generator loss value together with the discriminator adversarial loss value as the target loss value, so that the target loss value corresponding to the watermark removal model to be trained is calculated according to the importance of each loss in the overall loss calculation. This improves the accuracy of the model loss calculation, thereby improving the training efficiency and accuracy of the watermark removal model to be trained.
In one embodiment, as shown in fig. 7, the step of performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, stopping model training, and obtaining the trained watermark removal model includes:
step 702, fixing the generation parameters of the decoder in each generator of the watermark removal model to be trained, and adjusting the discrimination parameters of the discriminators in each sub-network according to the target loss value to obtain the adjusted loss value.
Step 704, fixing the discrimination parameters of each discriminator of the watermark removal model to be trained, and adjusting the generation parameters of the decoder according to the adjusted loss value, wherein the discrimination parameters comprise the coding parameters corresponding to the encoder.
Step 706, repeating the step of performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, stopping model training, determining a model generator in the watermark removal model to be trained, and storing the model generator and the current generation parameters corresponding to the model generator to obtain a trained watermark removal model, wherein the current generation parameters comprise the current encoding parameters corresponding to the encoder in the model generator.
The watermark removal model to be trained comprises two sub-network structures, each sub-network structure comprises a generator and a discriminator, the generator comprises an encoder and a first decoder, the generator and the discriminator share one encoder, and the discriminator comprises the encoder and a second decoder.
The watermark removal model to be trained may be trained in a decoupled manner. Decoupled training means that the encoding parameters of the encoder are adjusted only when the discriminators in the model are trained. Therefore, when training the discriminators in the model, the generation parameters of the decoder in each generator of the watermark removal model to be trained are fixed, and the discrimination parameters of the discriminator in each sub-network are adjusted according to the target loss value to obtain the adjusted loss value. The discrimination parameters of a discriminator include the encoding parameters corresponding to its encoder. When training the generators in the model, the discrimination parameters of each discriminator of the watermark removal model to be trained are fixed, and the generation parameters of the decoders are adjusted according to the adjusted loss value, thereby realizing decoupled training. Model parameters are gradually optimized through adversarial learning between the two pairs of generators and discriminators in the watermark removal model to be trained, so that the first generator learns to turn a watermarked image into a cleaner watermark-removed image, and the second generator learns to turn a watermark-free image into a watermarked image close to the existing watermark style; this is a process of mutually reinforcing adversarial learning. The model is trained iteratively until a preset condition is reached, and model training is then stopped. The preset condition may be that the model loss value no longer decreases or falls below a threshold. The server stores the model generator and the current generation parameters corresponding to the model generator, thereby obtaining the trained watermark removal model.
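The alternation described above can be sketched as a loop (an illustrative skeleton only: `update_discriminators` and `update_generators` are hypothetical callables standing in for the two decoupled parameter-update phases, each returning the loss after its adjustment):

```python
def adversarial_train(update_discriminators, update_generators,
                      target_loss, max_iters=1000, threshold=1e-3):
    """Alternate the two decoupled phases until a preset condition holds:
    the loss stops decreasing, or it falls below the threshold."""
    loss = target_loss
    new_loss = loss
    for _ in range(max_iters):
        # Phase 1: decoder generation parameters fixed; the discriminators
        # are adjusted against the target loss, yielding the adjusted loss.
        adjusted_loss = update_discriminators(loss)
        # Phase 2: discriminator parameters fixed; the decoders are
        # adjusted against the adjusted loss.
        new_loss = update_generators(adjusted_loss)
        if new_loss >= loss or new_loss < threshold:
            break  # preset condition reached: stop model training
        loss = new_loss
    return new_loss
```

In a real implementation the two phases would freeze and unfreeze the corresponding network parameters; the skeleton only makes the order of the phases and the stopping condition explicit.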
In this embodiment, performing decoupled training on the watermark removal model to be trained improves the quality of the clean image output by the trained watermark removal model.
In one embodiment, as shown in fig. 8, there is provided an image watermark removal method, which is described by taking an example that the method is applied to the server in fig. 1, and includes the following steps:
step 802, an image to be processed is acquired.
Step 804, inputting the image to be processed into a generator of a trained watermark removal model, encoding the image to be processed through an encoder in the generator, and outputting image features, wherein the trained watermark removal model is obtained by performing adversarial training according to a sample image dataset, and in the adversarial training process, a target loss value corresponding to the watermark removal model is calculated according to an adversarial loss value, a cycle consistency loss value, and an identity reconstruction loss value; the adversarial loss value, the cycle consistency loss value, and the identity reconstruction loss value are obtained by performing style migration calculation on the sample image dataset.
Step 806, performing style migration on the image features through the generator to obtain a clean image corresponding to the image to be processed.
The image to be processed refers to an image that requires watermark removal. The watermark contained in the image to be processed may be a digital watermark, and the number, position, color, transparency, etc. of the digital watermark may be random. When watermark removal is required, the terminal may send the image to be processed to the server. After acquiring the image to be processed, the server calls the trained watermark removal model. The trained watermark removal model is obtained by performing adversarial training according to the sample image dataset. The adversarial training may be decoupled training, in which the encoding parameters of the encoder are adjusted only when the discriminators in the model are trained. In the adversarial training process, a watermark image and a clean image corresponding to the watermark image are extracted from the sample image dataset, the watermark image and the clean image are input into the watermark removal model to be trained, three rounds of style migration are performed on the watermark image and the clean image through the watermark removal model to be trained to obtain a style migration result, and a target image in the style migration result is identified according to the watermark image and the clean image to obtain an image recognition result.
According to the style migration result, the image recognition result, the watermark image, and the clean image, an adversarial loss value, a cycle consistency loss value, and an identity reconstruction loss value corresponding to the watermark removal model to be trained are calculated; a target loss value corresponding to the watermark removal model to be trained is calculated according to the adversarial loss value, the cycle consistency loss value, and the identity reconstruction loss value; the model parameters of the watermark removal model to be trained are adjusted through the target loss value; and finally the model generator and the current generation parameters corresponding to the model generator are retained, thereby obtaining the trained watermark removal model. The model generator refers to the generator used for generating watermark-free images. The model generator includes an encoder and a corresponding decoder.
The server inputs the image to be processed into the trained watermark removal model, which comprises a generator consisting of an encoder and a corresponding decoder; the image to be processed is encoded by the encoder to obtain image features. The image features are then used as the input of the decoder, and style migration is performed on the image features through the decoder to obtain a clean image corresponding to the image to be processed.
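The inference path of steps 802 to 806 can be sketched as follows (`encoder` and `decoder` are stand-ins for the trained sub-networks of the generator; any callables with compatible inputs and outputs would do):

```python
def remove_watermark(image, encoder, decoder):
    """Encode the image to be processed into features, then let the
    decoder perform the style migration that yields the clean image."""
    features = encoder(image)   # step 804: encoding -> image features
    return decoder(features)    # step 806: style migration -> clean image
```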
In this embodiment, since the trained watermark removal model is obtained by performing adversarial training according to the sample image dataset, the quality of the output clean image can be improved. In the adversarial training process, the target loss value corresponding to the watermark removal model is calculated according to the adversarial loss value, the cycle consistency loss value, and the identity reconstruction loss value, which are obtained by performing style migration calculation on the sample image dataset. The adversarial loss value narrows the modal gap between the generated watermark-free image and the corresponding input image, and the cycle consistency loss value ensures that the content of the image does not change before and after watermark removal. The identity reconstruction loss value ensures that the color composition of the image is the same before and after watermark removal, so that the trained watermark removal model removes the watermark effectively while preserving the original information of the input image, thereby avoiding loss of original information and effectively improving the quality of the output clean image.
It should be understood that, although the steps in the flowcharts of figs. 2, 3, and 5 to 8 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to this order of execution and may be performed in other orders. Moreover, at least some of the steps in figs. 2, 3, and 5 to 8 may comprise sub-steps or stages that are not necessarily performed at the same time but may be performed at different times; these sub-steps or stages need not be executed sequentially, and may be performed in turn or alternately with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 9, there is provided a watermark removal model training apparatus, including: a sample acquisition module 902, a style migration module 904, a loss calculation module 906, and an adversarial training module 908,
Wherein:
A sample acquisition module 902 for acquiring a sample image dataset.
The style migration module 904 is configured to extract a watermark image and a clean image corresponding to the watermark image from the sample image dataset; and input the watermark image and the clean image into a watermark removal model to be trained, and perform three rounds of style migration on the watermark image and the clean image through the watermark removal model to obtain a style migration result.
The loss calculation module 906 is configured to calculate an adversarial loss value, a cycle consistency loss value, and an identity reconstruction loss value corresponding to the watermark removal model according to the style migration result, the watermark image, and the clean image; and calculate a target loss value corresponding to the watermark removal model according to the adversarial loss value, the cycle consistency loss value, and the identity reconstruction loss value.
The adversarial training module 908 is configured to perform adversarial training on the watermark removal model according to the target loss value until a preset condition is reached, and stop model training to obtain a trained watermark removal model.
In one embodiment, the style migration module 904 is further configured to input the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained to perform three rounds of style migration, so as to obtain a watermark-free image corresponding to the watermark image, a watermarked image corresponding to the clean image, a first watermark-added image corresponding to the watermark-free image, a first watermark-removed image corresponding to the watermarked image, a second watermark-added image corresponding to the watermark image, and a second watermark-removed image corresponding to the clean image; and generate a style migration result according to the watermark-free image, the watermarked image, the first watermark-added image, the first watermark-removed image, the second watermark-added image, and the second watermark-removed image.
In one embodiment, the generators of the watermark removal model to be trained include a first generator and a second generator, and the style migration module 904 is further configured to input the watermark image into the first generator of the watermark removal model to be trained for the first style migration, input the clean image into the second generator of the watermark removal model to be trained for the first style migration, output a watermark-free image corresponding to the watermark image through the first generator, and output a watermarked image corresponding to the clean image through the second generator; input the watermark-free image into the second generator for the second style migration, input the watermarked image into the first generator for the second style migration, output a first watermark-added image corresponding to the watermark-free image through the second generator, and output a first watermark-removed image corresponding to the watermarked image through the first generator; and input the clean image into the first generator for the third style migration, input the watermark image into the second generator for the third style migration, output a second watermark-removed image corresponding to the clean image through the first generator, and output a second watermark-added image corresponding to the watermark image through the second generator.
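The three migration passes handled by the style migration module can be sketched as follows (`G1` and `G2` are hypothetical stand-ins for the first and second generators; in the usage below, simple string transforms play their roles so the data flow is visible):

```python
def three_style_migrations(watermark_img, clean_img, G1, G2):
    """First pass: translate each domain; second pass: cycle back for
    cycle consistency; third pass: identity mapping on each generator's
    own target domain for identity reconstruction."""
    fake_clean = G1(watermark_img)    # watermark-free image
    fake_marked = G2(clean_img)       # watermarked image
    rec_marked = G2(fake_clean)       # first watermark-added image
    rec_clean = G1(fake_marked)       # first watermark-removed image
    idt_clean = G1(clean_img)         # second watermark-removed image
    idt_marked = G2(watermark_img)    # second watermark-added image
    return (fake_clean, fake_marked, rec_marked,
            rec_clean, idt_clean, idt_marked)
```

For example, with `G1 = lambda s: s.replace('+wm', '')` and `G2 = lambda s: s if s.endswith('+wm') else s + '+wm'`, feeding `('img+wm', 'img')` yields the watermark-free image `'img'` and its cycle reconstruction `'img+wm'`.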
In one embodiment, the style migration module 904 is further configured to use the watermark-free image and the watermarked image in the style migration result as target images; and input the target images into the corresponding discriminators of the watermark removal model to be trained for recognition to obtain an image recognition result, wherein the image recognition result comprises a first recognition result corresponding to the watermark-free image and a second recognition result corresponding to the watermarked image.
In one embodiment, the style migration result includes a watermark-free image, a watermarked image, a first watermark-added image, a first watermark-removed image, a second watermark-added image, and a second watermark-removed image, and the loss calculation module 906 is further configured to calculate an adversarial loss value corresponding to the watermark removal model to be trained according to the image recognition result; calculate a cycle consistency loss value corresponding to the watermark removal model to be trained according to the first watermark-removed image and the first watermark-added image in the style migration result, the watermark image, and the clean image; and calculate an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the second watermark-added image and the second watermark-removed image in the style migration result, the watermark image, and the clean image.
In one embodiment, the adversarial loss value includes a generator adversarial loss value and a discriminator adversarial loss value, and the loss calculation module 906 is further configured to obtain an adversarial weight corresponding to the generator adversarial loss value in the adversarial loss value, a cycle weight corresponding to the cycle consistency loss value, and an identity weight corresponding to the identity reconstruction loss value, where the adversarial weight, the cycle weight, and the identity weight have a preset relationship; calculate a generator loss value corresponding to the watermark removal model to be trained according to the generator adversarial loss value and the adversarial weight, the cycle consistency loss value and the cycle weight, and the identity reconstruction loss value and the identity weight; and take the generator loss value and the discriminator adversarial loss value as the target loss value corresponding to the watermark removal model to be trained.
In one embodiment, the sample acquisition module 902 is further configured to acquire an initial image dataset; resize the initial image dataset to obtain an adjusted initial image dataset; crop the adjusted initial image dataset to obtain a cropped initial image dataset; and normalize the cropped initial image dataset to obtain the sample image dataset.
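The resize-crop-normalize chain of the sample acquisition module can be sketched as follows (pure-Python stand-ins operating on nested pixel lists; the center-crop choice and the [-1, 1] normalization range are illustrative assumptions, not fixed by the text):

```python
def center_crop(img, size):
    """Crop a size x size window from the centre of a 2-D pixel grid."""
    top = (len(img) - size) // 2
    left = (len(img[0]) - size) // 2
    return [row[left:left + size] for row in img[top:top + size]]

def normalize(img):
    """Map 0..255 intensities into the [-1, 1] range."""
    return [[p / 127.5 - 1.0 for p in row] for row in img]

def preprocess(img, crop_size):
    # Resizing is elided here; in practice the image would first be
    # rescaled to a working resolution before cropping and normalizing.
    return normalize(center_crop(img, crop_size))
```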
In one embodiment, the watermark removal model to be trained includes two sub-networks, each sub-network includes a generator and a discriminator, and the generator and the discriminator in each sub-network share one encoder. The adversarial training module 908 is further configured to fix the generation parameters of the decoder in each generator of the watermark removal model to be trained, and adjust the discrimination parameters of the discriminator in each sub-network according to the target loss value to obtain an adjusted loss value; fix the discrimination parameters of each discriminator of the watermark removal model to be trained, and adjust the generation parameters of the decoders according to the adjusted loss value, wherein the discrimination parameters comprise the encoding parameters corresponding to the encoder; and repeat the step of performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, stop model training, determine a model generator in the watermark removal model to be trained, and store the model generator and the current generation parameters corresponding to the model generator to obtain a trained watermark removal model, wherein the current generation parameters comprise the current encoder parameters corresponding to the encoder in the model generator.
In one embodiment, as shown in fig. 10, there is provided an image watermark removal apparatus including: an image acquisition module 1002, an image encoding module 1004, and a watermark removal module 1006, wherein:
an image acquisition module 1002, configured to acquire an image to be processed.
The image encoding module 1004 is configured to input the image to be processed into an encoder of the trained watermark removal model for encoding and output image features, wherein the trained watermark removal model is obtained by performing adversarial training according to a sample image dataset, and in the adversarial training process, a target loss value corresponding to the watermark removal model is calculated according to an adversarial loss value, a cycle consistency loss value, and an identity reconstruction loss value.
The watermark removing module 1006 is configured to input the image features into a generator of the trained watermark removing model to remove the watermark, so as to obtain a clean image corresponding to the image to be processed.
For specific limitations of the watermark removal model training apparatus, reference may be made to the above limitation of the watermark removal model training method, and no further description is given here. For specific limitations of the image watermark removal apparatus, reference may be made to the above limitations of the image watermark removal method, and no further description is given here. The above-described watermark removal model training apparatus and each module in the image watermark removal apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, and the internal structure of which may be as shown in fig. 11. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The database of the computer device is used for storing data of a watermark removal model training method or data of an image watermark removal method. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a watermark removal model training method or an image watermark removal method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 11 is merely a block diagram of some of the structures associated with the present inventive arrangements and is not limiting of the computer device to which the present inventive arrangements may be applied, and that a particular computer device may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is provided comprising a memory storing a computer program and a processor implementing the steps of the various embodiments described above when the computer program is executed.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the various embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (10)

1. A watermark removal model training method, the method comprising:
acquiring a sample image dataset;
extracting a watermark image and a clean image corresponding to the watermark image from the sample image dataset;
inputting the watermark image and the clean image into corresponding generators of a watermark removal model to be trained for three rounds of style migration to obtain a watermark-free image corresponding to the watermark image, a watermarked image corresponding to the clean image, a first watermark-added image corresponding to the watermark-free image, a first watermark-removed image corresponding to the watermarked image, a second watermark-added image corresponding to the watermark image, and a second watermark-removed image corresponding to the clean image; generating a style migration result according to the watermark-free image, the watermarked image, the first watermark-removed image, the first watermark-added image, the second watermark-added image, and the second watermark-removed image;
identifying a target image in the style migration result according to the watermark image and the clean image to obtain an image recognition result;
calculating an adversarial loss value, a cycle consistency loss value, and an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the style migration result, the image recognition result, the watermark image, and the clean image;
calculating a target loss value corresponding to the watermark removal model to be trained according to the adversarial loss value, the cycle consistency loss value, and the identity reconstruction loss value;
and performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, and stopping model training to obtain a trained watermark removal model.
2. The method of claim 1, wherein the generators of the watermark removal model to be trained comprise a first generator and a second generator, and wherein inputting the watermark image and the clean image into the corresponding generators of the watermark removal model to be trained for three rounds of style migration to obtain a watermark-free image corresponding to the watermark image, a watermarked image corresponding to the clean image, a first watermark-added image corresponding to the watermark-free image, a first watermark-removed image corresponding to the watermarked image, a second watermark-added image corresponding to the watermark image, and a second watermark-removed image corresponding to the clean image comprises:
inputting the watermark image into the first generator of the watermark removal model to be trained for a first style migration, inputting the clean image into the second generator of the watermark removal model to be trained for the first style migration, outputting a watermark-free image corresponding to the watermark image through the first generator, and outputting a watermarked image corresponding to the clean image through the second generator;
inputting the watermark-free image into the second generator for a second style migration, inputting the watermarked image into the first generator for the second style migration, outputting a first watermark-added image corresponding to the watermark-free image through the second generator, and outputting a first watermark-removed image corresponding to the watermarked image through the first generator;
and inputting the clean image into the first generator for a third style migration, inputting the watermark image into the second generator for the third style migration, outputting a second watermark-removed image corresponding to the clean image through the first generator, and outputting a second watermark-added image corresponding to the watermark image through the second generator.
3. The method of claim 1, wherein the style migration result includes a watermark-free image, a watermarked image, a first watermark-added image, a first watermark-removed image, a second watermark-added image, and a second watermark-removed image, and wherein calculating the adversarial loss value, the cycle consistency loss value, and the identity reconstruction loss value corresponding to the watermark removal model to be trained based on the style migration result, the image recognition result, the watermark image, and the clean image includes:
calculating an adversarial loss value corresponding to the watermark removal model to be trained according to the image recognition result;
calculating a cycle consistency loss value corresponding to the watermark removal model to be trained according to the first watermark-removed image and the first watermark-added image in the style migration result, the watermark image, and the clean image;
and calculating an identity reconstruction loss value corresponding to the watermark removal model to be trained according to the second watermark-added image and the second watermark-removed image in the style migration result, the watermark image, and the clean image.
4. The method of claim 1, wherein the watermark removal model to be trained comprises two sub-networks, each sub-network comprising a generator and a discriminator, the generator and the discriminator each comprising an encoder and sharing one encoder, and wherein performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, stopping model training, and obtaining the trained watermark removal model comprises:
fixing the generation parameters of the decoders in each generator of the watermark removal model to be trained, and adjusting the discrimination parameters of the discriminators in each sub-network according to the target loss value to obtain an adjusted loss value;
fixing the discrimination parameters of each discriminator of the watermark removal model to be trained, and adjusting the generation parameters of the decoders according to the adjusted loss value, wherein the discrimination parameters comprise the encoding parameters corresponding to the encoder;
and repeating the step of performing adversarial training on the watermark removal model to be trained according to the target loss value until a preset condition is reached, stopping model training, determining a model generator in the watermark removal model to be trained, and storing the model generator and the current generation parameters corresponding to the model generator to obtain a trained watermark removal model, wherein the current generation parameters comprise the current encoding parameters corresponding to the encoder in the model generator.
5. An image watermark removal method, the method comprising:
acquiring an image to be processed;
inputting the image to be processed into the trained watermark removal model according to any one of claims 1 to 4, encoding the image to be processed, and outputting image features, wherein the trained watermark removal model is obtained by performing adversarial training according to a sample image data set, and in the adversarial training process, a target loss value corresponding to the watermark removal model is calculated according to an adversarial loss value, a cyclic consistency loss value and an identity reconstruction loss value; the adversarial loss value, the cyclic consistency loss value and the identity reconstruction loss value are obtained by performing style migration calculation on the sample image data set;
and performing watermark removal on the image features through the trained watermark removal model to obtain a clean image corresponding to the image to be processed.
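The two steps of claim 5 reduce to an encode-then-decode pipeline; a sketch with placeholder callables (the toy encoder/decoder below stand in for the patent's actual network layers, which are not specified here):

```python
def remove_watermark(image, encoder, decoder):
    """Encode the input image to features, then decode to a clean image."""
    image_features = encoder(image)   # step 1: encoding outputs image features
    return decoder(image_features)    # step 2: watermark removal on the features

# Toy stand-ins: the "encoder" doubles pixel values; the "decoder" halves them
# and clamps a simulated watermark component (negative values) to zero.
toy_encoder = lambda img: [2.0 * p for p in img]
toy_decoder = lambda feats: [f / 2.0 if f >= 0 else 0.0 for f in feats]
```

In the claimed design the encoder is the one shared between generator and discriminator during training, and the decoder is the generator's decoder whose stored generation parameters were kept after training.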
6. A watermark removal model training apparatus, the apparatus comprising:
The sample acquisition module is used for acquiring a sample image data set;
The style migration module is used for extracting a watermark image and a clean image corresponding to the watermark image from the sample image data set; inputting the watermark image and the clean image into corresponding generators of a watermark removal model to be trained for three style migrations to obtain a watermark-free image corresponding to the watermark image, a watermarked image corresponding to the clean image, a first watermark added image corresponding to the watermark-free image, a first watermark removed image corresponding to the watermarked image, a second watermark added image corresponding to the watermark image, and a second watermark removed image corresponding to the clean image; generating a style migration result according to the watermark-free image, the watermarked image, the first watermark added image, the first watermark removed image, the second watermark added image and the second watermark removed image; and identifying a target image in the style migration result according to the watermark image and the clean image to obtain an image recognition result;
The loss calculation module is used for calculating an adversarial loss value, a cyclic consistency loss value and an identity reconstruction loss value corresponding to the watermark removal model according to the style migration result, the image recognition result, the watermark image and the clean image; and calculating a target loss value corresponding to the watermark removal model according to the adversarial loss value, the cyclic consistency loss value and the identity reconstruction loss value;
And the adversarial training module is used for performing adversarial training on the watermark removal model according to the target loss value until a preset condition is reached, stopping model training, and obtaining the trained watermark removal model.
7. The apparatus of claim 6, wherein the generator of the watermark removal model to be trained comprises a first generator and a second generator, and wherein the style migration module is further configured to input the watermark image into the first generator of the watermark removal model to be trained for a first style migration and input the clean image into the second generator of the watermark removal model to be trained for the first style migration, output a watermark-free image corresponding to the watermark image through the first generator, and output a watermarked image corresponding to the clean image through the second generator; input the watermark-free image into the second generator for a second style migration and input the watermarked image into the first generator for the second style migration, output a first watermark added image corresponding to the watermark-free image through the second generator, and output a first watermark removed image corresponding to the watermarked image through the first generator; and input the clean image into the first generator for a third style migration and input the watermark image into the second generator for the third style migration, output a second watermark removed image corresponding to the clean image through the first generator, and output a second watermark added image corresponding to the watermark image through the second generator.
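The three migrations of claim 7 produce six images from one generator pair. A data-flow sketch with toy generators; the arithmetic stand-ins (subtract 1 to "remove" a watermark, add 1 to "add" one) are assumptions used only to make the routing concrete:

```python
def three_style_migrations(watermark_img, clean_img, g_remove, g_add):
    """Run the first, second and third style migrations of the claim.

    g_remove plays the first (watermark-removing) generator, g_add the
    second (watermark-adding) generator.
    """
    # First migration: cross-domain translation.
    watermark_free = g_remove(watermark_img)
    watermarked = g_add(clean_img)
    # Second migration: translate back (feeds the cyclic consistency loss).
    first_added = g_add(watermark_free)      # should reconstruct watermark_img
    first_removed = g_remove(watermarked)    # should reconstruct clean_img
    # Third migration: same-domain pass (feeds the identity reconstruction loss).
    second_removed = g_remove(clean_img)     # should leave clean_img unchanged
    second_added = g_add(watermark_img)      # should leave watermark_img unchanged
    return (watermark_free, watermarked, first_added,
            first_removed, second_added, second_removed)

result = three_style_migrations(5, 0, lambda x: x - 1, lambda x: x + 1)
```

With real trained generators the third migration would approximately reproduce its inputs; the arithmetic stand-ins here deliberately do not, since they only illustrate which image is fed to which generator at each stage.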
8. An image watermark removal apparatus, the apparatus comprising:
the image acquisition module is used for acquiring an image to be processed;
An image coding module, configured to input the image to be processed into the trained watermark removal model according to any one of claims 1 to 4, encode the image to be processed, and output image features, wherein the trained watermark removal model is obtained by performing adversarial training according to a sample image data set, and in the adversarial training process, a target loss value corresponding to the watermark removal model is calculated according to an adversarial loss value, a cyclic consistency loss value and an identity reconstruction loss value; the adversarial loss value, the cyclic consistency loss value and the identity reconstruction loss value are obtained by performing style migration calculation on the sample image data set;
and the watermark removing module is used for removing the watermark from the image characteristics through the trained watermark removing model to obtain a clean image corresponding to the image to be processed.
9. A computer device comprising a memory and a processor, the memory storing a computer program executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 5 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 5.
CN202011238056.2A 2020-11-09 2020-11-09 Watermark removal model training method, device, computer equipment and storage medium Active CN112330522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011238056.2A CN112330522B (en) 2020-11-09 2020-11-09 Watermark removal model training method, device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112330522A CN112330522A (en) 2021-02-05
CN112330522B true CN112330522B (en) 2024-06-04

Family

ID=74316929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011238056.2A Active CN112330522B (en) 2020-11-09 2020-11-09 Watermark removal model training method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112330522B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950458B (en) * 2021-03-19 2022-06-21 润联软件***(深圳)有限公司 Image seal removing method and device based on countermeasure generation network and related equipment
CN113052068B (en) * 2021-03-24 2024-04-30 深圳威富云数科技有限公司 Image processing method, device, computer equipment and storage medium
CN113822976A (en) * 2021-06-08 2021-12-21 腾讯科技(深圳)有限公司 Training method and device of generator, storage medium and electronic device
CN113379585B (en) * 2021-06-23 2022-05-27 景德镇陶瓷大学 Ceramic watermark model training method and embedding method for frameless positioning
CN113591856A (en) * 2021-08-23 2021-11-02 中国银行股份有限公司 Bill picture processing method and device
CN113781352A (en) * 2021-09-16 2021-12-10 科大讯飞股份有限公司 Light removal method and device, electronic equipment and storage medium
CN113793258A (en) * 2021-09-18 2021-12-14 超级视线科技有限公司 Privacy protection method and device for monitoring video image
CN117034219A (en) * 2022-09-09 2023-11-10 腾讯科技(深圳)有限公司 Data processing method, device, equipment and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107993190A (en) * 2017-11-14 2018-05-04 中国科学院自动化研究所 Image watermark removal device
CN110599387A (en) * 2019-08-08 2019-12-20 北京邮电大学 Method and device for automatically removing image watermark
CN110796583A (en) * 2019-10-25 2020-02-14 南京航空航天大学 Stylized visible watermark adding method
CN111105336A (en) * 2019-12-04 2020-05-05 山东浪潮人工智能研究院有限公司 Image watermarking removing method based on countermeasure network
CN111696046A (en) * 2019-03-13 2020-09-22 北京奇虎科技有限公司 Watermark removing method and device based on generating type countermeasure network
CN111753908A (en) * 2020-06-24 2020-10-09 北京百度网讯科技有限公司 Image classification method and device and style migration model training method and device
CN111862274A (en) * 2020-07-21 2020-10-30 有半岛(北京)信息科技有限公司 Training method for generating confrontation network, and image style migration method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7139444B2 (en) * 2002-01-04 2006-11-21 Autodesk, Inc. Method for applying a digital watermark to an output image from a computer program


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Image style transfer based on generative adversarial networks; Xu Zhehao; Chen Wei; Software Guide (06); full text *


Similar Documents

Publication Publication Date Title
CN112330522B (en) Watermark removal model training method, device, computer equipment and storage medium
Li et al. No-reference and robust image sharpness evaluation based on multiscale spatial and spectral features
CN111179177B (en) Image reconstruction model training method, image reconstruction method, device and medium
CN110148085B (en) Face image super-resolution reconstruction method and computer readable storage medium
CN112598579A (en) Image super-resolution method and device for monitoring scene and storage medium
CN111160313B (en) Face representation attack detection method based on LBP-VAE anomaly detection model
Shi et al. Steganalysis versus splicing detection
US20230076017A1 (en) Method for training neural network by using de-identified image and server providing same
CN112884758B (en) Defect insulator sample generation method and system based on style migration method
CN117095019B (en) Image segmentation method and related device
CN114140831B (en) Human body posture estimation method and device, electronic equipment and storage medium
CN117079083A (en) Image restoration model training method and device, electronic equipment and storage medium
CN117557689B (en) Image processing method, device, electronic equipment and storage medium
CN114694074A (en) Method, device and storage medium for generating video by using image
CN114494387A (en) Data set network generation model and fog map generation method
CN117496338A (en) Method, equipment and system for defending website picture tampering and picture immediate transmission
WO2021124324A1 (en) System and method for reconstruction of faces from anonymized media using neural network based steganography
CN109345440B (en) Digital media watermark processing method, computer device and storage medium
CN115761837A (en) Face recognition quality detection method, system, device and medium
Li et al. An image watermark removal method for secure internet of things applications based on federated learning
CN114694065A (en) Video processing method, device, computer equipment and storage medium
CN116264606A (en) Method, apparatus and computer program product for processing video
Sedghi et al. Low-dimensional decomposition of manifolds in presence of outliers
CN114727113B (en) Method and device for robust video watermarking in real-time scene
CN116912345B (en) Portrait cartoon processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant