CN115348360B - GAN-based self-adaptive embedded digital tag information hiding method


Info

Publication number
CN115348360B
CN115348360B
Authority
CN
China
Prior art keywords: image, target, sample, inputting, layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210963950.9A
Other languages
Chinese (zh)
Other versions
CN115348360A (en)
Inventor
刘圣龙
张舸
赵涛
吕艳丽
彭潇
周鑫
江伊雯
王迪
李云昭
姜嘉伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Big Data Center Of State Grid Corp Of China
Original Assignee
Big Data Center Of State Grid Corp Of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Big Data Center Of State Grid Corp Of China filed Critical Big Data Center Of State Grid Corp Of China
Priority to CN202210963950.9A
Publication of CN115348360A
Application granted
Publication of CN115348360B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32267Methods relating to embedding, encoding, decoding, detection or retrieval operations combined with processing of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60Protecting data
    • G06F21/602Providing cryptographic facilities or services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441Countermeasures against malicious traffic
    • H04N1/32272Encryption or ciphering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioethics (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The invention discloses a GAN-based self-adaptive embedded digital tag information hiding method. The method comprises the following steps: acquiring an initial carrier image, a target key and initial digital label information; performing enhancement processing on the initial carrier image to obtain an enhanced image; determining a modification probability image according to the enhanced image; and determining a secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital label information. The method and the device solve the problem that the initial carrier image may be maliciously tampered with by an attacker, and improve security.

Description

GAN-based self-adaptive embedded digital tag information hiding method
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a GAN-based self-adaptive embedded digital tag information hiding method.
Background
With the development of information technology and the advent of the big data age, the initial carrier image is easily copied and modified, so it faces the potential threat of being maliciously tampered with by an attacker, which in turn poses greater risks to user privacy and system security.
Disclosure of Invention
The embodiment of the invention provides a GAN-based self-adaptive embedded digital tag information hiding method, which aims to solve the problem that an initial carrier image may be maliciously tampered with by an attacker, and to improve security.
According to an aspect of the present invention, there is provided a GAN-based adaptive embedded digital tag information hiding method, including:
acquiring an initial carrier image, a target key and initial digital label information;
performing enhancement processing on the initial carrier image to obtain an enhanced image;
determining a modification probability image according to the enhanced image;
and determining a secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital label information.
According to another aspect of the present invention, there is provided a GAN-based adaptive embedded digital tag information hiding apparatus including:
the acquisition module is used for acquiring the initial carrier image, the target key and the initial digital label information;
the processing module is used for performing enhancement processing on the initial carrier image to obtain an enhanced image;
the determining module is used for determining a modification probability image according to the enhanced image;
and the encryption module is used for determining a secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital label information.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the GAN-based adaptive embedded digital tag information hiding method of any embodiment of the present invention.
According to another aspect of the present invention, there is provided a computer readable storage medium storing computer instructions for causing a processor to implement the GAN-based adaptive embedded digital tag information hiding method according to any embodiment of the present invention when executed.
The embodiment of the invention obtains the initial carrier image, the target key and the initial digital label information; performs enhancement processing on the initial carrier image to obtain an enhanced image; determines a modification probability image according to the enhanced image; and determines a secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital label information, thereby solving the problem that the initial carrier image may be maliciously tampered with by an attacker and improving security.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; other related drawings may be obtained from these drawings without inventive effort by a person skilled in the art.
FIG. 1 is a flow chart of a GAN-based adaptive embedded digital tag information hiding method in an embodiment of the present invention;
FIG. 2 is a GAN-based adaptive embedded digital tag information hiding framework diagram in an embodiment of the present invention;
FIG. 3 is a diagram of an enhancer model framework in an embodiment of the invention;
FIG. 4 is a diagram of a hidden analyzer sub-model framework in an embodiment of the invention;
FIG. 5 is a diagram of a generator model framework in an embodiment of the invention;
FIG. 6 is a SE layer frame diagram in an embodiment of the invention;
FIG. 7 is a diagram of an embedder model framework in an embodiment of the invention;
FIG. 8 is a diagram of an extractor model framework in an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a GAN-based adaptive embedded digital tag information hiding device according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device in an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention are clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 1 is a flowchart of a GAN-based adaptive embedded digital tag information hiding method according to an embodiment of the present invention. The method may be performed by the GAN-based adaptive embedded digital tag information hiding device of an embodiment of the present invention, and the device may be implemented in software and/or hardware. As shown in fig. 1, the method specifically includes the following steps:
s110, acquiring an initial carrier image, a target key and initial digital label information.
The initial carrier image is the image to be encrypted. The target key is used for encrypting the initial carrier image and also for extracting the digital label information; it may be preset or randomly generated, and the embodiment of the present invention does not limit the manner in which the target key is acquired.
The initial digital label information may be preset or randomly generated, which is not limited in the embodiment of the present invention.
S120, performing enhancement processing on the initial carrier image to obtain an enhanced image.
Specifically, the enhancement processing of the initial carrier image to obtain the enhanced image may be performed by inputting the initial carrier image into an enhancer model to obtain the enhanced image. The enhancer model may be generated as follows: obtaining a target sample set, wherein the target sample set comprises initial carrier image samples; randomly generating a noise image; generating a noisy image sample from an initial carrier image sample and the noise image; inputting the noisy image sample into a hidden analyzer sub-model to obtain an adversarial gradient image sample; generating an anti-noise image sample according to the adversarial gradient image sample and a first target parameter; generating an enhanced image sample from the anti-noise image sample and the initial carrier image sample; determining a secret-containing image sample from the enhanced image sample; inputting the secret-containing image sample into an analyzer network to obtain an analysis error rate; and adjusting the first target parameter according to the analysis error rate until the analysis error rate is greater than a set threshold value, then generating the enhancer model according to the first target parameter and the hidden analyzer sub-model.
S130, determining a modification probability image according to the enhanced image.
Specifically, the modification probability image may be determined from the enhanced image by inputting the enhanced image into a generator model to obtain the modification probability image. The modification probability image may also be determined from the enhanced image as follows: inputting the enhanced image into an SE layer to obtain a target feature image; inputting the target feature image into three groups of first target layers to obtain a first contracted image, wherein the first target layers comprise a Conv-BN-LeakyReLU layer and an SE layer; inputting the first contracted image into four groups of Conv-BN-LeakyReLU layers to obtain a target contracted feature map, wherein the number of channels of the target contracted feature map is a preset value and its size is a set value; inputting the target contracted feature map into seven groups of second target layers to obtain a first expanded image, wherein the second target layers comprise a Deconv-BN-LeakyReLU layer and a connection layer; and adjusting the modification probability values of the first expanded image to a preset value to obtain the modification probability image.
S140, determining a secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital label information.
Specifically, the secret-containing image may be determined from the enhanced image, the modification probability image, the target key and the initial digital label information by inputting them into an embedder model to obtain the secret-containing image. The secret-containing image may also be determined as follows: inputting an initial digital label information sample and a modification probability image sample into a first objective function to obtain a modified image sample; determining a secret-containing image sample from the modified image sample and the enhanced image; inputting the secret-containing image sample into an analyzer network to obtain an analysis error rate; adjusting a second target parameter according to the analysis error rate until the analysis error rate is greater than a set threshold value, then generating the embedder model according to the second target parameter and the first objective function; and inputting the enhanced image, the modification probability image, the target key and the initial digital label information into the embedder model to obtain the secret-containing image.
In a specific example, as shown in fig. 2, the initial carrier image is input into an enhancer model to obtain an enhanced image, and the enhanced image is input into a generator model to obtain a modification probability image; then the enhanced image, the modification probability image, the target key and the initial digital label information are input into an embedder model to obtain a secret-containing image.
Optionally, performing enhancement processing on the initial carrier image to obtain an enhanced image, including:
obtaining a target sample set, wherein the target sample set comprises: an initial carrier image sample;
randomly generating a noise image;
generating a noisy image sample from the initial carrier image sample and the noisy image;
inputting the noisy image sample into a hidden analyzer sub-model to obtain an adversarial gradient image sample;
generating an anti-noise image sample according to the adversarial gradient image sample and a first target parameter;
generating an enhanced image sample from the anti-noise image sample and an initial carrier image sample;
determining a secret-containing image sample from the enhanced image sample;
inputting the secret-containing image sample into an analyzer network to obtain an analysis error rate;
adjusting the first target parameter according to the analysis error rate until the analysis error rate is greater than a set threshold value, and generating an enhancer model according to the first target parameter and the hidden analyzer sub-model;
inputting the initial carrier image into the enhancer model to obtain the enhanced image.
Specifically, as shown in FIG. 3, the enhancer model contains a hidden-analysis sub-model. The input of the enhancer model is the initial carrier image I_co and its output is the enhanced image I_str. The framework contains a loop structure that continuously generates adversarial samples capable of interfering with the classification result of the hidden-analysis sub-model, so that the carrier image's ability to resist hidden analysis improves over successive loop superpositions.

The initial carrier image I_co is input into the enhancer network, which generates a random noise image of the same size and superimposes the two to obtain a noise-containing image; this process simulates the embedding of digital label information. The noise-containing image is then input into the pre-trained hidden-analysis sub-model to obtain an adversarial gradient image η_ad1, which is multiplied by the coefficient ε to obtain an anti-noise image ε·η_ad1. Superimposing the anti-noise image onto the initial carrier image I_co gives the enhanced image output by the first loop, I_str1. The calculation formula is as follows:

I_str1 = I_co + ε·η_ad1

where ε is the epsilon coefficient and η_ad1 is the adversarial gradient image.

This process is cycled to continuously generate enhanced images until the n-th loop generates I_strn, at which point a performance test is carried out. The performance test is an actual embedding test: with I_strn as the carrier image, k groups of randomly generated digital label information are hidden in it, the resulting secret-containing images are input into the hidden analyzer, and the performance test is passed if the analysis error rate exceeds the set value E; otherwise the loop runs again. Once the set performance test result is met, the enhanced image I_strn at that moment is output by the enhancer network as I_str.
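A minimal PyTorch-style sketch of this enhancement cycle is given below. It assumes the hidden-analysis sub-model is a differentiable module returning a per-image "contains hidden information" score; the function names, the ε value, the fixed cycle count standing in for the k-group performance test, and the use of the raw input gradient are illustrative assumptions rather than the patent's exact procedure.

```python
import torch

def enhance_carrier(i_co, analyzer, epsilon=0.05, max_cycles=20):
    """Enhancer loop sketch: I_str(n) = I_str(n-1) + eps * eta_ad(n)."""
    i_str = i_co.clone()
    for _ in range(max_cycles):
        noise = torch.rand_like(i_str)                        # random noise image of the same size
        i_no = (i_str + noise).detach().requires_grad_(True)  # noise-containing image (simulated embedding)
        analyzer(i_no).sum().backward()                       # "stego" score of the noisy image
        eta_ad = i_no.grad                                    # adversarial gradient image
        i_str = (i_str + epsilon * eta_ad).detach()           # superimpose the anti-noise image
    return i_str                                              # enhanced image I_str
```

In a full implementation the loop would terminate early once k randomly generated digital labels embedded into i_str drive the hidden analyzer's error rate above the set threshold E, as described above.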
Optionally, inputting the noisy image sample into a hidden analyzer sub-model to obtain an adversarial gradient image sample, comprising:
inputting the noisy image sample into a residual layer to obtain a first residual characteristic image sample;
inputting the first residual characteristic image sample into a quantization and truncation layer to obtain a second residual characteristic image sample;
inputting the second residual characteristic image sample into a co-occurrence matrix layer to obtain a co-occurrence matrix sample;
and determining an adversarial gradient image sample according to the co-occurrence matrix sample.
Specifically, as shown in fig. 4, the hidden analysis sub-model includes: a residual layer, a quantization and truncation layer, a co-occurrence matrix layer, Conv-ReLU-BN layers, a max-pooling layer and a fully connected layer.
The noisy image is input into the residual layer to obtain the residual feature map I_re1. The calculation formula is as follows:

R_ij = X̂_ij(N_ij) − c·X_ij

where R_ij is the pixel value of the pixel in the residual feature image, X_ij is the pixel value of the pixel in the noisy image, N_ij is the neighborhood of X_ij, X̂_ij(N_ij) is the pixel value predicted from N_ij, and c is the residual order. The residual feature map reflects the regions with weak correlation between pixels.

Next, I_re1 is input into the quantization and truncation layer to obtain the updated I_re2. Quantization and truncation confine I_re1 to a smaller range; such regions are more likely to be key hiding regions, which reduces the burden of subsequent computation while also reducing the dimensionality of I_re1. Finally, I_re2 is input into the co-occurrence matrix layer to generate the feature co-occurrence matrix M_co, in which the features are presented in the form of a fourth-order joint distribution. After M_co is obtained, it is input into the hidden classification unit, and the adversarial gradient image is obtained through the Conv-ReLU-BN layers, the max-pooling layer and the fully connected layer.
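For illustration, a small sketch of the residual and quantization-and-truncation steps follows; the horizontal first-order difference stands in for the neighborhood-prediction residual above, and the quantization step q and truncation threshold t are assumed values in the style of SRM steganalysis.

```python
import torch

def first_order_residual(x):
    """Horizontal first-order residual, R[i, j] = X[i, j+1] - X[i, j]:
    a simple instance of predicting a pixel from its neighborhood."""
    return x[..., :, 1:] - x[..., :, :-1]

def quantize_truncate(r, q=1.0, t=2):
    """Quantization and truncation: quantize with step q, then clip to
    [-t, t], confining the residual to a small range and keeping the
    subsequent co-occurrence matrix small."""
    return torch.clamp(torch.round(r / q), -t, t)
```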
Optionally, determining a modification probability image according to the enhanced image includes:
inputting the enhanced image into an SE layer to obtain a target feature image;
inputting the target feature images into three groups of first target layers to obtain first contracted images, wherein the first target layers comprise: conv-BN-LeakyReLU and SE layers;
Inputting the first shrinkage image into four groups of Conv-BN-LeakyReLU layers to obtain a target shrinkage feature map, wherein the number of channels of the target shrinkage feature map is a preset value, and the size of the target shrinkage feature map is a set value;
inputting the target shrinkage feature map into seven groups of second target layers to obtain a first expansion image, wherein the second target layers comprise: a Deconv-BN-LeakyReLU layer and a connection layer;
and adjusting the modification probability value of the first expansion image to a preset value to obtain a modification probability image.
Wherein, the preset value may be any value between [0,0.5], which is not limited in the embodiment of the present invention.
Specifically, as shown in FIG. 5, I_pr1 is the input. First a squeeze operation, i.e. global average pooling, is performed: the feature map is squeezed so that each of the 16 channels outputs one mean real number, integrating global features onto a new feature map. Next a fully connected layer (hereinafter FC layer) is input; together with the ReLU layer and the following FC layer it forms a parameterized gating mechanism whose output feature map still has 16 channels, the aim being to extract the correlation between the channels of the feature map and to learn the attention factor of each channel. The feature map is then activated with a Sigmoid activation function, which makes it sufficiently nonlinear. The output feature map I_se1 is then multiplied by the I_pr1 input to the module to obtain I_se2 of the same size; that is, the output feature map of the SE module is regarded as the importance weight of each channel and is weighted onto the initial feature map to highlight important features. Finally, the weighted feature map I_se2 and I_pr1 are added to obtain the module output I_pr2. The SE layer adjusts the weights of the feature map, amplifying important features and reducing the influence of unimportant ones; it introduces an attention mechanism into the generator and focuses computation on the important features.

Then I_pr3-I_pr8 are obtained through three further Conv-BN-LeakyReLU layers and three SE layers, with sizes as shown in Table 1, and are input into four consecutive groups of Conv-BN-LeakyReLU to obtain I_pr9-I_pr12; the number of channels of the feature map is increased to 256 and its size is reduced to 1×1, ending the contraction phase. The contraction stage performs feature extraction on I_pr1, changing the number of channels step by step from 1 to 256, minimizing the loss of features; the attention mechanism introduced by the SE layers highlights the important features and provides support for finally generating an effective modification probability map.

In the expansion network, I_pr12 is input into Deconv (deconvolution layer)-BN-LeakyReLU, outputting the feature map I_pr13; at this point expansion of the features extracted by the contraction network begins. To prevent the loss of shallow features during expansion, a connection layer is then introduced: I_pr11 and I_pr13 are connected along the feature-channel dimension to obtain the feature map I_pr14, realizing feature sharing between the contraction and expansion networks and guaranteeing the integrity of the features. I_pr15-I_pr26 are then obtained through six groups of Deconv-BN-LeakyReLU and connection operations, whose Deconv parameters and connected layers are shown in Table 1. Finally, the result is input into the last group of Deconv-BN-LeakyReLU operations to obtain I_pr27; to ensure that as few pixels as possible are changed, the final ReLU-Sigmoid-0.5 layer adjusts the modification probability values into [0, 0.5], giving the modification probability map I_pr. The expansion network maps the extracted features step by step into the modification probability map, preventing feature loss to the greatest extent.

Through this contraction and expansion process, the generator model can effectively extract the features of the enhanced image I_str and convert them into I_pr, preparing for the embedding of digital label information within the embedder network.
In a specific example, as shown in FIG. 5, the generator model comprises a 28-layer structure. The enhanced image of size 1×256×256 is input into the Conv-BN-LeakyReLU layer 401 to obtain a feature image Ipr1 of size 16×128×128: the Conv operation widens the number of channels to 16, which helps extract more features; the downsampling operation shrinks the size to 128×128, concentrating the image features; the BN layer prevents the gradient from vanishing and accelerates training; and the LeakyReLU increases nonlinearity. The SE layer 405, i.e. the squeeze-and-excitation layer, is then applied to obtain Ipr2; three further groups of Conv-BN-LeakyReLU 401 and SE layer 405 yield Ipr3-Ipr8; and four consecutive groups of Conv-BN-LeakyReLU 401 yield Ipr9-Ipr12, where Ipr12 has size 256×1×1. Since the number of channels of the feature map has already been expanded to 256 and its size reduced to 1×1, and since the computational load of a larger channel count would far outweigh the performance improvement, the maximum number of channels is set to 256; likewise, because the channel count is no longer expanded, no further SE layer needs to be added to adjust the feature weights, and the contraction stage ends. The contraction stage performs feature extraction on Ipr1, changing the number of channels step by step from 1 to 256, minimizing the loss of features; the attention mechanism of the SE layer 405 highlights important features and provides support for finally generating an effective modification probability map.

Ipr12 is input into Deconv-BN-LeakyReLU 403, which outputs a feature map Ipr13 of size 256×2×2; expansion of the features extracted by the contraction network now begins step by step. To prevent the loss of shallow features during expansion, a connection layer 404 is introduced: Ipr11 and Ipr13 are connected along the feature-channel dimension to obtain a feature map Ipr14 of size 512×2×2, realizing feature sharing between the contraction and expansion networks and guaranteeing the integrity of the features. Six groups of Deconv-BN-LeakyReLU 403 and connection layers 404 then yield Ipr15-Ipr26. Finally, the result is input into the last group of Deconv-BN-LeakyReLU 403 to obtain Ipr27 of size 1×256×256; to ensure that the fewest possible pixels are changed, the final ReLU-Sigmoid-0.5 layer 402 adjusts the modification probability values into [0, 0.5], giving the modification probability map Ipr 406 of size 1×256×256. The expansion network maps the extracted features step by step into the modification probability map, preventing feature loss to the greatest extent.
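The exact composition of the ReLU-Sigmoid-0.5 layer is not reproduced in this text; the sketch below shows one construction, assumed here, that yields the described [0, 0.5] output range.

```python
import torch
import torch.nn as nn

class ProbabilityHead(nn.Module):
    """Assumed form of the final ReLU-Sigmoid-0.5 layer: maps the last
    feature map to modification probabilities in [0, 0.5]."""
    def forward(self, x):
        # sigmoid gives (0, 1); shifting by 0.5 and rectifying gives [0, 0.5)
        return torch.relu(torch.sigmoid(x) - 0.5)
```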
The network parameters for each layer are shown in Table 1 (table not reproduced in this text).
Optionally, inputting the enhanced image into the SE layer to obtain a target feature image, including:
performing a squeeze operation on the enhanced image to obtain a squeezed image;
acquiring a first feature image corresponding to the squeezed image;
activating the first feature image to obtain an activated first feature image;
determining a second feature image according to the activated first feature image and the enhanced image;
and determining a target feature image according to the second feature image and the enhanced image.
Specifically, as shown in fig. 6, the enhanced image is input into the GAP layer to obtain a squeezed image; the squeezed image is input into the FC layer, the ReLU layer and the FC layer to obtain a first feature image; activation based on a Sigmoid function gives the activated first feature image Ise1; a second feature image Ise2 is determined from the activated first feature image Ise1 and the enhanced image; and the target feature image Ipr2 is determined from Ise2 and the enhanced image.
Specifically, the SE (Squeeze-and-Excitation) layer works as follows. Ipr1 is input and a squeeze operation, i.e. global average pooling, is first performed: the feature map is squeezed from 16×128×128 to 16×1×1, each of the 16 channels outputting one mean real number, so that global features are integrated onto a new feature map. The fully connected layer is then input; together with the ReLU layer and the subsequent fully connected layer it forms a parameterized gating mechanism whose output feature map is still 16×1×1, the aim being to extract the correlation between the channels of the feature map and to learn the attention factor of each channel. The feature map is then activated using a Sigmoid activation function, which makes it sufficiently nonlinear. The output feature map Ise1 of size 16×1×1 is multiplied with the Ipr1 input to the module to obtain Ise2 of size 16×128×128; that is, the output feature map of the SE module is regarded as the importance weight of each channel and is weighted onto the initial feature map to highlight important features. Finally, the weighted feature map Ise2 and Ipr1 are added to obtain the module output Ipr2 of size 16×128×128. The SE layer adjusts the weights of the feature map, amplifying important features and reducing the influence of unimportant ones; it introduces an attention mechanism into the generator, concentrates computation on the important features, simplifies the training process and helps improve training efficiency.
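A self-contained PyTorch sketch of this SE layer follows. The 16-channel width matches the description above; the reduction ratio of the gating FC layers is an assumption.

```python
import torch
import torch.nn as nn

class SELayer(nn.Module):
    """SE layer as described: squeeze (GAP), FC-ReLU-FC gating, Sigmoid,
    channel re-weighting, then residual addition (Ipr2 = Ipr1 + Ise2)."""
    def __init__(self, channels=16, reduction=4):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)          # squeeze: one mean per channel
        self.fc = nn.Sequential(                    # parameterized gating mechanism
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                           # attention factor per channel
        )

    def forward(self, ipr1):
        b, c, _, _ = ipr1.shape
        ise1 = self.fc(self.gap(ipr1).view(b, c)).view(b, c, 1, 1)
        ise2 = ipr1 * ise1                          # weight the initial feature map
        return ipr1 + ise2                          # module output Ipr2
```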
Optionally, determining the secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital label information includes:
inputting the initial digital label information sample and the modification probability image sample into a first objective function to obtain a modified image sample;
determining a secret-containing image sample from the modified image sample and the enhanced image;
inputting the secret-containing image sample into an analyzer network to obtain an analysis error rate;
adjusting a second target parameter according to the analysis error rate until the analysis error rate is greater than a set threshold value, and generating an embedder model according to the second target parameter and the first objective function;
and inputting the enhanced image, the modification probability image, the target key and the initial digital label information into an embedder model to obtain a secret-containing image.
The first objective function has the following form (the formula image is not reproduced in this text): using the hyperbolic tangent function tanh parameterized by the second target parameter β, it maps the digital label information image I_da and the modification probability image I_pr to the modified image I_mo.
Specifically, as shown in FIG. 7, when training the embedder model, initial digital label information D_or following a uniform distribution on (0, 1) is randomly generated by the "np.random.rand" function; through training, the best performance of the system under randomly generated digital label information images can be obtained. In the actual transmission stage, the user inputs the digital label information to be transmitted, and it is embedded through network encoding.
The digital label information image I_da and the modification probability image I_pr are input together into the first objective function to obtain the modified image I_mo. Finally, the obtained modified image I_mo is superimposed on the input enhanced image I_str, giving a secret-containing image of size D×W×H; that is, pixel-level modification of the enhanced image I_str is carried out, realizing the embedding of the digital label information in the training stage.
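The patent's first objective function is not reproduced in this text. The sketch below uses the standard tanh-smoothed staircase embedding simulator from the adaptive-steganography literature, which is consistent with the description (a hyperbolic tangent parameterized by the second target parameter β); the exact form is an assumption.

```python
import torch

def simulate_modification(i_pr, beta=60.0):
    """Differentiable embedding simulation: with n ~ U(0,1) per pixel and
    modification probability p, output is close to -1 if n < p/2, +1 if
    n > 1 - p/2, and 0 otherwise."""
    n = torch.rand_like(i_pr)
    i_mo = (-0.5 * torch.tanh(beta * (i_pr - 2.0 * n))
            + 0.5 * torch.tanh(beta * (i_pr - 2.0 * (1.0 - n))))
    return i_mo  # modified image I_mo; the stego sample is I_str + I_mo
```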
Optionally, inputting the enhanced image, the modification probability image, the target key and the initial digital label information into an embedder model to obtain a secret-containing image, including:
inputting the initial digital label information into a format conversion layer to obtain a digital label information vector;
and inputting the digital tag information vector, the enhanced image, the modification probability image and the target key into an encoder to obtain a secret-containing image.
Specifically, as shown in fig. 7, after training is completed, the actual embedding stage is entered. The initial digital label information D_or in text format is first input into the format conversion layer, which converts the text into bytes and then into bits, generating the binary digital label information vector V_da. The embedder network selectively embeds the digital label information using the STC encoder: first, the distortion cost image I_cos is calculated from I_pr according to the cost formula (not reproduced in this text).

Then the shared key K, V_da and I_str are input together into the STC encoder to embed the digital label information, thereby obtaining the secret-containing image I_ste and completing the actual embedding of the digital label information. Because the actual transmission process requires a complete end-to-end procedure of embedding and extracting the digital label information, the STC encoder is chosen for embedding: the modification probability map, the shared key, the binary digital label information vector and the enhanced image are input together into the STC encoder, which embeds the digital label information and yields the secret-containing image, completing the actual embedding process of the digital label information.
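The distortion-cost formula is likewise not reproduced in this text. A common probability-to-cost map used with STC embedding is ρ = ln(1/p − 1), sketched below as an assumption.

```python
import torch

def distortion_cost(i_pr, eps=1e-8):
    """Assumed cost computation for the STC encoder: rho = ln(1/p - 1),
    so pixels with a higher modification probability receive a lower
    embedding cost."""
    p = i_pr.clamp(eps, 0.5 - eps)   # modification probabilities in (0, 0.5)
    return torch.log(1.0 / p - 1.0)  # distortion cost image I_cos
```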
Optionally, after determining the secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital label information, the method further comprises:
and determining target digital label information according to the secret-containing image and the target key.
Optionally, determining target digital label information according to the secret-containing image and the target key includes:
inputting the secret-containing image and the target key into a decoder to obtain a digital label information vector;
and inputting the digital label information vector into a format conversion layer to obtain target digital label information.
Specifically, the extractor model includes an STC decoder and a format conversion layer. The secret-containing image I_ste is input into the STC decoder together with the shared key K, and the decoder extracts the digital label information vector V'_da. V'_da is then input into the format conversion layer, which converts its bit data into bytes and then into text, finally yielding the extracted digital label information D_ex.
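The extractor's format conversion layer can be illustrated as follows; `stc_decode` in the usage comment is a hypothetical stand-in for the STC decoder, not a real API.

```python
def bits_to_text(v_da):
    """Format conversion layer: bits -> bytes -> text (D_ex)."""
    usable = len(v_da) - len(v_da) % 8
    data = bytes(
        int("".join(str(b) for b in v_da[i:i + 8]), 2)
        for i in range(0, usable, 8)
    )
    return data.decode("utf-8", errors="replace")

# hypothetical usage:
# v_da = stc_decode(i_ste, key)   # recover the bit vector V'_da
# d_ex = bits_to_text(v_da)       # extracted digital label information D_ex
```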
Optionally, after determining the secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital label information, the method further comprises:
and determining an analysis error rate according to the secret-containing image and the enhanced image.
Specifically, the analysis error rate may be determined by inputting the secret-containing image and the enhanced image into an analyzer model to obtain the analysis error rate.
Optionally, determining an analysis error rate according to the secret-containing image and the enhanced image includes:
determining a first connection image from the secret-containing image and the enhanced image;
inputting the first connection image into a preprocessing convolution layer to obtain a first residual image;
inputting the first residual image into a Conv-ABS-BN-Tanh layer to obtain a second residual image;
inputting the second residual image into a pooling layer to obtain a first residual characteristic image;
inputting the first residual characteristic image into four groups of third target layers to obtain a second residual characteristic image, wherein the third target layers comprise: a Conv-BN-Tanh layer and a pooling layer;
inputting the second residual characteristic image into a Conv-BN-ReLU layer and a pooling layer to obtain a third residual characteristic image;
and inputting the third residual characteristic image into a classification layer to obtain an analysis error rate.
In particular, the analyzer model consists of 15 layers in series. The analyzer network plays the role of an attacker in the network: it performs hidden analysis and judgment on the two images I_str and I_ste transmitted over the public channel, making its classification judgment by extracting the feature values of 256 channels.

The analyzer network has two inputs, the enhanced image I_str and the secret-containing image I_ste, and one output, the analysis category T_ste. The specific network parameters are shown in Table 2:
Table 2 (not reproduced in this text)
The analyzer model first connects its two inputs I_str and I_ste to obtain I_ste1 of larger dimension, and inputs it into the preprocessing convolutional layer. The preprocessing convolutional layer extracts, respectively, the horizontal first-order residual, vertical first-order residual, horizontal second-order residual, vertical second-order residual, SQUARE 3×3 residual and SQUARE 5×5 residual images of I_ste1; their specific values are shown in Table 3:

Table 3 (not reproduced in this text)
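Since Table 3's kernel values are not reproduced in this text, the sketch below fills in commonly used SRM-style residual kernels as assumptions (the SQUARE 5×5 kernel is omitted for brevity); the patent's actual values may differ.

```python
import torch
import torch.nn.functional as F

# Assumed SRM-style residual kernels; the patent's Table 3 values may differ.
KERNELS = [
    [[0, 0, 0], [0, -1, 1], [0, 0, 0]],      # horizontal first-order
    [[0, 0, 0], [0, -1, 0], [0, 1, 0]],      # vertical first-order
    [[0, 0, 0], [1, -2, 1], [0, 0, 0]],      # horizontal second-order
    [[0, 1, 0], [0, -2, 0], [0, 1, 0]],      # vertical second-order
    [[-1, 2, -1], [2, -4, 2], [-1, 2, -1]],  # SQUARE 3x3
]

def preprocess(i_ste1):
    """Fixed-kernel preprocessing convolution: stacks the residual kernels
    into one filter bank, expanding a 1-channel input to one residual
    channel per kernel."""
    w = torch.tensor(KERNELS, dtype=torch.float32).unsqueeze(1)  # (5, 1, 3, 3)
    return F.conv2d(i_ste1, w, padding=1)
```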
These kernels are then connected as one convolution kernel bank, and convolution is performed on I_ste1 to expand the number of channels to 6, giving I_ste2 composed of the 6 residual images, which facilitates further extraction of image features. The preprocessed I_ste2 is input into the first-layer Conv-ABS-BN-Tanh, whose convolution continues to widen the number of image channels to 8. Since the six residual images extracted in preprocessing possess symmetry, the ABS operation can take non-negative values, effectively simplifying the computational burden of the analysis network; BN normalization also improves efficiency and effectively prevents local optimal solutions; and the Tanh activation function makes the residual image I_ste2 more nonlinear, aiding feature extraction; the result is I_ste3. This is fed into an average pooling layer with stride 2, integrating the features extracted by the first layer onto each channel and shrinking the image size, giving the feature map I_ste4. Four combined Conv-BN-Tanh and average-pooling operations are then applied to I_ste4 to obtain I_ste5-I_ste12; the number of channels is gradually increased and the image gradually shrunk, ensuring to the greatest extent the integrity of the extracted features. In the last group of operations, ReLU is chosen as the activation function, and the combined Conv-BN-ReLU and average-pooling operations yield I_ste13 and I_ste14; the average pooling shrinks the 256 feature values onto 256 channels and prepares for the final classification layer. The classification layer consists of a fully connected layer and a Softmax layer: the fully connected layer multiplies the obtained feature values of the 256 channels by their corresponding weights and sums them to obtain a feature map I_ste15 containing 2 node values; finally the Softmax layer converts these into two feature values summing to 1, representing the analysis category T_ste, i.e. the analyzer network's prediction of the probabilities that the input image I_ste belongs to the class containing digital label information or the class not containing it.
In a specific example, as shown in fig. 8, a connection operation 805 is performed on the enhanced image and the secret-containing image to obtain Iste1 of larger dimension; convolution is performed on Iste1 with a convolution kernel 801 of size 6×5×5, expanding the number of channels to 6 and giving Iste2 composed of 6 residual images, which facilitates further extraction of image features. The preprocessed Iste2 is input into the first-layer Conv-ABS-BN-Tanh 803, which widens the number of image channels to 8 by convolution; since the six residual images extracted in preprocessing possess symmetry, the ABS operation can take non-negative values, effectively simplifying the computational burden of the analysis network; BN normalization improves efficiency and effectively prevents local optimal solutions; and the Tanh activation function makes the residual image Iste2 more nonlinear, aiding feature extraction; the result is Iste3. This is fed into an average pooling layer GAP 806 with stride 2, integrating the first layer's features onto each channel and shrinking the image, giving a feature map Iste4 of size 8×128×128. Four combined operations of Conv-BN-Tanh 802 and GAP 806 on Iste4 then yield Iste5-Iste12, gradually increasing the channel count to 128 and shrinking the image to 16×16, ensuring to the greatest extent the integrity of the extracted features. In the last group of operations, ReLU is chosen as the activation function (in a deep convolutional neural network, adopting different activation functions gives the deep features nonlinearity and enriches the activation-function types of the whole analysis network); the combined Conv-BN-ReLU 804 and GAP 806 operations give Iste13 and Iste14, with GAP 806 shrinking the 256 feature values onto 256 channels in preparation for the final classification layer. The classification layer consists of a fully connected layer and a Softmax layer: the fully connected layer multiplies the obtained feature values of the 256 channels by their corresponding weights and sums them to obtain a feature map Iste15 containing 2 node values, and the Softmax layer finally converts these into two feature values summing to 1, representing the analysis category, i.e. the analyzer network's prediction of the probabilities that the input secret-containing image contains or does not contain digital label information.
According to the technical scheme of this embodiment, an initial carrier image, a target key and initial digital label information are acquired; enhancement processing is performed on the initial carrier image to obtain an enhanced image; a modification probability image is determined according to the enhanced image; and a secret-containing image is determined according to the enhanced image, the modification probability image, the target key and the initial digital label information, solving the problem that the initial carrier image may be maliciously tampered with by an attacker and improving security.
Fig. 9 is a schematic structural diagram of a GAN-based adaptive embedded digital tag information hiding device according to an embodiment of the present invention. The embodiment may be applicable to the case of encryption, and the device may be implemented in a software and/or hardware manner, and may be integrated in any device that provides a GAN-based adaptive embedded digital tag information hiding function, as shown in fig. 9, where the GAN-based adaptive embedded digital tag information hiding device specifically includes: the system comprises an acquisition module 210, a processing module 220, a determination module 230 and an encryption module 240.
The acquisition module is used for acquiring the initial carrier image, the target key and the initial digital label information;
The processing module is used for performing enhancement processing on the initial carrier image to obtain an enhanced image;
the determining module is used for determining a modification probability image according to the enhanced image;
and the encryption module is used for determining a secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital label information.
The product can execute the method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
According to the technical scheme, an initial carrier image, a target key and initial digital label information are acquired; enhancement processing is performed on the initial carrier image to obtain an enhanced image; a modification probability image is determined according to the enhanced image; and a secret-containing image is determined according to the enhanced image, the modification probability image, the target key and the initial digital label information, solving the problem that the initial carrier image may be maliciously tampered with by an attacker and improving security.
Fig. 10 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 10, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM12 and the RAM13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the GAN-based adaptive embedded digital tag information hiding method.
In some embodiments, the GAN-based adaptive embedded digital tag information hiding method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM12 and/or the communication unit 19. When the computer program is loaded into RAM13 and executed by processor 11, one or more of the steps of the GAN-based adaptive embedded digital tag information hiding method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the GAN-based adaptive embedded digital tag information hiding method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems On Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), blockchain networks, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in the present invention may be performed in parallel, sequentially, or in a different order; as long as the desired results of the technical solution of the present invention are achieved, no limitation is imposed herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (13)

1. A GAN-based adaptive embedded digital tag information hiding method, characterized by comprising the following steps:
acquiring an initial carrier image, a target key and initial digital tag information;
performing enhancement processing on the initial carrier image to obtain an enhanced image;
determining a modification probability image according to the enhanced image;
determining a secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital tag information;
wherein performing enhancement processing on the initial carrier image to obtain the enhanced image comprises the following steps:
obtaining a target sample set, wherein the target sample set comprises: an initial carrier image sample;
randomly generating a noise image;
generating a noisy image sample according to the initial carrier image sample and the noise image;
inputting the noisy image sample into a steganalyzer sub-model to obtain an adversarial gradient image sample;
generating an adversarial noise image sample according to the adversarial gradient image sample and a first target parameter;
generating an enhanced image sample according to the adversarial noise image sample and the initial carrier image sample;
determining a secret-containing image sample according to the enhanced image sample;
inputting the secret-containing image sample into an analyzer network to obtain an analysis error rate;
adjusting the first target parameter according to the analysis error rate until the analysis error rate is greater than a set threshold, and generating an enhancer model according to the first target parameter and the steganalyzer sub-model; and
inputting the initial carrier image into the enhancer model to obtain the enhanced image.
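By way of illustration only, the enhancement loop above can be read as iterative adversarial perturbation. The following is a minimal PyTorch-style sketch under that reading; the `analyzer` interface (a network returning P(stego) per image), the step size `alpha` standing in for the first target parameter, and the error-rate helper are all assumptions rather than the claimed implementation, and the step of deriving a secret-containing sample before measuring the error rate is folded into a direct stego-versus-cover comparison:

```python
import torch

def error_rate(analyzer, stego, cover):
    """Hypothetical stand-in for the claimed analysis error rate;
    assumes analyzer(x) returns P(stego) in [0, 1] per image."""
    with torch.no_grad():
        miss_stego = (analyzer(stego) < 0.5).float().mean()   # stego read as cover
        miss_cover = (analyzer(cover) >= 0.5).float().mean()  # cover read as stego
        return (0.5 * (miss_stego + miss_cover)).item()

def enhance(carrier, analyzer, alpha=0.02, threshold=0.5, max_steps=10):
    """Adds sign-of-gradient adversarial noise to the carrier until the
    steganalyzer's error rate exceeds the set threshold."""
    enhanced = (carrier + 0.01 * torch.randn_like(carrier)).clamp(0, 1)
    for _ in range(max_steps):
        noisy = enhanced.detach().requires_grad_(True)        # noisy image sample
        score = analyzer(noisy).mean()                        # steganalyzer response
        grad = torch.autograd.grad(score, noisy)[0]           # adversarial gradient image
        adv_noise = -alpha * grad.sign()                      # scaled by the first target parameter
        enhanced = (carrier + adv_noise).clamp(0, 1)          # enhanced image sample
        if error_rate(analyzer, enhanced, carrier) > threshold:
            break
        alpha *= 1.5                                          # adjust the first target parameter
    return enhanced.detach()
```

The loop grows `alpha` until the steganalyzer's error rate crosses the set threshold, mirroring the parameter adjustment recited in the claim.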
2. The method of claim 1, wherein inputting the noisy image sample into the steganalyzer sub-model to obtain the adversarial gradient image sample comprises:
inputting the noisy image sample into a residual layer to obtain a first residual feature image sample;
inputting the first residual feature image sample into a quantization and truncation layer to obtain a second residual feature image sample;
inputting the second residual feature image sample into a co-occurrence matrix layer to obtain a co-occurrence matrix sample; and
determining an adversarial gradient image sample according to the co-occurrence matrix sample.
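These steps resemble SRM-style feature extraction. A sketch under that reading, where the KV high-pass kernel, the truncation threshold `T`, and the single horizontal co-occurrence direction are assumptions (a trained steganalyzer would also need a differentiable surrogate for the rounding before an adversarial gradient could be taken, which this sketch omits):

```python
import torch
import torch.nn.functional as F

def cooccurrence_features(x, T=3):
    """Residual layer -> quantization/truncation layer -> co-occurrence
    matrix, roughly as listed in claim 2; x has shape (N, 1, H, W)."""
    kv = torch.tensor([[-1.,  2.,  -2.,  2., -1.],
                       [ 2., -6.,   8., -6.,  2.],
                       [-2.,  8., -12.,  8., -2.],
                       [ 2., -6.,   8., -6.,  2.],
                       [-1.,  2.,  -2.,  2., -1.]]) / 12.0   # SRM "KV" filter
    residual = F.conv2d(x, kv.view(1, 1, 5, 5), padding=2)   # first residual features
    q = residual.round().clamp(-T, T)                        # second residual features
    left, right = q[..., :-1] + T, q[..., 1:] + T            # horizontal pairs in [0, 2T]
    idx = (left * (2 * T + 1) + right).long().flatten()
    cooc = torch.bincount(idx, minlength=(2 * T + 1) ** 2).float()
    return (cooc / cooc.sum()).view(2 * T + 1, 2 * T + 1)    # co-occurrence matrix
```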
3. The method of claim 1, wherein determining a modification probability image according to the enhanced image comprises:
inputting the enhanced image into an SE layer to obtain a target feature image;
inputting the target feature image into three groups of first target layers to obtain a first contraction image, wherein each first target layer comprises: a Conv-BN-LeakyReLU layer and an SE layer;
inputting the first contraction image into four groups of Conv-BN-LeakyReLU layers to obtain a target contraction feature map, wherein the number of channels of the target contraction feature map is a preset value and its size is a set value;
inputting the target contraction feature map into seven groups of second target layers to obtain a first expansion image, wherein each second target layer comprises: a Deconv-BN-LeakyReLU layer and a connection layer; and
adjusting the modification probability values of the first expansion image to a preset value to obtain the modification probability image.
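A compact sketch of one plausible contract-expand generator matching the layer order above; the channel widths, the LeakyReLU slope, and the probability ceiling `p_max` are illustrative assumptions, and the SE layers inside the first target layers are omitted for brevity (an SE sketch follows claim 4):

```python
import torch
import torch.nn as nn

class ConvBNLReLU(nn.Sequential):
    """One contraction group: stride-2 Conv-BN-LeakyReLU."""
    def __init__(self, cin, cout):
        super().__init__(nn.Conv2d(cin, cout, 3, stride=2, padding=1),
                         nn.BatchNorm2d(cout),
                         nn.LeakyReLU(0.2, inplace=True))

class ProbabilityNet(nn.Module):
    """Contract-expand generator emitting a per-pixel modification
    probability map in [0, p_max]; expects H and W divisible by 8."""
    def __init__(self, p_max=0.5):
        super().__init__()
        self.p_max = p_max
        self.contract = nn.Sequential(ConvBNLReLU(1, 16),
                                      ConvBNLReLU(16, 32),
                                      ConvBNLReLU(32, 64))
        self.expand = nn.Sequential(                           # Deconv-BN-LeakyReLU groups
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.LeakyReLU(0.2, True),
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.LeakyReLU(0.2, True),
            nn.ConvTranspose2d(16, 1, 4, 2, 1))

    def forward(self, x):
        # cap the output so no pixel is modified with probability above p_max
        return torch.sigmoid(self.expand(self.contract(x))) * self.p_max
```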
4. The method of claim 3, wherein inputting the enhanced image into the SE layer to obtain the target feature image comprises:
squeezing the enhanced image to obtain a squeezed image;
acquiring a first feature image corresponding to the squeezed image;
activating the first feature image to obtain an activated first feature image;
determining a second feature image according to the activated first feature image and the enhanced image; and
determining the target feature image according to the second feature image and the enhanced image.
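These steps follow the usual squeeze-and-excitation pattern with a residual connection back to the input; a sketch, where the reduction ratio of 4 is an assumed hyperparameter:

```python
import torch.nn as nn

class SELayer(nn.Module):
    """Squeeze-and-excitation following the claim-4 step order."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction),
                                nn.ReLU(inplace=True),
                                nn.Linear(channels // reduction, channels),
                                nn.Sigmoid())                  # activation step

    def forward(self, x):
        b, c, _, _ = x.shape
        squeezed = x.mean(dim=(2, 3))                          # squeeze: global average pooling
        weights = self.fc(squeezed).view(b, c, 1, 1)           # activated first feature image
        second = x * weights                                   # second feature image
        return second + x                                      # recombine with the input image
```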
5. The method of claim 1, wherein determining the secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital tag information comprises:
inputting an initial digital tag information sample and a modification probability image sample into a first objective function to obtain a modified image sample;
determining a secret-containing image sample according to the modified image sample and the enhanced image sample;
inputting the secret-containing image sample into an analyzer network to obtain an analysis error rate;
adjusting a second target parameter according to the analysis error rate until the analysis error rate is greater than a set threshold, and generating an embedder model according to the second target parameter and the first objective function; and
inputting the enhanced image, the modification probability image, the target key and the initial digital tag information into the embedder model to obtain the secret-containing image.
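The claim does not specify the first objective function; one common differentiable stand-in from the adversarial-embedding literature is a double-tanh embedding simulator, sketched below with an assumed sharpness `lam` and uniform noise standing in for the keyed message bits:

```python
import torch

def simulate_embedding(prob, lam=60.0):
    """Double-tanh embedding simulator: maps a modification-probability map
    and random noise to a near-ternary modification map in {-1, 0, +1},
    with P(+1) = P(-1) = prob / 2."""
    n = torch.rand_like(prob)                                  # stand-in for keyed message bits
    return (-0.5 * torch.tanh(lam * (prob - 2.0 * n))
            + 0.5 * torch.tanh(lam * (prob - 2.0 * (1.0 - n))))

# The secret-containing sample is then the enhanced sample plus the
# (suitably scaled) modification map, e.g.:
#   stego_sample = enhanced_sample + simulate_embedding(prob_map) / 255.0
```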
6. The method of claim 5, wherein inputting the enhanced image, the modification probability image, the target key and the initial digital tag information into the embedder model to obtain the secret-containing image comprises:
inputting the initial digital tag information into a format conversion layer to obtain a digital tag information vector; and
inputting the digital tag information vector, the enhanced image, the modification probability image and the target key into an encoder to obtain the secret-containing image.
7. The method of claim 1, further comprising, after determining the secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital tag information:
determining target digital tag information according to the secret-containing image and the target key.
8. The method of claim 7, wherein determining target digital tag information according to the secret-containing image and the target key comprises:
inputting the secret-containing image and the target key into a decoder to obtain a digital tag information vector; and
inputting the digital tag information vector into a format conversion layer to obtain the target digital tag information.
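Claims 6 and 8 use the two directions of the same format conversion layer; a plausible bit-level sketch, in which UTF-8 encoding of the tag is an assumption about the tag format:

```python
import torch

def tag_to_bits(tag: str) -> torch.Tensor:
    """Format conversion layer: tag string -> {0, 1} bit vector (claim 6)."""
    bits = [(byte >> i) & 1
            for byte in tag.encode("utf-8")
            for i in range(7, -1, -1)]                         # MSB-first per byte
    return torch.tensor(bits, dtype=torch.float32)

def bits_to_tag(bits: torch.Tensor) -> str:
    """Inverse conversion on the decoder side (claim 8)."""
    b = bits.round().int().tolist()
    full = len(b) - len(b) % 8                                 # drop any trailing partial byte
    data = bytes(sum(bit << (7 - j) for j, bit in enumerate(b[i:i + 8]))
                 for i in range(0, full, 8))
    return data.decode("utf-8", errors="replace")

assert bits_to_tag(tag_to_bits("GRID-TAG-001")) == "GRID-TAG-001"  # round-trip check
```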
9. The method of claim 1, further comprising, after determining the secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital tag information:
determining an analysis error rate according to the secret-containing image and the enhanced image.
10. The method of claim 9, wherein determining an analysis error rate according to the secret-containing image and the enhanced image comprises:
determining a first connection image according to the secret-containing image and the enhanced image;
inputting the first connection image into a preprocessing convolution layer to obtain a first residual image;
inputting the first residual image into a Conv-ABS-BN-Tanh layer to obtain a second residual image;
inputting the second residual image into a pooling layer to obtain a first residual feature image;
inputting the first residual feature image into four groups of third target layers to obtain a second residual feature image, wherein each third target layer comprises: a Conv-BN-Tanh layer and a pooling layer;
inputting the second residual feature image into a Conv-BN-ReLU layer and a pooling layer to obtain a third residual feature image; and
inputting the third residual feature image into a classification layer to obtain the analysis error rate.
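The layer sequence above tracks XuNet-style steganalyzers; a sketch with assumed channel widths and kernel sizes, where the final classification layer is a two-way linear head from which the analysis error rate would be computed over a labeled batch:

```python
import torch
import torch.nn as nn

class AnalyzerNet(nn.Module):
    """Steganalyzer following the claim-10 layer order."""
    def __init__(self):
        super().__init__()
        self.pre = nn.Conv2d(2, 8, 5, padding=2)              # preprocessing convolution
        self.conv_abs = nn.Conv2d(8, 8, 5, padding=2)         # Conv part of Conv-ABS-BN-Tanh
        self.bn_tanh = nn.Sequential(nn.BatchNorm2d(8), nn.Tanh())
        self.pool = nn.AvgPool2d(3, stride=2, padding=1)      # pooling layer
        groups, c = [], 8
        for _ in range(4):                                    # four Conv-BN-Tanh + pooling groups
            groups += [nn.Conv2d(c, 2 * c, 3, padding=1),
                       nn.BatchNorm2d(2 * c), nn.Tanh(),
                       nn.AvgPool2d(3, stride=2, padding=1)]
            c *= 2
        self.mid = nn.Sequential(*groups)
        self.tail = nn.Sequential(nn.Conv2d(c, c, 3, padding=1),   # Conv-BN-ReLU
                                  nn.BatchNorm2d(c), nn.ReLU(inplace=True),
                                  nn.AdaptiveAvgPool2d(1))         # final pooling
        self.classify = nn.Linear(c, 2)                       # classification head: cover vs. stego

    def forward(self, stego, enhanced):
        x = torch.cat([stego, enhanced], dim=1)               # first connection image
        x = self.pre(x)                                       # first residual image
        x = self.pool(self.bn_tanh(self.conv_abs(x).abs()))   # ABS between Conv and BN-Tanh
        x = self.tail(self.mid(x)).flatten(1)
        return self.classify(x)                               # logits; error rate from labels
```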
11. A GAN-based adaptive embedded digital tag information hiding apparatus, comprising:
an acquisition module, configured to acquire an initial carrier image, a target key and initial digital tag information;
a processing module, configured to perform enhancement processing on the initial carrier image to obtain an enhanced image;
a determining module, configured to determine a modification probability image according to the enhanced image; and
an encryption module, configured to determine a secret-containing image according to the enhanced image, the modification probability image, the target key and the initial digital tag information;
wherein the processing module is specifically configured to:
obtain a target sample set, wherein the target sample set comprises: an initial carrier image sample;
randomly generate a noise image;
generate a noisy image sample according to the initial carrier image sample and the noise image;
input the noisy image sample into a steganalyzer sub-model to obtain an adversarial gradient image sample;
generate an adversarial noise image sample according to the adversarial gradient image sample and a first target parameter;
generate an enhanced image sample according to the adversarial noise image sample and the initial carrier image sample;
determine a secret-containing image sample according to the enhanced image sample;
input the secret-containing image sample into an analyzer network to obtain an analysis error rate;
adjust the first target parameter according to the analysis error rate until the analysis error rate is greater than a set threshold, and generate an enhancer model according to the first target parameter and the steganalyzer sub-model; and
input the initial carrier image into the enhancer model to obtain the enhanced image.
12. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the GAN-based adaptive embedded digital tag information hiding method of any one of claims 1-10.
13. A computer-readable storage medium storing computer instructions which, when executed, cause a processor to implement the GAN-based adaptive embedded digital tag information hiding method of any one of claims 1-10.
CN202210963950.9A 2022-08-11 2022-08-11 GAN-based self-adaptive embedded digital tag information hiding method Active CN115348360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210963950.9A CN115348360B (en) 2022-08-11 2022-08-11 GAN-based self-adaptive embedded digital tag information hiding method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210963950.9A CN115348360B (en) 2022-08-11 2022-08-11 GAN-based self-adaptive embedded digital tag information hiding method

Publications (2)

Publication Number Publication Date
CN115348360A CN115348360A (en) 2022-11-15
CN115348360B true CN115348360B (en) 2023-11-07

Family

ID=83951720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210963950.9A Active CN115348360B (en) 2022-08-11 2022-08-11 GAN-based self-adaptive embedded digital tag information hiding method

Country Status (1)

Country Link
CN (1) CN115348360B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120024B (en) * 2019-05-20 2021-08-17 Baidu Online Network Technology (Beijing) Co., Ltd. Image processing method, device, equipment and storage medium
AU2020437435B2 (en) * 2020-03-26 2023-07-20 Shenzhen Institutes Of Advanced Technology Adversarial image generation method, apparatus, device, and readable storage medium

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1996036163A2 (en) * 1995-05-08 1996-11-14 Digimarc Corporation Steganography systems
CN108346125A (en) * 2018-03-15 2018-07-31 Sun Yat-sen University A spatial-domain image steganography method and system based on generative adversarial networks
WO2020170071A1 (en) * 2019-02-22 2020-08-27 Alma Mater Studiorum - Universita' Di Bologna Encryption method and system based on images
CN111681154A (en) * 2020-06-09 2020-09-18 Hunan University Color image steganography distortion function design method based on generative adversarial networks
CN111951149A (en) * 2020-08-14 2020-11-17 Engineering University of the People's Armed Police Image information steganography method based on neural networks
CN112767226A (en) * 2021-01-15 2021-05-07 Nanjing University of Information Science and Technology Image steganography method and system based on automatic distortion learning with a GAN network structure
CN113222800A (en) * 2021-04-12 2021-08-06 Marketing Service Center of State Grid Jiangsu Electric Power Co., Ltd. Robust image watermark embedding and extraction method and system based on deep learning
CN113538202A (en) * 2021-08-05 2021-10-22 Qilu University of Technology Image steganography method and system based on generative steganographic adversarial learning
CN114257697A (en) * 2021-12-21 2022-03-29 Sichuan University High-capacity universal image information hiding method
CN114339258A (en) * 2021-12-28 2022-04-12 Engineering University of the People's Armed Police Information steganography method and device based on video carrier
CN114676446A (en) * 2022-04-14 2022-06-28 Information and Communication Branch of State Grid Shanxi Electric Power Company LS-GAN-based image steganography method
CN114827379A (en) * 2022-04-27 2022-07-29 Sichuan University Carrier image enhancement method based on generative network
CN114820380A (en) * 2022-05-13 2022-07-29 Sichuan University Spatial-domain steganographic carrier image enhancement method based on content-adaptive adversarial perturbation

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
A GAN-based adaptive embedded digital label information hiding scheme; Ge Zhang et al.; 2022 3rd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE); 355-358 *
Research on image steganography methods based on deep learning; Fu Zhangjie; Wang Fan; Sun Xingming; Wang Yan; Chinese Journal of Computers (09); full text *
Research on digital image steganography methods based on deep adversarial networks; Liu Tianmeng; China Master's Theses Full-text Database, Information Science and Technology (No. 01); I138-88 *
Research progress of deep learning in image steganography and steganalysis; Zhai Liming; Jia Ju; Ren Weixiang; Xu Yibo; Wang Lina; Journal of Cyber Security (06); full text *
Applications of generative adversarial networks in image steganography; Liu Jia; Ke Yan; Lei Yu; Li Jun; Liu Mingming; Yang Xiaoyuan; Zhang Minqing; Journal of Wuhan University (Natural Science Edition) (02); full text *
Research on detection techniques for information hidden in spatial-domain images; Zhou Cuihong; Qin Jiaohua; Zuo Weiming; Jia Liyuan; Journal of Hunan City University (Natural Science Edition) (02); full text *

Also Published As

Publication number Publication date
CN115348360A (en) 2022-11-15

Similar Documents

Publication Publication Date Title
CN110084734B (en) Big data ownership protection method based on object local generation countermeasure network
CN112884758B (en) Defect insulator sample generation method and system based on style migration method
CN113657397B (en) Training method for circularly generating network model, method and device for establishing word stock
CN113487618B (en) Portrait segmentation method, portrait segmentation device, electronic equipment and storage medium
CN114820871B (en) Font generation method, model training method, device, equipment and medium
CN113792526B (en) Training method of character generation model, character generation method, device, equipment and medium
CN112949767A (en) Sample image increment, image detection model training and image detection method
CN113393371B (en) Image processing method and device and electronic equipment
CN111768326A (en) High-capacity data protection method based on GAN amplification image foreground object
CN114863229A (en) Image classification method and training method and device of image classification model
CN113627536A (en) Model training method, video classification method, device, equipment and storage medium
CN114495977B (en) Speech translation and model training method, device, electronic equipment and storage medium
CN112906800B (en) Image group self-adaptive collaborative saliency detection method
CN115348360B (en) GAN-based self-adaptive embedded digital tag information hiding method
Ketsoi et al. SREFBN: Enhanced feature block network for single‐image super‐resolution
CN111582284A (en) Privacy protection method and device for image recognition and electronic equipment
CN113963358B (en) Text recognition model training method, text recognition device and electronic equipment
CN116611491A (en) Training method and device of target detection model, electronic equipment and storage medium
CN116363429A (en) Training method of image recognition model, image recognition method, device and equipment
CN113139463B (en) Method, apparatus, device, medium and program product for training a model
CN112560848B (en) Training method and device for POI (Point of interest) pre-training model and electronic equipment
CN115457365A (en) Model interpretation method and device, electronic equipment and storage medium
CN113989152A (en) Image enhancement method, device, equipment and storage medium
CN113747480B (en) Processing method and device for 5G slice faults and computing equipment
CN114022357A (en) Image reconstruction method, training method, device and equipment of image reconstruction model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant