CN112132738A - Image robust steganography method with reference generation - Google Patents

Image robust steganography method with reference generation

Info

Publication number
CN112132738A
CN112132738A (application CN202011085366.5A)
Authority
CN
China
Prior art keywords
image
steganographic
secret information
convolution
loss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011085366.5A
Other languages
Chinese (zh)
Other versions
CN112132738B (en)
Inventor
张敏情
李宗翰
刘佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Engineering University of Chinese Peoples Armed Police Force
Original Assignee
Engineering University of Chinese Peoples Armed Police Force
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Engineering University of Chinese Peoples Armed Police Force filed Critical Engineering University of Chinese Peoples Armed Police Force
Priority to CN202011085366.5A priority Critical patent/CN112132738B/en
Publication of CN112132738A publication Critical patent/CN112132738A/en
Application granted granted Critical
Publication of CN112132738B publication Critical patent/CN112132738B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • G06T1/005Robust watermarking, e.g. average attack or collusion attack resistant
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The invention discloses an image robust steganography method with reference generation, relating to the technical field of deep learning and comprising the following steps: an existing neural network model is used as an encoder, with the original image and the secret information as its input and a steganographic image as its output; an existing neural network model is used as a discriminator, with the steganographic image as its input, and the difference between the mean values of the original image and the steganographic image is used as the loss to judge the authenticity of the steganographic image; and an existing neural network model is used as a decoder, with the steganographic image after interference is added as its input, to extract and output the secret information. Compared with other existing robust image steganography methods, the method raises the accuracy of secret-information extraction to 98.55%, and even with added interference the extraction rate of the secret information remains above 90%.

Description

Image robust steganography method with reference generation
Technical Field
The invention relates to the technical field of deep learning, in particular to an image robust steganography method with reference generation.
Background
Traditional image steganography algorithms fall into two categories: spatial-domain algorithms and frequency-domain algorithms. Spatial-domain algorithms embed the secret information by modifying image pixels, as in LSB replacement and LSB matching; frequency-domain algorithms embed the secret information by modifying specified frequency-domain coefficients of the host signal, as in algorithms based on the Discrete Cosine Transform (DCT), the Discrete Fourier Transform (DFT) and the Discrete Wavelet Transform (DWT). However, these traditional steganography algorithms lack robustness: when the secret information is transmitted over a lossy channel such as a social network or a wireless link, even slight interference prevents it from being extracted correctly.
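As a minimal illustration of the spatial-domain idea, the sketch below embeds a bit string into the least significant bits of a grayscale image using NumPy; the image size and the bit ordering are assumptions made for the example. Flipping even a single least significant bit during lossy transmission corrupts the corresponding message bit, which is exactly the fragility discussed above.

import numpy as np

def lsb_embed(cover: np.ndarray, bits: np.ndarray) -> np.ndarray:
    # Replace the least significant bit of the first len(bits) pixels.
    stego = cover.copy().ravel()
    assert bits.size <= stego.size, "message longer than cover capacity"
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits.astype(np.uint8)
    return stego.reshape(cover.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    # Read back the least significant bit of the first n_bits pixels.
    return stego.ravel()[:n_bits] & 1

cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # assumed 8-bit cover image
message = np.random.randint(0, 2, 100)                         # 100 secret bits
stego = lsb_embed(cover, message)
assert np.array_equal(lsb_extract(stego, 100), message)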
To address these shortcomings, a robust lossless information-hiding algorithm was proposed that achieves robustness through the cyclic application of a double-mapping transformation using patchwork theory and modulo-256 addition, but this method tends to introduce salt-and-pepper noise. An algorithm based on the integer wavelet transform has also been proposed, which embeds information by changing the mean of the HL1 and LH1 coefficients; however, it requires the embedding side and the extracting side to share side information such as a threshold, and its capacity is low.
With the development of deep learning, many researchers have used deep learning to realize robust steganography. The HiDDeN model was proposed, which is divided into four parts: an encoder E, a decoder D, a noise layer N and a discriminator A. The encoder E takes an image and the secret information as input and outputs an image; the decoder takes an image as input and outputs the secret information; the discriminator A is responsible for discriminating the difference between the image generated by the encoder and the input image. HiDDeN performs well in terms of the amount of secret information embedded, and its end-to-end steganography framework allows new noise to be added to the noise layer on top of the original framework, thereby achieving robustness to that noise; the framework therefore has a certain extensibility, but the quality of its steganographic images is insufficient. The StegaStamp model was later proposed on the basis of HiDDeN: image-processing operations such as perspective transformation, color transformation and blurring were added to the noise layer to simulate the changes that occur when a steganographic image is printed and re-photographed, solving the problem that HiDDeN's secret information cannot be decoded after physical transmission, and thus improving on HiDDeN in terms of application. However, images generated by StegaStamp show obvious gray patches, which become more noticeable as the amount of embedded message increases.
Disclosure of Invention
To solve the above problems, the present invention provides an image robust steganography method with reference generation. The secret information is encoded together with the original image into a steganographic image, which is passed through an image processing layer to a decoder; the decoder restores the secret information, and a discriminator is used to distinguish the steganographic image from the original image. The encoder is constrained by an image reconstruction loss so that the texture and distribution of the steganographic image approach those of the original image, and the accuracy of the restored information is constrained by a decoding loss. The method can gain robustness to a new kind of noise by adding that noise to the image processing layer, and the embedding amount of the secret information and the image quality can be adjusted by replacing the encoder, the decoder and the discriminator; the method achieves a higher secret-information embedding amount and higher steganographic image quality.
To achieve this purpose, the invention adopts the following technical scheme: a method for robust steganography of an image with reference generation, comprising:
S1: hiding of the secret information
S11: a Residual-encoder model with a residual structure is used as the encoder; the original image and the secret information are taken as the input of the encoder, and the steganographic image is taken as the output; the encoder constrains the generated image content with a perceptual loss, calculated as shown in Equation (1):
L_E = L_percep(I_co, I_st)    (1)
where I_co and I_st denote the original image and the steganographic image respectively, and L_E denotes the encoder loss;
S12: a Basic-discriminator model is used as the discriminator; the steganographic image from step S11 is taken as the input of the discriminator, and the difference between the mean values of the original image and the steganographic image is used as the loss to judge the authenticity of the steganographic image;
L_A = mean(A(I_co)) - mean(A(I_st))    (2)
where A(·) denotes the discriminator network and L_A denotes the discriminator loss;
S13: the image quality of the generated steganographic image is jointly constrained by the encoder loss and the discriminator loss;
S2: processing the steganographic image: the image processing layer receives the steganographic image generated by the encoder, adds simulated interference, and outputs the result;
S3: secret information extraction: a Basic-decoder model with a simple convolutional neural network structure is used as the decoder; the steganographic image with the added interference is taken as input, and the secret information is extracted and output; the decoder loss is calculated by Equation (3):
L_D = CE_sigmoid(M, M')    (3)
where L_D denotes the decoder loss, computed with a sigmoid cross-entropy loss function, and M and M' denote the input secret information and the information recovered by the decoder, respectively.
Further, the specific process of hiding the secret information is as follows: first, a 1 × 256 × 256 information tensor is constructed, in which the first 100 bits are the secret information and the remaining bits are filled with random 0s and 1s; this tensor is spliced with a tensor obtained by convolving the original image, and the encoded image is obtained through three further convolutions, the encoder adopting a four-convolution structure in total. In the first three convolutions, the convolution kernel is 3 × 3, the stride is 1 and the padding is 1, a LeakyReLU activation function is used, followed by batch normalization. After the last convolution, a hyperbolic tangent activation function is applied and the result is combined with the input image to obtain the steganographic image.
Furthermore, the specific process of constraining the steganographic image by the discriminator is as follows:
the Basic-discrete interpolator model comprises 4 convolution layers, the convolution kernel size of each convolution layer is 3 multiplied by 3, the step length is 1, the filling is 1, the first 3 layers are subjected to leakyReLU activation function and batch normalization after convolution, the output of the last layer after convolution is a mean value of 256 multiplied by 1, the original image and the steganographic image are respectively input into a discriminator network, the respective mean values are obtained, and the discriminator network uses a loss function to constrain the mean value difference of the original image and the steganographic image, so that the image quality of the steganographic image is constrained.
Further, the secret information extraction process is as follows:
the Basic-decoder comprises 4 convolutional layers, the size of a convolution kernel of each convolutional layer is 3 multiplied by 3, the step length is 1, the filling is 1, the former 3 layers use a LEAKYRELU activation function and batch normalization after convolution, the last layer outputs a tensor of 256 multiplied by 1 after convolution, and the tensor obtained by the last layer of convolution is straightened by flutten to calculate the message loss.
Further, the disturbances in step S3 include Gaussian noise, JPEG compression, color transformation, blurring, and occlusion.
Still further, the method also comprises a process of adversarially training the decoder, so that the decoder can decode steganographic images with different degrees of distortion and thereby gains robustness.
Further, the method also includes calculation of the total loss, the formula of which is shown in Equation (4):
L = L_E + L_A + L_D    (4).
the invention has the beneficial effects that:
the invention relates to a method for generating image robustness steganography with reference, which jointly encodes secret information and an original image into steganography image, then transmits the steganography image to a decoder through an image processing layer, the decoder restores the secret information, and a discriminator is used for discriminating the steganography image and the original image, wherein, the encoder is restricted by image reconstruction loss to lead the texture and the distribution of the steganography image to be close to the original image, the accuracy of the restored information is restricted by decoding loss, the method can also obtain the robustness to the noise by adding new noise, and can change the embedding amount of the secret information and the image quality by replacing the encoder, the decoder and the discriminator.
Compared with other existing image robust steganography methods, the method can enable the accuracy of secret information extraction to be as high as 98.55%, and under the condition of adding interference, the secret information extraction rate can also be more than 90%.
In addition to the objects, features and advantages described above, other objects, features and advantages of the present invention are also provided. The present invention will be described in further detail below with reference to the drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the invention and, together with the description, serve to explain the invention and not to limit the invention.
FIG. 1 is a block diagram of a robust steganography method with reference generation in accordance with an embodiment of the present invention;
FIG. 2 is a block diagram of a Residual-encoder model of an encoder according to an embodiment of the present invention;
FIG. 3 is a structural diagram of the discriminator according to an embodiment of the present invention;
FIG. 4 is a block diagram of a Basic-decoder model of a decoder according to an embodiment of the present invention;
FIG. 5 shows images generated by different encoder-decoder combinations according to an embodiment of the present invention;
FIG. 6 shows the decoding accuracy after adding different interferences in the embodiment of the present invention;
fig. 7 shows secret information extraction accuracy under various interferences according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to FIG. 1, a method for robust steganography of an image with reference generation includes:
S1: hiding of the secret information
S11: a Residual-encoder model with a residual structure is used as the encoder; the original image and the secret information are taken as the input of the encoder, and the steganographic image is taken as the output; the encoder constrains the generated image content with a perceptual loss, calculated as shown in Equation (1):
L_E = L_percep(I_co, I_st)    (1)
where I_co and I_st denote the original image and the steganographic image respectively, and L_E denotes the encoder loss;
referring to fig. 2, specifically, the specific process of hiding the secret information is as follows: firstly, constructing a 1 × 256 × 256 information tensor, wherein the first 100 bits are secret information, and the rest bits are filled with random 0 and 1; splicing the tensor with a tensor obtained by convolution of an original image, and obtaining an encoded image through three times of convolution, wherein the encoder totally adopts a 4-time convolution structure, in the former 3 times of convolution, a convolution kernel is 3 multiplied by 3, the step length is 1, the filling is 1, a LeakyReLU activation function is adopted, and finally batch normalization is adopted; and after the last convolution operation, performing hyperbolic tangent activation function and combining the input images to jointly obtain the steganographic image.
S12: a Basic-discriminator model is used as the discriminator; the steganographic image from step S11 is taken as the input of the discriminator, and the difference between the mean values of the original image and the steganographic image is used as the loss to judge the authenticity of the steganographic image;
L_A = mean(A(I_co)) - mean(A(I_st))    (2)
where A(·) denotes the discriminator network and L_A denotes the discriminator loss;
S13: the image quality of the generated steganographic image is jointly constrained by the encoder loss and the discriminator loss;
referring to fig. 3, specifically, the specific process of constraining the steganographic image by the discriminator is as follows:
the Basic-discrete interpolator model comprises 4 convolution layers, the convolution kernel size of each convolution layer is 3 multiplied by 3, the step length is 1, the filling is 1, the first 3 layers are subjected to leakyReLU activation function and batch normalization after convolution, the output of the last layer after convolution is a mean value of 256 multiplied by 1, the original image and the steganographic image are respectively input into a discriminator network, the respective mean values are obtained, and the discriminator network uses a loss function to constrain the mean value difference of the original image and the steganographic image, so that the image quality of the steganographic image is constrained.
S2: processing the steganographic image: the image processing layer receives the steganographic image generated by the encoder, adds simulated interference, and outputs the result;
the interference includes Gaussian noise, JPEG compression, color transformation, blurring, occlusion and the like.
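A sketch of such an image processing layer is shown below. The distortion strengths are arbitrary assumptions, and real JPEG compression is not differentiable, so it is omitted here; during training it would typically be replaced by a differentiable approximation.

import torch
import torch.nn.functional as F

def gaussian_noise(img: torch.Tensor, sigma: float = 0.02) -> torch.Tensor:
    return (img + sigma * torch.randn_like(img)).clamp(0.0, 1.0)

def box_blur(img: torch.Tensor, k: int = 3) -> torch.Tensor:
    # depthwise box filter as a simple stand-in for blurring
    c = img.shape[1]
    kernel = torch.full((c, 1, k, k), 1.0 / (k * k), device=img.device)
    return F.conv2d(img, kernel, padding=k // 2, groups=c)

def occlude(img: torch.Tensor, size: int = 32) -> torch.Tensor:
    # zero out a random square patch
    out = img.clone()
    _, _, h, w = out.shape
    y = torch.randint(0, h - size, (1,)).item()
    x = torch.randint(0, w - size, (1,)).item()
    out[:, :, y:y + size, x:x + size] = 0.0
    return out

def color_shift(img: torch.Tensor, scale: float = 0.95, offset: float = 0.02) -> torch.Tensor:
    # crude brightness/contrast change as a stand-in for a color transformation
    return (scale * img + offset).clamp(0.0, 1.0)

def image_processing_layer(stego: torch.Tensor) -> torch.Tensor:
    # Apply one randomly chosen simulated interference (or none) to the stego image.
    ops = [gaussian_noise, box_blur, occlude, color_shift, lambda x: x]
    op = ops[torch.randint(0, len(ops), (1,)).item()]
    return op(stego)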
S3: secret information extraction: a Basic-decoder model with a simple convolutional neural network structure is used as the decoder; the steganographic image with the added interference is taken as input, and the secret information is extracted and output; the decoder loss is calculated by Equation (3):
L_D = CE_sigmoid(M, M')    (3)
where L_D denotes the decoder loss, computed with a sigmoid cross-entropy loss function, and M and M' denote the input secret information and the information recovered by the decoder, respectively.
Referring to FIG. 4, the specific process of extracting the secret information is as follows:
The Basic-decoder model comprises 4 convolutional layers; the convolution kernel size of each layer is 3 × 3, the stride is 1 and the padding is 1. The first 3 layers use a LeakyReLU activation function and batch normalization after convolution, the last layer outputs a 256 × 1 tensor after convolution, and this tensor is flattened to calculate the message loss.
The robust steganography method of the invention (RB-RGRS) further includes a process of adversarially training the decoder, so that the decoder can decode steganographic images with different degrees of distortion and thereby gains robustness.
Among them, the Residual-encoder model with a residual structure, the Basic-discriminator model and the Basic-decoder model with a simple convolutional neural network structure reference "ZHANG K A, CUESTA-INFANTE A, XU L, et al. SteganoGAN: High Capacity Image Steganography with GANs [J]. 2019".
The method of the invention also includes calculation of the total loss, the formula of which is shown in Equation (4):
L = L_E + L_A + L_D    (4).
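A sketch of how the three losses could be combined into a single optimisation step is given below, reusing the encoder, decoder, discriminator and image-processing-layer sketches above. The use of mean-squared error as a stand-in for the perceptual loss, the equal weighting of the three terms and the single joint update are simplifying assumptions for illustration; in practice the discriminator is typically updated in a separate, alternating step.

import torch
import torch.nn.functional as F

def training_step(encoder, decoder, discriminator, noise_layer, optimizer,
                  cover, message_plane, secret_bits):
    # cover: (B, C, 256, 256); message_plane: (B, 1, 256, 256) built as in the
    # encoder sketch; secret_bits: (B, n) tensor of 0/1 message bits.
    stego = encoder(cover, message_plane)
    distorted = noise_layer(stego)                       # simulated channel interference
    logits = decoder(distorted)                          # flattened message logits

    loss_e = F.mse_loss(stego, cover)                                   # stand-in for the perceptual loss L_E
    loss_a = discriminator(cover).mean() - discriminator(stego).mean()  # mean-difference loss L_A
    loss_d = F.binary_cross_entropy_with_logits(                        # sigmoid cross-entropy L_D
        logits[:, :secret_bits.shape[1]], secret_bits.float())
    total = loss_e + loss_a + loss_d                                    # total loss, cf. Equation (4)

    optimizer.zero_grad()
    total.backward()
    optimizer.step()
    return total.item()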
to verify the accuracy of the inventive method for the extraction of secret information, the experiment was carried out with different encoders and decoders into the method of the experiment to verify the accuracy of the extraction of secret information, among which the DenseNet model (from the document "HUANG, LIU Z, MAATEN L V D, et al. Densely Connected Networks; proceedings of the CVPR, F, 2017 [ C ]") and the UNET model (from the document "TANCIK M, MILDENHALL B, NG R. Stegasamp: instant Hyperlines in Physical photos [ J ]. 2019"). The DenseNet is named as Dense-encoder when used as an encoder, and the structure of the DenseNet is based on the encoder Residual-encoder used in the method, each layer is spliced with all the layers before convolution, and finally a steganographic image is output; when the decoder is used, the decoder is named as a Dense-decoder, and the structure of the decoder is based on the Basic-decoder used by the method of the invention, starting from the tensor obtained by the 2 nd convolution, and before the convolution operation, splicing all the previous tensors obtained by convolution together and then performing the operations of convolution, activation and the like. And finally, performing flatten straightening on the tensor obtained by convolution, and calculating the cross entropy loss by taking the same number of bits as the secret information and the input secret information. The results are shown in Table 1:
TABLE 1 secret information extraction accuracy rates of different encoders and decoders
Table 1 shows that when the input secret information is 100 bits, the extraction accuracy is highest, reaching 0.985; for the same message length the accuracy is higher than that of the UNET model, and when the length of the input secret information is increased the accuracy still reaches 0.98. In addition, when the encoder is a Residual-encoder or a Dense-encoder and the decoder is a Basic-decoder or a Dense-decoder, the extraction accuracy of the secret information exceeds 90%; whereas when the encoder is a Basic-encoder, the extraction accuracy is low regardless of which decoder is selected.
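The dense splicing used by the Dense-encoder and Dense-decoder in this comparison, in which each convolution from the second onward receives the concatenation of all previously produced feature maps, could be sketched in PyTorch as follows; the channel widths and layer count are illustrative assumptions rather than the configuration used in the experiments.

import torch
import torch.nn as nn

class DenseConvStack(nn.Module):
    # Sketch of DenseNet-style splicing: from the 2nd convolution on, all previous
    # convolution outputs are concatenated before each convolution.

    def __init__(self, in_channels: int = 3, hidden: int = 32, layers: int = 4):
        super().__init__()
        self.first = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, stride=1, padding=1),
            nn.LeakyReLU(), nn.BatchNorm2d(hidden))
        self.blocks = nn.ModuleList()
        for i in range(1, layers):
            self.blocks.append(nn.Sequential(
                nn.Conv2d(hidden * i, hidden, 3, stride=1, padding=1),
                nn.LeakyReLU(), nn.BatchNorm2d(hidden)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [self.first(x)]
        for block in self.blocks:
            features.append(block(torch.cat(features, dim=1)))  # splice all previous outputs
        return features[-1]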
The image quality of the combinations in Table 1 whose secret-information extraction accuracy exceeded 0.96 was measured. Image similarity was evaluated with three full-reference image quality metrics, PSNR, SSIM and FSIM, and the FID was used to measure the similarity between the distribution of the images generated by the encoder and the distribution of the training dataset. The measurement results are shown in Table 2:
TABLE 2 image quality indicators for different encoder and decoder combinations
When measured with the full-reference metrics, the UNET-CNN combination proposed in the literature performs relatively well; however, as shown in FIG. 5, the steganographic images it generates contain obvious gray patches, which become more noticeable as the number of input secret-information bits increases. When the image distribution is measured with the no-reference FID metric, the RB-RGRS algorithm proposed by the invention is relatively better when the input secret information is 200 bits, and, unlike the images generated by the UNET-CNN combination, its images show no obvious gray patches.
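As an illustration of how two of the full-reference metrics above can be computed, the following sketch uses scikit-image on an assumed pair of 8-bit grayscale images; FSIM and FID require additional libraries and are not shown.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# assumed example data: an 8-bit grayscale cover and a slightly perturbed stego image
cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
stego = np.clip(cover.astype(np.int16) + np.random.randint(-2, 3, cover.shape),
                0, 255).astype(np.uint8)

psnr = peak_signal_noise_ratio(cover, stego, data_range=255)
ssim = structural_similarity(cover, stego, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")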
To test the robustness of the images generated by the method of the invention, 100 steganographic images were generated with the Residual-Basic model; JPEG compression, blurring, Gaussian noise, color transformation, occlusion, compression and other operations were then applied to these 100 steganographic images respectively, the Residual-Basic model was used to decode the steganographic images with the added interference and restore the secret information, and the average decoding accuracy was calculated. The results are shown in FIG. 6.
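A hedged sketch of this robustness test, reusing the distortion helpers and decoder sketched earlier, might look like the following; the 100-image test-set size follows the description, while the bit-thresholding details are illustrative. The encoder and decoder would be put in eval() mode beforehand so that batch normalization uses its running statistics.

import torch

@torch.no_grad()
def average_decoding_accuracy(encoder, decoder, covers, planes, n_bits, distortion):
    # Mean bit accuracy over a set of stego images (e.g. 100) after one distortion.
    # covers: list of (C, 256, 256) tensors; planes: matching (1, 256, 256) message
    # planes whose first n_bits values are the secret bits.
    correct, total = 0, 0
    for cover, plane in zip(covers, planes):
        stego = encoder(cover.unsqueeze(0), plane.unsqueeze(0))
        logits = decoder(distortion(stego))
        secret = plane.flatten()[:n_bits].long()
        decoded = (torch.sigmoid(logits[0, :n_bits]) > 0.5).long()
        correct += (decoded == secret).sum().item()
        total += n_bits
    return correct / total

# e.g. average_decoding_accuracy(enc, dec, covers, planes, 200, gaussian_noise)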
The secret-information extraction accuracy of the steganographic image under the various interferences is plotted as a line graph in FIG. 7. Without any added interference, the extraction accuracy is 100% and no error-correcting code is needed; with 200 bits embedded in a 256 × 256 image, the embedding capacity is 200/(256 × 256) ≈ 0.0031 bpp. In addition, the extraction accuracy under all interferences other than blurring and occlusion of the image edge exceeds 90%, and it can be further improved by adding error-correcting codes, so the robustness is high. Since the above experiments confirmed that the color stripes appearing in the image carry the secret information, and blurring or occluding the image edge weakens or removes these stripes, the extraction accuracy in those cases is about 0.5, i.e., the secret information cannot be extracted. Conversely, occluding any position other than the color stripes does not affect the extraction of the secret information.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (7)

1. A method for robust steganography of an image with reference generation, comprising:
S1: hiding of the secret information
S11: taking a Residual-encoder model with a residual structure as the encoder, taking the original image and the secret information as the input of the encoder, and taking the steganographic image as the output; the encoder constrains the generated image content with a perceptual loss, calculated as shown in Equation (1):
L_E = L_percep(I_co, I_st)    (1)
where I_co and I_st denote the original image and the steganographic image respectively, and L_E denotes the encoder loss;
S12: taking a Basic-discriminator model as the discriminator, taking the steganographic image from step S11 as the input of the discriminator, and taking the difference between the mean values of the original image and the steganographic image as the loss to judge the authenticity of the steganographic image;
L_A = mean(A(I_co)) - mean(A(I_st))    (2)
where A(·) denotes the discriminator network and L_A denotes the discriminator loss;
S13: jointly constraining the image quality of the generated steganographic image through the encoder loss and the discriminator loss;
S2: processing the steganographic image: the image processing layer receives the steganographic image generated by the encoder, adds simulated interference, and outputs the result;
S3: secret information extraction: taking a Basic-decoder model with a simple convolutional neural network structure as the decoder, taking the steganographic image with the added interference as input, and extracting and outputting the secret information; wherein the decoder loss is calculated by Equation (3):
L_D = CE_sigmoid(M, M')    (3)
where L_D denotes the decoder loss, computed with a sigmoid cross-entropy loss function, and M and M' denote the input secret information and the information recovered by the decoder, respectively.
2. The method for robust steganography of an image with reference generation as claimed in claim 1, wherein the specific process of hiding the secret information is as follows: first, a 1 × 256 × 256 information tensor is constructed, in which the first 100 bits are the secret information and the remaining bits are filled with random 0s and 1s; this tensor is spliced with a tensor obtained by convolving the original image, and the encoded image is obtained through three further convolutions, the encoder adopting a four-convolution structure in total; in the first three convolutions, the convolution kernel is 3 × 3, the stride is 1 and the padding is 1, a LeakyReLU activation function is used, followed by batch normalization; after the last convolution, a hyperbolic tangent activation function is applied and the result is combined with the input image to obtain the steganographic image.
3. The method for robust steganography of an image with reference generation as claimed in claim 1, wherein the specific process by which the discriminator constrains the steganographic image is as follows:
the Basic-discriminator model comprises 4 convolution layers; the convolution kernel size of each layer is 3 × 3, the stride is 1 and the padding is 1; the first 3 layers are followed by a LeakyReLU activation function and batch normalization after convolution, and the last layer outputs, after convolution, a 256 × 1 map from which a mean value is taken; the original image and the steganographic image are each input into the discriminator network to obtain their respective mean values, and the discriminator network uses a loss function to constrain the difference between the mean values of the original image and the steganographic image, thereby constraining the image quality of the steganographic image.
4. The method for robust steganography of an image with reference generation as claimed in claim 1, wherein the secret information is extracted by:
the Basic-decoder model comprises 4 convolutional layers, the convolution kernel size of each convolutional layer is 3 multiplied by 3, the step length is 1, the filling is 1, the former 3 layers use a LEAKYRELU activation function and batch normalization after convolution, the last layer outputs a tensor of 256 multiplied by 1 after convolution, and the tensor obtained by the last layer of convolution is straightened by flutten to calculate the message loss.
5. The method for robust steganography of an image with reference generation as claimed in claim 1, wherein the disturbances in step S3 include Gaussian noise, JPEG compression, color transformation, blurring and occlusion.
6. The method for robust steganography of an image with reference generation as claimed in claim 1, further comprising a process of adversarially training the decoder, so that the decoder can decode steganographic images with different degrees of distortion and thereby gains robustness.
7. The method for robust steganography of an image with reference generation as claimed in claim 1, further comprising calculation of a total loss, the formula of which is shown in Equation (4):
L = L_E + L_A + L_D    (4).
CN202011085366.5A 2020-10-12 2020-10-12 Image robust steganography method with reference generation Active CN112132738B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011085366.5A CN112132738B (en) 2020-10-12 2020-10-12 Image robust steganography method with reference generation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011085366.5A CN112132738B (en) 2020-10-12 2020-10-12 Image robust steganography method with reference generation

Publications (2)

Publication Number Publication Date
CN112132738A true CN112132738A (en) 2020-12-25
CN112132738B CN112132738B (en) 2023-11-07

Family

ID=73852573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011085366.5A Active CN112132738B (en) 2020-10-12 2020-10-12 Image robust steganography method with reference generation

Country Status (1)

Country Link
CN (1) CN112132738B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113726976A (en) * 2021-09-01 2021-11-30 南京信息工程大学 High-capacity graph hiding method and system based on coding-decoding network
CN114157773A (en) * 2021-12-01 2022-03-08 杭州电子科技大学 Image steganography method based on convolutional neural network and frequency domain attention
CN114782697A (en) * 2022-04-29 2022-07-22 四川大学 Adaptive steganography detection method for confrontation sub-field

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180158167A1 (en) * 2016-12-03 2018-06-07 Zensar Technologies Ltd. Computer implemented system and method for steganography
CN111598762A (en) * 2020-04-21 2020-08-28 中山大学 Generating type robust image steganography method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180158167A1 (en) * 2016-12-03 2018-06-07 Zensar Technologies Ltd. Computer implemented system and method for steganography
CN111598762A (en) * 2020-04-21 2020-08-28 中山大学 Generating type robust image steganography method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曹寅; 潘子宇: "An improved information hiding model based on generative adversarial networks" (一种改进的基于生成对抗网络的信息隐藏模型), 现代信息科技 (Modern Information Technology), no. 16

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113726976A (en) * 2021-09-01 2021-11-30 南京信息工程大学 High-capacity graph hiding method and system based on coding-decoding network
CN113726976B (en) * 2021-09-01 2023-07-11 南京信息工程大学 Large-capacity graph hiding method and system based on coding-decoding network
CN114157773A (en) * 2021-12-01 2022-03-08 杭州电子科技大学 Image steganography method based on convolutional neural network and frequency domain attention
CN114157773B (en) * 2021-12-01 2024-02-09 杭州电子科技大学 Image steganography method based on convolutional neural network and frequency domain attention
CN114782697A (en) * 2022-04-29 2022-07-22 四川大学 Adaptive steganography detection method for confrontation sub-field
CN114782697B (en) * 2022-04-29 2023-05-23 四川大学 Self-adaptive steganography detection method for anti-domain

Also Published As

Publication number Publication date
CN112132738B (en) 2023-11-07

Similar Documents

Publication Publication Date Title
CN112132738B (en) Image robust steganography method with reference generation
CN113658051B (en) Image defogging method and system based on cyclic generation countermeasure network
CN103533343B (en) Stereo image quality objective evaluation method based on digital watermarking
Wang et al. Quality-aware images
Ernawan et al. An improved watermarking technique for copyright protection based on tchebichef moments
CN109993678B (en) Robust information hiding method based on deep confrontation generation network
Wang et al. Adaptive watermarking and tree structure based image quality estimation
CN112132737B (en) Image robust steganography method without reference generation
CN107908969B (en) JPEG image self-adaptive steganography method based on spatial domain characteristics
Zhang et al. An Image Steganography Algorithm Based on Quantization Index Modulation Resisting Scaling Attacks and Statistical Detection.
Feng et al. Robust image watermarking based on tucker decomposition and adaptive-lattice quantization index modulation
Singh et al. Hybrid technique for robust and imperceptible dual watermarking using error correcting codes for application in telemedicine
CN113628090B (en) Anti-interference message steganography and extraction method, system, computer equipment and terminal
CN112714231A (en) Robust steganography method based on DCT (discrete cosine transformation) symbol replacement
CN113660386B (en) Color image encryption compression and super-resolution reconstruction system and method
CN114730450A (en) Watermark-based image reconstruction
Rahim et al. Exploiting de-noising convolutional neural networks DnCNNs for an efficient watermarking scheme: a case for information retrieval
Sharmaa et al. Hybrid watermarking algorithm using finite radon and fractional Fourier transform
Li et al. A robust watermarking scheme with high security and low computational complexity
CN111065000B (en) Video watermark processing method, device and storage medium
CN116883222A (en) JPEG-compression-resistant robust image watermarking method based on multi-scale automatic encoder
Zhou et al. Reduced reference stereoscopic image quality assessment using digital watermarking
Zeng et al. Quality-aware video based on robust embedding of intra-and inter-frame reduced-reference features
CN114529442A (en) Robust image watermarking method adopting two-stage precoding and wavelet network
Amsaveni et al. Reversible data hiding based on radon and integer lifting wavelet transform

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant