CN116132682A - Picture hiding method and system based on pixel-level rewarding mechanism - Google Patents
- Publication number: CN116132682A (application number CN202310033855.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- operation group
- convolution operation
- secret
- branch
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/154—Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/467—Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses an image hiding method and system based on a pixel-level reward mechanism. To hide a secret image, a color carrier image and a grayscale secret image are input into a trained image hiding network to generate a stego (secret-containing) image; to extract the secret image, the stego image is input into the trained image hiding network to obtain a reconstructed secret image. Training the image hiding network comprises: constructing the image hiding network from an image cascade hiding network, a reconstruction network based on the U-Net++ structure, and a referee network; constructing the total loss function of the image hiding network from the loss function of the image cascade hiding network, the loss function of the reconstruction network based on the U-Net++ structure, the judgment result of the referee network, and the pixel-level reward matrix; and optimizing the image hiding network to minimize the total loss function, yielding the trained image hiding network. Advantages: the invention offers high concealment, high security, and high computational efficiency.
Description
Technical Field
The invention relates to an image hiding method and system based on a pixel-level reward mechanism, and belongs to the technical field of image processing.
Background
Image hiding is a technique in which a secret image is hidden inside a cover image by an embedding algorithm and later recovered by the receiver with an extraction algorithm. Unlike cryptography, image hiding not only protects the secret information itself but also strengthens its security during transmission. In recent years, image hiding has been applied in many fields such as covert data communication and copyright protection.
In 2017, Baluja published at NIPS the first deep-learning algorithm for hiding images in images [Baluja S. Hiding images in plain sight: Deep steganography. In Proceedings of Neural Information Processing Systems. Cambridge: MIT Press, 2017: 2069-2079]; since then, deep-learning steganography models for image hiding have emerged in large numbers. A good image hiding technique must solve two major problems, concealment and security: the stego image should be detectable neither by the human eye nor by steganalysis models. The prior art, however, struggles to balance the two. In addition, reducing the computational complexity of the model is another important problem in the field of image hiding.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of the prior art and provide an image hiding method and system based on a pixel-level reward mechanism with high hiding capacity and high concealment.
To solve the above technical problem, the invention provides an image hiding method based on a pixel-level reward mechanism, comprising:
when hiding a secret image, inputting a color carrier image and a grayscale secret image into a trained image hiding network to generate a stego image;
when extracting the secret image, inputting the stego image into the trained image hiding network to obtain a reconstructed secret image;
the training of the image hiding network comprises the following steps:
constructing the image hiding network from an image cascade hiding network, a reconstruction network based on the U-Net++ structure, and a referee network, wherein the image cascade hiding network is used to generate a stego image from a color carrier image and a grayscale secret image; the reconstruction network based on the U-Net++ structure is used to reconstruct the stego image into a reconstructed secret image; and the referee network is used, during training, to judge the stego image output by the image cascade hiding network, producing a judgment result and a pixel-level reward matrix;
constructing the total loss function of the image hiding network from the loss function of the image cascade hiding network, the loss function of the reconstruction network based on the U-Net++ structure, the judgment result of the referee network, and the pixel-level reward matrix;
and optimizing the image hiding network to minimize the total loss function, completing training when the loss has decreased and remains stable, to obtain the trained image hiding network; the weights of the referee network are kept fixed during the optimization.
Further, inputting the color carrier image and the grayscale secret image into the trained image hiding network comprises:
preprocessing the carrier image/secret image pair with a resolution diversification operation to obtain three carrier image/secret image pairs at different resolutions: original size, half size, and quarter size;
inputting the three carrier image/secret image pairs of different resolutions into the image hiding network.
Further, the image cascade hiding network comprises: a low-resolution semantic branch, a medium-resolution detail branch, a high-resolution detail branch, a cascade feature first fusion module, a cascade feature second fusion module, and an up-sampling operation group;
The input of the low resolution semantic branch is a quarter-sized carrier image/secret image pair;
the input of the medium resolution detail branch is a carrier image/secret image pair of half size;
the input of the high-resolution detail branch is a carrier image/secret image pair with the original size;
the outputs of the low-resolution semantic branch and the medium-resolution detail branch serve as the inputs of the cascade feature first fusion module;
the output of the high-resolution detail branch is used as the input of the cascade feature second fusion module; the output of the cascade characteristic second fusion module is connected with the up-sampling operation group;
the output of the up-sampling operation group is the stego image.
Further, the low-resolution semantic branch comprises: a carrier branch hiding probability guidance module, carrier branch first through fifth convolution operation groups, secret branch first through fifth convolution operation groups, and deconvolution first through third operation groups;
The input of the carrier branch hiding probability guiding module is a carrier image with a quarter size;
the input of the first convolution operation group of the carrier branch is the output of the carrier branch hiding probability guiding module;
the input of the second convolution operation group of the carrier branch is the output of the first convolution operation group of the carrier branch;
the input of the third convolution operation group of the carrier branch is the output of the second convolution operation group of the carrier branch;
the input of the carrier branch fourth convolution operation group is the output of the carrier branch third convolution operation group;
the input of the fifth convolution operation group of the carrier branch is the output of the fourth convolution operation group of the carrier branch;
the input of the first convolution operation group of the secret branch is a secret image with a quarter size;
the input of the second convolution operation group of the secret branch is the output of the first convolution operation group of the secret branch;
the input of the third convolution operation group of the secret branch is the output of the second convolution operation group of the secret branch;
the input of the fourth convolution operation group of the secret branch is the output of the third convolution operation group of the secret branch;
the input of the secret branch fifth convolution operation group is the output of the secret branch fourth convolution operation group;
The outputs of the carrier branch fifth convolution operation group and the secret branch fifth convolution operation group are input into the deconvolution first operation group after being subjected to channel splicing;
the outputs of the carrier branch fourth convolution operation group, the secret branch fourth convolution operation group and the deconvolution first operation group are input into the deconvolution second operation group after being subjected to channel splicing to form a jump structure,
the outputs of the carrier branch third convolution operation group, the secret branch third convolution operation group and the deconvolution second operation group are input into the deconvolution third operation group after being subjected to channel splicing to form a jump structure;
A convolution operation group comprises a convolution layer, an activation layer, and a batch normalization layer arranged in sequence; a deconvolution operation group comprises a deconvolution layer, an activation layer, and a batch normalization layer arranged in sequence.
Further, the carrier branch hiding probability guidance module is based on the gray level co-occurrence matrix:
p(m, n, d, θ) = count{ ((k, a), (l, b)) ∈ (N_x × N_y) × (N_x × N_y) | I(k, a) = m, I(l, b) = n, the pixel pair is separated by distance d at angle θ }
where p(m, n, d, θ) is the gray level co-occurrence matrix of the quarter-size carrier image I; m and n are two different gray levels; d is the distance between the two pixels; θ is the angle between the two pixels; count{·} denotes the total number of elements in the set; (k, a) and (l, b) are two pixels of the quarter-size carrier image; N_x and N_y are the width and height of the quarter-size carrier image; x_t is the entropy image of the quarter-size carrier image; BN(·) is a batch normalization operation; σ_1(·) is the ReLU activation function; F_1(·) is a 3×3 convolution transform; ⊗ denotes pixel-wise multiplication; ⊕ denotes pixel-wise addition; and x_e is the output of the carrier branch hiding probability guidance module.
Further, the medium-resolution detail branch comprises: a carrier branch first convolution operation group, a carrier branch second convolution operation group, a secret branch first convolution operation group, a secret branch second convolution operation group, and a channel splicing operation;
the input of the first convolution operation group of the carrier branch is a carrier image with half size;
the input of the second convolution operation group of the carrier branch is the output of the first convolution operation group of the carrier branch;
the input of the first convolution operation group of the secret branch is a secret image with half size;
the input of the second convolution operation group of the secret branch is the output of the first convolution operation group of the secret branch;
the outputs of the carrier branch second convolution operation group and the secret branch second convolution operation group are the inputs of channel splicing operation;
A convolution operation group comprises a convolution layer, an activation layer, and a batch normalization layer arranged in sequence; a deconvolution operation group comprises a deconvolution layer, an activation layer, and a batch normalization layer arranged in sequence.
Further, the high resolution detail branch includes: a carrier branch first convolution operation group, a carrier branch second convolution operation group, a secret branch first convolution operation group, a secret branch second convolution operation group and a channel splicing operation;
the input of the first convolution operation group of the carrier branch is a carrier image with the original size;
the input of the second convolution operation group of the carrier branch is the output of the first convolution operation group of the carrier branch;
the input of the first convolution operation group of the secret branch is a secret image with the original size;
the input of the second convolution operation group of the secret branch is the output of the first convolution operation group of the secret branch;
the outputs of the carrier branch second convolution operation group and the secret branch second convolution operation group are the inputs of channel splicing operation;
A convolution operation group comprises a convolution layer, an activation layer, and a batch normalization layer arranged in sequence; a deconvolution operation group comprises a deconvolution layer, an activation layer, and a batch normalization layer arranged in sequence.
Further, the cascade feature first fusion module is expressed as follows, where f_1 is the output of the low-resolution semantic branch, f_2 is the output of the medium-resolution detail branch, Up(·) is a 2×2 upsampling operation, F_2(·) is a 1×1 convolution transform, σ_2(·) is the LeakyReLU activation function, and f_3 is the output of the cascade feature first fusion module.
Further, the cascade feature second fusion module is expressed as follows, where f_4 is the output of the high-resolution detail branch and f_5 is the output of the cascade feature second fusion module.
Further, the up-sampling operation group is expressed as:
c' = σ_3(F_3(BN(σ_1(F_3(BN(σ_1(F_3(f_5))))))))
where F_3(·) is a 4×4 deconvolution transform, σ_3(·) is the Tanh activation function, and c' is the stego image.
Further, the reconstruction network based on the U-Net++ structure is a skip-connection structure comprising: first through tenth convolution operation groups and first through sixth deconvolution operation groups;
the outputs of the first deconvolution operation group and the first convolution operation group are channel-spliced and input into the fifth convolution operation group;
the outputs of the second deconvolution operation group and the second convolution operation group are channel-spliced and input into the sixth convolution operation group;
the outputs of the third deconvolution operation group, the first convolution operation group, and the fifth convolution operation group are channel-spliced and input into the seventh convolution operation group, forming a skip-connection structure;
the outputs of the fourth deconvolution operation group and the third convolution operation group are channel-spliced and input into the eighth convolution operation group;
the outputs of the fifth deconvolution operation group, the second convolution operation group, and the sixth convolution operation group are channel-spliced and input into the ninth convolution operation group, forming a skip-connection structure;
the outputs of the sixth deconvolution operation group, the first convolution operation group, the fifth convolution operation group, and the seventh convolution operation group are channel-spliced and input into the tenth convolution operation group, forming a skip-connection structure;
the output of the tenth convolution operation group is the reconstructed secret image;
a convolution operation group comprises a convolution layer, an activation layer, and a batch normalization layer arranged in sequence; a deconvolution operation group comprises a deconvolution layer, an activation layer, and a batch normalization layer arranged in sequence.
Further, the referee network is an XuNet steganalysis network obtained by the following training steps:
collecting a number of image samples and determining a label for each sample image, the label being the probability that the sample image contains a secret image;
training the XuNet steganalysis network with each image sample as input and its label as output, to obtain the referee network.
Further, the pixel-level reward matrix is expressed as follows, where σ_1(·) is the ReLU activation function; F_k is the kth channel of the feature map output by the last convolution layer of the referee network; α_k is the weight of the kth feature map; R_ij is the pixel-level reward matrix, with i and j the row and column positions of an image pixel; H' and W' are the height and width of the kth feature map; z' is the prediction result of the referee network; and F_k^{ij} denotes the value of F_k at the (i, j)th pixel.
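The symbols listed above (channel weights α_k averaged over an H'×W' feature map, a ReLU over a weighted channel sum) suggest a Grad-CAM-style computation; assuming that form — our reading, not something stated explicitly here — a minimal sketch:

```python
# Hedged sketch of a pixel-level reward matrix, assuming the Grad-CAM-style
# form suggested by the symbol list: alpha_k is the spatial mean of the
# gradient of the referee prediction z' over the kth feature map, and
# R = ReLU(sum_k alpha_k * F_k). Inputs are plain nested lists here; in the
# patent they would come from the referee network's last convolution layer.

def reward_matrix(feature_maps, gradients):
    """feature_maps, gradients: lists of K maps, each H' x W' (nested lists)."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    # alpha_k: gradient averaged over the H' x W' spatial positions
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    # R_ij = ReLU(sum_k alpha_k * F_k[i][j])
    return [
        [max(0.0, sum(a * f[i][j] for a, f in zip(alphas, feature_maps)))
         for j in range(w)]
        for i in range(h)
    ]
```

Positive rewards then highlight pixels the referee associates with its prediction, which is consistent with guiding where the secret image is hidden.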
Further, the total loss function L is:
L = βL_c + ηL_s
where L_c is the image cascade hiding network loss, L_s is the reconstruction network loss based on the U-Net++ structure, and β and η are weights controlling the two losses;
the image cascade hiding network loss L_c is calculated as:
L_c = μL_v + ωL_a
where L_v denotes the quality loss, measuring the visual quality of the stego image; L_a denotes the security loss, measuring the security of the stego image; μ and ω are weights controlling the quality loss and the security loss; c is a carrier image pixel; L is the total number of image pixels; c' is a stego image pixel; μ_c and μ_c' are the means of c and c', representing the brightness of the carrier image and the stego image; K_1 is a constant not greater than 1; M is a custom scale; σ_c and σ_c' are the standard deviations of c and c', representing the contrast of the carrier image and the stego image; σ_cc' is the covariance of c and c', representing the structural similarity between the carrier image and the stego image; K_2 is a constant; G is a Gaussian filter parameter; α and γ are hyper-parameters controlling weights; H and W are the height and width of the image; z denotes the true label of the image; and z' denotes the prediction of the referee network;
the reconstruction network loss L_s based on the U-Net++ structure is calculated as follows, where s is a secret image pixel, s' is a reconstructed secret image pixel, μ_s and μ_s' are the means of s and s', representing the brightness of the secret image and the reconstructed secret image, σ_s and σ_s' are the standard deviations of s and s', representing their contrast, and σ_ss' is the covariance of s and s', representing the structural similarity of the secret image and the reconstructed secret image.
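The brightness/contrast/structure statistics underlying L_s can be illustrated as below; treating each image as a single global window with the common SSIM constants C1 = (0.01·255)² and C2 = (0.03·255)² is our simplification, and the patent's Gaussian windowing (G) and scale M are not reproduced:

```python
# Hedged sketch of the SSIM statistics behind the reconstruction loss L_s:
# means (brightness), variances (contrast), and covariance (structure) of the
# secret image s and the reconstructed secret image s'. Images are flattened
# into plain lists of pixel values; the global-window form and the constants
# are our assumptions, not the patent's exact windowed formulation.

def ssim_global(s, s_rec):
    n = len(s)
    mu_s = sum(s) / n                                  # brightness of s
    mu_r = sum(s_rec) / n                              # brightness of s'
    var_s = sum((x - mu_s) ** 2 for x in s) / n        # contrast of s
    var_r = sum((x - mu_r) ** 2 for x in s_rec) / n    # contrast of s'
    cov = sum((x - mu_s) * (y - mu_r)                  # structure term
              for x, y in zip(s, s_rec)) / n
    c1, c2 = (0.01 * 255) ** 2, (0.03 * 255) ** 2      # stability constants
    return ((2 * mu_s * mu_r + c1) * (2 * cov + c2)) / (
        (mu_s ** 2 + mu_r ** 2 + c1) * (var_s + var_r + c2))

# A loss of the usual form would then be L_s = 1 - ssim_global(s, s_rec).
```

Identical images give an SSIM of 1 (zero loss); a poor reconstruction pushes the value down and the loss up.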
An image hiding system based on a pixel-level reward mechanism, comprising:
a generation unit for inputting, when hiding a secret image, a color carrier image and a grayscale secret image into a trained image hiding network to generate a stego image;
a reconstruction unit for inputting, when extracting the secret image, the stego image into the trained image hiding network to obtain a reconstructed secret image;
the training of the image hiding network comprises the following steps:
constructing the image hiding network from an image cascade hiding network, a reconstruction network based on the U-Net++ structure, and a referee network, wherein the image cascade hiding network is used to generate a stego image from a color carrier image and a grayscale secret image; the reconstruction network based on the U-Net++ structure is used to reconstruct the stego image into a reconstructed secret image; and the referee network is used, during training, to judge the stego image output by the image cascade hiding network, producing a judgment result and a pixel-level reward matrix;
constructing the total loss function of the image hiding network from the loss function of the image cascade hiding network, the loss function of the reconstruction network based on the U-Net++ structure, the judgment result of the referee network, and the pixel-level reward matrix;
and optimizing the image hiding network to minimize the total loss function, completing training when the loss has decreased and remains stable, to obtain the trained image hiding network; the weights of the referee network are kept fixed during the optimization.
The invention has the following beneficial effects:
the invention designs an image cascade hiding network, which improves the visual quality of the stego image and reduces computational complexity;
a hiding probability guidance module is designed that uses prior knowledge to guide the hiding of the secret information, improving the visual quality of the stego image while reducing the exploration cost of the neural network;
and a pixel-level reward mechanism is designed that provides rewards in real time according to the hiding effect of the stego image, guiding the secret image to be hidden in safer image regions and improving the security of the stego image.
Drawings
FIG. 1 is a flow diagram of an image hiding method based on a pixel-level reward mechanism according to an embodiment;
FIG. 2 is a schematic diagram of an image cascade hidden network architecture of an embodiment;
FIG. 3 is a network architecture diagram of low resolution semantic branching in one embodiment;
FIG. 4 is a network architecture diagram of a medium resolution detail branch in one embodiment;
FIG. 5 is a network architecture diagram of a high resolution detail branch in one embodiment;
FIG. 6 is a schematic diagram of a reconstruction network based on a U-Net++ architecture for one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art will appreciate, both explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
The principle of application of the invention is described in detail below with reference to the accompanying drawings.
Example 1
The embodiment of the invention provides an image hiding method based on a pixel-level reward mechanism, as shown in FIG. 1, which specifically comprises:
S10, inputting a color carrier image and a grayscale secret image into the image cascade hiding network to generate a stego image;
in the specific implementation, the carrier image/secret image pair is preprocessed with a resolution diversification operation to obtain three carrier image/secret image pairs at different resolutions: original size, half size, and quarter size. The three pairs are then input into the image cascade hiding network.
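The resolution diversification step can be sketched as follows; this is an illustrative sketch only, not the patent's implementation — the function names `downsample_2x` and `resolution_diversify` are ours, images are plain nested lists, and 2×2 average pooling stands in for whatever resampling operation is actually used:

```python
# Illustrative sketch (assumptions ours): resolution diversification producing
# original-, half-, and quarter-size versions of an image via 2x2 average
# pooling. A real implementation would resample tensors (e.g. bilinearly).

def downsample_2x(img):
    """Average-pool an H x W grayscale image (nested lists) by a factor of 2."""
    h, w = len(img), len(img[0])
    return [
        [
            (img[2 * i][2 * j] + img[2 * i][2 * j + 1]
             + img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 4.0
            for j in range(w // 2)
        ]
        for i in range(h // 2)
    ]

def resolution_diversify(carrier, secret):
    """Return the three carrier/secret pairs: original, half, quarter size."""
    c_half, s_half = downsample_2x(carrier), downsample_2x(secret)
    c_quarter, s_quarter = downsample_2x(c_half), downsample_2x(s_half)
    return [(carrier, secret), (c_half, s_half), (c_quarter, s_quarter)]
```

The three returned pairs correspond to the inputs of the high-, medium-, and low-resolution branches respectively.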
In a specific implementation of the embodiment of the present invention, as shown in FIG. 2, the image cascade hiding network comprises: a low-resolution semantic branch LSB, a medium-resolution detail branch MDB, a high-resolution detail branch HDB, a cascade feature first fusion module CFF_1, a cascade feature second fusion module CFF_2, and an up-sampling operation group UPG;
the input of the low resolution semantic branch is a quarter-sized carrier image/secret image pair;
the input of the medium resolution detail branch is a carrier image/secret image pair of half size;
The input of the high-resolution detail branch is a carrier image/secret image pair with the original size;
the output of the low-resolution semantic branch and the middle-resolution detail branch is used as the input of a first fusion module of the cascade characteristics;
the output of the high-resolution detail branch is used as the input of the cascade feature second fusion module; the output of the cascade characteristic second fusion module is connected with the up-sampling operation group;
the output of the up-sampling operation group is the stego image.
As shown in FIG. 3, the low-resolution semantic branch comprises a carrier branch hiding probability guidance module HPGM, carrier branch first through fifth convolution operation groups L_C_Conv_1 to L_C_Conv_5, secret branch first through fifth convolution operation groups L_S_Conv_1 to L_S_Conv_5, and deconvolution first through third operation groups L_ConvT_1 to L_ConvT_3;
The outputs of the carrier branch fifth convolution operation group and the secret branch fifth convolution operation group are input into the deconvolution first operation group after being subjected to channel splicing;
the outputs of the carrier branch fourth convolution operation group, the secret branch fourth convolution operation group and the deconvolution first operation group are input into the deconvolution second operation group after being subjected to channel splicing to form a jump structure,
the outputs of the third convolution operation group of the carrier branch, the third convolution operation group of the secret branch and the second operation group of the deconvolution are input into the third operation group of the deconvolution after being subjected to channel splicing to form a jump structure;
one convolution operation group comprises a convolution layer Conv, an activation layer LeakyReLU and a batch standardization layer BN which are sequentially arranged; one deconvolution operation group includes a deconvolution layer ConvT, an activation layer LeakyReLU, and a batch normalization layer BN, which are sequentially arranged.
The carrier branch hiding probability guidance module is based on the gray level co-occurrence matrix:
p(m, n, d, θ) = count{ ((k, a), (l, b)) ∈ (N_x × N_y) × (N_x × N_y) | I(k, a) = m, I(l, b) = n, the pixel pair is separated by distance d at angle θ }
where p(m, n, d, θ) is the gray level co-occurrence matrix of the quarter-size carrier image I; m and n are two different gray levels; d is the distance between the two pixels; θ is the angle between the two pixels; count{·} denotes the total number of elements in the set; (k, a) and (l, b) are two pixels of the quarter-size carrier image; N_x and N_y are the width and height of the quarter-size carrier image; x_t is the entropy image of the quarter-size carrier image; BN(·) is a batch normalization operation; σ_1(·) is the ReLU activation function; F_1(·) is a 3×3 convolution transform; ⊗ denotes pixel-wise multiplication; ⊕ denotes pixel-wise addition; and x_e is the output of the carrier branch hiding probability guidance module.
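As a hedged illustration of the gray level co-occurrence matrix p(m, n, d, θ) defined above, the sketch below counts co-occurrences for the simplest offset, d = 1 at θ = 0° (each pixel paired with its right-hand neighbor); the function name and the list-of-lists image format are ours:

```python
# Hypothetical sketch of the gray level co-occurrence matrix used by the
# hiding probability guidance module, for the horizontal case d = 1,
# theta = 0 degrees. p[m][n] counts pixel pairs where a gray level m is
# immediately left of a gray level n.

def glcm_horizontal(img, levels):
    """Count co-occurrences of gray levels (m, n) at offset (0, +1)."""
    p = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):  # each pixel with its right neighbor
            p[a][b] += 1
    return p

img = [
    [0, 0, 1],
    [1, 2, 2],
    [0, 1, 2],
]
p = glcm_horizontal(img, levels=3)
```

Regions with a spread-out co-occurrence matrix (high entropy) are texture-rich, which is what the module uses to steer where hiding is safest.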
As shown in fig. 4, the medium resolution detail branch includes a carrier branch first convolution operation group M_C_Conv_1, a carrier branch second convolution operation group M_C_Conv_2, a secret branch first convolution operation group M_S_Conv_1, a secret branch second convolution operation group M_S_Conv_2, and a channel splicing operation Concat;
the outputs of the carrier branch second convolution operation group and the secret branch second convolution operation group are the inputs of the channel splicing operation.
As shown in fig. 5, the high resolution detail branch includes a carrier branch first convolution operation group H_C_Conv_1, a carrier branch second convolution operation group H_C_Conv_2, a secret branch first convolution operation group H_S_Conv_1, a secret branch second convolution operation group H_S_Conv_2, and a channel splicing operation Concat;
the outputs of the carrier branch second convolution operation group and the secret branch second convolution operation group are the inputs of the channel splicing operation.
The cascade feature first fusion module is expressed as:
wherein f_1 is the output of the low resolution semantic branch, f_2 is the output of the medium resolution detail branch, Up(·) is a 2×2 upsampling operation, F_2(·) is a 1×1 convolution transform function, σ_2(·) is the LeakyReLU activation function, and f_3 is the output of the cascade feature first fusion module.
The cascade feature second fusion module is expressed as:
wherein f_4 is the output of the high resolution detail branch, and f_5 is the output of the cascade feature second fusion module.
The set of upsampling operations is represented as:
c' = σ_3(F_3(BN(σ_1(F_3(BN(σ_1(F_3(f_5))))))))
wherein F_3(·) is a 4×4 deconvolution transform function, σ_3(·) is the Tanh activation function, and c' is the secret-containing image.
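The order of operations in this formula can be mirrored with scalar stand-ins. In the hedged sketch below, `deconv` and `bn` are toy placeholders for the learned 4×4 deconvolution F_3 and batch normalization, not the patent's actual layers; the point is the composition order, and that the final Tanh squashes the output into (−1, 1), the range of normalized pixel values:

```python
import math

def relu(v):
    """sigma_1: ReLU activation."""
    return max(0.0, v)

def upsample_group(f5, deconv, bn):
    """Mirrors c' = Tanh(F3(BN(ReLU(F3(BN(ReLU(F3(f5)))))))):
    three deconvolution stages, the first two followed by ReLU and BN,
    the last one by Tanh (sigma_3)."""
    x = bn(relu(deconv(f5)))
    x = bn(relu(deconv(x)))
    return math.tanh(deconv(x))

# toy stand-ins: a linear "deconvolution" and an identity "batch norm"
out = upsample_group(0.7, deconv=lambda v: 0.9 * v + 0.1, bn=lambda v: v)
assert -1.0 < out < 1.0  # Tanh output fits a normalized pixel range
```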
It can be seen that the image cascade network takes cascade image pairs of different resolutions as input. The low resolution image pair (1/4 size) is fed into a complex semantic network to extract coarse-grained semantic information, while the medium and high resolution image pairs (1/2 size and original size) are fed into lightweight detail networks to refine fine-grained detail information. Although the semantic network consists of complex up-sampling and down-sampling layers, its input resolution is very low, so abundant image information can be extracted at low computational complexity, improving the visual quality of the secret-containing image. In addition, the hiding probability guidance module adaptively enhances the semantic information of high-frequency regions of the carrier image, constrains the hiding of the secret image to complex-texture parts, provides a correct direction for hiding the secret image, and effectively reduces the exploration cost of the hiding network.
S20, inputting the secret-containing image into a reconstruction network based on a U-Net++ structure to obtain a reconstructed secret image;
in a specific implementation manner of the embodiment of the present invention, the reconstruction network based on the U-Net++ structure is a jump connection structure, as shown in fig. 6, including: convolution operation group 1 Conv_1, convolution operation group 2 Conv_2, convolution operation group 3 Conv_3, convolution operation group 4 Conv_4, deconvolution operation group 1 ConvT_1, convolution operation group 5 Conv_5, deconvolution operation group 2 ConvT_2, convolution operation group 6 Conv_6, deconvolution operation group 3 ConvT_3, convolution operation group 7 Conv_7, deconvolution operation group 4 ConvT_4, convolution operation group 8 Conv_8, deconvolution operation group 5 ConvT_5, convolution operation group 9 Conv_9, deconvolution operation group 6 ConvT_6, and convolution operation group 10 Conv_10;
the outputs of the deconvolution operation group 1 and the convolution operation group 1 are input into the convolution operation group 5 after channel splicing;
the outputs of the deconvolution operation group 2 and the convolution operation group 2 are input into the convolution operation group 6 after channel splicing;
the outputs of the deconvolution operation group 3, the convolution operation group 1 and the convolution operation group 5 are input into the convolution operation group 7 after being subjected to channel splicing to form a jump connection structure;
The outputs of the deconvolution operation group 4 and the convolution operation group 3 are input into the convolution operation group 8 after being subjected to channel splicing;
the outputs of the deconvolution operation group 5, the convolution operation group 2 and the convolution operation group 6 are input into the convolution operation group 9 after being subjected to channel splicing to form a jump connection structure;
the outputs of the deconvolution operation group 6, the convolution operation group 1, the convolution operation group 5 and the convolution operation group 7 are input into the convolution operation group 10 after being subjected to channel splicing to form a jump connection structure;
the output of the convolution operation set 10 is a reconstructed secret image;
one convolution operation group comprises a convolution layer Conv, an activation layer LeakyReLU and a batch standardization layer BN which are sequentially arranged; one deconvolution operation group includes a deconvolution layer ConvT, an activation layer LeakyReLU or Tanh, and a batch normalization layer BN, which are sequentially arranged.
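The channel splicing (Concat) that forms these jump connections simply stacks feature maps along the channel dimension, so a convolution group behind a jump connection sees the combined channels of every branch feeding it. A minimal sketch (the 16-channel widths below are illustrative assumptions, not the patent's actual layer sizes):

```python
def concat(*features):
    """Channel splicing: stack feature tensors along the channel axis.
    Each feature is modelled here as a list of per-channel maps."""
    return [ch for f in features for ch in f]

# illustrative 16-channel outputs of three operation groups
conv1_out  = [("conv1", k) for k in range(16)]   # convolution group 1
conv5_out  = [("conv5", k) for k in range(16)]   # convolution group 5
convt3_out = [("convt3", k) for k in range(16)]  # deconvolution group 3

# jump connection feeding convolution group 7: 48 channels in total
conv7_in = concat(convt3_out, conv1_out, conv5_out)
```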
And S30, judging the secret-containing image by adopting a pre-trained referee network to obtain a discrimination result and a pixel-level reward matrix; the discrimination result comprises the secret-containing image being identified as a carrier image or as a secret-containing image;
after the secret-containing image is obtained, in order to improve its security, it is judged whether the embedding position of the secret image is reasonable. The steganalyzer XuNet is selected as the referee network, since it can effectively distinguish secret-containing images from carrier images. In a specific implementation, the referee network may be a XuNet network trained on secret-containing images generated by different spatial-domain image hiding algorithms, for example Baluja, ISGAN and UDH; the secret-containing image is input into the referee network for discrimination, which outputs a classification result, i.e., the secret-containing image is identified as a carrier image or as a secret-containing image, together with a pixel-level reward matrix.
In a specific implementation manner of the embodiment of the present invention, the referee network is obtained through the following training steps:
collecting a plurality of image samples, and determining the label of each sample image, the label being the probability that the corresponding sample image contains a secret image;
and taking each image sample as input and the label of each image sample as output, training XuNet to obtain the referee network.
The pixel-level reward matrix is expressed as:
wherein F_k is the kth channel of the feature map output by the last convolution layer of the referee network, α_k is the weight of the kth feature map, R_ij is the pixel-level reward matrix, i and j represent the row and column positions of image pixels, H' and W' are the height and width of the kth feature map, and z' is the prediction result of the referee network.
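The reward matrix is class-activation-style: a weighted sum of the referee's last-layer feature maps passed through a ReLU, so only pixels giving positive evidence of steganography earn a reward. A hedged plain-Python sketch (here the channel weights α_k are supplied as plain numbers, whereas the patent derives them from the gradient of the prediction z' averaged over the H'×W' map):

```python
def reward_matrix(feature_maps, alphas):
    """R_ij = ReLU(sum_k alpha_k * F_k[i][j]).
    feature_maps: K maps of size H' x W'; alphas: K channel weights."""
    hp, wp = len(feature_maps[0]), len(feature_maps[0][0])
    R = [[0.0] * wp for _ in range(hp)]
    for i in range(hp):
        for j in range(wp):
            s = sum(a * F[i][j] for a, F in zip(alphas, feature_maps))
            R[i][j] = max(0.0, s)  # ReLU keeps only positive evidence
    return R

F1 = [[1.0, -2.0], [0.5, 0.0]]
F2 = [[0.0, 1.0], [1.0, -1.0]]
R = reward_matrix([F1, F2], alphas=[0.5, 1.0])  # R[1][0] == 1.25
```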
S40, designing a mixed loss function according to the quality of the secret-containing image, the quality of the reconstructed secret image, the referee network discrimination result and the pixel-level reward matrix; taking the mixed loss function as the total loss function of the image hiding network; and optimizing the image hiding network with the goal of minimizing the total loss function, considering training finished when the loss decreases and remains stable. The image hiding network comprises the image cascade hiding network, the reconstruction network based on the U-Net++ structure and the referee network; during the optimization process, the weights of the referee network are fixed.
In a specific implementation manner of the embodiment of the present invention, the total loss function of the graph hiding network is:
L = βL_c + ηL_s
wherein L_c is the image cascade hiding network loss, L_s is the reconstruction network loss based on the U-Net++ structure, and β and η are weights controlling the image cascade hiding network loss and the reconstruction network loss based on the U-Net++ structure.
The image cascade hiding network loss L_c is calculated as:
L_c = μL_v + ωL_a
wherein L_v represents the quality loss, used to measure the visual quality of the secret-containing image; L_a represents the security loss, used to measure the security of the secret-containing image; μ and ω are weights controlling the quality loss and the security loss; c is a carrier image pixel, c = {c_i | i = 1, 2, ..., L}, where L is the total number of image pixels; c' is a secret-containing image pixel, c' = {c_i' | i = 1, 2, ..., L}; μ_c and μ_c' are the means of c and c', respectively, also representing the brightness of the carrier image and the secret-containing image; K_1 is a constant not greater than 1; M is a custom scale, with value 5; σ_c and σ_c' are the standard deviations of c and c', respectively, also representing the contrast of the carrier image and the secret-containing image; σ_cc' is the covariance of c and c', also representing the structural similarity of the carrier image and the secret-containing image; K_2 is a constant not greater than 1; g is a Gaussian filter parameter; α and γ are hyperparameters controlling weights; H and W are the height and width of the image; z represents the true label of the image, and z' represents the predicted value of the referee network.
The reconstruction network loss L_s based on the U-Net++ structure is calculated as:
where s is a secret image pixel and s' is a reconstructed secret image pixel; μ_s and μ_s' are the means of s and s', respectively, also representing the brightness of the secret image and the reconstructed secret image; K_1 is a constant not greater than 1; M is a custom scale, with value 5; σ_s and σ_s' are the standard deviations of s and s', respectively, also representing the contrast of the secret image and the reconstructed secret image; σ_ss' is the covariance of s and s', also representing the structural similarity of the secret image and the reconstructed secret image; K_2 is a constant not greater than 1; and g is a Gaussian filter parameter.
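Both L_v and L_s are built from the same SSIM-style statistics: means (brightness), standard deviations (contrast) and covariance (structure). The sketch below is a simplified single-window SSIM in plain Python; the patent applies Gaussian filtering g and a multi-scale (M = 5) variant, which are omitted here, and the constants K_1 and K_2 follow common defaults as an assumption:

```python
def ssim_global(x, y, K1=0.01, K2=0.03, L=1.0):
    """Single-window SSIM over flat pixel lists x and y:
    combines luminance (mu), contrast (sigma) and structure (covariance)."""
    n = len(x)
    mu_x, mu_y = sum(x) / n, sum(y) / n
    var_x = sum((a - mu_x) ** 2 for a in x) / n
    var_y = sum((b - mu_y) ** 2 for b in y) / n
    cov = sum((a - mu_x) * (b - mu_y) for a, b in zip(x, y)) / n
    c1, c2 = (K1 * L) ** 2, (K2 * L) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

s = [0.2, 0.4, 0.6, 0.8]         # toy secret image pixels
s_perfect = [0.2, 0.4, 0.6, 0.8] # identical reconstruction -> SSIM of 1
s_noisy   = [0.25, 0.35, 0.6, 0.8]
```

A loss of the form 1 − SSIM(s, s') then decreases as the reconstruction improves; the same statistics measure the visual quality of the secret-containing image against the carrier in L_v.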
In practical application, a color carrier image and a grayscale secret image are input into the trained image cascade hiding network to obtain a secret-containing image; the secret-containing image is then input into the trained reconstruction network based on the U-Net++ structure, and the secret image hidden in the secret-containing image is extracted.
In summary, the image hiding method based on a pixel-level reward mechanism in the embodiment of the invention inputs a color carrier image and a grayscale secret image into the image cascade hiding network to generate a secret-containing image; inputs the secret-containing image into the reconstruction network based on the U-Net++ structure to obtain a reconstructed secret image; and adopts a pre-trained referee network to judge whether the secret image is well hidden in the secret-containing image, distributing pixel-level rewards according to the hiding effect, so that the method achieves high imperceptibility, high security and high computational efficiency.
To verify the effect of the present invention, the proposed image hiding model is first trained on the public dataset PASCAL-VOC2012 and tested on the public dataset LFW. The experimental results in terms of image quality are shown in Table 1, wherein the Baluja model [Shumeet Baluja. Hiding images in plain sight: Deep steganography. In NIPS, pages 2069-2079, 2017] is the first image-in-image hiding model, the UDH model [Chaoning Zhang, Philipp Benz, Adil Karjauv, Geng Sun and In So Kweon. UDH: Universal deep hiding for steganography, watermarking, and light field messaging. In NeurIPS, pages 10223-10234, 2020] is the first image hiding model independent of the texture of the carrier image, and the ISGAN model [Ru Zhang, Shiqi Dong and Jianyi Liu. Invisible steganography via generative adversarial networks. Multimedia Tools and Applications, 78(7):8559-8575, 2019] is the currently best-performing model at embedding a grayscale secret image, generating a secret-containing image and reconstructing the secret image. The experimental results in terms of security are shown in Table 2, where XuNet [Guanshuo Xu, Han-Zhou Wu and Yun-Qing Shi. Structural design of convolutional neural networks for steganalysis. IEEE Signal Processing Letters, 23(5):708-712, 2016], YeNet [Jian Ye, Jiangqun Ni and Yang Yi. Deep learning hierarchical representations for image steganalysis. IEEE Transactions on Information Forensics and Security, 12(11):2545-2557, 2017] and Yedroudj-Net [Mehdi Yedroudj, Frédéric Comby and Marc Chaumont. Yedroudj-Net: An efficient CNN for spatial steganalysis. In ICASSP, pages 2092-2096, 2018] are three currently well-performing steganalysis models. The experimental results in terms of computational complexity are shown in Table 3.
TABLE 1
TABLE 2
TABLE 3
| Image hiding model | Floating point operations (×10^6) ↓ |
| --- | --- |
| Baluja | 29125.51 |
| UDH | 10976.46 |
| ISGAN | 54084.83 |
| The invention | 1594.06 |
Example 2
Based on the same inventive concept as embodiment 1, an embodiment of the present invention provides an image hiding system based on a pixel-level reward mechanism, including:
the generation unit is used for inputting a carrier image and a secret image into the image cascade hiding network to obtain a secret-containing image with the secret image hidden;
the reconstruction unit is used for inputting the secret image into a reconstruction network based on a U-Net++ structure to obtain a reconstructed secret image;
the judging unit is used for judging the secret-containing image by adopting a pre-trained referee network to obtain a discrimination result and a pixel-level reward matrix, wherein the discrimination result comprises the secret-containing image being identified as a carrier image or as a secret-containing image;
the training unit is used for designing a mixed loss function according to the quality of the secret-containing image, the quality of the reconstructed secret image, the referee network discrimination result and the pixel-level reward matrix, taking the mixed loss function as the total loss function of the image hiding network, optimizing the image hiding network with the goal of minimizing the total loss function, and considering training finished when the loss decreases and remains stable; the image hiding network comprises the image cascade hiding network, the reconstruction network based on the U-Net++ structure and the referee network, and the weights of the referee network are fixed during the optimization process;
And the steganography unit is used for generating a secret-containing image according to the carrier image and the secret image by using the trained image hiding network, and reconstructing the secret image from the secret-containing image.
Specific limitations of the image hiding system based on the pixel-level reward mechanism may be found in the above description of the image hiding method based on the pixel-level reward mechanism, and will not be repeated here. The modules in the above image hiding system may be implemented in whole or in part in software, hardware, or a combination thereof. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
It should be noted that, the term "first\second\third" in the embodiments of the present application is merely to distinguish similar objects, and does not represent a specific order for the objects, and it is understood that "first\second\third" may interchange a specific order or sequence where allowed. It is to be understood that the "first\second\third" distinguishing objects may be interchanged where appropriate to enable embodiments of the present application described herein to be implemented in sequences other than those illustrated or described herein.
The terms "comprising" and "having" and any variations thereof, in embodiments of the present application, are intended to cover non-exclusive inclusions. For example, a process, method, apparatus, article, or device that comprises a list of steps or modules is not limited to the particular steps or modules listed and may optionally include additional steps or modules not listed or inherent to such process, method, article, or device.
The above examples merely represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the invention. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application is to be determined by the claims appended hereto.
Claims (15)
1. An image hiding method based on a pixel-level reward mechanism, comprising:
when secret image hiding is performed, inputting a color carrier image and a grayscale secret image into a trained image hiding network to generate a secret-containing image;
when secret image extraction is performed, inputting the secret-containing image into the trained image hiding network to obtain a reconstructed secret image;
the training of the image hiding network comprises the following steps:
constructing the image hiding network from an image cascade hiding network, a reconstruction network based on a U-Net++ structure and a referee network, wherein the image cascade hiding network is used for generating a secret-containing image according to the color carrier image and the grayscale secret image; the reconstruction network based on the U-Net++ structure is used for reconstructing the secret-containing image to obtain a reconstructed secret image; and the referee network is used for judging, during training of the image hiding network, the secret-containing image output by the image cascade hiding network to obtain a discrimination result and a pixel-level reward matrix;
constructing the total loss function of the image hiding network according to the loss function of the image cascade hiding network, the loss function of the reconstruction network based on the U-Net++ structure, the discrimination result of the referee network and the pixel-level reward matrix;
and optimizing the image hiding network with the goal of minimizing the total loss function, completing training when the loss decreases and remains stable to obtain the trained image hiding network, wherein the weights of the referee network are fixed during the optimization process.
2. The image hiding method based on a pixel-level reward mechanism according to claim 1, wherein said inputting a color carrier image and a grayscale secret image into a trained image hiding network comprises:
preprocessing the carrier image/secret image by adopting resolution diversification operation to obtain three pairs of carrier image/secret image pairs with different resolutions of original size, half size and quarter size;
three pairs of carrier image/secret image pairs of different resolutions are input into a hidden image network.
3. The image hiding method based on a pixel-level reward mechanism according to claim 2, wherein said image cascade hiding network comprises: a low resolution semantic branch, a medium resolution detail branch, a high resolution detail branch, a cascade feature first fusion module, a cascade feature second fusion module, and an upsampling operation group;
the input of the low resolution semantic branch is a quarter-sized carrier image/secret image pair;
the input of the medium resolution detail branch is a carrier image/secret image pair of half size;
the input of the high-resolution detail branch is a carrier image/secret image pair with the original size;
The output of the low-resolution semantic branch and the middle-resolution detail branch is used as the input of a first fusion module of the cascade characteristics;
the output of the high-resolution detail branch is used as the input of the cascade feature second fusion module; the output of the cascade characteristic second fusion module is connected with the up-sampling operation group;
the output of the up-sampling operation set is a dense image.
4. The image hiding method based on a pixel-level reward mechanism according to claim 3, wherein said low resolution semantic branch comprises: a carrier branch hiding probability guidance module, a carrier branch first convolution operation group, a carrier branch second convolution operation group, a carrier branch third convolution operation group, a carrier branch fourth convolution operation group, a carrier branch fifth convolution operation group, a secret branch first convolution operation group, a secret branch second convolution operation group, a secret branch third convolution operation group, a secret branch fourth convolution operation group, a secret branch fifth convolution operation group, a deconvolution first operation group, a deconvolution second operation group and a deconvolution third operation group;
the input of the carrier branch hiding probability guiding module is a carrier image with a quarter size;
The input of the first convolution operation group of the carrier branch is the output of the carrier branch hiding probability guiding module;
the input of the second convolution operation group of the carrier branch is the output of the first convolution operation group of the carrier branch;
the input of the third convolution operation group of the carrier branch is the output of the second convolution operation group of the carrier branch;
the input of the carrier branch fourth convolution operation group is the output of the carrier branch third convolution operation group;
the input of the fifth convolution operation group of the carrier branch is the output of the fourth convolution operation group of the carrier branch;
the input of the first convolution operation group of the secret branch is a secret image with a quarter size;
the input of the second convolution operation group of the secret branch is the output of the first convolution operation group of the secret branch;
the input of the third convolution operation group of the secret branch is the output of the second convolution operation group of the secret branch;
the input of the fourth convolution operation group of the secret branch is the output of the third convolution operation group of the secret branch;
the input of the secret branch fifth convolution operation group is the output of the secret branch fourth convolution operation group;
the outputs of the carrier branch fifth convolution operation group and the secret branch fifth convolution operation group are input into the deconvolution first operation group after being subjected to channel splicing;
The outputs of the carrier branch fourth convolution operation group, the secret branch fourth convolution operation group and the deconvolution first operation group are input into the deconvolution second operation group after being subjected to channel splicing to form a jump structure,
the outputs of the carrier branch third convolution operation group, the secret branch third convolution operation group and the deconvolution second operation group are input into the deconvolution third operation group after being subjected to channel splicing to form a jump structure;
one convolution operation group comprises a convolution layer, an activation layer and a batch standardization layer which are sequentially arranged; one deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer which are sequentially arranged.
5. The image hiding method based on a pixel-level reward mechanism according to claim 4, wherein said carrier branch hiding probability guidance module is expressed as:
p(m, n, d, θ) = count{ ((k, a), (l, b)) ∈ (N_x × N_y) × (N_x × N_y) | I(k, a) = m, I(l, b) = n, the two pixels being separated by distance d at angle θ }
wherein p(m, n, d, θ) is the gray level co-occurrence matrix of the quarter-size carrier image I, m and n are two different gray levels, d is the distance between two pixels of the quarter-size carrier image, θ is the angle between two pixels of the quarter-size carrier image, count{·} denotes the total number of elements contained in the set, (k, a) and (l, b) are two pixels of the quarter-size carrier image, N_x and N_y are the width and height of the quarter-size carrier image, X_t is the entropy image of the quarter-size carrier image, BN(·) is a batch normalization operation, σ_1(·) is the ReLU activation function, F_1(·) is a 3×3 convolution transform function, ⊗ is the pixel-wise multiplication operation, ⊕ is the pixel-wise addition operation, and x_e is the output of the carrier branch hiding probability guidance module.
6. The image hiding method based on a pixel-level reward mechanism according to claim 3, wherein said medium resolution detail branch comprises: a carrier branch first convolution operation group, a carrier branch second convolution operation group, a secret branch first convolution operation group, a secret branch second convolution operation group, and a channel splicing operation;
the input of the first convolution operation group of the carrier branch is a carrier image with half size;
the input of the second convolution operation group of the carrier branch is the output of the first convolution operation group of the carrier branch;
the input of the first convolution operation group of the secret branch is a secret image with half size;
the input of the second convolution operation group of the secret branch is the output of the first convolution operation group of the secret branch;
the outputs of the carrier branch second convolution operation group and the secret branch second convolution operation group are the inputs of channel splicing operation;
One convolution operation group comprises a convolution layer, an activation layer and a batch standardization layer which are sequentially arranged; one deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer which are sequentially arranged.
7. The image hiding method based on a pixel-level reward mechanism according to claim 3, wherein said high resolution detail branch comprises: a carrier branch first convolution operation group, a carrier branch second convolution operation group, a secret branch first convolution operation group, a secret branch second convolution operation group, and a channel splicing operation;
the input of the first convolution operation group of the carrier branch is a carrier image with the original size;
the input of the second convolution operation group of the carrier branch is the output of the first convolution operation group of the carrier branch;
the input of the first convolution operation group of the secret branch is a secret image with the original size;
the input of the second convolution operation group of the secret branch is the output of the first convolution operation group of the secret branch;
the outputs of the carrier branch second convolution operation group and the secret branch second convolution operation group are the inputs of channel splicing operation;
one convolution operation group comprises a convolution layer, an activation layer and a batch standardization layer which are sequentially arranged; one deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer which are sequentially arranged.
8. The image hiding method based on a pixel-level reward mechanism according to claim 3, wherein said cascade feature first fusion module is expressed as:
wherein f_1 is the output of the low resolution semantic branch, f_2 is the output of the medium resolution detail branch, Up(·) is a 2×2 upsampling operation, F_2(·) is a 1×1 convolution transform function, σ_2(·) is the LeakyReLU activation function, and f_3 is the output of the cascade feature first fusion module.
10. The image hiding method based on a pixel-level reward mechanism according to claim 9, wherein said upsampling operation group is expressed as:
c' = σ_3(F_3(BN(σ_1(F_3(BN(σ_1(F_3(f_5))))))))
wherein F_3(·) is a 4×4 deconvolution transform function, σ_3(·) is the Tanh activation function, and c' is the secret-containing image.
11. The image hiding method based on a pixel-level reward mechanism according to claim 1, wherein said reconstruction network based on the U-Net++ structure is a jump connection structure comprising: a first convolution operation group, a second convolution operation group, a third convolution operation group, a fourth convolution operation group, a first deconvolution operation group, a fifth convolution operation group, a second deconvolution operation group, a sixth convolution operation group, a third deconvolution operation group, a seventh convolution operation group, a fourth deconvolution operation group, an eighth convolution operation group, a fifth deconvolution operation group, a ninth convolution operation group, a sixth deconvolution operation group and a tenth convolution operation group;
the outputs of the first deconvolution operation group and the first convolution operation group are input into the fifth convolution operation group after channel splicing;
the outputs of the second deconvolution operation group and the second convolution operation group are input into the sixth convolution operation group after channel splicing;
the outputs of the third deconvolution operation group, the first convolution operation group and the fifth convolution operation group are input into the seventh convolution operation group after channel splicing, forming a jump connection structure;
the outputs of the fourth deconvolution operation group and the third convolution operation group are input into the eighth convolution operation group after channel splicing;
the outputs of the fifth deconvolution operation group, the second convolution operation group and the sixth convolution operation group are input into the ninth convolution operation group after channel splicing, forming a jump connection structure;
the outputs of the sixth deconvolution operation group, the first convolution operation group, the fifth convolution operation group and the seventh convolution operation group are input into the tenth convolution operation group after channel splicing, forming a jump connection structure;
the output of the tenth convolution operation group is the reconstructed secret image;
one convolution operation group comprises a convolution layer, an activation layer and a batch standardization layer which are sequentially arranged; one deconvolution operation group comprises a deconvolution layer, an activation layer and a batch normalization layer which are sequentially arranged.
12. The image hiding method based on a pixel-level reward mechanism of claim 1, wherein the referee network is a XuNet steganalysis network obtained by the following training steps:
collecting a plurality of image samples and determining a label for each sample image, the label being the probability that the corresponding sample image contains a secret image;
taking each image sample as input and its label as output, training the XuNet steganalysis network to obtain the referee network.
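The training step above is a standard binary-classification loop (secret present vs. absent). As a minimal stand-in sketch — a logistic-regression classifier replaces XuNet here purely for illustration, and `X`, `y`, the learning rate and step count are made-up toy values:

```python
import numpy as np

# Stand-in for the referee-network training loop: fit a binary classifier so
# that its output approximates the probability that a sample contains a secret.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 16))                      # flattened image features (toy)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1], float)     # 1 = sample contains a secret image

w, b, lr = np.zeros(16), 0.0, 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # predicted probability z'
    g = p - y                                     # gradient of binary cross-entropy
    w -= lr * (X.T @ g) / len(y)
    b -= lr * g.mean()

bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
```

An untrained classifier scores a cross-entropy of about ln 2 ≈ 0.693 on balanced labels; gradient descent on this convex loss drives `bce` below that.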
13. The image hiding method based on a pixel-level reward mechanism of claim 1, wherein the pixel-level reward matrix is expressed as:
R_ij = σ_1( Σ_k α_k F_k^{ij} ),   α_k = (1/(H'W')) Σ_{i=1}^{H'} Σ_{j=1}^{W'} ∂z'/∂F_k^{ij}

wherein σ_1(·) is the ReLU activation function, F_k is the k-th channel of the feature map output by the last convolution layer of the referee network, α_k is the weight of the k-th feature map, R_ij is the pixel-level reward at the pixel in row i and column j, H' and W' are the height and width of the k-th feature map, z' is the prediction result of the referee network, and F_k^{ij} denotes the value of F_k at the (i, j)-th pixel.
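Reading the definitions above in the Grad-CAM style (channel weights as spatially averaged gradients of the referee prediction, then a ReLU-rectified weighted channel sum — an assumption, since the patent's equation image is not reproduced in this text), the reward matrix can be sketched as:

```python
import numpy as np

# Sketch of the pixel-level reward matrix, assuming the Grad-CAM-style form
# R_ij = ReLU(sum_k alpha_k * F_k[i, j]), with alpha_k the spatial mean of
# the gradient of the referee prediction z' w.r.t. feature channel F_k.
def pixel_reward_matrix(F, dz_dF):
    # F, dz_dF: arrays of shape (K, H', W')
    alpha = dz_dF.reshape(F.shape[0], -1).mean(axis=1)  # one weight per channel
    R = np.tensordot(alpha, F, axes=1)                  # weighted channel sum
    return np.maximum(R, 0.0)                           # sigma_1 = ReLU

F = np.ones((2, 3, 3))
dz = np.stack([np.full((3, 3), 0.5), np.full((3, 3), -2.0)])
R = pixel_reward_matrix(F, dz)  # alpha = (0.5, -2.0); 0.5 - 2.0 < 0, so R is 0
```

The ReLU keeps only pixels that push the referee toward its prediction, so each surviving entry of R marks a location the steganalyzer found suspicious.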
14. The image hiding method based on a pixel-level reward mechanism of claim 13, wherein the total loss function L is:
L = βL_c + ηL_s
wherein L_c is the image cascade hiding network loss, L_s is the reconstruction network loss based on the U-Net++ structure, and β and η are weights controlling these two losses;
the image cascade hiding network loss L_c is calculated as:
L_c = μL_v + ωL_a
wherein L_v denotes the quality loss, used to measure the visual quality of the secret-containing image; L_a denotes the security loss, used to measure the security of the secret-containing image; μ and ω are weights controlling the quality loss and the security loss; c is a carrier-image pixel; L is the total number of pixels in the image; c' is a pixel of the secret-containing image; μ_c and μ_c' are the means of c and c', representing the brightness of the carrier image and the secret-containing image; K_1 is a constant less than or equal to 1; M is a custom scale; σ_c and σ_c' are the standard deviations of c and c', representing the contrast of the carrier image and the secret-containing image; σ_cc' is the covariance of c and c', representing the structural similarity between the carrier image and the secret-containing image; K_2 is a constant; G is a Gaussian-filter parameter; α and γ are hyperparameters controlling weights; H and W are the height and width of the image; z denotes the true label of the image; and z' denotes the prediction of the referee network;
the reconstruction network loss L_s based on the U-Net++ structure is calculated as follows:
where s is a secret-image pixel, s' is the corresponding reconstructed-secret-image pixel, μ_s and μ_s' are the means of s and s', representing the brightness of the secret image and the reconstructed secret image, σ_s and σ_s' are the standard deviations of s and s', representing the contrast of the secret image and the reconstructed secret image, and σ_ss' is the covariance of s and s', representing the structural similarity between the secret image and the reconstructed secret image.
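The statistics named in claim 14 (means μ, standard deviations σ, covariance, constants K_1 and K_2, dynamic range M) are those of SSIM. A minimal single-window sketch, assuming an L_s of the form 1 − SSIM(s, s'); the patent's exact Gaussian-weighted, per-window form is not reproduced here, so this is illustrative only:

```python
import numpy as np

def ssim(x, y, K1=0.01, K2=0.03, M=1.0):
    # Global (single-window) SSIM over two images with dynamic range M.
    C1, C2 = (K1 * M) ** 2, (K2 * M) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / \
           ((mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))

def reconstruction_loss(s, s_prime):
    # Hypothetical L_s = 1 - SSIM(s, s'): penalizes loss of brightness,
    # contrast and structural similarity in the reconstructed secret image.
    return 1.0 - ssim(s, s_prime)

rng = np.random.default_rng(1)
s = rng.random((16, 16))
print(reconstruction_loss(s, s.copy()))  # ≈ 0.0 for a perfect reconstruction
```

A perfect reconstruction gives SSIM = 1 and hence zero loss, which matches the role claim 14 assigns to L_s.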
15. An image hiding system based on a pixel-level reward mechanism, comprising:
a generation unit for, when hiding a secret image, inputting a color carrier image and a gray-scale secret image into a trained image hiding network to generate a secret-containing image;
a reconstruction unit for, when extracting the secret image, inputting the secret-containing image into the trained image hiding network to obtain a reconstructed secret image;
wherein the training of the image hiding network comprises the following steps:
constructing the image hiding network from an image cascade hiding network, a reconstruction network based on the U-Net++ structure and a referee network, wherein the image cascade hiding network is used to generate a secret-containing image from a color carrier image and a gray-scale secret image; the reconstruction network based on the U-Net++ structure is used to reconstruct the secret-containing image to obtain a reconstructed secret image; and the referee network is used, during training of the image hiding network, to discriminate the secret-containing image output by the image cascade hiding network, yielding a discrimination result and a pixel-level reward matrix;
constructing the total loss function of the image hiding network from the loss function of the image cascade hiding network, the loss function of the reconstruction network based on the U-Net++ structure, the discrimination result of the referee network and the pixel-level reward matrix;
and optimizing the image hiding network with the objective of minimizing the total loss function, the weights of the referee network being fixed during optimization; training is complete when the loss has decreased and remains stable, yielding the trained image hiding network.
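The total loss minimized in the training step above combines the two component losses exactly as claim 14 states; the weight values below are made-up placeholders, not values from the patent:

```python
def total_loss(L_v, L_a, L_s, mu=0.5, omega=0.5, beta=1.0, eta=0.75):
    # L_c = mu*L_v + omega*L_a   (image cascade hiding network loss)
    # L   = beta*L_c + eta*L_s   (total image hiding network loss)
    L_c = mu * L_v + omega * L_a
    return beta * L_c + eta * L_s

L = total_loss(L_v=0.2, L_a=0.4, L_s=0.1)
print(L)  # 0.375
```

Because the referee network's weights are frozen, only the hiding and reconstruction networks receive gradients from this scalar during optimization.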
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310033855.3A CN116132682A (en) | 2023-01-10 | 2023-01-10 | Picture hiding method and system based on pixel-level rewarding mechanism |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116132682A true CN116132682A (en) | 2023-05-16 |
Family
ID=86293967
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116132682A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||