CN115936961B - Steganalysis method, equipment and medium based on few-sample comparison learning network - Google Patents


Publication number
CN115936961B
Authority
CN
China
Legal status
Active
Application number
CN202211462057.4A
Other languages
Chinese (zh)
Other versions
CN115936961A (en)
Inventor
尹子豪
周志立
袁程胜
孟若涵
廖灵芝
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology
Priority to CN202211462057.4A
Publication of CN115936961A
Application granted
Publication of CN115936961B

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a steganalysis method, equipment and medium based on a few-sample contrastive learning network. For a small number of steganographic image data pairs, the method generates several groups of additional usable training samples by data enhancement. It randomly selects a pair consisting of a carrier image and a secret-containing image, subtracts the carrier image from the secret-containing image to obtain a residual image, and combines these three images with another unrelated carrier image to form a group of training data, which is input into the contrastive learning network as the anchor sample, the negative sample, the guidance sample and the positive sample respectively. The network first extracts features and classifies whether the anchor, positive and negative samples are secret-containing images, computing a classification loss; it computes a contrastive loss on the feature vectors of the anchor, positive and negative samples, and a contrastive guidance loss on the feature vectors of the negative and guidance samples; the network is optimized by combining the classification loss, the contrastive loss and the contrastive guidance loss; finally, one branch of the network structure is extracted as the application network.

Description

Steganalysis method, equipment and medium based on few-sample comparison learning network
Technical Field
The invention belongs to the field of digital forensics, and particularly relates to a steganalysis method, equipment and medium based on a few-sample contrastive learning network.
Background
The popularization of multimedia technology has brought convenience to society, but it has also introduced risks such as information leakage, malicious tampering and privacy theft, so information security and privacy protection in multimedia distribution are receiving increasing attention. Existing communication security guarantees mainly comprise encryption and information hiding. Encryption operates directly on the secret information, but the specially processed ciphertext easily arouses the suspicion of a third party; information hiding, by contrast, conceals the very existence of the secret data, so that covert communication can take place without raising a third party's doubts. Information hiding, with its camouflage characteristics, is therefore attracting more and more attention as a means of securing communication. Steganography is a technique that imperceptibly conceals secret information in a multimedia carrier, making the information invisible. With the continuous spread of information hiding technology, steganography has gradually become a double-edged sword: while it protects legitimate communication, lawbreakers can exploit it for personal gain or even terrorist attacks. As the adversary of steganography, the purpose of steganalysis is to determine whether secret information is hidden in a multimedia carrier.
At the beginning of this line of research, researchers often used traditional machine learning methods for image steganalysis. Generally, such methods manually extract image statistics and use a trained binary classifier to detect whether a given image is a secret-containing image. Although traditional machine learning based steganalysis is effective against targeted steganographic algorithms, designing efficient features is a difficult task that requires deep knowledge of steganography.
In recent years, with the development of deep learning, neural networks have received extensive attention in the field of computer vision. In view of this, researchers began using deep neural networks for steganalysis, improving detection accuracy by jointly optimizing image features and the classifier. In 2015, researchers proposed a special convolutional neural network (Gaussian-Neuron CNN) to automatically learn effective features for the steganalysis task. Subsequently, XuNet was proposed, which employs an absolute-value layer and TanH activation at the front of the network; it was the first model to achieve performance competitive with machine learning based models. SRNet, proposed in 2019, is a complete end-to-end model that contains no fixed preprocessing layer; during training, the network automatically learns the best filters for extracting steganographic features, and it achieved the best performance at the time. However, all of the above deep learning based steganalysis networks require a large amount of steganographic image pair data for training, since deep learning networks rely on iterative optimization over large amounts of input data to finally achieve satisfactory results.
Although deep learning based steganalysis research has made great progress, existing deep learning based steganographic image detection methods mainly suffer from the following technical problems:
1. Existing deep learning based steganalysis methods usually require a large amount of steganographic image pair data for training, but in real situations only a small number of samples of an unknown steganographic algorithm and its steganographic images can generally be obtained for research and training.
2. If an existing deep learning based steganalysis method is trained directly on a small number of samples, the network will overfit: it performs well on the training data set, but its error rate increases on the test data set or in actual use.
Disclosure of Invention
The invention aims to: in order to solve the problems in the prior art, the invention provides a steganalysis method based on a few-sample contrastive learning network.
The technical scheme is as follows: the invention provides a steganalysis method based on a few-sample contrastive learning network, which specifically comprises the following steps:
Step 1: perform data enhancement operations on the existing steganographic image data pairs to generate several groups of additional image data pairs, which together with the existing pairs form an enhanced image data set; each existing and additional image data pair consists of a carrier image and the corresponding secret-containing image;
Step 2: randomly select a pair consisting of a carrier image I_c1 and a secret-containing image I_s, and subtract the two to obtain a residual image I_r (the residual image is enhanced to make its features more apparent); these three images, together with another unrelated carrier image I_c2, form a group of training data {I_c1, I_s, I_c2, I_r};
Step 3: take the carrier image I_c1 as the anchor sample I_anchor, the secret-containing image I_s as the negative sample I_negative, the image-enhanced residual image I_r as the guidance sample I_guidance, and the other unrelated carrier image I_c2 as the positive sample I_positive, and input them into the contrastive learning network;
Step 4: in the contrastive learning network:
firstly, extract features from the anchor sample, negative sample, guidance sample and positive sample with a feature extraction network;
then, classify whether the anchor, positive and negative samples are secret-containing images based on the extracted feature vectors, and calculate the classification loss from their known labels;
then, calculate the contrastive loss from the feature vectors of the anchor, negative and positive samples;
then, calculate the contrastive guidance loss from the feature vectors of the negative and guidance samples;
finally, combine the contrastive loss, the classification loss and the contrastive guidance loss into a total loss, and iteratively train the contrastive learning network;
Step 5: after training is complete, extract one network branch from the contrastive learning network and use it to perform steganalysis.
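As an illustrative sketch of steps 2 and 3 (not part of the patent text; the function name, the use of PyTorch tensors, and the `residual_gain` parameter are assumptions, with the gain standing in for the residual "enhancement"):

```python
import torch

def make_training_group(cover, stego, other_cover, residual_gain=50.0):
    """Build one training group {I_c1, I_s, I_c2, I_r} as in steps 2-3.

    cover (I_c1), stego (I_s) and other_cover (I_c2) are float tensors of
    shape (C, H, W); other_cover must be unrelated to the chosen pair.
    residual_gain is an illustrative stand-in for the residual enhancement
    (the embodiment amplifies the residual 50 times).
    """
    residual = (stego - cover) * residual_gain  # I_r: the weak stego signal, amplified
    return {
        "anchor": cover,          # I_anchor   = carrier image I_c1
        "negative": stego,        # I_negative = secret-containing image I_s
        "positive": other_cover,  # I_positive = unrelated carrier image I_c2
        "guidance": residual,     # I_guidance = enhanced residual image I_r
    }
```

Each group built this way supplies the four inputs of the contrastive learning network in step 3.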
Further, the data enhancement operation in step 1 includes mirror image flipping, rotation, image shifting, or color space conversion.
Further, in step 3, I_c2 is any carrier image other than (I_c1, I_s), taken from the existing data or from the additional image data generated by data enhancement.
Further, the first 12 layers of the pre-trained SRNet are used as the feature extraction network in step 4.
Further, the classification loss in step 4 is: L_cla = -Σ_i [y_i·log(p_i) + (1 - y_i)·log(1 - p_i)]; where y_i is the label of image sample i, y_i being 1 if image sample i is of the positive class and 0 if it is of the negative class, and p_i is the probability that image sample i is predicted to be of the positive class.
Further, the contrastive loss in step 4 is: L_con = f_anchor^T·f_negative - f_anchor^T·f_positive; where f_anchor, f_negative and f_positive denote the feature vectors of the anchor sample, the negative sample and the positive sample respectively.
Further, the contrastive guidance loss in step 4 is: L_gui = -f_negative^T·f_guidance; where f_guidance denotes the feature vector of the guidance sample.
Further, the total loss in step 4 is: L = L_cla + L_con + L_gui.
The present invention also provides a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform the steganalysis method as described above.
The present invention also provides an electronic device comprising one or more processors, one or more memories, and one or more programs, wherein the one or more programs are stored in the one or more memories and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the steganalysis method as described above.
The beneficial effects are that:
(1) The invention preprocesses the limited available steganographic image pairs by data enhancement, which increases the amount of data available for network learning, helps improve the generalization of the network, and helps the network learn the underlying features of the steganographic signal more accurately;
(2) The invention uses contrastive learning to compute a contrastive loss on the high-dimensional features extracted from the anchor, positive and negative samples, so that the network distinguishes the steganographic signal from the normal picture signal through the distance relations among the features, maximizing the separation between carrier images and secret-containing images;
(3) The invention improves the general paradigm of contrastive learning for the steganalysis task: a residual image is generated by subtracting a carrier image from its secret-containing image and is input into the network as an additional usable guidance sample; a contrastive guidance loss is computed on the high-dimensional features extracted from the negative sample and the guidance sample, guiding the network to map the features of secret-containing images into a space close to the features of the steganographic signal;
(4) The invention combines the classification loss, the contrastive loss and the contrastive guidance loss: the classification loss guides the network to learn whether an image is steganographic, while the contrastive loss and the contrastive guidance loss guide the optimization of the network from two other directions, preventing the network from overfitting under the few-sample condition.
Drawings
FIG. 1 is a schematic diagram of a frame of the present invention;
fig. 2 is a schematic diagram of an embodiment of the present invention.
Detailed Description
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention.
As shown in fig. 1, the present embodiment provides a steganalysis method based on a few-sample contrastive learning network:
In the image data enhancement stage, data enhancement operations such as mirror flipping, rotation, image shifting and color space conversion are performed on the images in the original image data set to generate several groups of additional usable training samples.
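A minimal sketch of such pair-preserving augmentation (illustrative only; PyTorch, the function name and the shift size are assumptions). The same geometric transform is applied to both images of a pair so that the carrier/secret-containing relationship, and the position of the embedded signal, are preserved:

```python
import torch

def augment_pair(cover, stego, shift=8):
    """Generate extra (cover, stego) pairs by mirror flip, rotation and shifting.

    Both images of a pair receive the identical transform; `shift` (a cyclic
    roll in pixels) is an illustrative choice. Color-space conversion, also
    mentioned in the text, is omitted here.
    """
    pairs = [(torch.flip(cover, dims=[-1]), torch.flip(stego, dims=[-1]))]  # horizontal mirror
    for k in (1, 2, 3):  # 90, 180 and 270 degree rotations
        pairs.append((torch.rot90(cover, k, dims=[-2, -1]),
                      torch.rot90(stego, k, dims=[-2, -1])))
    pairs.append((torch.roll(cover, (shift, shift), dims=(-2, -1)),  # image shift
                  torch.roll(stego, (shift, shift), dims=(-2, -1))))
    return pairs
```

Each augmented pair is then eligible for the residual construction of the next stage.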
A pair consisting of a carrier image and a secret-containing image is randomly selected and subtracted to obtain a residual image, which is then amplified 50 times; these three images, together with another unrelated carrier image, form the training data.
The carrier image, the secret-containing image, the other unrelated carrier image and the residual image are input into the contrastive learning network as the anchor sample, the negative sample, the positive sample and the guidance sample respectively.
In the network learning stage, features are first extracted from all input images. On one hand, the extracted features are used to classify whether each input image is a secret-containing image, and the classification results together with the known image labels yield the classification loss. On the other hand, the feature vectors of the anchor, negative and positive samples are used to compute the contrastive loss, and the feature vectors of the negative and guidance samples are used to compute the contrastive guidance loss.
In the network training stage, the invention optimizes the network by combining the contrastive loss, the classification loss and the contrastive guidance loss. The classification loss guides the network to learn whether an image is steganographic, while the contrastive loss and the contrastive guidance loss guide the optimization of the network from two other directions: the contrastive loss pushes the features extracted from the anchor sample closer to those of the positive sample and further from those of the negative sample, and the contrastive guidance loss guides the network to map the features of the secret-containing image into a space close to the features of the steganographic signal, preventing the network from overfitting under the few-sample condition.
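A hedged sketch of the joint optimization described above. `model` (returning a logit and a feature vector per image) and `total_loss` are hypothetical names, and Adam is an arbitrary optimizer choice not specified by the patent; the label convention (carrier = positive class 1, secret-containing = negative class 0) follows the embodiment:

```python
import torch

def train_epoch(model, loader, total_loss, lr=1e-3):
    """One epoch over batches of (anchor, negative, positive, guidance) groups."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for anchor, negative, positive, guidance in loader:
        la, fa = model(anchor)    # carrier image branch
        ln, fn = model(negative)  # secret-containing image branch
        lp, fp = model(positive)  # unrelated carrier branch
        _, fg = model(guidance)   # guidance sample: features only, no classification
        logits = torch.cat([la, ln, lp])
        labels = torch.cat([torch.ones_like(la),    # carriers: positive class (1)
                            torch.zeros_like(ln),   # secret-containing images: 0
                            torch.ones_like(lp)])
        loss = total_loss(logits, labels, fa, fn, fp, fg)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

In a Siamese setup the four branches share the same weights, so a single `model` is called four times per group.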
In one embodiment, as shown in fig. 2, a steganalysis method based on a few-sample contrastive learning network comprises the following steps:
Step 1: perform data enhancement operations such as mirror flipping, rotation, image shifting and color space conversion on the available carrier image and secret-containing image data pairs to generate several groups of additional usable training samples;
Step 2: randomly select a pair consisting of a carrier image I_c1 and a secret-containing image I_s, subtract the two to obtain a residual image I_r, amplify it 50 times, and combine these three images with another unrelated carrier image I_c2 to form a group of training data;
Step 3: take the carrier image I_c1 as the anchor sample I_anchor, the secret-containing image I_s as the negative sample I_negative, the other unrelated carrier image I_c2 as the positive sample I_positive, and the residual image I_r as the guidance sample I_guidance, and input them into the contrastive learning network;
Step 4: use the first 12 layers of the pre-trained SRNet as the feature extraction network, which performs residual noise extraction and feature vector dimension compression on all input images;
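SRNet itself is not bundled with any standard library; the sketch below only illustrates the truncation idea for any pretrained model whose layers are exposed in order (the function name and argument are assumptions, not from the patent):

```python
import torch.nn as nn

def build_feature_extractor(pretrained: nn.Module, n_layers: int = 12) -> nn.Module:
    """Keep only the first n_layers of a pretrained network (e.g. SRNet),
    so its learned residual-noise filters serve as a feature extractor
    while the original classifier head is discarded."""
    return nn.Sequential(*list(pretrained.children())[:n_layers])
```

The retained stack is then shared by all four input branches of the contrastive network.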
Step 5: the final fully connected linear classifier of the network classifies whether the anchor, positive and negative samples are secret-containing images, and calculates the classification loss from the known labels of the images: L_cla = -Σ_i [y_i·log(p_i) + (1 - y_i)·log(1 - p_i)]; where y_i is the label of image sample i (positive class 1, negative class 0) and p_i is the probability that the sample is predicted to be of the positive class;
Step 6: calculate the contrastive loss from the feature vectors of the input anchor, negative and positive sample images: L_con = f_anchor^T·f_negative - f_anchor^T·f_positive;
Step 7: the network calculates the contrastive guidance loss from the feature vectors of the input negative sample and guidance sample images: L_gui = -f_negative^T·f_guidance;
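The three losses of steps 5-7 can be sketched together as follows. L_gui is taken directly from the text; binary cross-entropy for L_cla and the dot-product form of L_con are assumptions inferred from the surrounding definitions, not verbatim from the patent:

```python
import torch
import torch.nn.functional as F

def total_loss(logits, labels, f_anchor, f_negative, f_positive, f_guidance):
    """L = L_cla + L_con + L_gui over a batch of logits and feature vectors."""
    # L_cla: binary cross-entropy between predictions and known labels
    l_cla = F.binary_cross_entropy_with_logits(logits, labels)
    # L_con (assumed form): pull anchor (carrier) features toward the unrelated
    # carrier, push them away from the secret-containing image
    l_con = ((f_anchor * f_negative).sum(-1) - (f_anchor * f_positive).sum(-1)).mean()
    # L_gui = -f_negative^T f_guidance: pull secret-containing image features
    # toward the enhanced-residual (steganographic signal) features
    l_gui = -(f_negative * f_guidance).sum(-1).mean()
    return l_cla + l_con + l_gui
```

Minimizing the sum optimizes classification, feature separation and guidance jointly, as in step 8.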
Step 8: optimize the network by combining the contrastive loss, the classification loss and the contrastive guidance loss; after several rounds of training, the network loss converges;
Step 9: extract one network branch from the contrastive learning network structure as the practical application network for subsequent use.
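At application time only one branch is needed; a hypothetical inference helper is sketched below (the names and the 0.5 threshold are illustrative, and the label convention, positive class = carrier, follows step 5):

```python
import torch

def detect(image, extractor, classifier, threshold=0.5):
    """Classify a single image with the retained branch.

    extractor:  the trained feature extraction sub-network (one branch);
    classifier: the final fully connected linear classifier, one logit out.
    """
    with torch.no_grad():
        feat = extractor(image.unsqueeze(0))  # add a batch dimension
        prob_carrier = torch.sigmoid(classifier(feat)).item()
    return "carrier" if prob_carrier >= threshold else "secret-containing"
```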
Based on the same technical solution, the present invention also discloses a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform the above-described steganalysis method.
Based on the same technical scheme, the invention also discloses a computing device, which comprises one or more processors, one or more memories and one or more programs, wherein the one or more programs are stored in the one or more memories and are configured to be executed by the one or more processors, and the one or more programs comprise instructions for executing the above steganalysis method.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above embodiments are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereto, and any modification made on the basis of the technical scheme according to the technical idea of the present invention falls within the protection scope of the present invention.
The embodiments of the present invention have been described in detail with reference to the drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the spirit of the present invention.

Claims (6)

1. A steganalysis method based on a few-sample contrastive learning network, characterized by comprising the following steps:
step 1: performing data enhancement operation on the existing steganographic image data pairs to generate a plurality of groups of additional image data pairs, and forming an enhanced image data set together with the existing steganographic image data pairs;
step 2: for any image data pair {I_c1, I_s} in the enhanced image data set, subtracting the carrier image I_c1 from the secret-containing image I_s to obtain a residual image I_r; I_c1, I_s and I_r, together with a further carrier image I_c2 from the enhanced image data set that is unrelated to I_c1, I_s and I_r, form a group of training data;
step 3: taking I_c1 as the anchor sample I_anchor, I_s as the negative sample I_negative, I_r as the guidance sample I_guidance and I_c2 as the positive sample I_positive, as the input of the contrastive learning network;
step 4: in the contrastive learning network:
firstly, performing feature extraction on the anchor sample, negative sample, guidance sample and positive sample based on a feature extraction network;
then, the extracted feature vectors of the anchor sample, positive sample and negative sample are input into a fully connected linear classifier, which classifies whether each of them is a secret-containing image, and the classification loss is calculated;
then, calculating the contrastive loss from the extracted feature vectors of the anchor sample, the negative sample and the positive sample;
then, calculating the contrastive guidance loss from the extracted feature vectors of the negative sample and the guidance sample;
finally, combining the contrastive loss, the classification loss and the contrastive guidance loss into a total loss, and performing iterative training of the contrastive learning network;
step 5: after training is completed, extracting one network branch from the contrastive learning network to perform steganalysis;
the classification loss in step 4 is: L_cla = -Σ_i [y_i·log(p_i) + (1 - y_i)·log(1 - p_i)]; where y_i is the label of image sample i, y_i being 1 if image sample i is of the positive class and 0 if it is of the negative class, and p_i is the probability that image sample i is predicted to be of the positive class;
the contrastive loss in step 4 is: L_con = f_anchor^T·f_negative - f_anchor^T·f_positive; where f_anchor, f_negative and f_positive respectively denote the feature vectors of the anchor sample, the negative sample and the positive sample;
the contrastive guidance loss in step 4 is: L_gui = -f_negative^T·f_guidance; where f_negative and f_guidance respectively denote the feature vectors of the negative sample and the guidance sample;
the total loss in step 4 is: L = L_cla + L_con + L_gui, where L_cla, L_con and L_gui respectively denote the classification loss, the contrastive loss and the contrastive guidance loss.
2. The steganalysis method based on a few-sample contrastive learning network according to claim 1, wherein the data enhancement operation in step 1 comprises mirror flipping, rotation, image shifting or color space conversion.
3. The steganalysis method based on a few-sample contrastive learning network according to claim 1, wherein step 2 further comprises enhancing the residual image I_r.
4. The steganalysis method based on a few-sample contrastive learning network according to claim 1, wherein the first 12 layers of the pre-trained SRNet are used as the feature extraction network in step 4.
5. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform the steganalysis method of any of claims 1-4.
6. An electronic device comprising one or more processors, one or more memories, and one or more programs, wherein the one or more programs are stored in the one or more memories and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing the steganalysis method of any of claims 1-4.
Application CN202211462057.4A, filed 2022-11-21 (priority date 2022-11-21), Steganalysis method, equipment and medium based on few-sample comparison learning network, Active, CN115936961B (en)

Priority Applications (1)

Application Number: CN202211462057.4A; Priority Date: 2022-11-21; Filing Date: 2022-11-21; Title: Steganalysis method, equipment and medium based on few-sample comparison learning network

Publications (2)

Publication Number: CN115936961A (en), Publication Date: 2023-04-07
Publication Number: CN115936961B (en), Publication Date: 2024-01-02

Family

ID=86651640



Similar Documents

Publication Publication Date Title
CN113554089B (en) Image classification countermeasure sample defense method and system and data processing terminal
Guo et al. Fake face detection via adaptive manipulation traces extraction network
CN109587372B (en) Invisible image steganography based on generation of countermeasure network
CN110349136A (en) A kind of tampered image detection method based on deep learning
CN108764270B (en) Information hiding detection method integrated by using convolutional neural network
CN106203333A (en) Face identification method and system
CN112150450B (en) Image tampering detection method and device based on dual-channel U-Net model
CN109657551B (en) Face detection method based on context information enhancement
CN110968845B (en) Detection method for LSB steganography based on convolutional neural network generation
CN112818862A (en) Face tampering detection method and system based on multi-source clues and mixed attention
Meng et al. A survey of image information hiding algorithms based on deep learning
CN115936961B (en) Steganalysis method, equipment and medium based on few-sample comparison learning network
CN111899251A (en) Copy-move type forged image detection method for distinguishing forged source and target area
CN112861671A (en) Method for identifying deeply forged face image and video
CN111476727B (en) Video motion enhancement method for face-changing video detection
Liu et al. Making DeepFakes more spurious: evading deep face forgery detection via trace removal attack
Ismail et al. An integrated spatiotemporal-based methodology for deepfake detection
Maiano et al. Depthfake: a depth-based strategy for detecting deepfake videos
Saealal et al. Three-Dimensional Convolutional Approaches for the Verification of Deepfake Videos: The Effect of Image Depth Size on Authentication Performance
CN111737688B (en) Attack defense system based on user portrait
Wu et al. DHGAN: Generative adversarial network with dark channel prior for single‐image dehazing
CN114596609A (en) Audio-visual counterfeit detection method and device
CN115578768A (en) Training method of image detection network, image detection method and system
CN112215868B (en) Method for removing gesture image background based on generation of countermeasure network
Alharbi et al. Spoofing Face Detection Using Novel Edge-Net Autoencoder for Security.

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant