CN111490872B - Method for embedding and extracting deep learning model watermark based on public and private key pair - Google Patents


Info

Publication number
CN111490872B
CN111490872B (application CN202010197449.7A)
Authority
CN
China
Prior art keywords
model
watermark
private key
public
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010197449.7A
Other languages
Chinese (zh)
Other versions
CN111490872A (en)
Inventor
杨余久
庄新瑞
杨芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen International Graduate School of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen International Graduate School of Tsinghua University
Priority to CN202010197449.7A
Publication of CN111490872A
Application granted
Publication of CN111490872B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/08 Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L 9/0861 Generation of secret information including derivation or calculation of cryptographic keys or passwords
    • H04L 9/0863 Generation of secret information including derivation or calculation of cryptographic keys or passwords involving passwords or one-time passwords
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/0021 Image watermarking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L 9/08 Key distribution or management, e.g. generation, sharing or updating, of cryptographic keys or passwords
    • H04L 9/0861 Generation of secret information including derivation or calculation of cryptographic keys or passwords
    • H04L 9/0869 Generation of secret information including derivation or calculation of cryptographic keys or passwords involving random numbers or seeds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

A method for embedding and extracting a deep learning model watermark based on a public-private key pair comprises the following steps: converting a batch normalization layer in a deep convolutional neural network model into a watermark layer, and embedding authentication information into any one or more layers of the model during training through a watermark-embedding loss function to form a specific watermark. The scaling factors in the batch normalization layer become a public key and a private key: the private key is a non-learnable scaling factor, generated from feature maps randomly selected at the model initialization stage, while the public key is a learnable scaling factor that changes synchronously with the private key during training. By extracting the authentication information in the model watermark, ownership of the model can be authenticated.

Description

Method for embedding and extracting deep learning model watermark based on public and private key pair
Technical Field
The invention relates to the fields of computer vision, deep learning, model watermarking, medical imaging, and model security, and in particular to a method for embedding and extracting a deep learning model watermark based on a public-private key pair.
Background
With the rapid development of deep learning over the last decade, more and more traditional computer vision tasks have gradually been taken over by deep neural network methods, and deep convolutional neural networks have markedly improved performance on classification, detection, and segmentation tasks involving natural or medical images. Open-source deep learning frameworks, models, and data have played a key role in this process, and the easy availability of these resources has simplified the development of deep learning models. Nevertheless, training a deep learning model that performs well on a specific image task and meets the standards of practical products is not easy, with the following main difficulties:
(1) A large amount of well-labeled data is needed. For complex tasks such as medical image segmentation, labeling must be done by professional doctors, which is time-consuming and labor-intensive, so annotation costs are often very high.
(2) Sufficient computing resources are needed to train the model; commercial-grade deep learning models often require large amounts of GPU resources and long training periods.
(3) Obtaining an excellent model also requires developers to repeatedly tune its parameters to find an optimal configuration, which likewise takes considerable time.
For these reasons, developing a deep learning model remains costly. A common practical shortcut is to fine-tune or transfer-learn on a small-scale dataset from an existing pre-trained model, which is often built on a very large dataset. This requires little data and low time and computation cost, and the fine-tuned model can approximate the effect of the pre-trained model. For a model developer this is a double-edged sword: it accelerates model development, but it also raises the question of how to protect one's own intellectual property. Deep learning models are becoming important products for many commercial companies, so protecting them matters; theft, or secondary development based on theft, can cause serious economic losses. To ensure the healthy and rapid development of the whole field and to encourage open-source sharing, protecting the intellectual property of deep learning models is therefore particularly important.
Commonly used deep convolutional neural network models such as VGGNet and ResNet contain no identity information themselves, so unauthorized copying or theft of a deployed model is difficult to detect in time and difficult to prove. Many companies wish to share models for scientific research purposes only while preventing theft by competitors, which current models can rarely guarantee. To ensure that a deep learning model is not stolen or cheaply re-developed, two properties are needed: (1) the model is unusable without authorization; (2) ownership of the model can be proven if it is stolen.
Drawing on digital watermarking technology, which is widely used for multimedia content, one may attempt to add a similar digital watermark to a deep learning model to prove identity. Research on digital watermarking for deep learning models is still at an early stage, and existing methods often suffer from ambiguous identity authentication: if an unauthorized third party obtains the model and knows a watermark exists, the watermark can be imitated through reverse engineering, or removed through removal attacks, making it difficult to authenticate the model's ownership.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: to make up for the above shortcomings of the prior art by providing a method for embedding and extracting a deep learning model watermark based on a public-private key pair, ensuring that the model is unusable without authorization and that, even if the model is stolen, ownership can be proven by extracting the identity information contained in the watermark.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for embedding a deep learning model watermark based on a public and private key pair comprises the following steps: the method comprises the steps of converting a batch normalization layer in a deep convolutional neural network model into a watermark layer, embedding authentication information into any one layer or any plurality of layers of the model in a training process of the model through a watermark embedding loss function to form a specific watermark, wherein scaling factors in the batch normalization layer are changed into a public key and a private key, the private key is an unlearned scaling factor, a characteristic diagram is randomly selected in a model initialization stage to generate, the public key is a learnable scaling factor, and the public key is synchronously changed along with the change of the private key in the training process.
Further, the method comprises the following steps:
and converting the format of the information to be embedded, converting the character string into an ASCI code and then converting the ASCI code into a binary code, and selecting to use a {0,1} code or a {1, -1} code according to the characteristics of an embedding loss function.
The generation process of the private key comprises: at the initialization stage of the network model, randomly selecting a certain number of images from the training dataset as random seeds for generating the private key; inputting them into a pre-trained network to obtain the feature map of each layer; if a certain layer is selected as the watermark layer, randomly selecting a certain number of feature maps from that layer as the private key generation source; and after an averaging operation, using the resulting scaling factors as the private key.
In generating the private key, the normal convolutional layer and ReLU activation layer in the convolution block of the deep convolutional neural network model are kept unchanged; only the learnable scaling and translation factors in the batch normalization layer are changed into non-learnable parameters, and the averaging operation yields features with the same shapes as the learnable scaling and translation factors, where the scaling factor is the private-key watermark.
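The private-key generation described above can be sketched with numpy (a hedged illustration; the function name, seed handling, and exact selection scheme are assumptions, not from the patent):

```python
import numpy as np

def generate_private_key(feature_maps, num_channels, seed=0):
    """Illustrative private-key generation: from a candidate watermark
    layer's feature maps of shape (N, C1, H, W), randomly pick
    `num_channels` maps, then average over the batch and spatial
    dimensions to obtain a fixed (non-learnable) vector shaped like the
    batch-normalization scaling factor."""
    rng = np.random.default_rng(seed)
    n, c1, h, w = feature_maps.shape
    chosen = rng.choice(c1, size=num_channels, replace=False)
    selected = feature_maps[:, chosen, :, :]      # (N, num_channels, H, W)
    return selected.mean(axis=(0, 2, 3))          # (num_channels,)
```

The averaged vector plays the role of the non-learnable scaling factor, i.e. the private-key watermark.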
The public-private key pair watermarks are obtained through a teacher-student training strategy: during training, two sub-networks sharing all parameters except the watermark layer are trained cooperatively to obtain the paired public and private keys. The private key and public key are generated in pairs and located in the two sub-networks respectively; the public key corresponds to the private key and is updated to a similar degree during training, ensuring that the resulting key pair achieves the same performance in the network.
An additional supervision loss function is added to the public-private key pair during training.
The embedding loss function includes a cosine similarity loss function $L_{CS}$, as follows:

$$L_{CS}(\gamma, \tilde{\gamma}) = \frac{\sum_{i=1}^{C} \gamma_i \tilde{\gamma}_i}{\sqrt{\sum_{i=1}^{C} \gamma_i^2}\,\sqrt{\sum_{i=1}^{C} \tilde{\gamma}_i^2} + \varepsilon}$$

where $\gamma$ and $\tilde{\gamma}$ are the watermarks of the corresponding public and private keys, $\gamma_i$ and $\tilde{\gamma}_i$ represent the $i$-th elements of the watermark vectors, $C$ is the number of channels in the model, and $\varepsilon$ is a set constant.
using a cross entropy loss function as L for a classification task model CE To classify the loss function, the overall loss function L of the model is as follows:
L CE (y,p)=-∑y i logp i
Figure BDA0002418127160000034
wherein y represents the label of the sample and p represents the probability distribution of the sample prediction; y is i Corresponding to the i-th sample marker, p i Then the probability of being positive, L, is predicted for the ith sample ID (W) represents a loss function of the embedded watermark recognition when the network parameter W represents the convolution layer correspondence weight λ of the network 1 ,λ 2 Are all hyper-parameters and are used to balance different loss functions.
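As a hedged illustration, the loss terms can be sketched in numpy (function names are illustrative, and the additive weighting in `total_loss` is an assumption where the text is not fully explicit):

```python
import numpy as np

EPS = 1e-8  # small constant; 1e-8 is the value suggested in the description

def cosine_similarity_loss(gamma_pub, gamma_priv):
    """L_CS between the public-key and private-key scaling factors."""
    num = np.sum(gamma_pub * gamma_priv)
    den = np.linalg.norm(gamma_pub) * np.linalg.norm(gamma_priv) + EPS
    return num / den

def cross_entropy_loss(y, p):
    """L_CE = -sum_i y_i * log(p_i) for one-hot labels y."""
    return -np.sum(y * np.log(p + EPS))

def total_loss(y, p, gamma_pub, gamma_priv, l_id, lam1=0.01, lam2=0.01):
    """Illustrative overall loss: classification term plus the two
    watermark terms, weighted by hyper-parameters lam1 and lam2."""
    return (cross_entropy_loss(y, p)
            + lam1 * cosine_similarity_loss(gamma_pub, gamma_priv)
            + lam2 * l_id)
```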
Watermarks can also be added to the convolutional layers of the deep convolutional neural network; these watermarks are added either in the model training phase or in a subsequent fine-tuning phase of the model.
A method for extracting a deep learning model watermark based on a public-private key pair extracts the specific watermark embedded by the above embedding method. When verifying ownership of the model, the public key in the model is replaced by the private key; the replaced model maintains consistent performance and allows the authentication information to be extracted through the private key, completing the authentication of model ownership.
The authentication information is extracted from the watermarked model by applying a sign function to the watermark layer.
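Extraction via the sign function can be sketched as follows (illustrative names; assumes the 8-bit-ASCII encoding described for the embedding step):

```python
import numpy as np

def extract_watermark_bits(private_key):
    """Recover the embedded message from the private-key scaling factors
    via the sign function, mapping positive signs to 1 and the rest to 0."""
    return [1 if s > 0 else 0 for s in np.sign(private_key)]

def bits_to_string(bits):
    """Group 8 bits per character and decode the ASCII message."""
    chars = []
    for i in range(0, len(bits) - len(bits) % 8, 8):
        code = 0
        for b in bits[i:i + 8]:
            code = (code << 1) | b
        chars.append(chr(code))
    return ''.join(chars)
```

For example, a private-key vector whose signs spell '01001000' decodes back to 'H'.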
Compared with the prior art, the invention has the following advantages:
1) It provides a method for embedding and extracting a public-private key watermark for a deep learning model. The watermark is embedded efficiently without affecting the model's original performance, and ownership of the model is authenticated by extracting the authentication information (usually including the identity of the model creator) in the watermark. 2) Converting batch normalization layers in the deep convolutional neural network into watermark layers yields paired public-private key watermarks, effectively ensuring the security of the embedded watermark. 3) Experiments on different datasets show that the watermark embedded in the deep learning model has high fidelity, strong robustness, and good security, which helps protect the intellectual property of deep learning models and promotes healthy, orderly development in related directions.
Drawings
Fig. 1 is a diagram of the convolution block structure in a normal deep learning model in the public-private key pair model watermark embedding method according to an embodiment of the present invention, where the learnable scaling factor in the batch normalization layer serves as the public-key watermark;
FIG. 2 is a diagram of a convolution block for modifying a batch normalization layer into a private key watermark layer in a model watermark embedding method for a public-private key pair according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a deep learning model structure including a public and private key watermark layer in a method for embedding a model watermark of a public and private key pair according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described in detail below. It should be emphasized that the following description is merely exemplary in nature and is not intended to limit the scope of the invention or its application.
Referring to fig. 1 to fig. 3, an embodiment of the present invention provides a method for embedding a deep learning model watermark based on a public-private key pair, comprising: converting a batch normalization layer in a deep convolutional neural network model into a watermark layer, and embedding authentication information (for example, information identifying the model creator) into any one or more layers of the model during training through a watermark-embedding loss function to form a specific watermark. The scaling factors in the batch normalization layer become a public key and a private key: the private key is a non-learnable scaling factor, generated from feature maps randomly selected at the model initialization stage, while the public key is a learnable scaling factor that changes synchronously with the private key during training. Since the public-private key pair is part of the model parameters, normal inference performance is guaranteed when it is present, and without it the model cannot achieve its normal test performance, so the key pair serves as an important basis for judging model ownership. Moreover, the embedded watermark remains strongly robust under common model modifications such as fine-tuning and pruning.
In this embodiment, the public-key watermark is distributed to users along with the model parameters; it contains no identity information, ensuring the watermark's security. The private key contains the identity information and is generally kept by the model developer or owner for later ownership authentication. During verification, the public key in the model is replaced by the private key, and the information in the private key is extracted to prove model ownership. With this public-private key watermarking scheme, a third-party user who lacks (or forges) the public key cannot achieve normal inference performance under unauthorized conditions. Meanwhile, if the model is stolen or modified, extracting the information embedded in the watermark completes the proof of ownership. The method has high fidelity; the watermark remains strongly robust under removal attacks (including model fine-tuning and pruning); and the public-private key form ensures the watermark's security. The method applies to most mainstream deep convolutional neural network models. Adding the watermark protects the intellectual property of model developers, deters theft and abuse of deep learning models to a certain extent, and benefits the healthy development of the field.
In a preferred embodiment, to protect the security of the embedded watermark, a teacher-student training scheme uses two sub-networks that share the weights of all layers except the watermark layer to perform watermark embedding and generate the public-private key watermark pair. During training, the two sub-networks, sharing all parameters except the watermark layers, are trained cooperatively to obtain the paired keys, so the model achieves the same performance under both sets of parameters.
In some embodiments, by making the public- and private-key watermarks numerically close but different in sign, the watermark information embedded in the public key remains hard to discover even when the public key is public.
In some embodiments, the identity information is embedded in the private key by an additional watermark-embedding function used during training, similar to a regularization term in a deep learning model. The watermarked model extracts the identity information from the watermark layer using a sign function, completing the authentication of the model's identity.
In different embodiments, the watermark can be embedded in any layer of the model, in one or several layers, which helps ensure the watermark's security.
In some embodiments, to ensure the quality of the public and private keys, an additional supervision loss function is added to the public-private key pair during training.
In some embodiments, the watermark may be added to the convolutional layer by the same method, either in the model training stage or in a subsequent fine-tuning stage of the model.
The watermark embedding method can be applied to computer vision tasks such as classification, detection, and segmentation that use a deep learning model.
According to the preferred embodiment of the invention, the batch normalization layer commonly used in deep convolutional neural networks is changed into a watermark layer, and its learnable scaling and translation factors are replaced by a non-learnable private key. At the initialization stage of the network model, a certain number of images are randomly selected from the training dataset as random seeds for generating the private key, then input into a pre-trained network to obtain the features of each layer. If a certain layer is selected as the watermark layer, a certain number of feature maps are randomly selected from that layer as the private key generation source, and an averaging operation yields the final private key. The private key and public key are generated in pairs and located in the two sub-networks respectively; the public key corresponds to the private key, is updated to a similar degree during training, and the resulting keys ensure the network achieves the same performance. The public- and private-key watermarks are numerically similar but differ in sign, ensuring that the embedded watermark information is hard to discover from the public key even when it is public. To this end, a loss function is added between the public key and the private key during training so that they remain equivalent in performance yet distinct.
The public-private key pair is generated by cooperatively optimizing two sub-networks that share all parameters except the watermark layer, using a teacher-student training strategy, so that as the private key changes with the convolutional-layer parameters during training, the public key changes accordingly.
After the watermark embedding process is completed, the sub-network containing the public key is distributed to users. Complete identity information is difficult to extract from the public key, ensuring the security of the embedded watermark. When verifying model ownership, the public-key parameters in the model are replaced with the private key; the model using the private key maintains consistent performance, and the identity information of the model developer can be extracted through the private key, completing the authentication of model ownership. Since the public-private key pair is part of the model parameters, normal inference performance is guaranteed when it is present, and without it the model cannot achieve its normal test performance, which also serves as an important basis for judging model ownership.
In addition, the public-private key watermark can be embedded in any layer of the model, in one or several layers, which helps ensure the watermark's security.
Experiments on different datasets show that this form of watermark embedding does not affect the model's original performance, and the embedded watermark remains strongly robust under common model modifications such as fine-tuning and pruning.
The embedding and extraction method of the deep learning model watermark based on the public-private key pair embeds the identity information of the model creator into the deep learning model, ensures the model is unusable without authorization, and allows ownership to be proven by extracting the identity information in the watermark even if the model is stolen.
The following is a further description by way of specific embodiments in conjunction with the accompanying drawings.
Fig. 1 shows a schematic diagram of a convolution block containing a normal batch normalization layer; the block comprises a convolutional layer, a batch normalization layer, and a ReLU activation layer, the basic building units of a deep convolutional neural network. In the normal processing flow of the batch normalization layer, the features produced by the convolutional layer are normalized: the mean of a batch of features is subtracted and the result divided by the batch's standard deviation, with a very small constant added to the denominator to prevent division by zero. The normalized features are multiplied by a scaling factor γ and then a translation factor β is added, increasing the expressive power of the network's non-linearity. Both factors are learnable parameters, continuously updated during model training. In the two-branch sub-network described below, this learnable scaling factor is referred to as the public-key watermark.
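The normal batch-normalization flow described above can be sketched as follows (a minimal numpy illustration, not the patent's implementation):

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Plain batch-normalization forward pass over an (N, C, H, W) batch:
    normalize each channel to zero mean and unit variance (eps guards the
    denominator), then scale by gamma (the factor that becomes the
    public/private-key watermark) and shift by beta."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)
```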
The normal batch normalization layer is then converted into a watermark layer, as shown in fig. 2. Specifically, the normal convolutional layer and ReLU activation layer in the convolution block are kept unchanged, and only the learnable scaling and translation factors in the batch normalization layer are made non-learnable. Before training begins, a certain number of images are randomly drawn from the training dataset as generation seeds for the private key and fed into a pre-trained network to obtain the feature map of each layer. If a layer with feature maps of shape N × C₁ × H × W is selected as the watermark layer, C₂ feature maps are randomly selected from it, and an averaging operation then yields a feature of shape 1 × C₂ × 1 × 1, matching the shape of the scaling and translation factors; this scaling factor is the required private-key watermark. The private-key watermark is updated as the convolutional-layer parameters are updated during model training, but is itself a non-learnable quantity.
For the identity information to be embedded in the watermark, the characters of the string are first converted to their corresponding ASCII codes, then to binary, and the resulting binary string is converted according to the specific requirements of the embedding function. Taking the English word 'Hi' as an example: the ASCII codes '72, 105' of the characters are obtained first and converted to the 8-bit binary string '0100100001101001'. Using a hinge-style embedding loss, shown below, whose required label form is {-1, 1}, the string is converted to '-1, 1, -1, -1, 1, -1, -1, -1, -1, 1, 1, -1, 1, -1, -1, 1'; this string d is the label during training, and γ is the private key.
$$L_{ID}(W) = \sum_{i} \max\left(0,\; \mu - d_i \gamma_i\right)$$

where $\mu$ is a small positive margin.
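A minimal numpy sketch of a hinge-style sign loss consistent with the {-1, 1} labels above (the exact functional form and the margin value are assumptions, since the patent gives the formula only as an image):

```python
import numpy as np

def sign_embedding_loss(gamma, d, margin=0.1):
    """Hedged reconstruction of the embedding loss: for each target label
    d_i in {-1, +1}, penalize the private-key element gamma_i unless
    d_i * gamma_i exceeds the margin, so the signs of gamma encode d."""
    return np.sum(np.maximum(0.0, margin - d * gamma))
```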
For the training strategy of the watermark model, a teacher-student model framework is used, as shown in fig. 3, a watermark layer in a teacher network is embedded in identity information of a model developer through a watermark embedding function, a corresponding layer in a student network is a normal batch normalization layer, a non-learnable scaling factor in the teacher network becomes a private key watermark, and a learnable scaling factor in the student network becomes a public key watermark. After model training is completed, the student network is distributed to users for use. The network weights of the rest layers of the two sub-networks except the corresponding watermark layers are different are mutually shared. In order to ensure that the public key watermark and the private key watermark can enable the network to realize the same performance, but the public key watermark and the private key watermark have difference, a cosine inversion similarity loss function is added between corresponding layers to realize the effect. The cosine similarity loss function is shown below:
L_CS(γ, γ̂) = (Σ_{i=1}^{C} γ_i·γ̂_i) / (√(Σ_{i=1}^{C} γ_i²) · √(Σ_{i=1}^{C} γ̂_i²) + ε)

wherein γ and γ̂ are the corresponding public key watermark and private key watermark respectively, γ_i and γ̂_i represent the i-th elements of the watermark representation vectors, C represents the number of channels in the model, and ε is a constant used to avoid a zero denominator, preferably 10^-8.
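The cosine similarity term can be computed directly as follows (a sketch; the function name is illustrative, and ε defaults to the preferred 10^-8):

```python
import numpy as np

def cosine_similarity_loss(gamma_pub, gamma_priv, eps=1e-8):
    # similarity lies in [-1, 1]; minimizing it drives the two keys apart
    num = float(np.dot(gamma_pub, gamma_priv))
    den = np.sqrt((gamma_pub ** 2).sum()) * np.sqrt((gamma_priv ** 2).sum()) + eps
    return num / den

same = cosine_similarity_loss(np.ones(8), np.ones(8))        # parallel keys
opposite = cosine_similarity_loss(np.ones(8), -np.ones(8))   # anti-parallel keys
```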
Using a cross entropy loss function L_CE as the classification loss for a classification task model, the overall loss function L of the model is as follows:

L_CE(y, p) = −Σ_i y_i·log p_i

L = L_CE + λ_1·L_ID(W) + λ_2·L_CS

wherein y represents the label of the sample and p represents the probability distribution of the sample prediction; y_i is the label of the i-th sample, and p_i is the probability predicted as positive for the i-th sample; L_ID(W) represents the loss function of the embedded watermark identification, where W is the corresponding convolution layer weight of the network; λ_1 and λ_2 are hyper-parameters used to balance the different loss functions.
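The overall objective L = L_CE + λ_1·L_ID(W) + λ_2·L_CS can be combined numerically as follows (a sketch assuming a hinge-style identification loss; the λ values and all names are illustrative):

```python
import numpy as np

def total_loss(y, p, gamma_priv, d, gamma_pub, lam1=0.01, lam2=0.01, eps=1e-8):
    ce = -(y * np.log(p)).sum()                         # classification loss L_CE
    l_id = np.maximum(0.0, 1.0 - d * gamma_priv).sum()  # watermark embedding loss L_ID
    num = float(np.dot(gamma_pub, gamma_priv))          # cosine similarity L_CS
    den = np.linalg.norm(gamma_pub) * np.linalg.norm(gamma_priv) + eps
    return ce + lam1 * l_id + lam2 * num / den

y = np.array([0.0, 1.0])                   # one-hot sample label
p = np.array([0.2, 0.8])                   # predicted distribution
d = np.array([-1.0, 1.0, -1.0, 1.0])       # embedded labels
gamma_priv = np.array([-2.0, 2.0, -2.0, 2.0])
gamma_pub = np.array([1.0, 0.5, -0.5, 0.2])
loss = total_loss(y, p, gamma_priv, d, gamma_pub)
```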
After the trained student watermark model is distributed to users, a third party may make certain modifications to the model, including fine-tuning on a small-scale data set and model pruning used to reduce the parameter count. Experiments with the watermark model on a series of image data sets (Cifar-10, Cifar-100, OCT2017 and HAM10000) show that, while preserving model accuracy, the public-private key based deep learning model watermark embedding method remains strongly robust under various attacks that attempt to destroy the model watermark. After the watermark model is fine-tuned, the watermark still achieves a detection rate of 100%, and after the watermark model is pruned with 60% of the weights removed, the identity information in the watermark still reaches a detection accuracy of 99%.
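The detection rates quoted above amount to extracting the embedded bits and comparing them with the original label string; claim 8 states that extraction uses a sign function on the watermark layer. A sketch with illustrative names:

```python
import numpy as np

def extract_identity(gamma_priv):
    # the sign of each scaling factor recovers the embedded {-1, +1} labels
    return np.where(gamma_priv >= 0, 1, -1)

def detection_rate(extracted, d):
    # fraction of embedded bits recovered correctly
    return float((extracted == d).mean())

d = np.array([-1, 1, -1, -1, 1, -1, -1, -1])   # embedded labels (e.g. 'H')
gamma = np.array([-0.7, 1.2, -0.4, -0.9, 0.3, -1.1, -0.2, -0.6])
rate = detection_rate(extract_identity(gamma), d)
```

As long as fine-tuning or pruning does not flip the sign of a watermark scaling factor, the corresponding bit remains recoverable.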
The background section of the present invention may contain background information related to the problem or environment of the present invention and does not necessarily describe prior art. Accordingly, the inclusion in the background section is not an admission of prior art by the applicant.
The foregoing is a more detailed description of the invention in connection with specific/preferred embodiments and is not intended to limit the practice of the invention to those descriptions. It will be apparent to those skilled in the art that various substitutions and modifications can be made to the described embodiments without departing from the spirit of the invention, and these substitutions and modifications should be considered to fall within the scope of the invention. In the description herein, references to the description of the term "one embodiment," "some embodiments," "preferred embodiments," "an example," "a specific example," or "some examples" or the like are intended to mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Various embodiments or examples and features of various embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction. Although embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope of the claims.

Claims (8)

1. A method for embedding a deep learning model watermark based on a public and private key pair, characterized by comprising the following steps: converting a batch normalization layer in a deep convolutional neural network model into a watermark layer, and embedding model authentication information into any one or more layers of the model during training through a watermark embedding loss function to form a specific watermark, wherein the scaling factors in the batch normalization layer become a public key and a private key, the private key being a non-learnable scaling factor generated from randomly selected feature maps in the model initialization stage, and the public key being a learnable scaling factor that changes synchronously with the change of the private key during training;
wherein the generation process of the private key comprises the following steps: in the initialization stage of a network model, randomly selecting a certain number of images from a training data set as random seeds for generating a private key, then inputting the images into a pre-training network to respectively obtain a feature map of each layer, if a certain layer is selected as a watermark layer, randomly selecting a certain number of feature maps from the feature maps of the layer as a private key generation source, and then carrying out average operation to obtain a scaling factor as the private key;
in the training process, two sub-networks whose parameters are shared except for the watermark layer are trained collaboratively to obtain a paired public and private key; the private key and the public key are generated in pairs and are located in the two sub-networks respectively, the public key corresponds to the private key, and the two are kept synchronously updated during training to ensure that the obtained public and private key pair achieves the same performance in the network.
2. The method of claim 1, wherein the information to be embedded is first converted into ASCII code and then into binary code, and {0,1} coding or {1, -1} coding is selected according to the characteristics of the embedding loss function.
3. The method for embedding public and private key pair-based deep learning model watermark, according to claim 1, wherein in the process of generating the private key, a normal convolutional layer and a Relu activation layer in a convolutional block of a deep convolutional neural network model are kept unchanged, only learnable scaling factors and translation factors in the batch normalization layer are changed into non-learnable parameters, and features with the same shape as the learnable scaling factors and the translation factors are obtained through the averaging operation, wherein the scaling factors are the private key.
4. The method for embedding deep learning model watermark based on public-private key pair as claimed in any one of claims 1 to 3, wherein an additional supervised loss function is added to the public-private key pair during training.
5. The method for embedding a public-private key pair-based deep learning model watermark according to any of claims 1 to 3, wherein the embedding loss function comprises a cosine similarity loss function L_CS as follows:

L_CS(γ, γ̂) = (Σ_{i=1}^{C} γ_i·γ̂_i) / (√(Σ_{i=1}^{C} γ_i²) · √(Σ_{i=1}^{C} γ̂_i²) + ε)

wherein γ and γ̂ are the corresponding public key watermark and private key watermark respectively, γ_i and γ̂_i represent the i-th elements of the representation vectors of the public key watermark and the private key watermark respectively, C represents the number of channels in the model, and ε is a set constant;

using a cross entropy loss function L_CE as the classification loss function for a classification task model, the overall loss function L of the model is as follows:

L_CE(y, p) = −Σ_i y_i·log p_i

L = L_CE + λ_1·L_ID(W) + λ_2·L_CS

wherein y represents the label of the sample and p represents the probability distribution of the sample prediction; y_i is the label corresponding to the i-th sample, and p_i is the probability predicted as positive for the i-th sample; L_ID(W) represents the loss function of the embedded watermark identification, where W represents the corresponding convolution layer weight of the network; λ_1 and λ_2 are hyper-parameters used to balance the different loss functions.
6. The method for embedding a public-private key pair-based deep learning model watermark according to any one of claims 1 to 3, wherein the watermark is further added to convolution layers of a deep convolutional neural network, wherein the watermark is added in a model training phase or in a subsequent fine tuning phase of the model.
7. A method for extracting a deep learning model watermark based on a public and private key pair, characterized in that the specific watermark embedded in a model by the method for embedding a deep learning model watermark based on a public and private key pair according to any one of claims 1 to 6 is extracted, wherein, when the ownership of the model is verified, the public key in the model is replaced by the private key, so that the replaced model maintains consistent performance and the authentication information can be extracted through the private key, thereby completing the authentication of model ownership.
8. The method for extracting deep learning model watermark based on public-private key pair as claimed in claim 7, wherein the authentication information is extracted from the watermark embedded model using a sign function for a watermark layer.
CN202010197449.7A 2020-03-19 2020-03-19 Method for embedding and extracting deep learning model watermark based on public and private key pair Active CN111490872B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010197449.7A CN111490872B (en) 2020-03-19 2020-03-19 Method for embedding and extracting deep learning model watermark based on public and private key pair


Publications (2)

Publication Number Publication Date
CN111490872A CN111490872A (en) 2020-08-04
CN111490872B true CN111490872B (en) 2022-09-16

Family

ID=71810789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010197449.7A Active CN111490872B (en) 2020-03-19 2020-03-19 Method for embedding and extracting deep learning model watermark based on public and private key pair

Country Status (1)

Country Link
CN (1) CN111490872B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113902121B (en) * 2021-07-15 2023-07-21 陈九廷 Method, device, equipment and medium for verifying battery degradation estimation device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109165306A (en) * 2018-08-09 2019-01-08 长沙理工大学 Image search method based on the study of multitask Hash
JP2019053542A (en) * 2017-09-15 2019-04-04 Kddi株式会社 Information processing apparatus, information processing method and program
CN110766598A (en) * 2019-10-29 2020-02-07 厦门大学嘉庚学院 Intelligent model watermark embedding and extracting method and system based on convolutional neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020009208A1 (en) * 1995-08-09 2002-01-24 Adnan Alattar Authentication of physical and electronic media objects using digital watermarks
US20110055585A1 (en) * 2008-07-25 2011-03-03 Kok-Wah Lee Methods and Systems to Create Big Memorizable Secrets and Their Applications in Information Engineering
US10902543B2 (en) * 2018-03-15 2021-01-26 Tata Consultancy Services Limited Neural network based insertion of watermark into images and tampering detection thereof
US11575500B2 (en) * 2018-07-25 2023-02-07 Sap Se Encrypted protection system for a trained neural network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhuang Xinrui et al.; "Self-supervised Feature Learning for 3D Medical Images by Playing a Rubik's Cube"; 22nd International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI); 2019-10-17; pp. 420-428 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant