CN116304144A - Image processing method and device based on adversarial neural architecture search

Image processing method and device based on adversarial neural architecture search

Info

Publication number
CN116304144A
Authority
CN
China
Prior art keywords: network, DNN, vulnerability, channel, cell
Legal status: Pending
Application number
CN202211690326.2A
Other languages
Chinese (zh)
Inventor
王滨
张峰
钱亚冠
王伟
王星
李超豪
Current Assignee: Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee: Hangzhou Hikvision Digital Technology Co Ltd
Application filed by Hangzhou Hikvision Digital Technology Co Ltd
Publication of CN116304144A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval of still image data
    • G06F16/53 Querying
    • G06F16/532 Query formulation, e.g. graphical querying
    • G06F16/55 Clustering; Classification
    • G06F16/58 Retrieval characterised by using metadata
    • G06F16/583 Retrieval using metadata automatically derived from the content
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • Library & Information Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides an image processing method and device based on adversarial neural architecture search, wherein the method comprises the following steps: for any epoch in the DNN architecture search process, iteratively and alternately updating the operation parameters and structure parameters of the DNN using a gradient descent algorithm, an obtained image training set, and an obtained image validation set, until the number of iterations reaches a first iteration count; iteratively updating the structure parameters of the DNN using a preset network vulnerability constraint and the obtained image validation set, until the number of iterations in the epoch reaches a second iteration count; and, when the number of searched epochs reaches a first epoch count or the DNN model converges, generating a target DNN for image processing according to the obtained structure parameters, and performing image processing on an image to be processed using the target DNN. The method can improve the accuracy of image processing performed with a DNN.

Description

Image processing method and device based on adversarial neural architecture search
Technical Field
The application relates to the technical field of artificial intelligence security, and in particular to an image processing method and device based on adversarial neural architecture search.
Background
Currently, deep neural networks (Deep Neural Networks, DNN for short) exhibit excellent performance in various image processing applications, such as image classification, object detection, semantic segmentation, and the like.
However, DNNs are vulnerable to adversarial example attacks: adding deliberately crafted, human-imperceptible perturbations to input samples can cause the network model to give an erroneous output with high confidence.
Taking image classification as an example, adding carefully designed, human-imperceptible perturbations to an image to be classified can cause a trained DNN to produce an erroneous classification result.
Disclosure of Invention
In view of the foregoing, the present application provides an image processing method and apparatus based on adversarial neural architecture search.
Specifically, the application is realized by the following technical scheme:
according to a first aspect of embodiments of the present application, there is provided an image processing method based on an antagonistic neural network structure search, including:
for any round of epoch in the DNN network structure searching process, using a gradient descent algorithm, an obtained image training set and an obtained image verification set to carry out iterative alternate updating on the operation parameters and the structure parameters of the DNN network until the iterative times reach a first iterative times;
And carrying out iterative updating on the structural parameters of the DNN network by using a preset network vulnerability constraint condition and an obtained image verification set until the iterative times in the epoch reach a second iterative times, so as to obtain the structural parameters meeting the preset network vulnerability constraint condition; the network vulnerability is used for representing the difference of characteristic distribution of the clean image sample and the corresponding countermeasure image sample in the DNN network, and the second iteration times are larger than the first iteration times;
and under the condition that the searched epochs reach the first epochs or the DNN network model is converged, generating a target DNN network for image processing according to the obtained structural parameters, and performing image processing on an image to be subjected to image processing by using the target DNN network.
According to a second aspect of embodiments of the present application, there is provided an image processing apparatus based on adversarial neural architecture search, including:
a first search unit, configured to, for any epoch in the DNN architecture search process, iteratively and alternately update the operation parameters and structure parameters of the DNN using a gradient descent algorithm, an obtained image training set, and an obtained image validation set, until the number of iterations reaches a first iteration count;
a second search unit, configured to iteratively update the structure parameters of the DNN using a preset network vulnerability constraint and the obtained image validation set, until the number of iterations in the epoch reaches a second iteration count, so as to obtain structure parameters satisfying the preset network vulnerability constraint, where the network vulnerability characterizes the difference between the feature distributions of clean image samples and their corresponding adversarial image samples in the DNN, and the second iteration count is larger than the first iteration count; and
a generation unit, configured to, when the number of searched epochs reaches a first epoch count or the DNN model converges, generate a target DNN for image processing according to the obtained structure parameters, and perform image processing on an image to be processed using the target DNN.
According to the image processing method based on adversarial neural architecture search provided by the embodiments of the present application, a network vulnerability measure characterizing the difference between the feature distributions of clean image samples and their corresponding adversarial image samples in a DNN is proposed to quantify the vulnerability of the DNN, and this network vulnerability is constrained during the DNN architecture search so as to find network structures with lower vulnerability, yielding a DNN with higher adversarial robustness. This effectively improves the adversarial robustness of the DNN and, in turn, the accuracy of image processing performed with it.
Drawings
Fig. 1 is a flowchart of an image processing method based on adversarial neural architecture search according to an exemplary embodiment of the present application;
Fig. 2 is a flowchart of an image processing method based on adversarial neural architecture search according to an exemplary embodiment of the present application;
Fig. 3 is a schematic structural diagram of an image processing apparatus based on adversarial neural architecture search according to an exemplary embodiment of the present application;
Fig. 4 is a schematic structural diagram of another image processing apparatus based on adversarial neural architecture search according to yet another exemplary embodiment of the present application;
Fig. 5 is a schematic structural diagram of another image processing apparatus based on adversarial neural architecture search according to yet another exemplary embodiment of the present application;
Fig. 6 is a schematic diagram of the hardware structure of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the present application as detailed in the appended claims.
The terminology used in the present application is for the purpose of describing particular embodiments only and is not intended to be limiting of the present application. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
In order to enable those skilled in the art to better understand the technical solutions provided in the embodiments of the present application, some terms related to the embodiments of the present application will be briefly described below.
Deep Neural Network (DNN): a dataset is expressed as D = {X, Y}, where X is the sample set, Y is the corresponding label set, x ∈ X is any sample instance, and y ∈ Y is its corresponding label. A neural network with L layers is represented by a function f: X → Y. The output vector $f_\theta(x)$ of the neural network can be expressed as:

$$f_\theta(x) = f^{(L-1)}\big(f^{(L-2)}(\cdots(f^{(1)}(x)))\big)$$

where θ are the operation parameters of the neural network, $\theta = \{W^{(1)},\ldots,W^{(L-1)}, b^{(1)},\ldots,b^{(L-1)}\}$, and $W^{(i)}$ and $b^{(i)}$ are the weight matrix and bias vector of layer i, respectively. Defining the output feature vector of layer l as $z^{(l)}$ and assuming the activation function is ReLU, $f^{(l)}(\cdot)$ is defined as:

$$f^{(l)}(z^{(l-1)}) = \mathrm{ReLU}\big(W^{(l)} z^{(l-1)} + b^{(l)}\big)$$

In the case of a convolutional neural network:

$$z^{(l)} = \mathrm{ReLU}\big(W^{(l)} * z^{(l-1)} + b^{(l)}\big)$$

where * denotes the convolution operation, and $z^{(l,k)}$ is defined as the k-th feature map of layer l of the convolutional neural network.
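For illustration only, the composition above can be written as a short PyTorch-style sketch. This is a hypothetical example rather than an implementation from the patent; the class name and layer sizes are invented.

```python
# Hypothetical sketch of f_theta(x) = f^(L-1)(...(f^(1)(x))) with ReLU layers.
import torch
import torch.nn as nn

class SimpleDNN(nn.Module):
    def __init__(self, dims=(784, 256, 128, 10)):
        super().__init__()
        # theta = {W^(1), ..., W^(L-1), b^(1), ..., b^(L-1)} lives in these layers
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(len(dims) - 1)
        )

    def forward(self, x):
        z = x
        for layer in self.layers[:-1]:
            z = torch.relu(layer(z))   # z^(l) = ReLU(W^(l) z^(l-1) + b^(l))
        return self.layers[-1](z)      # network output f_theta(x)
```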
Adversarial example: an adversarial example is an input sample formed by adding a subtle perturbation (often imperceptible to the human eye) that causes the model to give an erroneous output with high confidence. For a natural sample $x \in X$ with correct class label y, if there exists a perturbation δ with $\|\delta\|_2 < \epsilon$ such that $x' = x + \delta$ satisfies $f_\theta(x') = y'$ and $y' \neq y$, then $x'$ is called an adversarial example.
Adversarial training: adversarial training was proposed as a data augmentation method, in which adversarial examples are generated by an attack method and the network is then trained on them to enhance its robustness.
Edge computing and cloud-edge collaboration: edge computing is an extension of the concept of cloud computing; the two depend on each other and work together. Cloud-edge collaboration has become the mainstream mode, and in this mode cloud computing evolves towards a new, more globally distributed combination of nodes.
DARTS (Differentiable Architecture Search): a differentiable search framework. Its search space is defined on cells; each cell is a directed acyclic graph (DAG) with N nodes $\{x_0, x_1, \ldots, x_{N-1}\}$, where each node represents a layer in the network.
An operation space O is defined, in which each element o represents a candidate operation of the network in each layer (e.g., 3×3 convolution, 5×5 convolution, pooling). For each cell, the goal is to select the most appropriate operation from the operation space O to connect each pair of nodes in the cell. The information flow between layers (i.e., from node i to node j) is an edge denoted $f^{(i,j)}$, composed of the candidate operations weighted by a set of architecture parameters $\alpha^{(i,j)}$:

$$f^{(i,j)}(x_i) = \sum_{o \in O} \frac{\exp\big(\alpha_o^{(i,j)}\big)}{\sum_{o' \in O} \exp\big(\alpha_{o'}^{(i,j)}\big)} \, o(x_i)$$

where $x_i$ is the output of the i-th node and the softmax term is the weight of the operation $o(x_i)$. The input of a node is the sum of the outputs of all its predecessor nodes, $x_j = \sum_{i<j} f^{(i,j)}(x_i)$, and the output of the whole cell is $x_{N-1} = \mathrm{concat}(x_0, x_1, \ldots, x_{N-2})$, where concat concatenates all channels.
The parameters of the operations on each edge of the DARTS network are the operation parameters θ. Because the search space of DARTS is continuous and differentiable, the operation parameters θ and the structure parameters α can be updated alternately in an end-to-end fashion during the search. After the search is completed, on each edge $f^{(i,j)}$ the operation o with the largest structure parameter α is retained, and the finally obtained cell is stored. Taking the obtained cell as a basic unit, M cells are stacked to form the target network.
The search objective of DARTS can be expressed as the bilevel optimization problem:

$$\min_{\alpha} \; L_{val}\big(\theta^*(\alpha), \alpha\big)$$
$$\text{s.t.} \quad \theta^*(\alpha) = \arg\min_{\theta} L_{train}(\theta, \alpha)$$

That is, θ is determined by minimizing $L_{train}$, and α by minimizing $L_{val}$; the structure parameters α are trained through this bilevel optimization problem, where $L_{train}$ and $L_{val}$ denote the training loss and the validation loss, respectively.
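The mixed edge $f^{(i,j)}$ defined above is the core of DARTS and can be illustrated with a minimal, hypothetical PyTorch sketch; the candidate-operation list and channel count are invented, and this is a reading of the formula rather than the patent's implementation:

```python
# Hypothetical sketch of a DARTS mixed edge: a softmax over the structure
# parameters alpha^(i,j) weights every candidate operation o in O.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)                      # operation space O
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))  # alpha^(i,j)

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        # f^(i,j)(x_i) = sum_o softmax(alpha_o^(i,j)) * o(x_i)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Illustrative candidate operations for a 16-channel feature map:
ops = [
    nn.Conv2d(16, 16, 3, padding=1),       # 3x3 convolution
    nn.Conv2d(16, 16, 5, padding=2),       # 5x5 convolution
    nn.MaxPool2d(3, stride=1, padding=1),  # 3x3 max pooling
    nn.Identity(),                         # skip connection
]
edge = MixedOp(ops)
out = edge(torch.randn(1, 16, 32, 32))
```

Because the softmax weights are differentiable in α, the architecture itself can be updated by gradient descent, which is what makes the alternating updates described below possible.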
In order to make the above objects, features and advantages of the embodiments of the present application more comprehensible, the following describes the technical solutions of the embodiments of the present application in detail with reference to the accompanying drawings.
Referring to fig. 1, which is a flowchart of an image processing method based on adversarial neural architecture search according to an embodiment of the present application, the method is applied to an electronic device. As one embodiment, the electronic device may be an Internet-of-Things terminal device to which a deep neural network is applied, such as a video terminal or an access control device. As another embodiment, the electronic device may be a backend device such as a server; this embodiment is not particularly limited. As shown in fig. 1, the image processing method based on adversarial neural architecture search may include the following steps:
step S100, for any round epoch in the DNN network structure searching process, using a gradient descent algorithm, an obtained image training set and an obtained image verification set to update the operation parameters and the structure parameters of the DNN network in an iterative and alternative manner until the iterative times reach a first iterative times.
Step S110, carrying out iterative updating on structural parameters of the DNN network by using preset network vulnerability constraint conditions and an obtained image verification set until the iterative times in the epoch reach second iterative times, so as to obtain the structural parameters meeting the preset network vulnerability constraint conditions; the network vulnerability is used for representing deviation of characteristic distribution of the clean image sample and the corresponding countermeasure image sample in the DNN network, and the second iteration times are larger than the first iteration times.
In the embodiments of the present application, the network structure of a DNN is considered to be a key factor affecting its adversarial robustness (i.e., the network's robustness to adversarial attacks). Therefore, in addition to improving robustness through weight optimization, the influence of the network structure on robustness is taken into account, which can effectively raise the upper limit of the network's robustness.
Based on this consideration, the embodiments of the present application propose a network vulnerability measure, characterizing the difference between the feature distributions of clean image samples and their corresponding adversarial image samples in the DNN, to quantify the DNN's vulnerability, and constrain this network vulnerability during the search so as to find network structures with lower vulnerability, thereby obtaining a DNN with higher adversarial robustness.
Illustratively, the smaller the network vulnerability of a DNN, the smaller the difference between the feature distributions of clean image samples and their corresponding adversarial image samples in the DNN, and the higher the DNN's adversarial robustness.
For example, during the robust architecture search of the DNN, for any epoch (round) in the search process, a gradient descent algorithm such as SGD (Stochastic Gradient Descent), the obtained image training set, and the obtained image validation set may be used to iteratively and alternately update the operation parameters and structure parameters of the DNN until the number of iterations reaches a preset iteration count (which may be referred to as the first iteration count, whose value may be set according to actual requirements). That is, an unconstrained search is performed, searching for a better structure in a larger, unconstrained parameter space.
Once the above iterative alternate updating of the operation parameters and structure parameters of the DNN is completed, the structure parameters of the DNN may continue to be iteratively updated using the preset network vulnerability constraint and the obtained image validation set, so as to obtain structure parameters satisfying the preset network vulnerability constraint, until the number of iterations in the epoch reaches a preset iteration count (referred to herein as the second iteration count, whose value may be set according to actual requirements).
The second iteration count is greater than the first iteration count; that is, for any epoch, the number of iterations in which the structure parameters of the DNN are updated under the preset network vulnerability constraint is the difference between the second and first iteration counts.
Step S120: when the number of searched epochs reaches a first epoch count or the DNN model converges, generate a target DNN for image processing according to the obtained structure parameters, and perform image processing on an image to be processed using the target DNN.
In the embodiments of the present application, the DNN model may be searched over multiple epochs in the manner described in steps S100 to S110, until the number of searched epochs reaches a preset epoch count (referred to herein as the first epoch count) or the DNN model converges.
When the number of epochs of the DNN architecture search reaches the first epoch count, or the DNN model converges, the target DNN is generated according to the obtained structure parameters.
Illustratively, for any edge of the DNN, the operation with the largest structure parameter is retained to obtain the final cell, and the target DNN is formed by stacking the obtained final cells.
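As a hypothetical illustration of this discretization step (operation names and α values are invented), keeping the argmax operation per edge can be sketched as:

```python
# Hypothetical sketch: on each edge, keep only the operation whose structure
# parameter alpha is largest; the resulting cell is then stacked to form the
# target network.
import torch

def derive_cell(alpha_per_edge, op_names):
    # alpha_per_edge: dict mapping edge (i, j) -> tensor of |O| structure params
    return {edge: op_names[int(torch.argmax(alpha))]
            for edge, alpha in alpha_per_edge.items()}

op_names = ["conv3x3", "conv5x5", "max_pool", "skip"]
alphas = {(0, 1): torch.tensor([0.1, 0.7, 0.1, 0.1]),
          (0, 2): torch.tensor([0.2, 0.1, 0.1, 0.6])}
print(derive_cell(alphas, op_names))  # {(0, 1): 'conv5x5', (0, 2): 'skip'}
```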
With the target DNN obtained in the above manner, the image to be processed may be processed using the target DNN.
By way of example, the image processing may include, but is not limited to, image classification, object detection, or semantic segmentation.
Taking image classification as an example, the image to be classified can be classified using the target DNN, yielding a more accurate classification result.
Thus, in the method flow shown in fig. 1, a network vulnerability measure characterizing the difference between the feature distributions of clean image samples and their corresponding adversarial image samples in the DNN is proposed to quantify the DNN's vulnerability, and this network vulnerability is constrained during the DNN architecture search so as to find network structures with lower vulnerability and obtain a DNN with higher adversarial robustness. This effectively improves the DNN's adversarial robustness and, in turn, the accuracy of image processing performed with it.
In some embodiments, iteratively and alternately updating the operation parameters and structure parameters of the DNN using the gradient descent algorithm, the obtained image training set, and the obtained image validation set may include:
in any iteration, fixing the current structure parameters of the DNN and updating the operation parameters of the DNN on the obtained image training set using a gradient descent algorithm;
then fixing the current operation parameters of the DNN and updating the structure parameters of the DNN on the obtained image validation set using a gradient descent algorithm.
For example, during the architecture search of the DNN, updating of the structure parameters α can be added on top of conventional neural network training (which updates the operation parameters θ).
For any iteration, the structure parameters of the DNN may first be fixed, and the operation parameters of the DNN updated on the obtained image training set using a gradient descent algorithm such as SGD, to obtain the updated operation parameters for that iteration.
Once the operation parameters have been updated in that iteration, the current operation parameters of the DNN (i.e., the operation parameters just updated) may be fixed, and the structure parameters of the DNN updated on the obtained image validation set using a gradient descent algorithm such as SGD, obtaining the updated structure parameters for that iteration. That is, the training result is validated on the validation set, and a new network structure is selected according to the validation result, completing the iteration.
It should be noted that, in one iteration, the current structure parameters of the DNN may be fixed first and the operation parameters updated on the image training set using a gradient descent algorithm, after which the current operation parameters are fixed and the structure parameters updated on the image validation set; alternatively, the current operation parameters may be fixed first and the structure parameters updated on the image validation set, after which the current structure parameters are fixed and the operation parameters updated on the image training set.
Illustratively, during the search, a portion of the dataset (either the obtained image training set or the obtained image validation set), referred to as a batch, is selected per iteration: each iteration selects batch-size samples from the dataset for searching, and one pass over all image samples in the dataset completes one epoch.
For example, the current structure parameters of the DNN may first be fixed, and a batch of image samples selected from the obtained image training set to update the operation parameters of the DNN using a gradient descent algorithm; then the current operation parameters of the DNN are fixed, and a batch of image samples selected from the obtained image validation set to update the structure parameters of the DNN using a gradient descent algorithm, completing one iteration.
In the search process of one epoch, when the operation parameters and structure parameters of the DNN are iteratively and alternately updated, the ratio of the number of batches obtained by dividing the obtained image training set to the number of batches obtained by dividing the obtained image validation set may be N1:N2, with N1 < N2.
Within one epoch, the image samples in the obtained validation set can be used both for updating the structure parameters during the unconstrained search and for updating them during the constrained search.
Assuming the obtained image training set is divided into N1 batches and the obtained image validation set into N2 batches, then within one epoch the number of iterations of the unconstrained search may be N1, and the number of iterations of the constrained search may be (N2 - N1).
Illustratively, if the ratio of sample counts between the obtained image training set and the obtained image validation set is N1:N2, dividing both sets into batches yields a consistent batch size; if the ratio is not N1:N2, the batch sizes obtained by dividing the two sets are inconsistent.
If the number of iterations has not reached the first iteration count, the next iteration continues in the above manner until it does; once the first iteration count is reached, the iteration may end.
It can be seen that by iteratively updating the operation parameters and structure parameters of the DNN without constraint (which may be referred to as an unconstrained search), a better network structure is sought in a larger, unconstrained parameter space, and by jointly optimizing the weights and the architecture on adversarial examples, a structure-parameter starting point (i.e., an initial value of the structure parameters) carrying good robustness information is provided for the subsequent constrained structure-parameter updates.
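One unconstrained-search iteration can be sketched as follows. This is a hypothetical reading of the alternating scheme above, assuming the supernet names its structure parameters "alpha" (as in the MixedOp sketch earlier); it is not code from the patent.

```python
# Hypothetical sketch of one unconstrained-search iteration: theta is updated
# on a training batch with alpha fixed, then alpha on a validation batch with
# theta fixed.
import torch
import torch.nn as nn

def split_params(supernet: nn.Module):
    # Assumes structure parameters are the ones whose name contains "alpha".
    theta = [p for n, p in supernet.named_parameters() if "alpha" not in n]
    alpha = [p for n, p in supernet.named_parameters() if "alpha" in n]
    return theta, alpha

def unconstrained_step(supernet, train_batch, val_batch,
                       opt_theta, opt_alpha, criterion):
    x, y = train_batch                       # fix alpha, update theta on the
    opt_theta.zero_grad()                    # image training set
    criterion(supernet(x), y).backward()
    opt_theta.step()

    xv, yv = val_batch                       # fix theta, update alpha on the
    opt_alpha.zero_grad()                    # image validation set
    criterion(supernet(xv), yv).backward()
    opt_alpha.step()
```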
In some embodiments, iteratively updating the structure parameters of the DNN using the preset network vulnerability constraint and the obtained image validation set may include:
determining, according to the preset network vulnerability constraint, a set of candidate structure parameters that make the network vulnerability of the DNN satisfy the constraint;
iteratively updating the candidate structure parameters, determined from this set, that are closest to the current structure parameters of the DNN, using a gradient descent algorithm and the obtained image validation set.
Illustratively, once the unconstrained iterative alternate updating of the operation parameters and structure parameters is completed in the above manner, the structure parameters may be further iteratively updated by introducing the network vulnerability constraint, i.e., updated under constraint (which may be referred to as a constrained search).
In the constrained iterative process, a set of candidate structure parameters (which may be referred to as the feasible set) that make the network vulnerability of the DNN satisfy the preset network vulnerability constraint may be determined according to that constraint.
Given the feasible set, a gradient descent algorithm such as SGD may be used to iteratively update the candidate structure parameters from the feasible set that are closest to the current structure parameters of the DNN.
For example, a candidate structure parameter may first be selected randomly from the feasible set as the one closest to the current structure parameters α of the DNN (which may be denoted $\alpha_P$), and $\alpha_P$ is iteratively updated using a gradient descent algorithm so that it approaches α step by step within the feasible set.
For example, an $\alpha_P$ may first be randomly selected from the feasible set as a starting point and updated one step with the SGD algorithm to obtain a new $\alpha_P$ in the feasible set (whose distance to α is smaller than that of the pre-update $\alpha_P$); with this new $\alpha_P$ as a new starting point, the SGD algorithm is applied to $\alpha_P$ again, until the number of iterations reaches the preset count, e.g., the difference between the second and first iteration counts, at which point the iterative updating ends. The resulting $\alpha_P$ is the candidate structure parameter in the feasible set determined to be closest to α (which may be referred to as the target structure parameter).
It can be seen that by introducing the network vulnerability constraint and updating the structure parameters of the DNN under it, network structures with lower vulnerability are searched, yielding a DNN with higher adversarial robustness.
In one example, determining, according to the preset network vulnerability constraint, the set of candidate structure parameters that make the network vulnerability of the DNN satisfy the constraint may include:
determining the set of candidate structure parameters that make the network vulnerability of the DNN less than or equal to a preset threshold.
For example, taking as the preset constraint that the network vulnerability of the DNN be less than or equal to a preset threshold, when the structure parameters of the DNN are iteratively updated under this constraint, the updated structure parameters must keep the network vulnerability of the DNN at or below the threshold.
Accordingly, the set of candidate structure parameters (i.e., the feasible set) keeping the network vulnerability of the DNN at or below the preset threshold may be determined according to the constraint, the target structure parameters closest to the current structure parameters of the DNN may be found in the feasible set, and the structure parameters of the DNN may be projected onto those target structure parameters.
In some embodiments, the network vulnerability of the DNN may be determined by:
determining the channel vulnerability of each channel in the DNN;
determining the layer vulnerability of each layer in the DNN according to the channel vulnerabilities of its channels;
determining the cell vulnerability of each cell in the DNN according to the layer vulnerabilities of its layers;
and determining the network vulnerability of the DNN according to the cell vulnerabilities of its cells.
Illustratively, to determine the network vulnerability of the DNN, the concepts of channel vulnerability, layer vulnerability, cell vulnerability, and network vulnerability may be introduced.
Illustratively, the network vulnerability of the DNN may depend on the cell vulnerability of each cell in the DNN; cell vulnerability can be determined from the layer vulnerabilities of the layers in the cell; and layer vulnerability can be determined from the channel vulnerabilities of the channels in the layer.
As an example, for any channel in the DNN, the channel vulnerability of that channel is determined from the difference between the distributions of the output features of a clean image sample and its corresponding adversarial image sample at that channel.
For example, for any channel of the DNN, the expectation of the relative entropy (i.e., the KL (Kullback-Leibler) divergence) between the output features of clean image samples and their corresponding adversarial image samples at that channel is determined as the channel vulnerability of that channel.
As an example, for any layer in the DNN, the mean of the channel vulnerabilities of the channels in that layer may be determined as the layer vulnerability of that layer.
As an example, for any cell of the DNN, the layer vulnerability of the cell's output layer may be determined as the cell vulnerability of that cell.
As an example, the mean of the cell vulnerabilities of the cells in the DNN may be determined as the network vulnerability of the DNN.
It should be appreciated that the ways of determining network vulnerability in the above embodiments are merely examples and do not limit the scope of the present application; network vulnerability may also be determined in other ways in the embodiments of the present application. For example, for a cell in the DNN, the mean of the layer vulnerabilities of the layers in the cell may be determined as the cell vulnerability; for any layer, the maximum, minimum, or weighted mean of the channel vulnerabilities of its channels may be determined as the layer vulnerability.
In some embodiments, before the operation parameters and structure parameters of the DNN are iteratively and alternately updated for any epoch in the DNN architecture search process using the gradient descent algorithm, the obtained image training set, and the obtained image validation set, the method may further include:
fixing the structure parameters of the DNN and training the operation parameters of the DNN on the obtained image training set using a gradient descent algorithm until the number of trained epochs reaches a second epoch count, the second epoch count being smaller than the first epoch count.
Illustratively, given that the operation parameters of the DNN model are typically randomly initialized and thus contain little effective knowledge, not enough to guide the search of the network structure, the operation parameters of the DNN may be trained for a certain number of epochs (which may be referred to as pre-training) before the operation parameters and structure parameters are searched in the manner described in the above embodiments.
For example, the structure parameters of the DNN may be fixed and the operation parameters pre-trained on the obtained image training set using a gradient descent algorithm such as SGD, until the number of pre-trained epochs reaches a preset epoch count (referred to herein as the second epoch count); based on the pre-trained DNN model, the search of the operation parameters and structure parameters may then proceed according to the flow shown in fig. 1.
Illustratively, the second epoch count is smaller than the first epoch count; i.e., the number of epochs searched in the flow shown in fig. 1 is the difference between the first and second epoch counts.
In order to enable those skilled in the art to better understand the technical solutions provided by the embodiments of the present application, the technical solutions provided by the embodiments of the present application are described below in conjunction with specific embodiments.
In this embodiment, a network vulnerability index is defined, and a vulnerability constraint is applied to the structure parameters using this index during the search, biasing the search result towards robust structures.
The definition of network vulnerability in the embodiments of the present application is described below.
By way of example, the vulnerability of the network may be measured using the expectation of the KL divergence (i.e., relative entropy) between the feature values generated in the network by a clean image sample and its corresponding adversarial image sample.
1. Channel vulnerability
Illustratively, the channel vulnerability of the k-th channel of layer l may be defined as:

$$F_{channel}^{(l,k)} = \mathbb{E}\left[ D_{KL}\big( z^{(l,k)} \,\|\, \tilde{z}^{(l,k)} \big) \right]$$

where $z^{(l,k)}$ denotes the feature values on the k-th feature map of layer l of the network when a clean image sample is input, and $\tilde{z}^{(l,k)}$ denotes the feature values on the k-th feature map of layer l when the corresponding adversarial image sample is input.
2. Layer vulnerability
Illustratively, the layer vulnerability may be defined as the mean of the channel vulnerabilities within a layer; the layer vulnerability of layer l may be defined as:

$$F_{layer}^{(l)} = \frac{1}{N^{(l)}} \sum_{k=1}^{N^{(l)}} F_{channel}^{(l,k)}$$

where $N^{(l)}$ is the number of channels of layer l.
3. Cell vulnerability
Illustratively, the cell vulnerability is defined as the layer vulnerability of each cell's output layer; the vulnerability of the i-th cell may be defined as:

$$F_{cell}^{(i)} = \frac{1}{N^{(i)}} \sum_{k=1}^{N^{(i)}} \mathbb{E}\left[ D_{KL}\big( z_i^{(out,k)} \,\|\, \tilde{z}_i^{(out,k)} \big) \right]$$

where $z_i^{(out,k)}$ is the feature value on the k-th feature map of the output layer of the i-th cell in the network when a clean image sample is input, $\tilde{z}_i^{(out,k)}$ is the corresponding feature value when the adversarial image sample is input, and $N^{(i)}$ is the number of feature maps of the output layer of the i-th cell.
4. Network vulnerability
For example, the mean of the vulnerabilities of all cells contained in the whole network may be taken as the network vulnerability (also referred to as model vulnerability):

$$F(\alpha) = \frac{1}{M} \sum_{i=1}^{M} F_{cell}^{(i)}$$

where M is the number of cells in the whole network.
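For illustration, the vulnerability hierarchy above can be sketched as follows. This is a hypothetical interpretation: the patent does not specify how feature maps are turned into distributions, so this sketch softmax-normalizes each flattened feature map before taking the KL divergence.

```python
# Hypothetical sketch of channel -> layer -> cell -> network vulnerability.
import torch
import torch.nn.functional as F

def channel_vulnerability(z_clean, z_adv):
    # z_*: (batch, H*W) values of one feature map; the batch mean
    # approximates the expectation of KL(clean || adversarial).
    p = F.softmax(z_clean, dim=1)
    log_q = F.log_softmax(z_adv, dim=1)
    return F.kl_div(log_q, p, reduction="batchmean")

def layer_vulnerability(feat_clean, feat_adv):
    # feat_*: (batch, channels, H, W) output of one layer; mean over channels.
    b, c, _, _ = feat_clean.shape
    vals = [channel_vulnerability(feat_clean[:, k].reshape(b, -1),
                                  feat_adv[:, k].reshape(b, -1))
            for k in range(c)]
    return torch.stack(vals).mean()

def network_vulnerability(cell_outputs_clean, cell_outputs_adv):
    # Cell vulnerability is the layer vulnerability of each cell's output
    # layer; F(alpha) is the mean over the M cells.
    cells = [layer_vulnerability(zc, za)
             for zc, za in zip(cell_outputs_clean, cell_outputs_adv)]
    return torch.stack(cells).mean()
```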
In this embodiment, a robust architecture search method (Robust Network Architecture Search, RNAS for short) may be used, whose objective function is:

$$\min_{\alpha} \; \Big( L_{val}^{adv}(\theta^*, \alpha) + L_{val}^{clean}(\theta^*, \alpha), \; F(\alpha) \Big)$$
$$\text{s.t.} \quad \theta^* = \arg\min_{\theta} L_{train}^{adv}(\theta, \alpha)$$

That is, based on the θ (i.e., the operation parameters described above) that minimizes the adversarial training loss $L_{train}^{adv}$, determine the α (i.e., the structure parameters described above) that minimizes the sum of the adversarial validation loss $L_{val}^{adv}$ and the clean validation loss $L_{val}^{clean}$, and that minimizes the network vulnerability F(α). Here $L_{train}^{adv}$ and $L_{val}^{adv}$ denote the adversarial training loss and the adversarial validation loss, respectively.
The objective function of RNAS is a bilevel multi-objective optimization problem, with α the upper-level variable and θ the lower-level variable. It comprises two optimization objectives: minimizing the sum of the adversarial validation loss $L_{val}^{adv}$ and the clean validation loss $L_{val}^{clean}$, and minimizing the network vulnerability F(α).
Illustratively, to solve the above problem, network vulnerability can be translated into a constraint.
A network vulnerability upper bound H may be set to convert the original problem into a single-objective optimization problem with two constraints:

$$\min_{\alpha} \; L_{val}^{adv}(\theta^*, \alpha) + L_{val}^{clean}(\theta^*, \alpha)$$
$$\text{s.t.} \quad \theta^* = \arg\min_{\theta} L_{train}^{adv}(\theta, \alpha), \qquad F(\alpha) \le H$$

The structure parameters α are projected onto the nearest $\alpha_P$ in the feasible set that satisfies the network vulnerability constraint:

$$\alpha_P = \arg\min_{\alpha'} \; \|\alpha' - \alpha\|^2 \quad \text{s.t.} \; F(\alpha') \le H$$

That is, find the $\alpha_P$ that makes the network vulnerability F(α) less than or equal to H and is closest to α (the current structure parameters), and update α to $\alpha_P$.

Illustratively, this can be solved by the Lagrange multiplier method:

$$L(\alpha', \lambda) = \|\alpha' - \alpha\|^2 + \lambda \big( F(\alpha') - H \big)$$
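A hypothetical sketch of this projection step follows, using a hinge-penalty variant of the Lagrangian above. The multiplier λ, the learning rate, and the step count are invented placeholders, and `vulnerability_fn` stands for a differentiable evaluation of F(α):

```python
# Hypothetical sketch: gradient steps on ||alpha_p - alpha||^2
# + lambda * max(F(alpha_p) - H, 0) pull alpha_p toward the feasible set
# while keeping it close to the current structure parameters alpha.
import torch

def project_alpha(alpha, vulnerability_fn, H, lam=1.0, lr=0.01, steps=10):
    alpha_p = alpha.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([alpha_p], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((alpha_p - alpha.detach()) ** 2).sum() \
               + lam * torch.clamp(vulnerability_fn(alpha_p) - H, min=0.0)
        loss.backward()
        opt.step()
    return alpha_p.detach()
```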
it can be seen that in this embodiment, the challenge sample defense method based on the robust structure search may include two phases:
and updating the operation parameter theta of the DNN network in the stage one.
Illustratively, since the operation parameters of the DNN network are generally randomly initialized, and do not include enough effective knowledge, and are insufficient to guide the search of the network structure, the structure parameters α of the DNN network may be fixed first, and the operation parameters θ of the DNN network may be pre-trained by updating a number of epochs (i.e., the second epochs number described above).
For example, the operation parameters of the DNN network may be fixed, and the structural parameters θ of the DNN network may be updated by using the SGD algorithm, so as to pretrain 15 epochs (i.e., taking the second epochs number as 15 as an example).
Stage two may include the following two steps:
Step 1: unconstrained search, searching for a better structure in a larger parameter space without constraint.
Illustratively, the structure parameters α may be fixed and the operation parameters θ updated one step on the obtained image training set using SGD; then the operation parameters θ are fixed and the structure parameters α updated one step on the obtained image validation set using SGD.
The operation parameters θ and structure parameters α are updated alternately in a loop until the number of iterations reaches the first iteration count.
Step 2: based on the vulnerability constraint on the structure parameters, project the structure parameters α output by step 1 onto the nearest point in the feasible set, then continue optimizing.
Illustratively, the structure parameters α may be projected, according to the network vulnerability constraint, onto the closest point $\alpha_P$ in the feasible set, obtaining a network structure that satisfies the network vulnerability constraint and is most similar to the structure searched in step 1.
Step 2 is executed in a loop until the number of iterations within the epoch reaches the second iteration count; that is, step 2 iterates (second iteration count minus first iteration count) times, ending one epoch.
Steps 1 and 2 are performed in a loop until the number of trained epochs reaches the first epoch count, or the model converges.
Illustratively, after the search is completed, on each edge $f^{(i,j)}$ the operation o with the largest structure parameter α is retained, and the finally obtained cell is stored. Taking the obtained cell as a basic unit, M cells are stacked to form the target network; the specific implementation flow may be as shown in fig. 2.
As shown in fig. 2, the implementation flow may include the following steps:
s200, dividing the data set D into image training sets D in average train And image verification set D vaild The number of trained epochs E (i.e., the first number of epochs) is set, and the number of iterations per epoch T (i.e., the second number of iterations) is set. The operating parameter θ and the structural parameter α are randomly initialized.
S210, fixing the structural parameter alpha, updating the operation parameter theta on the image training set by utilizing an SGD algorithm, and training 15 epochs (namely, taking the second epoch number as 15 as an example).
S220, fixing the structural parameter alpha, and updating the operation parameter theta on the image training set by utilizing an SGD algorithm.
S230, fixing the operation parameter theta, and updating the structural parameter alpha on the image verification set by utilizing an SGD algorithm. If the maximum unconstrained search iteration number u (i.e., the first iteration number) is reached, step S240 is entered; otherwise, go to step S220.
Step S240, projecting alpha to the nearest point alpha of the feasible set P So that it meets the network vulnerability constraint. Updating alpha on an image verification set using SGD algorithm P And ending one epoch until the iteration number of each epoch reaches T. If the number of the training epochs reaches E, the step S250 is entered; otherwise, go to step S220.
S250, at each edge f (i,j) The operation omicron with the largest structural parameter alpha is reserved, and the finally obtained cell is stored.
And S260, splicing M cells by taking the obtained cells as a basic unit to form a target DNN network.
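Tying the steps together, a hypothetical end-to-end skeleton of the flow of fig. 2 might look as follows, reusing the helper sketches above; E, T, u, H and the data loaders are placeholders for the quantities defined in the text:

```python
# Hypothetical skeleton of S200-S260; not an implementation from the patent.
def rnas_search(supernet, train_loader, val_loader, criterion,
                opt_theta, opt_alpha, vulnerability_fn,
                E=60, T=400, u=200, H=0.5, pretrain_epochs=15):
    for _ in range(pretrain_epochs):             # S210: pre-train theta with
        for x, y in train_loader:                # alpha fixed
            opt_theta.zero_grad()
            criterion(supernet(x), y).backward()
            opt_theta.step()
    _, alphas = split_params(supernet)           # helper sketched earlier
    for _ in range(E):                           # S220-S240: one epoch
        for t in range(T):
            if t < u:                            # unconstrained search
                unconstrained_step(supernet, next(iter(train_loader)),
                                   next(iter(val_loader)),
                                   opt_theta, opt_alpha, criterion)
            else:                                # constrained search: project
                for a in alphas:                 # alpha onto the feasible set
                    a.data = project_alpha(a, vulnerability_fn, H)
    return supernet  # S250-S260: discretize with derive_cell and stack M cells
```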
For example, once the target DNN is obtained in the above manner, the image to be processed may be processed using the target DNN.
By way of example, the image processing may include, but is not limited to, image classification, object detection, or semantic segmentation, among others.
In order for those skilled in the art to better understand the technical effects of the embodiments of the present application, the embodiments of the present application will be further described with reference to specific experimental analysis.
A network architecture search is first performed on CIFAR-10; the searched structure is then applied to other datasets, such as SVHN and CIFAR-100.
Extensive evaluations were performed on three datasets, CIFAR-10, CIFAR-100, and SVHN, to verify the effectiveness of the scheme provided by the embodiments of the present application (i.e., RNAS) against various attack methods. A strong baseline is first established, and RNAS is then shown to be significantly better than traditional schemes, achieving optimal robustness.
1. Experimental setup
1.1. Datasets: three datasets were used: CIFAR-10, CIFAR-100, and SVHN.
1.2. Baselines: the following baselines were selected for extensive comparison:
1.2.1. Common models (VGG16, ResNet18, DenseNet121)
1.2.2. Manually designed lightweight networks (MobileNetV2, ShuffleNetV2, SqueezeNet)
1.2.3. Non-gradient-based search methods (NASNet, AmoebaNet, PNAS)
1.2.4. Gradient-based search methods (DARTS, P-DARTS, PC-DARTS, SNAS)
1.2.5. Methods based on robust architecture search (RobNet, DSRNA, RACL)
1.3. Hyperparameter settings
The experiments are divided into three main parts, search, training, and evaluation, with the specific parameter settings as follows:
1.3.1. Search
Following previous work, the robust architecture search is performed on CIFAR-10; in the search stage, the training data is divided into an image training set and an image validation set in a 1:2 ratio, used respectively for optimizing the operation parameters and the structure parameters. The search space includes 8 candidate operations: 3×3 and 5×5 separable convolutions, 3×3 and 5×5 dilated separable convolutions, 3×3 max pooling, 3×3 average pooling, skip connection, and zero. The supernetwork is stacked from 8 cells, comprising 6 normal cells and 2 reduction cells, each cell containing 6 nodes. The network training in the search process is set as follows: 60 epochs are trained with batch size 128, using SGD with momentum as the optimizer, initial learning rate 0.1, momentum 0.9, weight decay 0.0003, and a cosine schedule to adjust the learning rate. For the structure parameters α, Adam is used as the optimizer, with learning rate 0.0006 and weight decay 0.001.
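For concreteness, a hypothetical optimizer setup matching the hyperparameters just listed could look as follows (reusing the split_params helper sketched earlier):

```python
# Hypothetical optimizer configuration: SGD with momentum and a cosine
# schedule for the operation parameters theta, Adam for the structure
# parameters alpha, using the settings stated in the text.
import torch

def build_optimizers(supernet, epochs=60):
    theta, alpha = split_params(supernet)
    opt_theta = torch.optim.SGD(theta, lr=0.1, momentum=0.9, weight_decay=3e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(opt_theta, T_max=epochs)
    opt_alpha = torch.optim.Adam(alpha, lr=6e-4, weight_decay=1e-3)
    return opt_theta, scheduler, opt_alpha
```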
1.3.2. Training
After the search ends, operations are sampled from all candidate operations on the continuous edges of the cells according to the sampling strategy to obtain a normal cell and a reduction cell, which are expanded and combined to form the target network. The target network is then retrained on the whole dataset.
Because the focus is the robustness of the network, the network is adversarially trained: adversarial image samples are generated with the PGD method, with maximum perturbation ε = 8/255, 7 attack iteration steps, and step size 2/255. Training runs 600 epochs with batch size 128, using SGD with momentum as the optimizer, initial learning rate 0.1, momentum 0.9, weight decay 0.0003, and a cosine schedule adjusting the learning rate.
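The PGD generation used here can be sketched with the stated parameters (a standard PGD implementation assuming inputs in [0, 1]; not code from the patent):

```python
# Hypothetical PGD sketch: epsilon = 8/255, 7 iteration steps, step size 2/255.
import torch

def pgd_attack(model, x, y, criterion, eps=8/255, step=2/255, steps=7):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = criterion(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)   # project into eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)            # keep valid pixel range
    return x_adv.detach()
```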
1.3.3. Evaluation
For fair comparison, all models were fully trained for 600 epochs, with the remaining parameters the same as those used for training the RNAS models. Various adversarial attack methods were used to generate adversarial image samples for evaluating the models, including the common FGSM, PGD, and CW attacks; in addition, AutoAttack was introduced to evaluate model robustness more accurately. The specific parameters were set as follows:
1.3.3.1. FGSM: maximum perturbation ε = 0.03 (8/255).
1.3.3.2. PGD: maximum perturbation ε = 0.03 (8/255), 20 attack iteration steps, step size 2/255.
1.3.3.3. C&W: 1000 iteration steps.
1.3.3.4. AutoAttack: an ℓ∞-norm attack with maximum perturbation ε = 0.03 (8/255).
2. Experimental results
2.1. Results of experiments on CIFAR-10
An extensive study was performed on the CIFAR-10 dataset to experimentally evaluate the adversarial robustness of RNAS and compare it broadly against the most advanced baselines.
The normal cell and reduction cell architectures obtained by searching on CIFAR-10 are shown in the figure. Two architectures were obtained under two different network vulnerability upper bounds H (i.e., constraints of different strengths on network vulnerability): RNAS-H and RNAS-L.
Model robustness was evaluated with multiple attack methods by stacking 20 cells into a target network, under natural training and adversarial training respectively. The results are shown in Table 1 and Table 2, respectively.
Table 1. Robustness evaluation of different models on CIFAR-10 under adversarial training.
(Table 1 is presented as an image in the original publication.)
Here RNAS-H and RNAS-L are the two RNAS architectures obtained under different network vulnerability upper bounds (denoted H and L respectively, with H > L). FGSM, PGD, CW, and AutoAttack correspond to different attack modes; clean corresponds to clean image samples.
As Table 1 shows, under adversarial training the adversarially robust architecture produced by the RNAS search outperforms other advanced networks in robustness. From the results shown in Table 1, the following conclusions can be drawn:
2.1.1. Facing FGSM, PGD, CW, and AutoAttack attacks, RNAS achieves robust accuracies of 56.91%, 52.88%, 76.24%, and 48.32%, respectively. In the broad comparison, it is more adversarially robust than most manually designed base models, which shows that NAS methods can design robust structures more effectively.
2.1.2. Under the various attacks, the accuracy of RNAS is far higher than that of the other NAS baselines, such as AmoebaNet, ENAS, SNAS, PNAS, P-DARTS, and PC-DARTS. This indicates that RNAS resists these attacks better than those baselines.
2.1.3. Compared with existing robust architecture search methods based on NAS (Network Architecture Search), such as RACL, RobNet, and DSRNA, RNAS still shows better robust performance. Relative to DSRNA, the best-performing comparison method, the robust accuracy of RNAS-L under FGSM, PGD, CW, and AutoAttack attacks changes by -2.98, +2.49, +0.30, and +0.05 respectively, while RNAS-H improves on DSRNA by +0.72, +3.33, +2.47, and +0.95. This indicates that network vulnerability correlates strongly with model robustness during the search, and that constraining network vulnerability can better promote model robustness.
In addition, RNAS not only performs better in the face of adversarial image samples but also achieves higher accuracy on clean image samples, improving clean accuracy over RACL, RobNet, and DSRNA by +1.15, +6.56, and +1.29, respectively.
To further demonstrate the contribution of the network structure itself to robustness, the models were also evaluated without optimizing the weights through adversarial training, using direct natural training instead. From Table 2, the following can be observed:
The accuracy achieved by RNAS is very close to the best NAS accuracy, indicating that RNAS is also highly accurate in the absence of attack; moreover, under natural training RNAS remains more robust than the other methods.
Table 2. Accuracy of different methods on CIFAR-10 under natural training, and robust accuracy under PGD attack.
(Table 2 is presented as an image in the original publication.)
2.2. Results of experiments on CIFAR-100
To further verify the effectiveness of RNAS in promoting model robustness, the results were extended to the CIFAR-100 dataset: the cells searched on CIFAR-10 were used to form a network, which was adversarially trained on CIFAR-100 and compared against common models and SOTA methods, with the results shown in Table 3.
Table 3 results of robustness assessment under challenge training for different models on CIFAR-100.
[Table 3 is rendered as an image in the original publication.]
As shown in Table 3, the structure found by the RNAS search on CIFAR-10 also improves robustness under adversarial training on CIFAR-100. From the results in Table 3, the following conclusions can be drawn:
The robust accuracy of RNAS-L on CIFAR-100 under FGSM, PGD, CW and AutoAttack is 32.52%, 25.73%, 54.83% and 23.64%, respectively. In a broad comparison, the results mirror those on CIFAR-10: RNAS remains more robust on CIFAR-100 than most baseline models.
2.3. Results of experiments on SVHN
Following the practice of multi-dataset verification, the evaluation was further extended to the SVHN dataset: the cells searched on CIFAR-10 were assembled into a network, which was then adversarially trained on SVHN. The results are shown in Table 4. On SVHN, RNAS achieves results similar to those on CIFAR-10 and CIFAR-100.
Table 4: Robustness evaluation of different models on SVHN under adversarial training
[Table 4 is rendered as an image in the original publication.]
3. Vulnerability upper limit parameter H
In RNAS, H denotes the upper bound of the vulnerability constraint. H is a hyper-parameter that controls the strength of the vulnerability constraint, with value range (0, +∞): a larger H corresponds to a more relaxed constraint on the vulnerability of the network structure, and a smaller H to a stronger constraint. If H were 0, the vulnerability of the network structure would be 0, i.e., there would be no deviation in the output of any cell when an adversarial image sample and its corresponding clean image sample are input into the network. Intuitively, a smaller H seems better during training, since it keeps the network's vulnerability smaller; experiments show, however, that too small an H causes overfitting to the training data during network training, so that the searched network structure overfits a particular dataset and the generalization ability of the model decreases.
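For illustration only, a minimal Python sketch of how an upper bound H could gate an update of the structural parameters is given below. The names (alpha for the structural parameters, network_vulnerability for a callable returning the scalar vulnerability) are illustrative assumptions; the embodiments themselves enforce the bound by selecting the closest feasible structural parameters, as sketched later in this description.

def constrained_alpha_step(alpha, grad_alpha, lr, network_vulnerability, H):
    # One gradient-descent step on the structural parameters alpha,
    # kept only if the resulting network vulnerability stays within
    # the upper bound H; otherwise the step is rejected.
    candidate = alpha - lr * grad_alpha          # plain gradient step
    if network_vulnerability(candidate) <= H:    # constraint V(alpha) <= H
        return candidate                         # feasible: accept
    return alpha                                 # infeasible: keep alpha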
Network structure search requires huge computational overhead, and adding adversarial training to RNAS during the search increases that overhead further. In practice, it is therefore usual to search on a proxy dataset and then transfer the found structure to the target dataset. The transferability of the searched structure is thus an important property to consider. Since the vulnerability upper limit H directly affects the transferability of the network architecture, selecting a suitable H is important for the RNAS method. Accordingly, the following experimental study was performed:
Models were searched on CIFAR-10 with different values of H and then transferred to CIFAR-100 and SVHN, yielding the clean accuracy and the robust accuracy under PGD attack for each value of H. As H decreases (i.e., as the constraint strength increases), PGD robustness on CIFAR-10 rises steadily, while PGD robustness on CIFAR-100 and SVHN first rises to a peak and then drops rapidly. This indicates that too small an H overfits the model to the CIFAR-10 dataset and reduces the generalization ability of the network, while too large an H fails to constrain network vulnerability at all. Furthermore, clean accuracy on all three datasets decreases slightly as H decreases. This further demonstrates the advantage of H being adjustable in the optimization method provided by the embodiments of the present application: the search can be performed with a different H depending on the task.
In order to enable those skilled in the art to better understand the technical effects of the technical solutions provided in the embodiments of the present application, the effects of the embodiments of the present application are described below with reference to specific application examples.
The technical solution provided by the embodiments of the present application can be applied to image processing tasks including, but not limited to, image classification, object detection and semantic segmentation, and has a wide range of practical applications such as face recognition and autonomous driving.
The following examples are illustrative.
1. Face recognition
At present, face recognition has been widely deployed in systems at airports, railway stations, banks and similar venues.
Adversarial-attack research on face recognition algorithms has shown that a face detection algorithm can be bypassed by adding fine perturbations, imperceptible to the human eye, to an original image, so that the detection algorithm can no longer locate the face.
In addition, a face recognition system can be attacked with adversarial image samples so that it recognizes a specified wrong class.
Such adversarial attacks greatly limit the application of face recognition systems, especially in security-sensitive fields such as aviation, banking and securities.
To address this problem, for a DNN network used for face recognition (which may be referred to as a face recognition model), in any epoch of the structure search process of the face recognition model, the operation parameters and structural parameters of the face recognition model may be iteratively and alternately updated using a gradient descent algorithm, an obtained image training set and an obtained image verification set, until the number of iterations reaches the first iteration number;
the structural parameters of the face recognition model are then iteratively updated using the preset network vulnerability constraint and the obtained image verification set, until the number of iterations in the epoch reaches the second iteration number, so as to obtain structural parameters satisfying the preset network vulnerability constraint.
When the number of searched epochs reaches the first epoch number, or the face recognition model converges, a face recognition model for face recognition is generated from the obtained structural parameters, and face recognition is performed on images to be recognized using this model.
Thus, the face recognition model obtained by the robust architecture search algorithm defends strongly against adversarial image samples and can withstand such attacks to a great extent. Used as the backbone network of a face recognition system, it improves recognition accuracy and makes the system safer and more stable.
2. Autonomous driving
Autonomous driving has been a very active research area in recent years; it has broad application prospects and is the future direction of intelligent transportation. However, its safety directly concerns the lives of users, and it is difficult to trust such complex technology before its safety has been fully verified. Because autonomous driving relies on a large number of deep neural network models, it inevitably faces the threat of adversarial samples.
A great deal of research has demonstrated that adversarial samples in the physical world can also attack autonomous driving systems. For example, attaching a few adversarial stickers at specific locations on the road can steer a vehicle in autonomous mode into the oncoming lane; attaching adversarial stickers to specific positions on a traffic sign can make the system misrecognize the sign and act incorrectly; and adversarial samples can render a pedestrian "invisible" to the deep neural network model.
Adversarial samples thus remain a significant challenge for autonomous driving.
To address this problem, for a DNN network used in an autonomous driving system (which may be referred to as an autonomous driving model), in any epoch of the structure search process of the autonomous driving model, the operation parameters and structural parameters of the model may be iteratively and alternately updated using a gradient descent algorithm, an obtained image training set and an obtained image verification set, until the number of iterations reaches the first iteration number;
the structural parameters of the autonomous driving model are then iteratively updated using the preset network vulnerability constraint and the obtained image verification set, until the number of iterations in the epoch reaches the second iteration number, so as to obtain structural parameters satisfying the preset network vulnerability constraint.
When the number of searched epochs reaches the first epoch number, or the autonomous driving model converges, the autonomous driving model is generated from the obtained structural parameters and used to detect pedestrians, vehicles or road signs while the vehicle drives autonomously.
Thus, the autonomous driving model obtained by the robust architecture search algorithm defends strongly against adversarial samples and can withstand such attacks to a great extent; compared with a conventional DNN, it offers stronger adversarial robustness when applied to an autonomous driving system, greatly improving driving safety.
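Both application examples follow the same per-epoch search schedule. A minimal Python sketch of that schedule is given below; update_weights, update_alpha, constrained_alpha_update, converged and derive_architecture are hypothetical helper names standing in for the steps described above, not functions of any particular library.

def search(model, train_set, val_set, first_iters, second_iters, first_epochs, H):
    for epoch in range(first_epochs):
        # Phase 1: alternately update the operation parameters (on the
        # training set) and the structural parameters (on the verification
        # set) until the first iteration number is reached.
        for _ in range(first_iters):
            update_weights(model, train_set)     # fix structure, step weights
            update_alpha(model, val_set)         # fix weights, step structure
        # Phase 2: keep updating the structural parameters under the
        # network vulnerability constraint V <= H until the (larger)
        # second iteration number is reached.
        for _ in range(second_iters - first_iters):
            constrained_alpha_update(model, val_set, H)
        if converged(model):                     # early stop on convergence
            break
    return derive_architecture(model)            # build the target DNN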
The methods provided herein are described above. The apparatus provided in this application is described below:
Referring to Fig. 3, which is a schematic structural diagram of an image processing apparatus based on antagonistic neural network structure search according to an embodiment of the present application, the apparatus may include:
a first search unit 310, configured to, for any epoch in the DNN network structure search process, iteratively and alternately update the operation parameters and structural parameters of the DNN network using the gradient descent algorithm, the obtained image training set and the obtained image verification set, until the number of iterations reaches a first iteration number;
a second search unit 320, configured to iteratively update the structural parameters of the DNN network using a preset network vulnerability constraint and an obtained image verification set, until the number of iterations in the epoch reaches a second iteration number, so as to obtain structural parameters satisfying the preset network vulnerability constraint; the network vulnerability characterizes the difference in feature distribution between a clean image sample and its corresponding adversarial image sample in the DNN network, and the second iteration number is greater than the first iteration number;
a generating unit 330, configured to generate a target DNN network for image processing from the obtained structural parameters when the number of searched epochs reaches the first epoch number, or when the DNN network model converges;
an image processing unit 340, configured to perform image processing on an image to be processed using the target DNN network.
In some embodiments, the first search unit 310 iteratively and alternately updates the operation parameters and structural parameters of the DNN network using a gradient descent algorithm, an obtained image training set and an obtained image verification set, including:
in any iteration, fixing the current structural parameters of the DNN network, and updating the operation parameters of the DNN network on the obtained image training set using a gradient descent algorithm;
and fixing the current operation parameters of the DNN network, and updating the structural parameters of the DNN network on the obtained image verification set using a gradient descent algorithm.
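For illustration, one such alternating iteration can be sketched in PyTorch-style Python, under the assumption of a DARTS-style model whose operation parameters and structural parameters are held by two separate optimizers (w_optimizer and a_optimizer); this is a sketch under those assumptions, not the reference implementation of the embodiments.

import torch

def alternate_step(model, w_optimizer, a_optimizer,
                   train_batch, val_batch, loss_fn):
    # Step 1: update the operation parameters on a training batch.
    # backward() computes gradients for all parameters, but only
    # w_optimizer steps, so the structural parameters remain fixed.
    x_tr, y_tr = train_batch
    w_optimizer.zero_grad()
    loss_fn(model(x_tr), y_tr).backward()
    w_optimizer.step()

    # Step 2: update the structural parameters on a verification batch;
    # only a_optimizer steps, so the operation parameters remain fixed.
    x_val, y_val = val_batch
    a_optimizer.zero_grad()
    loss_fn(model(x_val), y_val).backward()
    a_optimizer.step()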
In some embodiments, the second search unit 320 iteratively updates the structural parameters of the DNN network using a preset network vulnerability constraint and an obtained image verification set, including:
determining, according to the preset network vulnerability constraint, a set of candidate structural parameters under which the network vulnerability of the DNN network satisfies the constraint;
and iteratively updating, using a gradient descent algorithm and the obtained image verification set, the candidate structural parameters determined from that set as the closest to the current structural parameters of the DNN network.
In some embodiments, the second search unit 320 determines, according to the preset network vulnerability constraint, the set of candidate structural parameters under which the network vulnerability of the DNN network satisfies the constraint, including:
determining a set of candidate structural parameters under which the network vulnerability of the DNN network is less than or equal to a preset threshold.
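A projected-gradient reading of this step can be sketched as follows; representing the candidate structural parameters as a finite list of NumPy vectors, each already known to satisfy the vulnerability threshold, is an illustrative assumption.

import numpy as np

def closest_feasible(alpha, feasible_set):
    # Among candidate structural parameters satisfying the vulnerability
    # constraint, return the one closest (in Euclidean distance) to the
    # current structural parameters alpha.
    dists = [np.linalg.norm(c - alpha) for c in feasible_set]
    return feasible_set[int(np.argmin(dists))]

def constrained_alpha_update(alpha, grad_alpha, lr, feasible_set):
    # Gradient step on the verification-set loss, then projection back
    # onto the set defined by the vulnerability threshold.
    candidate = alpha - lr * grad_alpha
    return closest_feasible(candidate, feasible_set)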
In some embodiments, as shown in fig. 4, the apparatus further comprises:
a determining unit 350, configured to determine the network vulnerability of the DNN network by:
determining the channel vulnerability of each channel in the DNN network;
determining the layer vulnerability of each layer in the DNN network according to the channel vulnerabilities of the channels;
determining the cell vulnerability of each neuron cell in the DNN network according to the layer vulnerabilities of the layers;
and determining the network vulnerability of the DNN network according to the cell vulnerabilities of the cells.
In some embodiments, for any channel in the DNN network, the channel vulnerability of the channel is determined from the difference in distribution between the output features of a clean image sample and those of its corresponding adversarial image sample at that channel;
the determining unit 350 determines the layer vulnerability of each layer in the DNN network according to the channel vulnerabilities of the channels, including:
for any layer in the DNN network, determining the mean of the channel vulnerabilities of the channels in that layer as the layer vulnerability of the layer.
In some embodiments, the determining unit 350 determines the cell vulnerability of each neuron cell in the DNN network according to the layer vulnerabilities of the layers, including:
for any cell of the DNN network, determining the layer vulnerability of the output layer of that cell as the cell vulnerability of the cell.
In some embodiments, the determining unit 350 determines the network vulnerability of the DNN network according to the cell vulnerabilities of the cells, including:
determining the mean of the cell vulnerabilities of the cells in the DNN network as the network vulnerability of the DNN network.
In some embodiments, the determining unit 350 determines the channel vulnerability of each channel in the DNN network, including:
for any channel of the DNN network, determining the expectation of the relative entropy between the output features of a clean image sample and those of its corresponding adversarial image sample at that channel as the channel vulnerability of the channel.
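Putting these definitions together, the hierarchical vulnerability computation (channel, layer, cell, network) can be sketched in Python as follows; normalizing each channel's features with a softmax to obtain distributions before taking the relative entropy (KL divergence) is an illustrative assumption.

import torch
import torch.nn.functional as F

def channel_vulnerability(clean_feat, adv_feat):
    # Expected relative entropy (KL divergence) between the output feature
    # distributions of one channel for clean samples and their adversarial
    # counterparts. clean_feat / adv_feat shape: (batch, H, W).
    p = F.softmax(clean_feat.flatten(1), dim=1)
    q = F.softmax(adv_feat.flatten(1), dim=1)
    kl = (p * (p.clamp_min(1e-12).log() - q.clamp_min(1e-12).log())).sum(dim=1)
    return kl.mean()                          # expectation over the batch

def layer_vulnerability(clean_feats, adv_feats):
    # Layer vulnerability: mean channel vulnerability over all channels.
    # clean_feats / adv_feats shape: (batch, channels, H, W).
    vals = [channel_vulnerability(clean_feats[:, c], adv_feats[:, c])
            for c in range(clean_feats.shape[1])]
    return torch.stack(vals).mean()

def network_vulnerability(cell_output_pairs):
    # Cell vulnerability: layer vulnerability of each cell's output layer;
    # network vulnerability: mean over all cells. cell_output_pairs is a
    # list of (clean_output, adversarial_output) tensor pairs, one per cell.
    cell_vals = [layer_vulnerability(c, a) for c, a in cell_output_pairs]
    return torch.stack(cell_vals).mean()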
In some embodiments, as shown in fig. 5, the apparatus further comprises:
and a third searching unit 360, configured to fix structural parameters of the DNN network, and search, using a gradient descent algorithm, operating parameters of the DNN network on the obtained image training set until the number of epochs searched reaches a second epoch number, where the second epoch number is smaller than the first epoch number.
An embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores machine executable instructions executable by the processor, and the processor is configured to execute the machine executable instructions to implement the above-described image processing method based on an antagonistic neural network structure search.
Fig. 6 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application. The electronic device may include a processor 601 and a memory 602 storing machine-executable instructions. The processor 601 and the memory 602 may communicate via a system bus 603. By reading and executing the machine-executable instructions stored in the memory 602 that correspond to the adversarial-sample defense logic based on robust structure search, the processor 601 can perform the image processing method based on antagonistic neural network structure search described above.
The memory 602 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid state drive, any type of storage disc (e.g., an optical disc or DVD), a similar storage medium, or a combination thereof.
In some embodiments, a machine-readable storage medium, such as memory 602 in fig. 6, is also provided, having stored thereon machine-executable instructions that when executed by a processor implement the above-described image processing method based on antagonistic neural network structure search. For example, the machine-readable storage medium may be ROM, RAM, CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
The present embodiments also provide a computer program stored on a machine-readable storage medium, such as the memory 602 in Fig. 6, which, when executed by a processor, causes the processor 601 to perform the image processing method based on antagonistic neural network structure search described above.
It is noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or apparatus that comprises the element.
The foregoing description of the preferred embodiments of the present invention is not intended to limit the invention to the precise form disclosed, and any modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. An image processing method based on an antagonistic neural network structure search, comprising:
for any epoch in the structure search process of a deep neural network (DNN) network, iteratively and alternately updating the operation parameters and structural parameters of the DNN network using a gradient descent algorithm, an obtained image training set and an obtained image verification set, until the number of iterations reaches a first iteration number;
iteratively updating the structural parameters of the DNN network using a preset network vulnerability constraint and an obtained image verification set, until the number of iterations in the epoch reaches a second iteration number, so as to obtain structural parameters satisfying the preset network vulnerability constraint; wherein the network vulnerability characterizes the difference in feature distribution between a clean image sample and its corresponding adversarial image sample in the DNN network, and the second iteration number is greater than the first iteration number;
and when the number of searched epochs reaches a first epoch number, or the DNN network model converges, generating a target DNN network for image processing from the obtained structural parameters, and performing image processing on an image to be processed using the target DNN network.
2. The method of claim 1, wherein iteratively and alternately updating the operation parameters and structural parameters of the DNN network using a gradient descent algorithm, the obtained image training set and the obtained image verification set comprises:
in any iteration, fixing the current structural parameters of the DNN network, and updating the operation parameters of the DNN network on the obtained image training set using a gradient descent algorithm;
and fixing the current operation parameters of the DNN network, and updating the structural parameters of the DNN network on the obtained image verification set using a gradient descent algorithm.
3. The method of claim 1, wherein iteratively updating the structural parameters of the DNN network using a preset network vulnerability constraint and an obtained image verification set comprises:
determining, according to the preset network vulnerability constraint, a set of candidate structural parameters under which the network vulnerability of the DNN network satisfies the constraint;
and iteratively updating, using a gradient descent algorithm and the obtained image verification set, the candidate structural parameters determined from that set as the closest to the current structural parameters of the DNN network.
4. The method of claim 3, wherein determining, according to the preset network vulnerability constraint, the set of candidate structural parameters under which the network vulnerability of the DNN network satisfies the constraint comprises:
determining a set of candidate structural parameters under which the network vulnerability of the DNN network is less than or equal to a preset threshold.
5. The method according to any one of claims 1-4, wherein the network vulnerability of the DNN network is determined by:
determining the channel vulnerability of each channel in the DNN network;
determining the layer vulnerability of each layer in the DNN network according to the channel vulnerabilities of the channels;
determining the cell vulnerability of each neuron cell in the DNN network according to the layer vulnerabilities of the layers;
and determining the network vulnerability of the DNN network according to the cell vulnerabilities of the cells.
6. The method of claim 5, wherein, for any channel in the DNN network, the channel vulnerability of the channel is determined from the difference in distribution between the output features of a clean image sample and those of its corresponding adversarial image sample at that channel;
determining the layer vulnerability of each layer in the DNN network according to the channel vulnerabilities of the channels comprises:
for any layer in the DNN network, determining the mean of the channel vulnerabilities of the channels in that layer as the layer vulnerability of the layer;
and/or,
determining the cell vulnerability of each neuron cell in the DNN network according to the layer vulnerabilities of the layers comprises:
for any cell of the DNN network, determining the layer vulnerability of the output layer of that cell as the cell vulnerability of the cell;
and/or,
determining the network vulnerability of the DNN network according to the cell vulnerabilities of the cells comprises:
determining the mean of the cell vulnerabilities of the cells in the DNN network as the network vulnerability of the DNN network.
7. The method of claim 6, wherein determining the channel vulnerability of each channel in the DNN network comprises:
for any channel of the DNN network, determining the expectation of the relative entropy between the output features of a clean image sample and those of its corresponding adversarial image sample at that channel as the channel vulnerability of the channel.
8. The method according to any one of claims 1-4, further comprising, for any epoch in the structure search process of the deep neural network (DNN) network, before iteratively and alternately updating the operation parameters and structural parameters of the DNN network using a gradient descent algorithm, an obtained image training set and an obtained image verification set:
fixing the structural parameters of the DNN network, and searching the operation parameters of the DNN network on the obtained image training set using a gradient descent algorithm, until the number of searched epochs reaches a second epoch number, the second epoch number being smaller than the first epoch number.
9. An image processing apparatus based on an antagonistic neural network structure search, comprising:
a first search unit, configured to, for any epoch in the structure search process of a deep neural network (DNN) network, iteratively and alternately update the operation parameters and structural parameters of the DNN network using a gradient descent algorithm, an obtained image training set and an obtained image verification set, until the number of iterations reaches a first iteration number;
a second search unit, configured to iteratively update the structural parameters of the DNN network using a preset network vulnerability constraint and an obtained image verification set, until the number of iterations in the epoch reaches a second iteration number, so as to obtain structural parameters satisfying the preset network vulnerability constraint; wherein the network vulnerability characterizes the difference in feature distribution between a clean image sample and its corresponding adversarial image sample in the DNN network, and the second iteration number is greater than the first iteration number;
a generating unit, configured to generate a target DNN network for image processing from the obtained structural parameters when the number of searched epochs reaches the first epoch number, or the DNN network model converges, and to perform image processing on an image to be processed using the target DNN network.
10. The apparatus of claim 9, wherein the first search unit iteratively and alternately updates the operation parameters and structural parameters of the DNN network using a gradient descent algorithm, the obtained image training set and the obtained image verification set, including:
in any iteration, fixing the current structural parameters of the DNN network, and updating the operation parameters of the DNN network on the obtained image training set using a gradient descent algorithm;
fixing the current operation parameters of the DNN network, and updating the structural parameters of the DNN network on the obtained image verification set using a gradient descent algorithm;
and/or,
the second search unit iteratively updates the structural parameters of the DNN network using a preset network vulnerability constraint and an obtained image verification set, including:
determining, according to the preset network vulnerability constraint, a set of candidate structural parameters under which the network vulnerability of the DNN network satisfies the constraint;
iteratively updating, using a gradient descent algorithm and the obtained image verification set, the candidate structural parameters determined from that set as the closest to the current structural parameters of the DNN network;
wherein the second search unit determines, according to the preset network vulnerability constraint, the set of candidate structural parameters under which the network vulnerability of the DNN network satisfies the constraint, including:
determining a set of candidate structural parameters under which the network vulnerability of the DNN network is less than or equal to a preset threshold;
and/or,
the apparatus further comprises:
a determining unit, configured to determine the network vulnerability of the DNN network by:
determining the channel vulnerability of each channel in the DNN network;
determining the layer vulnerability of each layer in the DNN network according to the channel vulnerabilities of the channels;
determining the cell vulnerability of each neuron cell in the DNN network according to the layer vulnerabilities of the layers;
determining the network vulnerability of the DNN network according to the cell vulnerabilities of the cells;
wherein, for any channel in the DNN network, the channel vulnerability of the channel is determined from the difference in distribution between the output features of a clean image sample and those of its corresponding adversarial image sample at that channel;
the determining unit determines the layer vulnerability of each layer in the DNN network according to the channel vulnerabilities of the channels, including:
for any layer in the DNN network, determining the mean of the channel vulnerabilities of the channels in that layer as the layer vulnerability of the layer;
and/or,
the determining unit determines the cell vulnerability of each neuron cell in the DNN network according to the layer vulnerabilities of the layers, including:
for any cell of the DNN network, determining the layer vulnerability of the output layer of that cell as the cell vulnerability of the cell;
and/or,
the determining unit determines the network vulnerability of the DNN network according to the cell vulnerabilities of the cells, including:
determining the mean of the cell vulnerabilities of the cells in the DNN network as the network vulnerability of the DNN network;
the determining unit determines the channel vulnerability of each channel in the DNN network, including:
for any channel of the DNN network, determining the expectation of the relative entropy between the output features of a clean image sample and those of its corresponding adversarial image sample at that channel as the channel vulnerability of the channel;
and/or,
the apparatus further comprises:
a third search unit, configured to fix the structural parameters of the DNN network and search the operation parameters of the DNN network on the obtained image training set using a gradient descent algorithm, until the number of searched epochs reaches a second epoch number, the second epoch number being smaller than the first epoch number.
CN202211690326.2A 2022-04-02 2022-12-27 Image processing method and device based on antagonistic neural network structure search Pending CN116304144A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210345400.0A CN114492800A (en) 2022-04-02 2022-04-02 Countermeasure sample defense method and device based on robust structure search
CN2022103454000 2022-04-02

Publications (1)

Publication Number Publication Date
CN116304144A true CN116304144A (en) 2023-06-23

Family

ID=81488629

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202210345400.0A Pending CN114492800A (en) 2022-04-02 2022-04-02 Countermeasure sample defense method and device based on robust structure search
CN202211690326.2A Pending CN116304144A (en) 2022-04-02 2022-12-27 Image processing method and device based on antagonistic neural network structure search

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN202210345400.0A Pending CN114492800A (en) 2022-04-02 2022-04-02 Countermeasure sample defense method and device based on robust structure search

Country Status (1)

Country Link
CN (2) CN114492800A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117876221A (en) * 2024-03-12 2024-04-12 大连理工大学 Robust image splicing method based on neural network structure search

Also Published As

Publication number Publication date
CN114492800A (en) 2022-05-13

Similar Documents

Publication Publication Date Title
Lee et al. Training confidence-calibrated classifiers for detecting out-of-distribution samples
Liu et al. Learning to affiliate: Mutual centralized learning for few-shot classification
Shaw et al. Meta architecture search
Wang et al. Filter pruning with a feature map entropy importance criterion for convolution neural networks compressing
Li et al. HELP: An LSTM-based approach to hyperparameter exploration in neural network learning
CN111931505A (en) Cross-language entity alignment method based on subgraph embedding
US20230237309A1 (en) Normalization in deep convolutional neural networks
CN113159115B (en) Vehicle fine granularity identification method, system and device based on neural architecture search
CN116304144A (en) Image processing method and device based on antagonistic neural network structure search
Cao et al. Stacked residual recurrent neural network with word weight for text classification
CN113988312A (en) Member reasoning privacy attack method and system facing machine learning model
CN110232151B (en) Construction method of QoS (quality of service) prediction model for mixed probability distribution detection
Kim et al. StackNet: Stacking feature maps for Continual learning
Chen et al. Human face recognition based on adaptive deep Convolution Neural Network
CN104866901A (en) Optimized extreme learning machine binary classification method based on improved active set algorithms
Zou et al. DeepLTSC: Long-tail service classification via integrating category attentive deep neural network and feature augmentation
CN118035448A (en) Method, device and medium for classifying paper fields in citation network based on pseudo tag depolarization
Gao et al. Perturbation towards easy samples improves targeted adversarial transferability
CN108647784A (en) A kind of lifelong machine learning method based on depth belief network
CN112068088A (en) Radar radiation source threat assessment method based on optimized BP neural network
CN115018884B (en) Visible light infrared visual tracking method based on multi-strategy fusion tree
Long et al. Recurrent neural networks with finite memory length
Wang et al. Kernel-based deep learning for intelligent data analysis
Li et al. DARTS-PAP: differentiable neural architecture search by polarization of instance complexity weighted architecture parameters
Cai et al. Multi-centroid task descriptor for dynamic class incremental inference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination