CN115834153A - Node voting mechanism-based black box attack device and method for graph neural network model - Google Patents


Info

Publication number: CN115834153A
Application number: CN202211382657.XA
Authority: CN (China)
Prior art keywords: node, voting, attack, graph, nodes
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 梁吉业, 温亮亮, 姚凯旋, 王智强
Current Assignee: Shanxi University
Original Assignee: Shanxi University
Priority date: 2022-11-07
Filing date: 2022-11-07
Publication date: 2023-03-21
Application filed by: Shanxi University
Classification landscape: Information Retrieval, Db Structures And Fs Structures Therefor

Abstract

The invention discloses a black box attack device and method for a graph neural network model based on a node voting mechanism, relating to the technical field of artificial intelligence information security. The method specifically comprises the following steps: acquiring graph data and hyper-parameters; initializing the voting capacity and voting score of each node and the voting weight of the edges between different nodes; letting each node obtain a voting score from its neighbors and vote for its neighbors; iteratively selecting the node with the highest voting score as the node chosen in each round; setting the voting capacity of each selected node to 0 and attenuating the voting capacity of its neighbors; selecting nodes that satisfy the constraint conditions to generate an attack node set; constructing a perturbation vector according to domain knowledge related to the node classification task and perturbing the set to generate a perturbed graph; and attacking the model with the perturbed graph. By exploiting the topological information of the graph in combination with the node voting mechanism, the invention further improves the accuracy of node-selection-based black box attacks.

Description

Node voting mechanism-based black box attack device and method for graph neural network model
Technical Field
The invention relates to the technical field of artificial intelligence information security, in particular to a black box attack device and method of a graph neural network model based on a node voting mechanism.
Background
In recent years, graph neural network models have been widely applied in fields such as network node identification, recommendation systems, and social networks, but most researchers focus on model performance (e.g., prediction accuracy) while ignoring robustness. Existing graph neural network models are vulnerable to adversarial examples. In a black-box adversarial attack on a graph neural network, even when the model architecture and parameters are unavailable, adversarial examples can be generated by adding tiny perturbations to the graph (such as a limited number of edge deletions and additions), significantly increasing the error rate with which the model identifies node classes. Research on adversarial attacks against graph neural networks is therefore necessary: it deepens the understanding of graph neural network models and helps improve their robustness, is of important research significance for designing more robust graph neural network models, and has important application value in fields such as social networks and finance.
Mainstream strategies for black-box adversarial attacks on current graph neural network models include attacks based on a surrogate (proxy) model and attacks based on node selection. A black-box attack implemented with a surrogate model not only requires interactive queries to the target model but also requires training the surrogate model, consuming a large amount of resources. The idea of node-selection-based attacks is to design a set of rules over the topological structure of the graph to select attack nodes, perturb the attributes of those nodes, and generate a perturbed graph to attack the target model. The difficulty of node-selection-based black-box attacks lies in designing a set of effective rules that select effective attack nodes from the topological structure alone. A node-selection-based black-box attack under this stricter and more realistic setting, which requires no interactive queries to the target model and consumes relatively few resources, is therefore more challenging.
Existing node-selection-based black-box attacks use a state transition matrix to select attack nodes; the matrix must be raised to powers, which leads to a large amount of computation.
Disclosure of Invention
Aiming at the defects in the prior art, the invention designs a set of rules based on a node voting mechanism that fully considers the above problems, so that the selected nodes are more representative, the perturbation propagates over a larger range, and the amount of computation is smaller. One object of the invention is to provide a black box attack device for a graph neural network model based on a node voting mechanism; another object is to provide a black box attack method for a graph neural network model based on a node voting mechanism, which can effectively improve the success rate of black box attacks.
In order to achieve the above objects, one aspect of the present invention provides a black box attack device for a graph neural network model based on a node voting mechanism, comprising:
S11: an input module for acquiring the graph data G and the various condition constraints involved in the adversarial attack process;
S12: an initialization module for initializing the voting score and voting capacity of each node in the graph G and initializing the voting weight of the edges between different nodes in the graph G;
S13: an attack node selection module for iteratively selecting attack nodes based on the node voting mechanism to generate an attack node set S;
S14: a node feature perturbation construction module for constructing a perturbation vector according to domain knowledge related to the node classification task, attacking the attack node set S, and outputting a perturbed graph G';
S15: an attack module for attacking the graph neural network model with the perturbed graph G'.
In the above technical solution, in the initialization module S12, the voting score and voting capacity of each node are initialized as a pair (vs_i, va_i) in the following manner:
va_i = t_ij, where t_ij = 1 or t_ij is given by a degree-based expression (rendered as an image in the original),
vs_i = 0,
where the voting score vs_i is the score obtained by node i from its neighbors, and the voting capacity va_i represents the amount of capacity with which node i can vote for its neighbors. Initializing the voting weight of the edges between different nodes in the graph G means assigning a voting weight VW_ij to the edge connecting nodes i and j; for example, the voting weight of the connection between node i and node j is defined in one of the following two ways:
VW_ij = 1,
or a degree-normalized weight (rendered as an image in the original),
where d_i = |N_i| and N_i denotes the set of neighbor nodes of node i.
In the above technical solution, the attack node selection module S13 performs the following steps:
(1) Voting: the voting score of each node is calculated according to a formula (rendered as an image in the original), where d_i = |N_i| and N_i denotes the set of neighbor nodes of node i;
(2) Node selection: the voting score of each node is calculated according to step (1), and the node with the highest score is selected as the attack node of the current round;
(3) Zeroing: the voting capacity of the selected attack node is set to 0, i.e., va_i = 0;
(4) Attenuation: the voting capacities of the neighbors within two hops of the selected attack node are attenuated; the attenuation degrees differ, with one-hop neighbors attenuated more strongly than two-hop neighbors, according to a formula (rendered as an image in the original) in which μ ∈ [0, 1] is the decay factor;
(5) Iterative updating: the voting information of the neighbors of the selected attack node is updated, steps (1)-(4) are repeated, and the iteration continues until the required number of nodes has been selected, whereupon the attack node set S is generated.
In the above technical solution, in the node feature perturbation construction module S14, the perturbation vector is constructed according to domain knowledge related to the node classification task. The feature perturbation vector is constructed according to a formula (rendered as an image in the original) in which λ is the magnitude of the modification and X_ij is the j-th feature of node i. Attacking the attack node set S specifically means constructing the feature perturbation vector using domain knowledge related to the node classification task and perturbing the feature vector of each node in the attack node set S, thereby generating the perturbed graph G'.
In order to achieve the above objects, another aspect of the present invention provides a black box attack method for a graph neural network model based on a node voting mechanism, comprising:
S21: inputting the original graph data, including nodes, edges, and features, and inputting the hyper-parameters;
S22: for each node in the graph, initializing the node's voting score and voting capacity according to the designed rules, and initializing the voting weight of the edges between different nodes in the graph;
S23: iteratively selecting attack nodes based on the node voting mechanism to generate an attack node set S;
S24: constructing a perturbation vector according to domain knowledge related to the node classification task, perturbing the features of the nodes in the set S, and outputting a perturbed graph G';
S25: attacking the graph neural network model with the perturbed graph G'.
In the above technical solution, in step S21, the graph data G and the hyper-parameters involved in the adversarial attack process are specifically as follows:
the graph data is represented as G = (V, E, X), where V denotes the set of nodes, E denotes the set of edges, and X denotes the feature matrix of the nodes;
the hyper-parameters specifically include the number r of attack nodes, the number k of neighbor hops to attenuate, and the decay factor μ.
In the above technical solution, in step S22, the voting score and voting capacity of each node in the graph are initialized as a pair (vs_i, va_i) in the following manner:
va_i = t_ij, where t_ij = 1 or t_ij is given by a degree-based expression (rendered as an image in the original),
vs_i = 0,
where the voting score vs_i is the score obtained by node i from its neighbors, and the voting capacity va_i represents the amount of capacity with which node i can vote for its neighbors. Initializing the voting weight of the edges between different nodes in the graph means assigning a voting weight VW_ij to the edge connecting nodes i and j; for example, the voting weight of the connection between node i and node j is defined in one of the following two ways:
VW_ij = 1,
or a degree-normalized weight (rendered as an image in the original),
where d_i = |N_i| and N_i denotes the set of neighbor nodes of node i.
In the above technical solution, in step S23, iteratively selecting attack nodes based on the node voting mechanism to generate the attack node set S comprises the following steps:
(1) Voting: the voting score of each node is calculated according to a formula (rendered as an image in the original), where d_i = |N_i| and N_i denotes the set of neighbor nodes of node i;
(2) Node selection: the voting score of each node is calculated according to step (1), and the node with the highest score is selected as the attack node of the current round;
(3) Zeroing: the voting capacity of the selected attack node is set to 0, i.e., va_i = 0;
(4) Attenuation: the voting capacities of the neighbors within two hops of the selected attack node are attenuated; the attenuation degrees differ, with one-hop neighbors attenuated more strongly than two-hop neighbors, according to a formula (rendered as an image in the original) in which μ ∈ [0, 1] is the decay factor;
(5) Iterative updating: the voting information of the neighbors of the selected attack node is updated, steps (1)-(4) are repeated, and the iteration continues until the required number of nodes has been selected, whereupon the attack node set S is generated.
In the foregoing technical solution, in step S24, the perturbation vector is constructed according to domain knowledge related to the node classification task; the feature perturbation vector is constructed according to a formula (rendered as an image in the original) in which λ is the magnitude of the modification and X_ij is the j-th feature of node i. Attacking the attack node set S specifically means constructing the feature perturbation vector using domain knowledge related to the node classification task and perturbing the feature vector of each node in the attack node set S, thereby generating the perturbed graph G'.
In the above technical solution, in step S25, attacking the graph neural network model with the perturbed graph G' specifically means taking the perturbed graph G' as the input of the graph neural network model, thereby realizing the attack of the graph G' on the graph neural network model.
Compared with the prior art, the invention achieves the following benefits:
1. The black box attack device and method for a graph neural network model based on a node voting mechanism belong to black box adversarial attacks with strict constraint conditions: only the node features of a subset of nodes are attacked, which is closer to practical application scenarios.
2. The black box attack device and method for a graph neural network model based on a node voting mechanism can effectively degrade the performance of a graph neural network model, play an important role in research on the robustness of graph neural network models, and provide references for research on more robust graph models; in addition, they have important application value in the field of artificial intelligence information security.
Drawings
In order to describe the method of the present disclosure more clearly, the drawings used in the method are briefly introduced below. It should be understood that the following drawings serve only to aid the reader's understanding of the invention and should not be taken as limiting its scope.
Fig. 1 is a schematic structural diagram of a black box attack apparatus of a graph neural network model based on a node voting mechanism provided by the invention.
Fig. 2 is a schematic flow diagram of a black box attack method of a graph neural network model based on a node voting mechanism provided by the invention.
Detailed Description
The technical solution of the present invention will be further described in more detail with reference to the following embodiments. It is to be understood that the described embodiments are merely a few embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Although graph neural network models perform well in graph modeling tasks, like other deep learning models they are easily misled by small perturbations, which causes severe performance degradation: adding or deleting only a small number of nodes or edges can make the model produce completely erroneous results. Studying the robustness of graph models is therefore necessary; in particular, the black box attack setting for graph neural network models is closer to real scenarios and more challenging.
Referring to Fig. 1, the attack apparatus of the present invention comprises the following modules: an input module S11, an initialization module S12, an attack node selection module S13, a node feature perturbation construction module S14, and an attack module S15.
S11: the input module is used for acquiring the graph data G and the various condition constraints involved in the adversarial attack process;
S12: the initialization module initializes the voting score and voting capacity of each node in the graph G and initializes the voting weight of the edges between different nodes in the graph G;
S13: the attack node selection module iteratively selects attack nodes based on the node voting mechanism to generate an attack node set S;
S14: the node feature perturbation construction module constructs a perturbation vector according to domain knowledge related to the node classification task, attacks the attack node set S, and outputs a perturbed graph G';
S15: the attack module attacks the graph neural network model with the perturbed graph G'.
In the above technical solution, in the initialization module S12, the voting score and voting capacity of each node are initialized as a pair (vs_i, va_i) in the following manner:
va_i = t_ij, where t_ij = 1 or t_ij is given by a degree-based expression (rendered as an image in the original),
vs_i = 0,
where the voting score vs_i is the score obtained by node i from its neighbors, and the voting capacity va_i represents the amount of capacity with which node i can vote for its neighbors. Initializing the voting weight of the edges between different nodes in the graph G means assigning a voting weight VW_ij to the edge connecting nodes i and j; for example, the voting weight of the connection between node i and node j is defined in one of the following two ways:
VW_ij = 1,
or a degree-normalized weight (rendered as an image in the original),
where d_i = |N_i| and N_i denotes the set of neighbor nodes of node i.
In the above technical solution, the attack node selection module S13 performs the following steps:
(1) Voting: the voting score of each node is calculated according to a formula (rendered as an image in the original), where d_i = |N_i| and N_i denotes the set of neighbor nodes of node i;
(2) Node selection: the voting score of each node is calculated according to step (1), and the node with the highest score is selected as the attack node of the current round;
(3) Zeroing: the voting capacity of the selected attack node is set to 0, i.e., va_i = 0;
(4) Attenuation: the voting capacities of the neighbors within two hops of the selected attack node are attenuated; the attenuation degrees differ, with one-hop neighbors attenuated more strongly than two-hop neighbors, according to a formula (rendered as an image in the original) in which μ ∈ [0, 1] is the decay factor;
(5) Iterative updating: the voting information of the neighbors of the selected attack node is updated, steps (1)-(4) are repeated, and the iteration continues until the required number of nodes has been selected, whereupon the attack node set S is generated.
In the above technical solution, in the node feature perturbation construction module S14, the perturbation vector is constructed according to domain knowledge related to the node classification task. The feature perturbation vector is constructed according to a formula (rendered as an image in the original) in which λ is the magnitude of the modification and X_ij is the j-th feature of node i. Attacking the attack node set S specifically means constructing the feature perturbation vector using domain knowledge related to the node classification task and perturbing the feature vector of each node in the attack node set S, thereby generating the perturbed graph G'.
In the embodiment of the invention, the attack nodes are selected by using the topological structure information of the graph in combination with the node voting mechanism, which further improves the attack success rate of the node-selection-based black box attack device.
Next, a black box attack method of a graph neural network model based on a node voting mechanism according to an embodiment of the present invention is described with reference to the accompanying drawings. Referring to fig. 2, the method of the present invention comprises the steps of:
S21: inputting the original graph data, including nodes, edges, and features, and inputting the hyper-parameters;
S22: for each node in the graph, initializing the node's voting score and voting capacity according to the designed rules, and initializing the voting weight of the edges between different nodes in the graph;
S23: iteratively selecting attack nodes based on the node voting mechanism to generate an attack node set S;
S24: constructing a perturbation vector according to domain knowledge related to the node classification task, perturbing the features of the nodes in the set S, and outputting a perturbed graph G';
S25: attacking the graph neural network model with the perturbed graph G'.
In the above technical solution, in step S21, the graph data G and the hyper-parameters involved in the adversarial attack process are specifically as follows:
the graph data includes the nodes, edge information, and feature information of the graph and is represented as G = (V, E, X), where V denotes the set of nodes, E denotes the set of edges, and X denotes the feature matrix of the nodes;
the hyper-parameters specifically include the number r of attack nodes, the number k of neighbor hops to attenuate, and the decay factor μ. An illustrative setup is sketched below.
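As a minimal illustration of step S21 only, the following Python sketch (not part of the original disclosure) shows one possible in-memory representation of the graph data G = (V, E, X) and of the hyper-parameters r, k, and μ; the toy edge list and feature matrix are invented purely for demonstration.

```python
import numpy as np

# Toy graph G = (V, E, X): 6 nodes, undirected edges, 4-dimensional features.
# All concrete values are illustrative only.
V = list(range(6))
E = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5)]
X = np.array([
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 1, 0],
], dtype=float)

# Adjacency list: neighbors[i] is the neighbor set N_i of node i.
neighbors = {i: set() for i in V}
for u, v in E:
    neighbors[u].add(v)
    neighbors[v].add(u)

# Hyper-parameters named in the patent: number of attack nodes r,
# number of attenuated neighbor hops k, and decay factor mu in [0, 1].
r = 2      # how many attack nodes to select
k = 2      # attenuate neighbors up to k hops away
mu = 0.5   # decay factor

print({i: sorted(neighbors[i]) for i in V})
```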
In the above technical solution, in step S22, the voting score and voting capacity of each node in the graph are initialized as a pair (vs_i, va_i) in the following manner:
va_i = t_ij, where t_ij = 1 or t_ij is given by a degree-based expression (rendered as an image in the original),
vs_i = 0,
where the voting score vs_i is the score obtained by node i from its neighbors, and the voting capacity va_i represents the amount of capacity with which node i can vote for its neighbors. When the degree information of a node, and hence its position in the graph, is taken into account, the corresponding expression (rendered as an image in the original) is defined in terms of d_i, the degree of node i, and d_max, the maximum node degree in the graph. A voting weight is also initialized for the edges between different nodes in the graph: since the relationships between different nodes are usually of different strengths, a voting weight is assigned to each edge. For example, the voting weight of the connection between node i and node j is defined in one of the following two ways:
VW_ij = 1,
or a degree-normalized weight (rendered as an image in the original),
where d_i = |N_i| and N_i denotes the set of neighbor nodes of node i. A sketch of this initialization is given below.
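A minimal sketch of the initialization in step S22 follows, again not part of the original disclosure. Because the exact formulas appear only as images in the original, the degree-based voting capacity is assumed here to be d_i / d_max and the degree-normalized edge weight to be 1 / sqrt(d_i * d_j); the constant alternatives (va_i = 1, VW_ij = 1) are shown alongside.

```python
import numpy as np

# Toy undirected graph (illustrative only).
E = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5)]
n = 6
neighbors = {i: set() for i in range(n)}
for u, v in E:
    neighbors[u].add(v)
    neighbors[v].add(u)
degree = {i: len(neighbors[i]) for i in range(n)}
d_max = max(degree.values())

use_degree_capacity = True   # switch between the two initialization variants
use_degree_weight = True

# Voting capacity va_i: either 1, or (assumed) the normalized degree d_i / d_max.
va = {i: (degree[i] / d_max if use_degree_capacity else 1.0) for i in range(n)}

# Voting score vs_i starts at 0 for every node.
vs = {i: 0.0 for i in range(n)}

# Voting weight VW_ij of each edge: either 1, or (assumed) 1 / sqrt(d_i * d_j).
VW = {}
for u, v in E:
    w = 1.0 / np.sqrt(degree[u] * degree[v]) if use_degree_weight else 1.0
    VW[(u, v)] = VW[(v, u)] = w

print("va:", va)
print("VW:", {edge: round(w, 3) for edge, w in VW.items()})
```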
In the above technical solution, in step S23, iteratively selecting attack nodes based on the node voting mechanism to generate the attack node set S comprises the following steps:
(1) Voting: in each round of voting, a node obtains a voting score from its neighbors and votes for its neighbors; the formula by which node i obtains its voting score from its neighbors is defined in one of two ways (both rendered as images in the original), and the voting score of each node is calculated accordingly;
(2) Node selection: the voting score vs_i of each node is calculated according to step (1), and the node with the highest score is selected as the attack node of the current round;
(3) Zeroing: in order to make the selected nodes more uniformly and widely distributed in the network, the voting capacity of the selected attack node is set to 0, i.e., va_i = 0;
(4) Attenuation: in order to make full use of the node information, the voting capacities of the neighbors within two hops of the selected attack node are attenuated; the attenuation degrees differ, with one-hop neighbors attenuated more strongly than two-hop neighbors, according to a formula (rendered as an image in the original) in which μ ∈ [0, 1] is the decay factor;
(5) Iterative updating: the voting information of the neighbors of the selected attack node is updated and steps (1)-(4) are repeated; that is, in each new round of voting, the VW_ij and vs_i values of the neighbors of the attack node selected in the previous round are updated, and the node with the highest vs_i is again selected as the important node of the new round. The iteration continues until r nodes have been selected, generating the attack node set S. A sketch of this selection loop is given below.
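The following Python sketch puts steps (1)-(5) together; it is an illustration under stated assumptions rather than the patented implementation. The aggregation rule vs_i = sum over j in N_i of VW_ij * va_j / d_j and the per-hop decay factor μ^(1/h) (so that one-hop neighbors are attenuated more strongly than two-hop neighbors) stand in for the formulas that are available only as images, and the weight update mentioned in step (5) is folded into recomputing the scores each round.

```python
import numpy as np

def select_attack_nodes(neighbors, r, mu=0.5, k=2, use_degree_weight=True):
    """Iteratively select r attack nodes with the node voting mechanism.

    The score aggregation and decay formulas used here are assumptions; the
    original expressions are given only as images in the patent.
    """
    n = len(neighbors)
    degree = {i: max(len(neighbors[i]), 1) for i in range(n)}
    d_max = max(degree.values())

    # Initialization (step S22): capacities, scores and edge voting weights.
    va = {i: degree[i] / d_max for i in range(n)}
    VW = {(i, j): (1.0 / np.sqrt(degree[i] * degree[j]) if use_degree_weight else 1.0)
          for i in range(n) for j in neighbors[i]}

    selected, candidates = [], set(range(n))
    for _ in range(r):
        # (1) Voting: each node gathers a score from its neighbors.
        vs = {i: sum(VW[(i, j)] * va[j] / degree[j] for j in neighbors[i])
              for i in range(n)}
        # (2) Node selection: the highest-scoring unselected node wins this round.
        winner = max(candidates, key=lambda i: vs[i])
        selected.append(winner)
        candidates.discard(winner)
        # (3) Zeroing: the selected node can no longer vote.
        va[winner] = 0.0
        # (4) Attenuation: damp the capacity of neighbors within k hops,
        #     one-hop neighbors more strongly than two-hop neighbors.
        frontier, seen = {winner}, {winner}
        for hop in range(1, k + 1):
            frontier = {j for i in frontier for j in neighbors[i]} - seen
            for j in frontier:
                va[j] *= mu ** (1.0 / hop)
            seen |= frontier
        # (5) Iterative updating: the next pass of the loop recomputes vs
        #     with the updated capacities.
    return selected

# Toy usage on the illustrative 6-node graph from the earlier sketch.
E = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5)]
neighbors = {i: set() for i in range(6)}
for u, v in E:
    neighbors[u].add(v)
    neighbors[v].add(u)
print(select_attack_nodes(neighbors, r=2))
```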
In the above technical solution, in step S24, constructing the perturbation vector according to domain knowledge related to the node classification task and attacking the attack node set S specifically comprises: constructing a feature perturbation vector using domain knowledge related to the node classification task according to a formula (rendered as an image in the original) in which λ is the magnitude of the modification and X_ij is the j-th feature of node i, and then perturbing the feature vector of each node in the attack node set S to generate the perturbed graph G'. A sketch of this perturbation step is given below.
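A minimal sketch of step S24 follows. The perturbation formula is available only as an image in the original, so the rule used here, adding λ to a small set of class-indicative feature indices (a stand-in for the "domain knowledge related to the node classification task") and clipping to [0, 1], is an assumption, and the names perturb_features and important_features are hypothetical.

```python
import numpy as np

def perturb_features(X, attack_nodes, important_features, lam=1.0):
    """Perturb the feature vectors of the attack nodes to obtain X' for G'.

    Assumed rule: X'_ij = clip(X_ij + lam, 0, 1) for features j marked as
    class-indicative by domain knowledge, X'_ij = X_ij otherwise.
    """
    X_prime = X.copy()
    for i in attack_nodes:
        for j in important_features:
            X_prime[i, j] = np.clip(X[i, j] + lam, 0.0, 1.0)
    return X_prime

# Toy usage (values illustrative only): perturb nodes 2 and 4 on features 1 and 3.
X = np.array([
    [1, 0, 0, 1],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 1, 0],
], dtype=float)
S = [2, 4]                      # attack node set from step S23
important_features = [1, 3]     # assumed output of the domain-knowledge analysis
X_prime = perturb_features(X, S, important_features, lam=1.0)
print(X_prime[S])
```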
In the above technical solution, in step S25, attacking the graph neural network model with the perturbed graph G' specifically means taking the perturbed graph G' as the input of the graph neural network model, thereby realizing the attack of the graph G' on the model, as sketched below.
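To illustrate step S25 only, the sketch below queries a stand-in black-box classifier on the clean and perturbed graphs and compares the predictions; query_model is a hypothetical placeholder for the target graph neural network, which in the black-box setting is queried without access to its architecture or parameters.

```python
import numpy as np

def query_model(adj, X):
    """Hypothetical stand-in for the black-box GNN: one step of neighborhood
    averaging followed by an argmax over feature columns as the predicted class."""
    H = adj @ X / np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    return H.argmax(axis=1)

# Toy data (illustrative only).
n = 6
E = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5)]
adj = np.zeros((n, n))
for u, v in E:
    adj[u, v] = adj[v, u] = 1.0
X = np.random.rand(n, 4)
X_prime = X.copy()
X_prime[[2, 4]] = 1.0 - X_prime[[2, 4]]    # perturbed features of the attack set

clean_pred = query_model(adj, X)             # predictions on the clean graph G
attacked_pred = query_model(adj, X_prime)    # predictions on the perturbed graph G'
print("fraction of predictions changed by the attack:",
      float((attacked_pred != clean_pred).mean()))
```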
In the embodiment of the invention, the black box attack method for a graph neural network model based on a node voting mechanism further enriches the family of node-selection-based black box attack methods.

Claims (10)

1. A black box attack device of a graph neural network model based on a node voting mechanism, characterized by comprising:
S11: an input module for acquiring the graph data G and the various condition constraints involved in the adversarial attack process;
S12: an initialization module for initializing the voting score and voting capacity of each node in the graph G and initializing the voting weight of the edges between different nodes in the graph G;
S13: an attack node selection module for iteratively selecting attack nodes based on the node voting mechanism to generate an attack node set S;
S14: a node feature perturbation construction module for constructing a perturbation vector according to domain knowledge related to the node classification task, attacking the attack node set S, and outputting a perturbed graph G';
S15: an attack module for attacking the graph neural network model with the perturbed graph G'.
2. The black box attack device of a graph neural network model based on a node voting mechanism according to claim 1, characterized in that in the initialization module S12 the voting score and voting capacity of each node are initialized as a pair (vs_i, va_i) in the following manner:
va_i = t_ij, where t_ij = 1 or t_ij is given by a degree-based expression (rendered as an image in the original),
vs_i = 0,
where the voting score vs_i is the score obtained by node i from its neighbors, and the voting capacity va_i represents the amount of capacity with which node i can vote for its neighbors; initializing the voting weight of the edges between different nodes in the graph G means assigning a voting weight VW_ij to the edge connecting nodes i and j, for example, the voting weight of the connection between node i and node j being defined in one of the following two ways:
VW_ij = 1,
or a degree-normalized weight (rendered as an image in the original),
where d_i = |N_i| and N_i denotes the set of neighbor nodes of node i.
3. The node voting mechanism-based graph neural network model black box attack device according to claim 1, characterized in that the attack node selection module S13 performs the following steps:
(1) Voting: the voting score of each node is calculated according to a formula (rendered as an image in the original), where d_i = |N_i| and N_i denotes the set of neighbor nodes of node i;
(2) Node selection: the voting score of each node is calculated according to step (1), and the node with the highest score is selected as the attack node of the current round;
(3) Zeroing: the voting capacity of the selected attack node is set to 0, i.e., va_i = 0;
(4) Attenuation: the voting capacities of the neighbors within two hops of the selected attack node are attenuated; the attenuation degrees differ, with one-hop neighbors attenuated more strongly than two-hop neighbors, according to a formula (rendered as an image in the original) in which μ ∈ [0, 1] is the decay factor;
(5) Iterative updating: the voting information of the neighbors of the selected attack node is updated, steps (1)-(4) are repeated, and the iteration continues until the required number of nodes has been selected, whereupon the attack node set S is generated.
4. The node voting mechanism-based graph neural network model black box attack device according to claim 1, characterized in that in the node feature perturbation construction module S14 the perturbation vector is constructed according to domain knowledge related to the node classification task using a formula (rendered as an image in the original) in which λ is the magnitude of the modification and X_ij is the j-th feature of node i; attacking the attack node set S specifically means constructing the feature perturbation vector using the domain knowledge related to the node classification task and perturbing the feature vector of each node in the attack node set S to generate the perturbed graph G'.
5. A black box attack method of a graph neural network model based on a node voting mechanism, characterized by comprising the following steps:
S21: inputting the original graph data, including nodes, edges, and features, and inputting the hyper-parameters;
S22: for each node in the graph, initializing the node's voting score and voting capacity according to the designed rules, and initializing the voting weight of the edges between different nodes in the graph;
S23: iteratively selecting attack nodes based on the node voting mechanism to generate an attack node set S;
S24: constructing a perturbation vector according to domain knowledge related to the node classification task, perturbing the features of the nodes in the set S, and outputting a perturbed graph G';
S25: attacking the graph neural network model with the perturbed graph G'.
6. The node voting mechanism-based graph neural network model black box attack method according to claim 5, characterized in that in step S21 the graph data G and the hyper-parameters involved in the adversarial attack process are specifically as follows:
the graph data is represented as G = (V, E, X), where V denotes the set of nodes, E denotes the set of edges, and X denotes the feature matrix of the nodes;
the hyper-parameters specifically include the number r of attack nodes, the number k of neighbor hops to attenuate, and the decay factor μ.
7. The node voting mechanism-based graph neural network model black box attack method according to claim 5, characterized in that in step S22 the voting score and voting capacity of each node in the graph are initialized as a pair (vs_i, va_i) in the following manner:
va_i = t_ij, where t_ij = 1 or t_ij is given by a degree-based expression (rendered as an image in the original),
vs_i = 0,
where the voting score vs_i is the score obtained by node i from its neighbors, and the voting capacity va_i represents the amount of capacity with which node i can vote for its neighbors; initializing the voting weight of the edges between different nodes in the graph means assigning a voting weight VW_ij to the edge connecting nodes i and j, for example, the voting weight of the connection between node i and node j being defined in one of the following two ways:
VW_ij = 1,
or a degree-normalized weight (rendered as an image in the original),
where d_i = |N_i| and N_i denotes the set of neighbor nodes of node i.
8. The node voting mechanism-based graph neural network model black box attack method according to claim 5, characterized in that in step S23 iteratively selecting attack nodes based on the node voting mechanism to generate the attack node set S comprises the following steps:
(1) Voting: the voting score of each node is calculated according to a formula (rendered as an image in the original), where d_i = |N_i| and N_i denotes the set of neighbor nodes of node i;
(2) Node selection: the voting score of each node is calculated according to step (1), and the node with the highest score is selected as the attack node of the current round;
(3) Zeroing: the voting capacity of the selected attack node is set to 0, i.e., va_i = 0;
(4) Attenuation: the voting capacities of the neighbors within two hops of the selected attack node are attenuated; the attenuation degrees differ, with one-hop neighbors attenuated more strongly than two-hop neighbors, according to a formula (rendered as an image in the original) in which μ ∈ [0, 1] is the decay factor;
(5) Iterative updating: the voting information of the neighbors of the selected attack node is updated, steps (1)-(4) are repeated, and the iteration continues until the required number of nodes has been selected, whereupon the attack node set S is generated.
9. The node voting mechanism-based graph neural network model black box attack method according to claim 5, characterized in that in step S24 the perturbation vector is constructed according to domain knowledge related to the node classification task using a formula (rendered as an image in the original) in which λ is the magnitude of the modification and X_ij is the j-th feature of node i; attacking the attack node set S specifically means constructing the feature perturbation vector using domain knowledge related to the node classification task and perturbing the feature vector of each node in the attack node set S, thereby generating the perturbed graph G'.
10. The node voting mechanism-based graph neural network model black box attack method according to claim 5, characterized in that in step S25 attacking the graph neural network model with the perturbed graph G' specifically means taking the perturbed graph G' as the input of the graph neural network model, thereby realizing the attack of the graph G' on the graph neural network model.
Priority Applications (1)

Application number: CN202211382657.XA; priority date: 2022-11-07; filing date: 2022-11-07; title: Node voting mechanism-based black box attack device and method for graph neural network model.

Applications Claiming Priority (1)

Application number: CN202211382657.XA; priority date: 2022-11-07; filing date: 2022-11-07; title: Node voting mechanism-based black box attack device and method for graph neural network model.

Publications (1)

Publication number: CN115834153A; publication date: 2023-03-21.

Family

ID: 85526823

Family Applications (1)

Application number: CN202211382657.XA; title: Node voting mechanism-based black box attack device and method for graph neural network model; priority date: 2022-11-07; filing date: 2022-11-07; status: Pending.

Country Status (1)

Country: CN; link: CN115834153A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117544472A (en) * 2024-01-08 2024-02-09 中国信息通信研究院 Node management method and device for distributed network, electronic equipment and storage medium
CN117544472B (en) * 2024-01-08 2024-03-22 中国信息通信研究院 Node management method and device for distributed network, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN112529168B (en) GCN-based attribute multilayer network representation learning method
Zhang et al. Decision-based evasion attacks on tree ensemble classifiers
Šiljak Dynamic graphs
Cao et al. Class-specific soft voting based multiple extreme learning machines ensemble
Cross et al. Inexact graph matching using genetic search
CN110334742B (en) Graph confrontation sample generation method based on reinforcement learning and used for document classification and adding false nodes
CN113806546B (en) Graph neural network countermeasure method and system based on collaborative training
CN110941794A (en) Anti-attack defense method based on universal inverse disturbance defense matrix
CN113505855B (en) Training method for challenge model
Song et al. Nonnegative Latent Factor Analysis-Incorporated and Feature-Weighted Fuzzy Double c-Means Clustering for Incomplete Data
CN113254927B (en) Model processing method and device based on network defense and storage medium
Suzuki et al. Adversarial example generation using evolutionary multi-objective optimization
CN112580728B (en) Dynamic link prediction model robustness enhancement method based on reinforcement learning
CN115834153A (en) Node voting mechanism-based black box attack device and method for graph neural network model
Yu et al. Unsupervised euclidean distance attack on network embedding
CN115019102A (en) Construction method and application of confrontation sample generation model
CN114708479A (en) Self-adaptive defense method based on graph structure and characteristics
CN110889493A (en) Method and device for adding disturbance aiming at relational network
CN115470520A (en) Differential privacy and denoising data protection method under vertical federal framework
Li et al. Noise-aware clustering based on maximum correntropy criterion and adaptive graph regularization
Pan et al. A Graph-Based Soft Actor Critic Approach in Multi-Agent Reinforcement Learning
CN115510986A (en) Countermeasure sample generation method based on AdvGAN
Silvestre Reputation-based method to deal with bad sensor data
Du et al. [Retracted] Application of Improved Interactive Multimodel Algorithm in Player Trajectory Feature Matching
Jin et al. Local-global defense against unsupervised adversarial attacks on graphs

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination