CN117828514B - User network behavior data anomaly detection method based on graph structure learning


Info

Publication number
CN117828514B
CN117828514B
Authority
CN
China
Prior art keywords
graph
node
nodes
user network
network behavior
Prior art date
Legal status
Active
Application number
CN202410243752.4A
Other languages
Chinese (zh)
Other versions
CN117828514A (en)
Inventor
陈伟坚
王沛松
陈博奎
袁凤池
张永豪
刘菲雪
陈思琪
Current Assignee
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen International Graduate School of Tsinghua University
Priority date
Filing date
Publication date
Application filed by Shenzhen International Graduate School of Tsinghua University filed Critical Shenzhen International Graduate School of Tsinghua University
Priority to CN202410243752.4A priority Critical patent/CN117828514B/en
Publication of CN117828514A publication Critical patent/CN117828514A/en
Application granted granted Critical
Publication of CN117828514B publication Critical patent/CN117828514B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

A method for detecting anomalies in user network behavior data based on graph structure learning comprises the following steps: S1, training a graph attention network (GAT) with historical user network behavior data to construct a graph structure, taking each user network behavior as a node, extracting node features, calculating attention weights between nodes, and aggregating neighbor-node features with the attention weights to update the node features; S2, for the graph structure obtained in step S1, filtering out edges whose attention weights are smaller than a set threshold, ranking the neighbor nodes of each node, selecting the top-ranked most similar neighbor nodes and connecting them to generate new edges, thereby producing a refined graph with enhanced structural characteristics; S3, using the graph attention network GAT on the refined graph, executing a graph anomaly detection algorithm on the input user network behavior data to be detected, and identifying anomalous user network behavior data.

Description

User network behavior data anomaly detection method based on graph structure learning
Technical Field
The present invention relates to anomaly detection, and more particularly to a graph structure learning (GSL) based method for detecting anomalies in user network behavior data.
Background
Graph anomaly detection (GAD) focuses on graph data, considering not only anomalies in data features but also anomalies in structure. It is an important application of graph neural networks (GNNs) to efficient anomaly detection, aiming to identify anomalies in a graph by considering both node features and the structural information of the graph. These anomalies may represent various entities in real-world scenarios, such as bot users. Anomalies are typically identified by globally distinctive features, by deviations from other nodes in their community, or by abnormal connections. Without structural information, detecting the latter two types of anomalies is particularly challenging. Consider a social network graph in which nodes represent individuals and edges represent the relationships between them. In this graph, most people are connected to their friends, colleagues, or family. Suppose, however, that one node is connected to an abnormally large number of other nodes. This may indicate that the account belongs to a social media influencer or a bot. Such an anomaly is detected by observing the connection pattern (the structural pattern). This kind of anomaly can easily evade detection if only personal attributes such as age or occupation are checked while the correlations between individuals in the network are ignored.
In recent years, graph anomaly detection algorithms have improved significantly, but because anomalous nodes disguise themselves by connecting to benign nodes, certain edges in the graph are unhelpful or even detrimental to anomaly detection. This introduces noise when a graph anomaly detection algorithm is applied.
Challenges faced by graph anomaly detection include graph heterophily, the camouflage of anomalous nodes, and extreme class imbalance. In graph anomaly detection, graph heterophily limits accuracy. A major problem that is often ignored in graph anomaly detection is the reliability of the topology in the dataset.
Current graph dataset construction methods are rule-based. For example, the graph structure in the Amazon dataset encodes three different relationship types: U-P-U, connecting users who have reviewed at least one shared product; U-S-U, connecting users who assigned the same star rating to any product within a week; and U-V-U, connecting users in the top 5% of mutual review similarity. It is worth noting, however, that such rule-based graph structures can be unreliable. This unreliability stems from the inherent limitations of rules in capturing the complexity of user interactions, which may result in user relationships being represented too simply or inaccurately, thereby introducing noise when GNNs are applied. Furthermore, non-graph datasets may contain valuable structural information, but the absence of an original topology makes it difficult to deploy GAD algorithms on non-graph data.
It should be noted that the information disclosed in the above background section is only for understanding the background of the application and thus may include information that does not form the prior art that is already known to those of ordinary skill in the art.
Disclosure of Invention
The invention aims to overcome the shortcomings of the prior art described above and to provide a method for detecting anomalies in user network behavior data based on graph structure learning.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
A method for detecting anomalies in user network behavior data based on graph structure learning comprises the following steps:
S1, training a graph attention network (GAT) with historical user network behavior data to construct a graph structure, taking each user network behavior as a node, extracting node features, calculating attention weights between nodes, and aggregating neighbor-node features with the attention weights to update the node features;
S2, for the graph structure obtained in step S1, filtering out edges whose attention weights are smaller than a set threshold, ranking the neighbor nodes of each node, selecting the top-ranked most similar neighbor nodes and connecting them to generate new edges, thereby producing a refined graph with enhanced structural characteristics;
S3, using the graph attention network GAT on the refined graph, executing a graph anomaly detection algorithm on the input user network behavior data to be detected, and identifying anomalous user network behavior data.
Further:
In step S1, in each training iteration of the graph attention network GAT, the feature vectors of the nodes are transformed using a learnable weight matrix; attention weights are calculated from the transformed feature vectors and the adjacency matrix; and the features of the neighbor nodes are weighted and summed using the attention weights, aggregating the neighbor features and updating the node features.
In step S1, the original feature vector of a node is multiplied by the weight matrix to obtain the transformed feature vector, calculated as:

$$h_i' = W h_i$$

where $h_i$ is the original feature vector of node i, $W$ is a learnable weight matrix, and $h_i'$ is the transformed feature vector.
In step S1, the calculation of the attention weights specifically includes: concatenating the feature vectors of neighboring nodes to form a combined feature vector; processing the concatenated feature vector with an activation function to introduce nonlinearity and computing from it the attention weights between nodes; and applying exponentiation and normalization to the computed attention weights to obtain the attention weight of each node with respect to the target node.
In step S1, the features of the neighbor nodes are weighted and summed using the attention weights and aggregated to update the node's features, so that the updated feature vector integrates the feature information of the neighbor nodes while taking into account the inter-node relevance determined by the attention weights.
In step S1, the result of the weighted summation is passed through an activation function for nonlinear transformation, converting the output of the linear combination into a feature representation with nonlinear characteristics and increasing the nonlinear expressive power of the model.
In step S1, during training the graph attention network GAT uses a binary cross-entropy loss function to compute the loss between the predicted node class and the ground-truth label, which guides the iterative update of the graph.
In step S2, edges in the original graph whose attention weights are smaller than the set threshold are filtered out; for each node, the several nodes with the highest similarity measure are connected to generate new edges, and the retained edges of the original graph together with the generated edges are integrated into a new graph to obtain the refined graph.
The method further comprises the following step: for data that is not in graph form, preprocessing is performed through feature engineering, and the processed features are built into an initial graph using the k-nearest-neighbor algorithm kNN; steps S1 to S3 are then performed.
A computer readable storage medium storing a computer program which, when executed by a processor, implements the graph structure learning-based user network behavior data anomaly detection method.
The invention has the following beneficial effects:
The invention provides a method for detecting anomalies in user network behavior data based on graph structure learning. It can be applied to discovering anomalous user network behavior, such as false comments by abnormal users on e-commerce and social networks, and provides a novel anomaly detection pipeline with graph structure learning (GSL). In the proposed method, the graph structure best suited to graph anomaly detection can be learned effectively from historical data by fully exploiting the characteristics of the graph attention network. The invention can correct the graph structure before graph anomaly detection and can also preprocess non-graph data for the deployment of graph anomaly detection algorithms. For datasets with an original graph structure, the graph data is refined using the GSL method before the GAD algorithm is deployed; for datasets without an original graph structure, GSL is used to learn a suitable graph structure for efficient anomaly detection, thereby enabling the application of GAD to non-graph datasets. In short, the method not only enhances the performance of graph anomaly detection algorithms but is also capable of processing non-graph data, such as tabular data. This both improves anomaly detection performance and extends the application of GAD algorithms beyond traditional graph-based datasets. The invention exhibits improved performance on three common datasets and also achieves excellent results when applied to tabular data.
Other advantages of embodiments of the present invention are further described below.
Drawings
FIG. 1 is a flow chart of the method for detecting anomalies in user network behavior data based on graph structure learning.
FIG. 2 shows the cascade framework G-SLAD of an embodiment of the invention.
FIG. 3 is a flow diagram of graph structure learning in an embodiment of the present invention.
FIG. 4 is a node representation of Amazon and Mammography in an embodiment of the present invention.
Detailed Description
The following describes embodiments of the present invention in detail. It should be emphasized that the following description is merely exemplary in nature and is in no way intended to limit the scope of the invention or its applications.
The graph neural network (GNN) is a class of neural networks dedicated to processing graph-structured data. A graph is made up of a set of nodes (denoted by V) and a set of edges (denoted by E) that express the connections or relationships between the nodes. Each node $v_i$ in the graph is associated with a feature vector $x_i$, which captures the attributes or characteristics of the node.
GNNs use the graph topology and node features to learn a representation of each node in the graph. The basic idea of a GNN is to repeatedly update the representation of a node by accumulating information from its neighboring nodes. This is typically accomplished through a message-passing scheme in which each node receives messages from its neighbors, combines them with its own features, and updates its representation. These updated node representations are then used for further iterations or for downstream tasks. The update is expressed as follows:

$$h_i^{(l+1)} = \sigma\!\left(W^{(l)} \cdot \mathrm{AGGREGATE}\left(\left\{h_j^{(l)} : j \in \mathcal{N}(i)\right\}\right)\right)$$

where $h_i^{(l+1)}$ denotes the updated representation of node i at layer $l+1$, $W^{(l)}$ denotes the weight matrix of layer $l$, and $\mathcal{N}(i)$ denotes the set of neighboring nodes of node i. AGGREGATE(·) denotes an aggregation function that combines the information of these neighboring nodes. The symbol σ(·) denotes an activation function, which may be Sigmoid or ReLU. The choice of aggregation function may vary depending on the particular GNN architecture and the task at hand.
The invention mainly focuses on identifying anomalous nodes in a static attributed graph. Consider an attributed graph $G = (V, A, X)$, where $V = \{v_1, v_2, \ldots, v_N\}$ denotes a set of N nodes, $A \in \{0,1\}^{N \times N}$ is an unweighted adjacency matrix, and $X \in \mathbb{R}^{N \times d}$ is a feature matrix. Here, each row vector $x_i = X(i,:)$ denotes the d-dimensional feature vector of node $v_i$.
Given a labeled subset of nodes $V_L \subset V$, the task is to train a model that accurately classifies the remaining nodes into two distinct groups, normal or anomalous, based on their features and their relationships in the graph.
Graph structure learning is a method of refining the topology in a dataset: an initial graph is refined to produce a new graph with enhanced structural characteristics. Consider an initial graph $G = (V, E, X)$, where $V = \{v_1, v_2, \ldots, v_N\}$ denotes a set of N nodes, $E \subseteq V \times V$ denotes the set of edges, and $X \in \mathbb{R}^{N \times d}$ is a feature matrix. Each row vector $x_i = X(i,:)$ encodes the d-dimensional feature vector of node $v_i$. The goal of graph structure learning is typically to apply a refining process R to the graph G, yielding a new graph $G' = (V, E', X)$ that reveals a clearer or more informative structure. The refined graph G' aims to better capture the complex dependencies and relationships in the data, providing a stronger basis for subsequent graph-based tasks such as clustering, classification, and anomaly detection. The learning process exploits the graph topology and node features to optimize a representation that clarifies the underlying graph structure.
Referring to fig. 1, an embodiment of the present invention provides a method for detecting anomalies of user network behavior data based on graph structure learning, including the following steps:
S1, training a graph attention network (GAT) with historical user network behavior data to construct a graph structure, taking each user network behavior as a node, extracting node features, calculating attention weights between nodes, and aggregating neighbor-node features with the attention weights to update the node features;
S2, for the graph structure obtained in step S1, filtering out edges whose attention weights are smaller than a set threshold, ranking the neighbor nodes of each node, selecting and connecting the K most similar neighbor nodes to generate new edges, and producing a refined graph with enhanced structural characteristics;
S3, using the graph attention network GAT on the refined graph, executing a graph anomaly detection algorithm on the input user network behavior data to be detected, and identifying anomalous user network behavior data.
The user network behavior described in the present invention may be user behavior on e-commerce, social, financial networks, such as user comments, etc.
In some embodiments, the graph anomaly detection algorithm based on graph structure learning (G-SLAD) is a cascade framework, as shown in FIG. 2. On the one hand, if the input is not graph data, for example tabular data, it must first be preprocessed by feature engineering, and the processed features are then built into an initial graph using kNN. This graph can then be processed by the graph structure learning module described later. After graph structure learning, the generated refined graph can be used to run various graph anomaly detection (GAD) algorithms. On the other hand, if the input is graph data, the process can start directly from graph structure learning.
First, the node features are processed by the graph attention network GAT to obtain the attention weight $\alpha_{ij}$ between node i and node j. In GAT, the attention weights are computed as follows:

$$h_i' = W h_i \quad (1)$$

where $h_i$ is the original feature vector of node i, $W$ is a learnable weight matrix, and $h_i'$ is the transformed feature vector.
These attention weights measure the relevance of edges for the anomaly detection task. The features of the neighbor nodes are aggregated with the attention weights to update the features of node i, expressed as follows:

$$e_{ij} = \mathrm{LeakyReLU}\!\left(a^{T}\left[h_i' \,\|\, h_j'\right]\right) \quad (2)$$

$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}(i)} \exp(e_{ik})} \quad (3)$$

$$h_i'' = \sigma\!\left(\sum_{j \in \mathcal{N}(i)} \alpha_{ij}\, h_j'\right) \quad (4)$$

where $e_{ij}$ is the intermediate edge-weight value between nodes i and j, k indexes a neighbor node of node i, $e_{ik}$ is the intermediate edge-weight value between node i and each of its neighbors, LeakyReLU is a nonlinear function that increases the complexity and expressive power of the model, $a^{T}$ is a learnable parameter vector, $h_i'$ and $h_j'$ are the feature representations of nodes i and j, $\mathcal{N}(i)$ is the neighbor set of node i, σ is a nonlinear activation function such as ReLU, and $h_i''$ is the updated feature vector of node i. The attention weights $\alpha_{ij}$ of the GAT are applied in the graph update and, combined with the node features, facilitate message passing in the graph convolutional network GCN.
$$Z = \mathrm{GCN}(H) \quad (5)$$

$$\mathcal{L} = -\frac{1}{N}\sum_{i=1}^{N}\left[\,y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\,\right] \quad (6)$$

GCN(H) denotes the graph convolutional network applied to the input feature matrix H; the GCN is the convolution operation over the graph data. Through the GCN, neighbor information is gradually aggregated from the node features to obtain a new feature representation of each node. Z = GCN(H) means that the input feature matrix H is transformed by the graph convolutional network into the new feature matrix Z as output.
The loss function $\mathcal{L}$ is a binary cross-entropy loss used to measure the difference between the model prediction and the true label, where $\hat{y}_i$ is the predicted output of the model and $y_i$ is the corresponding true label. By comparing the predicted output with the true labels, the loss of each node can be computed.
The loss is calculated from the output Z of the graph neural network GNN and the labels Y, and guides the iterative updating of the graph.
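A minimal sketch of equations (5) and (6) is given below. The single symmetrically normalized convolution layer, the sigmoid output, and the random toy inputs are illustrative assumptions about the GCN configuration rather than the exact architecture of the invention.

```python
import numpy as np

def gcn_forward(H, A, W):
    """Eq. (5): Z = GCN(H) with one symmetrically normalized convolution layer."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    logits = A_norm @ H @ W                        # aggregate neighbors, then transform
    return 1.0 / (1.0 + np.exp(-logits))           # sigmoid -> per-node anomaly probability

def bce_loss(y_hat, y, eps=1e-9):
    """Eq. (6): binary cross-entropy between predictions y_hat and labels y."""
    y_hat = np.clip(y_hat, eps, 1.0 - eps)
    return -np.mean(y * np.log(y_hat) + (1.0 - y) * np.log(1.0 - y_hat))

# toy usage with random placeholders
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
H = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 1))
Z = gcn_forward(H, A, W).ravel()      # one anomaly score per node
Y = np.array([0.0, 1.0, 0.0])
print(bce_loss(Z, Y))
```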
The update process involves generating new edges based on the top-K kNN edges and filtering out edges whose attention weights are smaller than the set threshold. Let $G = (V, E)$ be the original graph, where V denotes the vertex set and E the edge set. Let $\alpha_{uv}$ be the attention weight of edge $(u, v) \in E$, and let $\mathrm{sim}(u, v)$ be the similarity measure between nodes u and v. Given a threshold θ and an integer K, the edge set E' of the new graph $G' = (V, E')$ is defined as:

$$E' = \left\{(u, v) \in E : \alpha_{uv} \ge \theta\right\} \cup \left\{(u, v) : v \in \mathrm{TopK}(\mathrm{sim}_{uv}, K)\right\}$$

where $\mathrm{TopK}(\mathrm{sim}_{uv}, K)$ denotes the set of the K nodes v with the highest similarity scores $\mathrm{sim}_{uv}$ to node u. This evolving graph structure is the core of the training process. The whole graph structure learning flow is shown in FIG. 3: after the user network data to be detected enters the model, the edge weights are computed by the GAT, and edges are added or removed according to the weight threshold and the similarity between nodes, yielding an optimized graph structure; this graph structure is then used by the GCN to learn node features and detect anomalous nodes.
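The edge-set update above can be sketched as follows; cosine similarity of node features is assumed as the similarity measure sim(u, v), and the threshold, K, and toy inputs are illustrative choices rather than fixed settings of the invention.

```python
import numpy as np

def refine_edges(A, alpha, X, theta=0.1, K=2):
    """Build the refined adjacency: keep edges with attention >= theta,
    then connect each node to its K most similar nodes (cosine similarity)."""
    N = A.shape[0]
    # keep existing edges whose attention weight passes the threshold
    A_new = ((A > 0) & (alpha >= theta)).astype(float)

    # cosine similarity between node features as sim(u, v)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True).clip(min=1e-12)
    sim = Xn @ Xn.T
    np.fill_diagonal(sim, -np.inf)            # exclude self-similarity

    # add edges to the top-K most similar nodes of each node
    for u in range(N):
        topk = np.argsort(-sim[u])[:K]
        A_new[u, topk] = 1.0
        A_new[topk, u] = 1.0                  # keep the graph undirected
    return A_new

# toy usage with random placeholders
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
alpha = rng.uniform(size=(4, 4))
X = rng.normal(size=(4, 5))
A_refined = refine_edges(A, alpha, X, theta=0.3, K=1)
```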
For non-graph data, such as tabular information, after feature engineering the data are converted into an initial graph using the k-nearest-neighbor algorithm kNN. This initial graph then enters the graph structure learning module for refinement, after which various graph anomaly detection (GAD) algorithms can be applied.
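For tabular inputs, a minimal sketch of the initial kNN graph construction using scikit-learn's kneighbors_graph is shown below; the feature matrix, the choice of k, and the symmetrization step are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def build_initial_knn_graph(X, k=5):
    """Construct an initial adjacency matrix for non-graph (tabular) data."""
    A = kneighbors_graph(X, n_neighbors=k, mode="connectivity", include_self=False)
    A = A.toarray()
    return np.maximum(A, A.T)        # symmetrize so the graph is undirected

# toy usage with random tabular features
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))         # 20 samples, 8 engineered features
A0 = build_initial_knn_graph(X, k=3)
print(A0.shape, A0.sum())
```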
Experimental test:
For graph data there are two settings: topology refinement and topology inference. Tabular data are processed according to the flow described above. The datasets used include:
Reddit: this dataset comprises a comprehensive user-subreddit interaction graph recording the posts in each subreddit within one month, and provides ground-truth labels for banned users. The dataset mainly covers the interactions of the 1,000 most active subreddits and the 10,000 most active users, with 672,447 interactions in total. In addition, posts are converted into feature vectors containing text features defined by the LIWC (Linguistic Inquiry and Word Count) categories.
Amazon: this dataset is intended to detect users paid to post fraudulent reviews on Amazon. It contains a graph describing three types of relationships: U-P-U (users who reviewed at least one identical product), U-S-U (users who gave the same star rating to any product within a week), and U-V-U (users in the top 5% of mutual review similarity).
YelpChi: this dataset is intended to identify anomalous reviews that unfairly promote or demote products or businesses on Yelp. It contains a graph with three types of edges: R-U-R (reviews written by the same user), R-S-R (reviews giving the same star rating to the same product), and R-T-R (reviews posted in the same month for the same product).
These datasets follow the GADBench setup, with 40%, 70%, and 70% of the data of Reddit, Amazon, and YelpChi, respectively, used for training. The statistics of these datasets are shown in Table 1.
TABLE 1
For graph anomaly detection, the benchmark setting of GADBench is followed. Several classical anomaly detection methods are chosen, such as MLP (multi-layer perceptron), XGBoost, and XGBOD. In addition, graph neural networks are selected, including GCN, GIN, GraphSAGE, and GT. As strong baselines, GNNs specifically designed for graph anomaly detection (GAD) are considered, such as GAS, DCI, BernNet, BWGNN, and RFGraph.
To evaluate the effectiveness of the graph anomaly detection methods, all models are first implemented on the initial topology as a baseline. The performance of the proposed method is marked in the results as "refined". Furthermore, to demonstrate the ability of the method to perform topology inference without an initial graph structure, results starting from kNN graphs constructed from the three datasets are labeled "kNN". The kNN graph is constructed by representing data points as nodes and connecting each node to its K nearest neighbors. The method is also evaluated on the tabular dataset Mammography. FIG. 4 shows the node representations of Amazon and Mammography (feature changes before and after processing) in an embodiment of the invention. AUROC (area under the receiver operating characteristic curve) is used as the evaluation metric. AUROC is a popular metric in anomaly detection because it handles imbalanced datasets effectively, where normal instances far outnumber anomalies. It provides a comprehensive performance assessment by accounting for the trade-off between the true positive rate (TPR) and the false positive rate (FPR) at different thresholds. This makes it particularly useful because it evaluates the model's ability to identify anomalies without being misled by the overwhelming presence of normal data. Furthermore, AUROC is threshold-independent, providing an overall measure of model performance without specifying a cut-off point, which is advantageous when the anomaly distribution is unknown or variable. The metric quantifies classifier performance by computing the area under the ROC curve, a graphical tool that shows the TPR and FPR of the model at different thresholds. The true positive rate TPR is defined as:

$$\mathrm{TPR} = \frac{TP}{TP + FN}$$

where true positives TP denote the number of correctly predicted positive instances, and false negatives FN denote the number of actual positive instances misclassified as negative.
The false positive rate FPR is defined as:

$$\mathrm{FPR} = \frac{FP}{FP + TN}$$

where FP denotes the number of false positives and TN denotes the number of true negatives. AUROC takes values between 0 and 1, with higher values indicating better classification performance. Ideally, a perfect classifier has an AUROC of 1, indicating that it can fully distinguish the two classes; in contrast, a classifier no better than random guessing has an AUROC of 0.5.
In practical applications, AUROC is widely used to compare the performance of different models and to support decision making. It is particularly suitable for imbalanced datasets because it is not affected by the class distribution.
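As a worked example of the metric definitions above, the following sketch computes TPR and FPR at a fixed threshold and AUROC with scikit-learn's roc_auc_score; the scores and labels are fabricated toy values used purely for illustration.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def tpr_fpr(y_true, y_pred):
    """TPR = TP / (TP + FN), FPR = FP / (FP + TN) at a fixed decision threshold."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    return tp / (tp + fn), fp / (fp + tn)

# toy example: 8 nodes, anomaly scores in [0, 1], decision threshold 0.5
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
scores = np.array([0.1, 0.4, 0.8, 0.3, 0.6, 0.2, 0.7, 0.9])
y_pred = (scores >= 0.5).astype(int)

tpr, fpr = tpr_fpr(y_true, y_pred)
auroc = roc_auc_score(y_true, scores)   # threshold-independent area under the ROC curve
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}, AUROC={auroc:.2f}")
```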
Topology refinement: the performance improvements achieved by topology refinement are presented in Table 2. The average AUC of each dataset and model over ten independent runs is reported, ensuring reproducibility of the results. These results indicate that seven of the thirteen GAD algorithms evaluated on Reddit perform better on the refined graph than on the original graph. On the Amazon and YelpChi datasets, this number rises to thirteen and twelve, respectively. On average, refining the graph yields improvements of 0.17%, 2.57%, and 5.15% on the three datasets, respectively. These results demonstrate the superior performance of the model in topology refinement.
Referring to FIG. 4, the method of the present invention brings a significant improvement in the topological refinement of the Amazon and Yelp datasets. These results demonstrate that the method has a more pronounced positive effect and works effectively on larger datasets with higher proportions of anomalous samples.
Topology inference: to evaluate the topology inference capability of the method, the original topology embedded in the dataset is ignored and the graph is instead initialized using the kNN graph construction method. On the Reddit, Amazon, and YelpChi datasets, two, eight, and seven algorithms, respectively, perform better on the kNN graph than on the original graph. The average improvements on these datasets are -2.29%, 2.26%, and 3.64%, respectively.
The G-SLAD model of the invention consistently outperforms the intuitive graph construction baseline, i.e., the kNN graph strategy, and achieves relatively significant improvements on all datasets. On the Reddit dataset in particular, the kNN graph strategy proves ineffective at improving anomaly detection performance, whereas the proposed method still improves over the original model. This highlights the strong topology inference capability of the model, which enables a graph to be built in the absence of an original structure.
Tabular anomaly detection: Table 3 shows the performance results of tabular anomaly detection. The comparison includes graph-based anomaly detection (GAD) algorithms implemented on a k-nearest-neighbor (kNN) graph, with the graph-independent algorithm XGBOD as a baseline. Initially, XGBOD outperforms all graph-based approaches. After graph refinement, however, the GAD algorithms show significant performance gains, with DCI surpassing the other algorithms. Notably, all GAD algorithms become more effective after refinement. The slight performance difference between the two XGBOD entries is due to feature standardization.
TABLE 2
TABLE 3
In summary, the invention provides a method for detecting anomalies in user network behavior data based on graph structure learning. It can be applied to discovering anomalous user network behavior, such as false comments by abnormal users on e-commerce and social networks, and provides a novel anomaly detection pipeline with graph structure learning (GSL). In the proposed method, the graph structure best suited to graph anomaly detection can be learned effectively from historical data by fully exploiting the characteristics of the graph attention network. The invention can correct the graph structure before graph anomaly detection and can also preprocess non-graph data for the deployment of graph anomaly detection algorithms. For datasets with an original graph structure, the graph data is refined using the GSL method before the GAD algorithm is deployed; for datasets without an original graph structure, GSL is used to learn a suitable graph structure for efficient anomaly detection, thereby enabling the application of GAD to non-graph datasets. In short, the method not only enhances the performance of graph anomaly detection algorithms but is also capable of processing non-graph data, such as tabular data. This both improves anomaly detection performance and extends the application of GAD algorithms beyond traditional graph-based datasets. The invention exhibits improved performance on three common datasets and also achieves excellent results when applied to tabular data.
The embodiments of the present invention also provide a storage medium storing a computer program which, when executed, performs at least the method as described above.
The embodiment of the invention also provides a control device, which comprises a processor and a storage medium for storing a computer program; wherein the processor is adapted to perform at least the method as described above when executing said computer program.
The embodiments of the present invention also provide a processor executing a computer program, at least performing the method as described above.
The storage medium may be implemented by any type of non-volatile storage device, or a combination thereof. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a ferromagnetic random access memory (FRAM), a flash memory, a magnetic surface memory, an optical disk, or a compact disc read-only memory (CD-ROM); the magnetic surface memory may be a disk memory or a tape memory. The storage media described in the embodiments of the present invention are intended to comprise, without being limited to, these and any other suitable types of memory.
In the several embodiments provided by the present invention, it should be understood that the disclosed systems and methods may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units is only a logical functional division, and other divisions are possible in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection between the components shown or discussed may be through some interfaces, and the indirect coupling or communication connection between devices or units may be electrical, mechanical, or in other forms.
The units described as separate units may or may not be physically separate, and units displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present invention may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be implemented by hardware associated with program instructions, where the foregoing program may be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; and the aforementioned storage medium includes: a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium that can store program code.
Or the above-described integrated units of the invention may be stored in a computer-readable storage medium if implemented in the form of software functional modules and sold or used as separate products. Based on such understanding, the technical solutions of the embodiments of the present invention may be embodied in essence or a part contributing to the prior art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the methods described in the embodiments of the present invention. And the aforementioned storage medium includes: a removable storage device, ROM, RAM, magnetic or optical disk, or other medium capable of storing program code.
The methods disclosed in the method embodiments provided by the invention can be arbitrarily combined under the condition of no conflict to obtain a new method embodiment.
The features disclosed in the several product embodiments provided by the invention can be combined arbitrarily under the condition of no conflict to obtain new product embodiments.
The features disclosed in the embodiments of the method or the apparatus provided by the invention can be arbitrarily combined without conflict to obtain new embodiments of the method or the apparatus.
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several equivalent substitutions and obvious modifications can be made without departing from the spirit of the invention, and the same should be considered to be within the scope of the invention.

Claims (7)

1. A method for detecting anomalies in user network behavior data based on graph structure learning, characterized by comprising the following steps:
S1, training a graph attention network (GAT) with historical user network behavior data to construct a graph structure, taking each user network behavior as a node, extracting node features, calculating attention weights between nodes, and aggregating neighbor-node features with the attention weights to update the node features;
S2, for the graph structure obtained in step S1, filtering out edges whose attention weights are smaller than a set threshold, ranking the neighbor nodes of each node, selecting the top-ranked most similar neighbor nodes and connecting them to generate new edges, thereby producing a refined graph with enhanced structural characteristics;
S3, using the graph attention network GAT on the refined graph, executing a graph anomaly detection algorithm on the input user network behavior data and identifying anomalous user network behavior data;
in step S1, in each training iteration of the graph attention network GAT, the feature vectors of the nodes are transformed using a learnable weight matrix; attention weights are calculated from the transformed feature vectors and the adjacency matrix; the features of the neighbor nodes are weighted and summed using the attention weights, the neighbor features are aggregated, and the node features are updated, so that the node features are updated from the neighbor feature information aggregated by the attention weights, the updated feature vector integrates the feature information of the neighbor nodes, and the inter-node relevance determined by the attention weights is taken into account;
in step S1, the calculation of the attention weights specifically includes: concatenating the feature vectors of neighboring nodes to form a combined feature vector; processing the concatenated feature vector with an activation function to introduce nonlinearity and computing from it the attention weights between nodes; and applying exponentiation and normalization to the computed attention weights to obtain the attention weight of each node with respect to the target node.
2. The method for detecting anomalies in user network behavior data based on graph structure learning according to claim 1, wherein in step S1 the original feature vector of a node is multiplied by the weight matrix to obtain the transformed feature vector, calculated as:

$$h_i' = W h_i$$

where $h_i$ is the original feature vector of node i, $W$ is a learnable weight matrix, and $h_i'$ is the transformed feature vector.
3. The method for detecting anomalies in user network behavior data based on graph structure learning according to claim 1, wherein in step S1 the result of the weighted summation is nonlinearly transformed by an activation function, converting the output of the linear combination into a feature representation with nonlinear characteristics so as to increase the nonlinear expressive power of the model.
4. The method for detecting anomalies in user network behavior data based on graph structure learning according to claim 1, wherein in step S1 the graph attention network GAT uses a binary cross-entropy loss function to compute the loss between the predicted node class and the ground-truth label and to guide the iterative update of the graph.
5. The method for detecting anomalies in user network behavior data based on graph structure learning according to any one of claims 1 to 4, wherein in step S2 edges in the original graph G whose attention weights are smaller than the set threshold are filtered out; for each node, the several nodes with the highest similarity measure are connected to generate new edges, and the retained edges of the original graph together with the generated edges are integrated into a new graph to obtain the refined graph.
6. The method for detecting anomalies in user network behavior data based on graph structure learning according to any one of claims 1 to 4, further comprising the following step: for data that is not in graph form, preprocessing is performed through feature engineering, and the processed features are built into an initial graph using the k-nearest-neighbor algorithm kNN; steps S1 to S3 are then performed.
7. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the graph structure learning-based user network behavior data anomaly detection method according to any one of claims 1 to 6.
CN202410243752.4A 2024-03-04 2024-03-04 User network behavior data anomaly detection method based on graph structure learning Active CN117828514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410243752.4A CN117828514B (en) 2024-03-04 2024-03-04 User network behavior data anomaly detection method based on graph structure learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410243752.4A CN117828514B (en) 2024-03-04 2024-03-04 User network behavior data anomaly detection method based on graph structure learning

Publications (2)

Publication Number Publication Date
CN117828514A CN117828514A (en) 2024-04-05
CN117828514B true CN117828514B (en) 2024-05-03

Family

ID=90523053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410243752.4A Active CN117828514B (en) 2024-03-04 2024-03-04 User network behavior data anomaly detection method based on graph structure learning

Country Status (1)

Country Link
CN (1) CN117828514B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112905900A (en) * 2021-04-02 2021-06-04 辽宁工程技术大学 Collaborative filtering recommendation algorithm based on graph convolution attention mechanism
WO2021179838A1 (en) * 2020-03-10 2021-09-16 支付宝(杭州)信息技术有限公司 Prediction method and system based on heterogeneous graph neural network model
CN114077811A (en) * 2022-01-19 2022-02-22 华东交通大学 Electric power Internet of things equipment abnormality detection method based on graph neural network
CN114463141A (en) * 2022-02-09 2022-05-10 厦门理工学院 Medical insurance fraud detection algorithm based on multilayer attention machine mapping neural network and system thereof
WO2023087303A1 (en) * 2021-11-22 2023-05-25 Robert Bosch Gmbh Method and apparatus for classifying nodes of a graph
CN116304311A (en) * 2023-02-22 2023-06-23 天津大学 Online social network spam comment user detection method
CN117520995A (en) * 2024-01-03 2024-02-06 中国海洋大学 Abnormal user detection method and system in network information platform

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230025238A1 (en) * 2021-07-09 2023-01-26 Robert Bosch Gmbh Anomalous region detection with local neural transformations

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021179838A1 (en) * 2020-03-10 2021-09-16 支付宝(杭州)信息技术有限公司 Prediction method and system based on heterogeneous graph neural network model
CN112905900A (en) * 2021-04-02 2021-06-04 辽宁工程技术大学 Collaborative filtering recommendation algorithm based on graph convolution attention mechanism
WO2023087303A1 (en) * 2021-11-22 2023-05-25 Robert Bosch Gmbh Method and apparatus for classifying nodes of a graph
CN114077811A (en) * 2022-01-19 2022-02-22 华东交通大学 Electric power Internet of things equipment abnormality detection method based on graph neural network
CN114463141A (en) * 2022-02-09 2022-05-10 厦门理工学院 Medical insurance fraud detection algorithm based on multilayer attention machine mapping neural network and system thereof
CN116304311A (en) * 2023-02-22 2023-06-23 天津大学 Online social network spam comment user detection method
CN117520995A (en) * 2024-01-03 2024-02-06 中国海洋大学 Abnormal user detection method and system in network information platform

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Hypergraph-based anomaly detection of industrial data; Application Research of Computers; 2020-12-31; Vol. 37, No. S2; pp. 253-255 *

Also Published As

Publication number Publication date
CN117828514A (en) 2024-04-05

Similar Documents

Publication Publication Date Title
Yang et al. Voice2series: Reprogramming acoustic models for time series classification
Yu et al. A bearing fault and severity diagnostic technique using adaptive deep belief networks and Dempster–Shafer theory
Narayanan et al. Adaptive and scalable android malware detection through online learning
CN107577945B (en) URL attack detection method and device and electronic equipment
Hu A multivariate grey prediction model with grey relational analysis for bankruptcy prediction problems
WO2019175880A1 (en) Method and system for classifying data objects based on their network footprint
Ra et al. DeepAnti-PhishNet: Applying deep neural networks for phishing email detection
Verma et al. A network intrusion detection approach using variant of convolution neural network
CN114298176A (en) Method, device, medium and electronic equipment for detecting fraudulent user
CN113032525A (en) False news detection method and device, electronic equipment and storage medium
CN116167010A (en) Rapid identification method for abnormal events of power system with intelligent transfer learning capability
US20210357729A1 (en) System and method for explaining the behavior of neural networks
BOUIJIJ et al. Machine learning algorithms evaluation for phishing urls classification
CN117828514B (en) User network behavior data anomaly detection method based on graph structure learning
CN114119191A (en) Wind control method, overdue prediction method, model training method and related equipment
Grace et al. Malware detection for Android application using Aquila optimizer and Hybrid LSTM-SVM classifier
Shi et al. Rf-gnn: Random forest boosted graph neural network for social bot detection
Sharma et al. A BPSO and deep learning based hybrid approach for android feature selection and malware detection
CN111310176B (en) Intrusion detection method and device based on feature selection
Kasubi et al. A Comparative Study of Feature Selection Methods for Activity Recognition in the Smart Home Environment
Zheng et al. Multi-modal Causal Structure Learning and Root Cause Analysis
Yesaswini et al. A Hybrid Approach for Intrusion Detection System to Enhance Feature Selection
Ahli et al. Binary and Multi-Class Classification on the IoT-23 Dataset
Khatun et al. An Approach to Detect Phishing Websites with Features Selection Method and Ensemble Learning
Sharrab et al. Deep neural networks in social media forensics: unveiling suspicious patterns and advancing investigations on twitter

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant