CN113935407A - Abnormal behavior recognition model determining method and device - Google Patents

Info

Publication number
CN113935407A
CN113935407A
Authority
CN
China
Prior art keywords
neural network
nodes
graph
network model
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111155817.2A
Other languages
Chinese (zh)
Inventor
额日和
李琨
田江
向小佳
丁永建
李璠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Everbright Technology Co ltd
Original Assignee
Everbright Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Everbright Technology Co ltd filed Critical Everbright Technology Co ltd
Priority to CN202111155817.2A priority Critical patent/CN113935407A/en
Publication of CN113935407A publication Critical patent/CN113935407A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/38 Payment protocols; Details thereof
    • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401 Transaction verification
    • G06Q20/4016 Transaction verification involving fraud or risk level assessment in transaction processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Business, Economics & Management (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Accounting & Taxation (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Security & Cryptography (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a method and a device for determining an abnormal behavior recognition model. The method comprises: converting an acquired predetermined amount of service data into graph structure data, and extracting classification labels corresponding to the service data; aggregating the graph structure data to obtain dense vectors of the graph structure data; and training an original graph neural network model through the dense vectors and the classification labels to obtain a trained target graph neural network model, wherein the target graph neural network model is used for identifying abnormal behaviors. This solves the problems in the related art that abnormal behavior recognition models perform poorly and that aggregation within such models is insufficient. Through the graph neural network model, the various relations and attributes can be well represented as graph data; combined with deep learning, the model has a stronger capability to approximate complex functions and learn the essential characteristics of the data, so the model effect is better.

Description

Abnormal behavior recognition model determining method and device
Technical Field
The invention relates to the field of data processing, in particular to a method and a device for determining an abnormal behavior recognition model.
Background
Existing anti-fraud technology mostly uses machine learning models or rule models. Conventional flat data representations struggle to describe the relations among large amounts of data, and such models perform poorly at approximating complex functions and learning the essential characteristics of the data; in addition, the GraphSAGE algorithm has certain deficiencies in aggregation.
No solution has yet been proposed for the problems in the related art that the abnormal behavior recognition model performs poorly and that aggregation in the model has certain deficiencies.
Disclosure of Invention
The embodiments of the invention provide a method and a device for determining an abnormal behavior recognition model, so as to at least solve the problems in the related art that the abnormal behavior recognition model performs poorly and that aggregation in the model has certain deficiencies.
According to an embodiment of the present invention, there is provided an abnormal behavior recognition model determining method including:
converting the acquired service data with the preset quantity into graph structure data, and extracting classification labels corresponding to the service data with the preset quantity;
performing aggregation processing on the graph structure data to obtain dense vectors of the graph structure data;
and training the original graph neural network model through the dense vectors and the classification labels to obtain a trained target graph neural network model, wherein the target graph neural network model is used for identifying abnormal behaviors.
Optionally, converting the acquired predetermined amount of service data into graph structure data and extracting the classification labels corresponding to the predetermined amount of service data includes:
respectively assigning node IDs to the predetermined amount of service data, wherein each piece of service data corresponds to one node;
extracting the relations among the predetermined number of nodes as an adjacency list of the graph structure data, and defining a dictionary-type variable in which each key is a node ID and the corresponding value holds all neighbor nodes of that node;
extracting a feature matrix (N, M) of the predetermined number of nodes, wherein M is the size of the constructed feature dimension and N is the number of nodes;
and extracting labels Y of the predetermined number of nodes to form a label matrix of the graph structure data.
Optionally, aggregating the graph structure data to obtain dense vectors of the graph structure data includes:
determining the similarity between the central node and its adjacent nodes according to the adjacency list, and acquiring the n nodes whose similarity is greater than a preset threshold;
and aggregating the central node with those n neighbor nodes according to an aggregation function to form a dense vector of the graph structure data.
Optionally, determining the similarity between the central node and the adjacent nodes according to the adjacency list and acquiring the n nodes whose similarity is greater than a preset threshold includes:
training a single-layer perceptron network according to the adjacency list, the label matrix of the graph structure data, and the feature vectors of the predetermined number of nodes;
calculating predicted values of the predetermined number of nodes through the single-layer perceptron network, and determining the L1 distance between the predicted values of every two nodes;
determining the similarity between the central node and the neighboring nodes using the L1 distance;
sorting the similarities in descending order;
and acquiring the n nodes whose similarity is greater than the preset threshold.
Optionally, aggregating the central node with the n neighbor nodes according to an aggregation function to form a dense vector of the graph structure data includes:
determining a central node vector and a neighbor node vector of the n nodes;
splicing the central node vector and the neighbor node vector to obtain a spliced target vector;
performing aggregation processing on the target vector through an aggregation function to obtain an aggregation vector;
and filtering the aggregation vector to obtain a dense vector of the graph structure data.
Optionally, the training of the original graph neural network model through the dense vector and the classification label to obtain a trained target graph neural network model includes:
and training an original graph neural network model by using the dense vector and the classification label to obtain the target graph neural network model, wherein the dense vector is input into the original graph neural network model, and the label result corresponding to the dense vector output by the trained target graph neural network model and the classification result actually corresponding to the dense vector meet the following loss function:
L = -\sum_{v \in V} \left[ y_v \log \sigma(z_v) + (1 - y_v) \log\left(1 - \sigma(z_v)\right) \right] + \lambda \lVert \theta \rVert^2
wherein y_v is the actual label result of node v, σ(z_v) is the probability predicted by the target graph neural network, σ is the activation function, z_v is the dense vector of node v, and λ‖θ‖² is a regularization constraint.
Optionally, after the original graph neural network model is trained through the dense vector and the classification label to obtain a trained target graph neural network model, the method further includes:
acquiring target service data;
inputting the target service data into the pre-trained target graph neural network model to obtain the probabilities of different classification results corresponding to the target service data output by the target graph neural network model, wherein the classification result whose probability is greater than a preset threshold is the abnormal behavior identification result corresponding to the target service data.
According to another embodiment of the present invention, there is also provided an abnormal behavior recognition model determination apparatus including:
the extraction module is used for converting the acquired service data with the preset quantity into graph structure data and extracting the classification labels corresponding to the service data with the preset quantity;
the aggregation module is used for carrying out aggregation processing on the graph structure data to obtain dense vectors of the graph structure data;
and the training module is used for training the original graph neural network model through the dense vectors and the classification labels to obtain a trained target graph neural network model, wherein the target graph neural network model is used for identifying abnormal behaviors.
Optionally, the extraction module is further configured to:
respectively assign node IDs to the predetermined amount of service data, wherein each piece of service data corresponds to one node;
extract the relations among the predetermined number of nodes as an adjacency list of the graph structure data, and define a dictionary-type variable in which each key is a node ID and the corresponding value holds all neighbor nodes of that node;
extract a feature matrix (N, M) of the predetermined number of nodes, wherein M is the size of the constructed feature dimension and N is the number of nodes;
and extract labels Y of the predetermined number of nodes to form a label matrix of the graph structure data.
Optionally, the aggregation module comprises:
an acquisition submodule, configured to determine the similarity between the central node and the adjacent nodes according to the adjacency list, and acquire the n nodes whose similarity is greater than a preset threshold;
and an aggregation submodule, configured to aggregate the central node with those n neighbor nodes according to an aggregation function to form dense vectors of the graph structure data.
Optionally, the acquisition submodule is further configured to:
train a single-layer perceptron network according to the adjacency list, the label matrix of the graph structure data, and the feature vectors of the predetermined number of nodes;
calculate predicted values of the predetermined number of nodes through the single-layer perceptron network, and determine the L1 distance between the predicted values of every two nodes;
determine the similarity between the central node and the neighboring nodes using the L1 distance;
sort the similarities in descending order;
and acquire the n nodes whose similarity is greater than the preset threshold.
Optionally, the aggregation submodule is further configured to:
determining a central node vector and a neighbor node vector of the n nodes;
splicing the central node vector and the neighbor node vector to obtain a spliced target vector;
performing aggregation processing on the target vector through an aggregation function to obtain an aggregation vector;
and filtering the aggregation vector to obtain a dense vector of the graph structure data.
Optionally, the training module is further configured to:
and training an original graph neural network model by using the dense vector and the classification label to obtain the target graph neural network model, wherein the dense vector is input into the original graph neural network model, and the label result corresponding to the dense vector output by the trained target graph neural network model and the classification result actually corresponding to the dense vector meet the following loss function:
L = -\sum_{v \in V} \left[ y_v \log \sigma(z_v) + (1 - y_v) \log\left(1 - \sigma(z_v)\right) \right] + \lambda \lVert \theta \rVert^2
wherein y_v is the actual label result of node v, σ(z_v) is the probability predicted by the target graph neural network, σ is the activation function, z_v is the dense vector of node v, and λ‖θ‖² is a regularization constraint.
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring target service data;
and the input module is used for inputting the target business data into the pre-trained target graph neural network model to obtain the probabilities of different classification results corresponding to the target business data output by the target graph neural network model, wherein the classification result whose probability is greater than a preset threshold is the abnormal behavior identification result corresponding to the target business data.
According to a further embodiment of the present invention, a computer-readable storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above-described method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, the acquired predetermined amount of service data is converted into graph structure data, and the classification labels corresponding to the service data are extracted; the graph structure data is aggregated to obtain dense vectors of the graph structure data; and an original graph neural network model is trained through the dense vectors and the classification labels to obtain a trained target graph neural network model used for identifying abnormal behaviors. This solves the problems in the related art that the abnormal behavior recognition model performs poorly and that aggregation in the model is insufficient: through the graph neural network model, the various relations and attributes can be well represented as graph data, and, combined with deep learning, the model has a stronger capability to approximate complex functions and learn the essential characteristics of the data, so the model effect is better.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
fig. 1 is a block diagram of a hardware configuration of a mobile terminal of an abnormal behavior recognition model determination method according to an embodiment of the present invention;
FIG. 2 is a flow chart of an abnormal behavior recognition model determination method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a neural network according to an embodiment of the present invention;
FIG. 4 is a first schematic diagram illustrating aggregation in a neural network, in accordance with an embodiment of the present invention;
FIG. 5 is a second schematic diagram illustrating aggregation in the neural network, in accordance with an embodiment of the present invention;
fig. 6 is a block diagram of an abnormal behavior recognition model determination apparatus according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Example 1
The method provided by the first embodiment of the present application may be executed in a mobile terminal, a computer terminal, or a similar computing device. Taking a mobile terminal as an example, fig. 1 is a hardware structure block diagram of the mobile terminal of the abnormal behavior recognition model determination method according to the embodiment of the present invention, as shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1) (the processor 102 may include, but is not limited to, a processing device such as a microprocessor MCU or a programmable logic device FPGA), and a memory 104 for storing data, and optionally, the mobile terminal may further include a transmission device 106 for a communication function and an input/output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only an illustration, and does not limit the structure of the mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
The memory 104 may be used to store a computer program, for example, a software program and a module of application software, such as a computer program corresponding to the abnormal behavior recognition model determination method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the above-mentioned method. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In this embodiment, a method for determining an abnormal behavior recognition model operating in the mobile terminal or the network architecture is provided, and fig. 2 is a flowchart of the method for determining an abnormal behavior recognition model according to the embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, converting the acquired service data with the preset quantity into graph structure data, and extracting classification labels corresponding to the service data with the preset quantity;
step S204, carrying out aggregation processing on the graph structure data to obtain dense vectors of the graph structure data;
and S206, training the original graph neural network model through the dense vectors and the classification labels to obtain a trained target graph neural network model, wherein the target graph neural network model is used for identifying abnormal behaviors.
Through the steps S202 to S206, the problems in the related art that the abnormal behavior recognition model performs poorly and that aggregation in the model is insufficient are solved: through the graph neural network model, the various relations and attributes can be well represented as graph data, and, combined with deep learning, the model has a stronger capability to approximate complex functions and learn the essential characteristics of the data, so the model effect is better.
The embodiment of the invention improves on the business application level by using a graph neural network in place of the original techniques. A graph neural network is a graph-based deep learning method, and a graph structure has advantages when describing data such as social networks and object relations. In particular, in financial payment or credit scenarios the business has many relational attributes that conventional flat data representations struggle to describe, whereas a graph structure has a natural advantage in describing such relations, so the various relations and attributes can be well represented as graph data. At the same time, combined with deep learning, the model has a stronger capability to approximate complex functions and learn the essential characteristics of the data, and the model effect is better than that of a traditional machine learning or rule model. The existing GraphSAGE algorithm has certain deficiencies in aggregation, so the embodiment of the invention adjusts the existing graph neural network at the technical-framework level, mainly as follows:
in the graph algorithm, the neighbor nodes of the central point need to be aggregated, and the aggregation method determines the performance of the model. The invention designs a perception network model for measuring the similarity of neighbor nodes and selectively aggregating the information of the neighbor nodes based on the similarity degree. And the existing method has no module for measuring the similarity of the nodes.
Based on the calculated similarity of the neighbor nodes, an optimized sampling method is designed, namely the neighbor nodes are selected according to an optimized threshold value, and the first N neighbor nodes with high similarity are aggregated together. The existing method is random sampling, so that a good effect is difficult to achieve.
The method takes the consideration that similar nodes are aggregated by neighbor nodes, does not need to use the attention mechanism, and can directly aggregate information of a central point and the neighbor nodes by adopting a splicing method when aggregating the central point.
In an embodiment of the present invention, the step S202 may specifically include:
s2021, respectively assigning node IDs to the predetermined amount of service data, wherein each piece of service data corresponds to one node;
s2022, extracting the relations among the predetermined number of nodes as an adjacency list of the graph structure data, and defining a dictionary-type variable in which each key is a node ID and the corresponding value holds all neighbor nodes of that node;
s2023, extracting a feature matrix (N, M) of the nodes with the preset number, wherein M is the size of the constructed feature dimension, and N is the number of the nodes;
s2024, extracting labels Y of the nodes with the preset number to form a label matrix of the graph structure data.
In an embodiment of the present invention, the step S204 may specifically include:
s2041, determining the similarity between the central node and the adjacent nodes according to the adjacency list, and acquiring the n nodes whose similarity is greater than a preset threshold; specifically, training a single-layer perceptron network according to the adjacency list, the label matrix of the graph structure data, and the feature vectors of the predetermined number of nodes; calculating predicted values of the predetermined number of nodes through the single-layer perceptron network, and determining the L1 distance between the predicted values of every two nodes; determining the similarity between the central node and the neighboring nodes using the L1 distance; sorting the similarities in descending order; and acquiring the n nodes whose similarity is greater than the preset threshold;
s2042, aggregating the central node with those n neighbor nodes according to an aggregation function to form dense vectors of the graph structure data; specifically, determining the central node vector and the neighbor node vectors of the n nodes; splicing the central node vector and the neighbor node vectors to obtain a spliced target vector; aggregating the target vector through an aggregation function to obtain an aggregation vector; and filtering the aggregation vector to obtain a dense vector of the graph structure data, where the filtering can specifically use the ReLU function.
In an embodiment of the present invention, the step S206 may specifically include: and training an original graph neural network model by using the dense vector and the classification label to obtain the target graph neural network model, wherein the dense vector is input into the original graph neural network model, and the label result corresponding to the dense vector output by the trained target graph neural network model and the classification result actually corresponding to the dense vector meet the following loss function:
L = -\sum_{v \in V} \left[ y_v \log \sigma(z_v) + (1 - y_v) \log\left(1 - \sigma(z_v)\right) \right] + \lambda \lVert \theta \rVert^2
wherein y_v is the actual label result of node v, σ(z_v) is the probability predicted by the target graph neural network, σ is the activation function, z_v is the dense vector of node v, and λ‖θ‖² is a regularization constraint.
In an optional embodiment, after the step S206, the method further includes: acquiring target service data; and inputting the target service data into the pre-trained target graph neural network model to obtain the probabilities of different classification results corresponding to the target service data output by the target graph neural network model, wherein the classification result whose probability is greater than a preset threshold is the abnormal behavior identification result corresponding to the target service data.
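As an illustration of this inference step, the following is a minimal PyTorch sketch; the function name, the 0.5 default threshold, and the assumption that the trained target model outputs two-class scores are ours rather than fixed by the embodiment:

import torch

def identify_abnormal(model, target_features, prob_threshold=0.5):
    # Run the trained target graph neural network model on new business
    # data; classification results whose probability exceeds the preset
    # threshold are taken as the abnormal behavior identification result.
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(target_features), dim=-1)
    return probs[:, 1] > prob_threshold  # True marks abnormal behavior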
The embodiment of the invention is mainly based on a graph neural network algorithm, combines the actual situation of a service, optimizes and improves a key module in the algorithm, and then applies the optimized and improved key module to a specific fraud scene. Data such as a social relationship network, payment transaction, equipment hardware relationship and the like are represented by a graph structure, meanwhile, a node aggregation mode and a sampling method in the existing graph neural network algorithm are optimized and improved, the effect of a model is improved, in addition, methods such as an attention mechanism are avoided, and the calculation cost is saved. The general algorithm framework will be introduced first, and then the technical details of the main innovation will be described in detail.
Fig. 3 is a schematic diagram of a graph neural network according to an embodiment of the present invention. As shown in fig. 3, a Graph Neural Network is a deep learning method based on a graph structure and mainly consists of two parts: the graph structure and the neural network. Here the graph neural network mainly solves the binary-classification fraud problem; it is trained in a supervised manner from the input graph data features and labels, and the pipeline mainly comprises: data preprocessing, aggregation sampling, the graph neural network structure, and loss optimization.
The main function of the data preprocessing is to convert the service data into graph-structured data, representing the corresponding network relations through the vertices, connection relations, and adjacency list defined for the specific scene. Aggregation sampling covers the aggregation function and the node aggregation method of the graph neural network, i.e. how nodes are converted into dense vectors and put into the model for training. The neural network structure defines the number of network layers, the network parameters, and the input and output target value types. The loss optimization defines the specific loss calculation function, the optimization algorithm, the number of iterations, and so on.
Business data usually contains a variety of complex information, where the corresponding relationships need to be extracted according to the target and converted into graph structured data, because graphs provide a data structure that better describes the relationships of real-world objects.
A graph represents entities and their relationships and is denoted as G = (V, E). A graph consists of two sets: a set of nodes V and a set of edges E. In the edge set E, an edge (u, v) connects the pair of nodes u and v, indicating that a relationship exists between them. The relationships between nodes are represented by an adjacency list: each node entity contains a list (array, linked list, set, etc.) of the nodes adjacent to it. For example, node a is connected to nodes b and c, node b is connected to nodes a and g, and so on. The adjacency list is as follows:
a->{b c}
b->{a g}
c->{a e}
……
the embodiment of the invention is realized by a deep learning framework of the pytorech, wherein the total amount of data samples is assumed to be N, each sample is unique, and the data processing mode comprises the following steps: and allocating a unique node ID to each sample, wherein the node ID is a unique primary key, namely N IDs are shared, and the value range of the ID is from 0 to N-1. And extracting the relationship of the nodes as an adjacency list, defining a variable of a dictionary type, storing a key of the dictionary into a node ID, and taking the corresponding value as all the neighbor nodes of the dictionary. And extracting a corresponding characteristic matrix, such as a dummy variable or a one-hot code, from each node, wherein the matrix size is (N, M), and M is the size of the constructed characteristic dimension. And finally extracting labels Y, Y to (0, 1) corresponding to each node.
Finally, three matrices describing the graph structure are obtained (the feature matrix, the adjacency list, and the label matrix), so that the whole graph structure is represented as computable data that can be put into the neural network for learning and training.
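As a minimal sketch of this preprocessing step (the (feature, label) record format and the (u, v) edge-pair input are illustrative assumptions; the patent does not fix an input format):

import numpy as np

def build_graph_data(records, edges, num_features):
    # records: list of (feature_vector, label) samples; edges: list of
    # (u, v) node-ID pairs. Both formats are assumed for illustration.
    n = len(records)
    features = np.zeros((n, num_features), dtype=np.float32)  # (N, M)
    labels = np.zeros(n, dtype=np.int64)                      # Y in {0, 1}
    for node_id, (feat, y) in enumerate(records):             # IDs 0..N-1
        features[node_id] = feat
        labels[node_id] = y
    # Adjacency list: dictionary keyed by node ID; the value holds all
    # neighbor nodes of that ID.
    adjacency = {node_id: [] for node_id in range(n)}
    for u, v in edges:
        adjacency[u].append(v)
        adjacency[v].append(u)
    return features, adjacency, labels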
Aggregation sampling is based on GraphSAGE for graph neural networks; on this basis, the algorithm is substantially improved to raise the model effect. The improvements mainly comprise three parts. Fig. 4 is a first schematic diagram of aggregation in the graph neural network according to an embodiment of the present invention; as shown in fig. 4, the procedure is as follows:
based on the similarity between the central node and its neighbor nodes, dissimilar nodes are filtered out by the neighbor-node sampling method and the first n nodes with the highest similarity are selected; the central node and these neighbor nodes are then aggregated according to the aggregation function to form a dense vector. The technical implementation of each step is described in turn below.
2.1 Node similarity measure function
In a fraud scenario, black-market (fraudulent) actors usually disguise themselves with the behavior patterns of normal users, so in the graph data a black-market node is connected to normal nodes. The corresponding central nodes therefore sample the behavior characteristics of the black-market node, which adds noise to the features. If disguised black-market nodes can be filtered out during sampling, the model effect can be improved. A single-layer perceptron network is therefore designed to predict the labels of the nodes, and the L1 distance is used to measure the similarity of two nodes. For two nodes v and v', the L1 distance is calculated as follows:
D(v, v') = \left\lVert \mathrm{MLP}\left(h_v^{(l-1)}\right) - \mathrm{MLP}\left(h_{v'}^{(l-1)}\right) \right\rVert_1
wherein MLP is the single-layer perceptron function and h_v^{(l-1)} is the node vector at layer l-1; that is, the layer-l prediction score is computed from the layer l-1 node vector (the node's vector features are used to save computation), and the similarity of the two nodes is then calculated. The similarity of the nodes is calculated as follows:
S(v,v')=1-D(v,v')
the similarity of each central node and the neighbor nodes thereof can be calculated through a single-layer perception network.
2.2 sampling neighbor nodes;
after the similarity between the central node and the neighbor nodes thereof is calculated, the neighbor nodes are required to be determined to be sampled, and the sampling principle is to select the nodes with high similarity, so that the prediction capability of the graph neural network can be improved. Since one central point may be connected with hundreds of neighbor nodes, if it is difficult to calculate the appropriate number of sampling nodes for each relationship, the existing graph neural network samples by a random sampling method, so that nodes containing noise information are easily sampled, and the model performance is affected.
The embodiment of the invention optimizes this method: using the calculated node similarities, a threshold is set for the central node v, its neighbor nodes are sorted from high to low similarity, and only the first n nodes whose similarity exceeds the threshold are sampled (n is defined by the developer according to the data characteristics). This ensures that the sampled information contains little or no noise while also saving computation time.
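A sketch of this threshold sampling, assuming the SimilarityScorer above and a tensor node_vecs holding the layer l-1 node vectors (both names are illustrative):

def sample_neighbors(center, adjacency, node_vecs, scorer, threshold, n):
    # Sort the center's neighbors by similarity from high to low and keep
    # only the first n whose similarity exceeds the threshold.
    neighbors = adjacency[center]
    if not neighbors:
        return []
    center_vec = node_vecs[center].expand(len(neighbors), -1)
    sims = scorer.similarity(center_vec, node_vecs[neighbors])
    order = torch.argsort(sims, descending=True)
    kept = [neighbors[i] for i in order.tolist() if sims[i] > threshold]
    return kept[:n]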
2.3 node aggregation function.
After the selected neighbor nodes are determined, the information of the central node and the neighbor nodes must be aggregated to form a dense vector. Existing methods usually adopt an attention mechanism for node aggregation; the purpose of that mechanism is to select the node information most important to the model, but it consumes a large amount of computing time. Here a simple function is used instead: when the nodes are aggregated, the information of the central node and the neighbor nodes is spliced, the aggregation function is applied, and finally the ReLU function produces the output. Fig. 5 is a second schematic diagram of aggregation in the graph neural network according to an embodiment of the present invention. As shown in fig. 5, aggregation at layer l is expressed as follows:
h_v^{(l)} = \mathrm{ReLU}\left( \mathrm{AGG}\left( h_v^{(l-1)} \oplus h_{N(v)}^{(l-1)} \right) \right)
wherein h_v^{(l-1)} is the vector of the central node, h_{N(v)}^{(l-1)} denotes the vectors of the sampled neighbor nodes, ⊕ represents the splicing and summing of the two vectors, and AGG is the aggregation function, which can be an averaging method or a weighting method.
for the averaging method, after the neighbor node vector and the central node vector are spliced, the total used node number of the spliced vector is divided by the total used node number.
For the weight method, when the neighbor node vectors are spliced, the neighbor node vectors are multiplied by a weight, and the weight is equal to a threshold used when the neighbor nodes are filtered.
By this method a dense vector is obtained, which is filtered by the ReLU function and then output. The ReLU function is the linear rectification (ramp) function f(x) = max(0, x). It helps gradient descent and backpropagation proceed efficiently, avoids the problems of gradient explosion and vanishing gradients, and also reduces the computational cost of the whole neural network.
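Putting the splicing, the two AGG variants, and the ReLU filter together gives the following sketch; the exact splicing arithmetic is our reading of the description above, not a verbatim implementation:

class ConcatAggregator(nn.Module):
    # Splice the central node vector with the sampled neighbor vectors,
    # reduce them with AGG (mean or threshold-weighted), then filter the
    # result with ReLU to obtain the dense vector.
    def __init__(self, method="mean", weight=None):
        super().__init__()
        self.method = method
        self.weight = weight  # the weight method reuses the filtering threshold

    def forward(self, h_center, h_neighbors):
        if self.method == "mean":
            # Averaging: divide the spliced vectors by the number of nodes used.
            stacked = torch.cat([h_center.unsqueeze(0), h_neighbors], dim=0)
            agg = stacked.sum(dim=0) / stacked.size(0)
        else:
            # Weighting: neighbor vectors are scaled by the threshold weight.
            agg = h_center + self.weight * h_neighbors.sum(dim=0)
        return torch.relu(agg)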
The neural network structure defines the number of network layers, the network parameters, and the input and output target value types. The embodiment of the invention uses a two-layer neural network during training, and the framework supports more network layers. The inputs to the network layer include:
input dimension: the size of the input, defined according to the dimensions and size of the data actually used;
classification number: the number of types of the predicted label Y;
adjacency list: the matrix representing the connection relations of the vertices in the graph, input in dictionary format, where each key is a central node ID and the corresponding value holds the IDs of the nodes connected to it;
aggregation function: an aggregation function defined in the aggregation layer;
aggregation function method: an average method or a weight method, wherein the default value is the average method;
cuda: whether CUDA is used for computation; the framework also supports CPU computation.
The output of the graph neural network in the embodiment of the invention is a binary classification label.
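Wiring the inputs listed above into a minimal two-layer network might look as follows; the class name, hidden size, and per-node loop are illustrative assumptions, not the patent's implementation:

class TwoLayerGraphNet(nn.Module):
    # Two network layers over aggregated node vectors; adjacency is the
    # dict mapping each central node ID to its (pre-sampled) neighbor IDs,
    # and aggregator is e.g. the ConcatAggregator sketched above.
    def __init__(self, input_dim, num_classes, adjacency, aggregator,
                 hidden_dim=64):
        super().__init__()
        self.adjacency = adjacency
        self.agg = aggregator
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # Aggregate every node with its neighbors, then classify the
        # resulting dense vectors into the two classes.
        h = torch.stack([self.agg(x[v], x[self.adjacency[v]])
                         for v in range(x.size(0))])
        return self.fc2(torch.relu(self.fc1(h)))  # two-class logits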
The loss optimization defines a specific loss calculation function, an optimization algorithm, the number of iterations and the like.
The loss function used by the invention is cross entropy, which mainly measures how close the actual output is to the expected output. The loss expression is defined as:
L = -\sum_{v \in V} \left[ y_v \log \sigma(z_v) + (1 - y_v) \log\left(1 - \sigma(z_v)\right) \right] + \lambda \lVert \theta \rVert^2
wherein y_v is the actual label result of the target node, σ(z_v) is the probability predicted by the target graph neural network, σ is the activation function, and z_v is the dense vector of node v. λ‖θ‖² is an added constraint whose main objective is to avoid overfitting; here the L2 norm is used for regularization.
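A direct sketch of this loss; the value of lambda is not given in the patent, so lam below is an assumed default:

import torch.nn.functional as F

def cross_entropy_with_l2(z, y, model, lam=1e-4):
    # Binary cross-entropy between sigma(z_v) and y_v, plus the
    # lambda * ||theta||^2 constraint regularized with the L2 norm.
    bce = F.binary_cross_entropy(torch.sigmoid(z), y.float())
    l2 = sum((p ** 2).sum() for p in model.parameters())
    return bce + lam * l2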
The optimization algorithm uses the Adam optimizer, an algorithm that performs first-order gradient optimization of a stochastic objective function based on adaptive estimates of lower-order moments. The Adam algorithm is easy to implement, computationally efficient, and has low memory requirements. It differs from traditional stochastic gradient descent: stochastic gradient descent keeps a single learning rate (alpha) for updating all weights, and the learning rate does not change during training, whereas Adam designs independent adaptive learning rates for different parameters by computing first- and second-order moment estimates of the gradient. Adam can replace the traditional stochastic gradient descent process and iteratively updates the neural network weights based on the training data. It is very popular in the deep learning field because it achieves good results quickly; empirical results show that it performs excellently in practice and has great advantages over other kinds of stochastic optimization algorithms.
The Adam optimizer in the framework has the following parameters:
params: an iterable of the parameters to be optimized, or dicts defining parameter groups;
lr: the learning rate (default: 1e-3);
betas: coefficients used for computing running averages of the gradient and its square (default: (0.9, 0.999));
eps: a term added to the denominator to improve numerical stability (default: 1e-8);
weight_decay: weight decay (L2 penalty) (default: 0).
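These parameters map directly onto torch.optim.Adam; the epoch count and the way a scalar score z_v is derived from the two-class logits below are assumptions for illustration:

optimizer = torch.optim.Adam(
    model.parameters(),   # params: the parameters to optimize
    lr=1e-3,              # learning rate
    betas=(0.9, 0.999),   # running-average coefficients
    eps=1e-8,             # numerical-stability term
    weight_decay=0.0,     # weight decay (L2 penalty)
)
for epoch in range(200):  # iteration count chosen by the developer
    optimizer.zero_grad()
    logits = model(x)                # (N, 2) class scores
    z = logits[:, 1] - logits[:, 0]  # scalar score per node
    loss = cross_entropy_with_l2(z, y, model)
    loss.backward()
    optimizer.step()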
According to another embodiment of the present invention, there is also provided an abnormal behavior recognition model determining apparatus, and fig. 6 is a block diagram of the abnormal behavior recognition model determining apparatus according to the embodiment of the present invention, as shown in fig. 6, including:
an extracting module 62, configured to convert the obtained predetermined amount of service data into graph structure data, and extract a classification label corresponding to the predetermined amount of service data;
an aggregation module 64, configured to perform aggregation processing on the graph structure data to obtain a dense vector of the graph structure data;
and the training module 66 is configured to train the original graph neural network model through the dense vector and the classification label to obtain a trained target graph neural network model, where the target graph neural network model is used for identifying abnormal behaviors.
Optionally, the extraction module 62 is further configured to:
respectively assign node IDs to the predetermined amount of service data, wherein each piece of service data corresponds to one node;
extract the relations among the predetermined number of nodes as an adjacency list of the graph structure data, and define a dictionary-type variable in which each key is a node ID and the corresponding value holds all neighbor nodes of that node;
extract a feature matrix (N, M) of the predetermined number of nodes, wherein M is the size of the constructed feature dimension and N is the number of nodes;
and extract labels Y of the predetermined number of nodes to form a label matrix of the graph structure data.
Optionally, the aggregation module 64 comprises:
an acquisition submodule, configured to determine the similarity between the central node and the adjacent nodes according to the adjacency list, and acquire the n nodes whose similarity is greater than a preset threshold;
and an aggregation submodule, configured to aggregate the central node with those n neighbor nodes according to an aggregation function to form dense vectors of the graph structure data.
Optionally, the acquisition submodule is further configured to:
train a single-layer perceptron network according to the adjacency list, the label matrix of the graph structure data, and the feature vectors of the predetermined number of nodes;
calculate predicted values of the predetermined number of nodes through the single-layer perceptron network, and determine the L1 distance between the predicted values of every two nodes;
determine the similarity between the central node and the neighboring nodes using the L1 distance;
sort the similarities in descending order;
and acquire the n nodes whose similarity is greater than the preset threshold.
Optionally, the aggregation submodule is further configured to:
determining a central node vector and a neighbor node vector of the n nodes;
splicing the central node vector and the neighbor node vector to obtain a spliced target vector;
performing aggregation processing on the target vector through an aggregation function to obtain an aggregation vector;
and filtering the aggregation vector to obtain a dense vector of the graph structure data.
Optionally, the training module 66 is further configured to:
and training an original graph neural network model by using the dense vector and the classification label to obtain the target graph neural network model, wherein the dense vector is input into the original graph neural network model, and the label result corresponding to the dense vector output by the trained target graph neural network model and the classification result actually corresponding to the dense vector meet the following loss function:
L = -\sum_{v \in V} \left[ y_v \log \sigma(z_v) + (1 - y_v) \log\left(1 - \sigma(z_v)\right) \right] + \lambda \lVert \theta \rVert^2
wherein y_v is the actual label result of node v, σ(z_v) is the probability predicted by the target graph neural network, σ is the activation function, z_v is the dense vector of node v, and λ‖θ‖² is a regularization constraint.
Optionally, the apparatus further comprises:
the acquisition module is used for acquiring target service data;
and the input module is used for inputting the target business data into the pre-trained target graph neural network model to obtain the probabilities of different classification results corresponding to the target business data output by the target graph neural network model, wherein the classification result whose probability is greater than a preset threshold is the abnormal behavior identification result corresponding to the target business data.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Example 3
Embodiments of the present invention also provide a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the steps of:
s1, converting the acquired service data with the preset quantity into graph structure data, and extracting the classification labels corresponding to the service data with the preset quantity;
s2, carrying out aggregation processing on the graph structure data to obtain dense vectors of the graph structure data;
and S3, training the original graph neural network model through the dense vectors and the classification labels to obtain a trained target graph neural network model, wherein the target graph neural network model is used for identifying abnormal behaviors.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Example 4
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Optionally, in this embodiment, the processor may be configured to execute the following steps by a computer program:
s1, converting the acquired service data with the preset quantity into graph structure data, and extracting the classification labels corresponding to the service data with the preset quantity;
s2, carrying out aggregation processing on the graph structure data to obtain dense vectors of the graph structure data;
and S3, training the original graph neural network model through the dense vectors and the classification labels to obtain a trained target graph neural network model, wherein the target graph neural network model is used for identifying abnormal behaviors.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and alternatively, they may be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An abnormal behavior recognition model determination method is characterized by comprising the following steps:
converting the acquired service data with the preset quantity into graph structure data, and extracting classification labels corresponding to the service data with the preset quantity;
performing aggregation processing on the graph structure data to obtain dense vectors of the graph structure data;
and training the original graph neural network model through the dense vectors and the classification labels to obtain a trained target graph neural network model, wherein the target graph neural network model is used for identifying abnormal behaviors.
2. The method of claim 1, wherein converting the obtained predetermined amount of service data into graph structure data, and extracting the classification labels corresponding to the predetermined amount of service data comprises:
respectively assigning node IDs to the service data of the preset quantity, wherein each piece of service data corresponds to one node;
extracting the relations among the preset number of nodes as an adjacency list of the graph structure data, and defining a dictionary-type variable in which each key is a node ID and the corresponding value holds all neighbor nodes of that node;
extracting a feature matrix (N, M) of the nodes with the preset number, wherein M is the size of the constructed feature dimension, and N is the number of the nodes;
and extracting labels Y of the nodes with the preset number to form a label matrix of the graph structure data.
3. The method of claim 2, wherein aggregating the graph structure data to obtain dense vectors of the graph structure data comprises:
determining the similarity between the central node and the adjacent nodes according to the adjacency list, and acquiring n nodes whose similarity is greater than a preset threshold;
and aggregating the central node with those n neighbor nodes according to an aggregation function to form a dense vector of the graph structure data.
4. The method of claim 3, wherein determining the similarity between the central node and the neighboring nodes according to the adjacency list and acquiring n nodes whose similarity is greater than a preset threshold comprises:
training a single-layer perceptron network according to the adjacency list, the label matrix of the graph structure data and the feature vectors of the preset number of nodes;
calculating predicted values of the preset number of nodes through the single-layer perceptron network, and determining the L1 distance between the predicted values of every two nodes;
determining the similarity between the central node and the neighboring nodes using the L1 distance;
sorting the similarities in descending order;
and acquiring the n nodes whose similarity is greater than the preset threshold.
5. The method of claim 3, wherein aggregating the central node with the n neighbor nodes according to an aggregation function and forming a dense vector of the graph structure data comprises:
determining a central node vector and a neighbor node vector of the n nodes;
splicing the central node vector and the neighbor node vector to obtain a spliced target vector;
performing aggregation processing on the target vector through an aggregation function to obtain an aggregation vector;
and filtering the aggregation vector to obtain a dense vector of the graph structure data.
6. The method of claim 1, wherein training the raw graph neural network model through the dense vectors and the classification labels to obtain a trained target graph neural network model comprises:
and training an original graph neural network model by using the dense vector and the classification label to obtain the target graph neural network model, wherein the dense vector is input into the original graph neural network model, and the label result corresponding to the dense vector output by the trained target graph neural network model and the classification result actually corresponding to the dense vector meet the following loss function:
L = -\sum_{v \in V} \left[ y_v \log \sigma(z_v) + (1 - y_v) \log\left(1 - \sigma(z_v)\right) \right] + \lambda \lVert \theta \rVert^2
wherein y_v is the actual label result of node v, σ(z_v) is the probability predicted by the target graph neural network, σ is the activation function, z_v is the dense vector of node v, and λ‖θ‖² is a regularization constraint.
7. The method of any one of claims 1 to 6, wherein after training the raw graph neural network model by the dense vectors and the classification labels to obtain a trained target graph neural network model, the method further comprises:
acquiring target service data;
inputting the target service data into the pre-trained target graph neural network model to obtain the probabilities of different classification results corresponding to the target service data output by the target graph neural network model, wherein the classification result whose probability is greater than a preset threshold is the abnormal behavior identification result corresponding to the target service data.
8. An abnormal behavior recognition model determination apparatus, comprising:
an extraction module, configured to convert an acquired preset quantity of service data into graph structure data and extract the classification labels corresponding to the preset quantity of service data;
an aggregation module, configured to aggregate the graph structure data to obtain dense vectors of the graph structure data;
and a training module, configured to train an original graph neural network model through the dense vectors and the classification labels to obtain a trained target graph neural network model, wherein the target graph neural network model is used for recognizing abnormal behaviors.
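The three modules map naturally onto a small class skeleton; a structural sketch only, with the per-module logic of the method claims elided:

    class AbnormalBehaviorModelBuilder:
        """Mirrors the extraction / aggregation / training modules of claim 8."""

        def extract(self, service_data):
            # Extraction module: service data -> graph structure data + classification labels.
            raise NotImplementedError

        def aggregate(self, graph_data):
            # Aggregation module: graph structure data -> dense vectors.
            raise NotImplementedError

        def train(self, dense_vectors, labels):
            # Training module: dense vectors + labels -> trained target graph neural network model.
            raise NotImplementedError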
9. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to carry out the method of any one of claims 1 to 7 when executed.
10. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 7.
CN202111155817.2A 2021-09-29 2021-09-29 Abnormal behavior recognition model determining method and device Pending CN113935407A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111155817.2A CN113935407A (en) 2021-09-29 2021-09-29 Abnormal behavior recognition model determining method and device

Publications (1)

Publication Number Publication Date
CN113935407A (en) 2022-01-14

Family

ID=79277330

Country Status (1)

Country Link
CN (1) CN113935407A (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105045819A (en) * 2015-06-26 2015-11-11 深圳市腾讯计算机系统有限公司 Model training method and device for training data
US20180174051A1 (en) * 2016-12-19 2018-06-21 Canon Kabushiki Kaisha Method for training an artificial neural network
CN107067020A (en) * 2016-12-30 2017-08-18 腾讯科技(上海)有限公司 Image identification method and device
WO2019154262A1 (en) * 2018-02-07 2019-08-15 腾讯科技(深圳)有限公司 Image classification method, server, user terminal, and storage medium
US20200287926A1 (en) * 2018-03-14 2020-09-10 Alibaba Group Holding Limited Graph structure model training and junk account identification
WO2020108474A1 (en) * 2018-11-30 2020-06-04 广州市百果园信息技术有限公司 Picture classification method, classification identification model generation method and apparatus, device, and medium
CN110188653A (en) * 2019-05-27 2019-08-30 东南大学 Activity recognition method based on local feature polymerization coding and shot and long term memory network
CN111814191A (en) * 2020-08-24 2020-10-23 北京邮电大学 Block chain private data protection method, device and system
CN111967433A (en) * 2020-08-31 2020-11-20 重庆科技学院 Action identification method based on self-supervision learning network
CN112069302A (en) * 2020-09-15 2020-12-11 腾讯科技(深圳)有限公司 Training method of conversation intention recognition model, conversation intention recognition method and device
CN112015775A (en) * 2020-09-27 2020-12-01 北京百度网讯科技有限公司 Label data processing method, device, equipment and storage medium
CN112529210A (en) * 2020-12-09 2021-03-19 广州云从鼎望科技有限公司 Model training method, device and computer readable storage medium
CN112508190A (en) * 2020-12-10 2021-03-16 上海燧原科技有限公司 Method, device and equipment for processing structured sparse parameters and storage medium
CN113392317A (en) * 2021-01-07 2021-09-14 腾讯科技(深圳)有限公司 Label configuration method, device, equipment and storage medium
CN113255514A (en) * 2021-05-24 2021-08-13 西安理工大学 Behavior identification method based on local scene perception graph convolutional network
CN113449679A (en) * 2021-07-14 2021-09-28 湖南长城科技信息有限公司 Method and device for identifying abnormal behaviors of human body
CN113420190A (en) * 2021-08-23 2021-09-21 连连(杭州)信息技术有限公司 Merchant risk identification method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘戎翔; 贺筱媛; 陶九阳: "Auxiliary recognition model for flight targets based on heterogeneous ensemble learning" (基于异态集成学习的飞行目标辅助识别模型), 火力与指挥控制 (Fire Control & Command Control), no. 04, 31 December 2020 (2020-12-31) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116310566A (en) * 2023-03-23 2023-06-23 华谱科仪(北京)科技有限公司 Chromatographic data graph processing method, computer device and computer readable storage medium
CN116310566B (en) * 2023-03-23 2023-09-15 华谱科仪(北京)科技有限公司 Chromatographic data graph processing method, computer device and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN111582538B (en) Community value prediction method and system based on graph neural network
CN107358268A (en) Method, apparatus, electronic equipment and computer-readable recording medium for data clusters packet
CN113837323B (en) Training method and device of satisfaction prediction model, electronic equipment and storage medium
CN105472631A (en) Service data quantity and/or resource data quantity prediction method and prediction system
CN112148986B (en) Top-N service re-recommendation method and system based on crowdsourcing
CN112669143A (en) Risk assessment method, device and equipment based on associated network and storage medium
CN111210072B (en) Prediction model training and user resource limit determining method and device
CN109325530A (en) Compression method based on the depth convolutional neural networks on a small quantity without label data
CN112801231B (en) Decision model training method and device for business object classification
CN113935407A (en) Abnormal behavior recognition model determining method and device
CN113743594A (en) Network flow prediction model establishing method and device, electronic equipment and storage medium
CN113435900A (en) Transaction risk determination method and device and server
CN117094535A (en) Artificial intelligence-based energy supply management method and system
CN111489192A (en) Product share trend prediction method integrating ICT supply chain network topological features
Deng et al. A novel method for elimination of inconsistencies in ordinal classification with monotonicity constraints
CN116737334A (en) Task scheduling and data set label updating method and device and electronic equipment
CN111160614A (en) Training method and device of resource transfer prediction model and computing equipment
CN111209105A (en) Capacity expansion processing method, capacity expansion processing device, capacity expansion processing equipment and readable storage medium
CN114549062A (en) Consumer preference prediction method, system, electronic equipment and storage product
JP6959559B2 (en) Data number determination device, data number determination method and data number determination program
CN111435463B (en) Data processing method, related equipment and system
JP6783707B2 (en) Data number determination device, data number determination method and data number determination program
JP6849542B2 (en) Class determination device, class determination method and class determination program
CN118069331B (en) Intelligent acquisition task scheduling method and device based on digital twinning
US20240119470A1 (en) Systems and methods for generating a forecast of a timeseries

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination