WO2022105108A1 - Network data classification method, apparatus, device, and readable storage medium - Google Patents

Network data classification method, apparatus, device, and readable storage medium Download PDF

Info

Publication number
WO2022105108A1
Authority
WO
WIPO (PCT)
Prior art keywords
graph
network data
matrix
vertex
classification
Prior art date
Application number
PCT/CN2021/089913
Other languages
English (en)
French (fr)
Inventor
胡克坤
董刚
赵雅倩
曹其春
杨宏斌
赵健
Original Assignee
苏州浪潮智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州浪潮智能科技有限公司 filed Critical 苏州浪潮智能科技有限公司
Publication of WO2022105108A1 publication Critical patent/WO2022105108A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/906Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • the present invention relates to the technical field of deep learning, and in particular, to a network data classification method, apparatus, device, and computer-readable storage medium.
  • Network applications continue to generate a large amount of network data.
  • Graphs are used to model and analyze data.
  • Graph vertices represent network data, and connecting edges represent connections between network data.
  • Some network data has labels, and the corresponding graph vertices carry labels; other network data has no labels, and the corresponding graph vertices likewise carry none. Unlabeled network data needs to be classified according to the labeled network data.
  • network data classification cannot be solved by directly applying the classification methods of traditional machine learning (such as support vector machines, k-nearest neighbors, decision trees, and naive Bayes). This is because traditional classification methods usually assume that objects are independent, whereas most network data are related, with dependencies between different network data items.
  • graph convolutional neural networks based on spectral methods are mostly used to classify network data.
  • the graph Fourier transform and its inverse are mainly defined with the help of the Laplacian matrix of the graph, and the graph convolution operation, graph convolution layer, and graph convolutional neural network are defined from these two transforms to realize network data classification.
  • the current graph convolutional neural network based on the spectral method has several shortcomings when performing network data classification tasks: the eigendecomposition of the Laplacian matrix is computationally expensive; the eigenvector matrix of the Laplacian matrix is relatively dense, which makes the graph Fourier transform inefficient; and the graph convolution operation defined through the Fourier transform has poor locality in the vertex domain, so network data classification efficiency is low.
  • the purpose of the present invention is to provide a network data classification method that improves the classification efficiency for network data; another purpose of the present invention is to provide a network data classification apparatus, device, and computer-readable storage medium.
  • the present invention provides the following technical solutions:
  • a network data classification method comprising: performing graph modeling on the network data according to a classification instruction to obtain a target graph; obtaining, according to the target graph, a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph, wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices; constructing a graph vertex label matrix according to the graph vertex set; constructing a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and constructing a graph wavelet neural network according to the two bases; inputting the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network; and updating the graph vertex label matrix by using the graph wavelet neural network to obtain a classification label corresponding to each of the unlabeled graph vertices, so as to classify the network data.
  • in one embodiment, updating the graph vertex label matrix by using the graph wavelet neural network includes: updating the graph vertex label matrix with the graph wavelet neural network according to an adaptive moment estimation algorithm.
  • in one embodiment, constructing the graph wavelet transform basis and the graph inverse wavelet transform basis by using the adjacency matrix includes: calculating the two bases from the adjacency matrix using Chebyshev polynomials.
  • in one embodiment, after classifying the network data, the method further includes: obtaining a network data classification result, and outputting and displaying it.
  • a network data classification device comprising:
  • a graph modeling module, used to model the network data according to the classification instruction to obtain the target graph;
  • a vertex and matrix obtaining module, configured to obtain, according to the target graph, a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph; wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices;
  • a label matrix building module, used to construct a graph vertex label matrix according to the graph vertex set;
  • a network construction module, configured to construct a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and to construct a graph wavelet neural network according to the graph wavelet transform basis and the graph inverse wavelet transform basis;
  • a matrix input module for inputting the vertex feature matrix and the graph vertex label matrix to the graph wavelet neural network;
  • a data classification module, used to update the graph vertex label matrix by using the graph wavelet neural network to obtain a classification label corresponding to each of the unlabeled graph vertices, so as to classify the network data.
  • the data classification module includes a matrix update sub-module, which is specifically a module for updating the graph vertex label matrix with the graph wavelet neural network according to an adaptive moment estimation algorithm.
  • the network construction module includes a basis calculation sub-module, which is specifically a module for calculating the graph wavelet transform basis and the graph inverse wavelet transform basis from the adjacency matrix using Chebyshev polynomials.
  • a classification result obtaining module, configured to obtain a network data classification result after classifying the network data;
  • the result output module is used for outputting and displaying the classification result of the network data.
  • a network data classification device comprising:
  • a memory for storing a computer program;
  • a processor configured to implement the steps of the aforementioned network data classification method when executing the computer program.
  • graph modeling is performed on the network data according to the classification instruction to obtain a target graph; a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph, are obtained according to the target graph, wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices; a graph vertex label matrix is constructed according to the graph vertex set; a graph wavelet transform basis and a graph inverse wavelet transform basis are constructed by using the adjacency matrix, and a graph wavelet neural network is constructed according to the two bases; the vertex feature matrix and the graph vertex label matrix are input into the graph wavelet neural network; and the graph vertex label matrix is updated by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so that the network data can be classified.
  • the present invention also provides a network data classification apparatus, device and computer-readable storage medium corresponding to the above network data classification method, which have the above technical effects and will not be repeated here.
  • Fig. 1 is an implementation flowchart of the network data classification method in an embodiment of the present invention.
  • Fig. 2 is another implementation flowchart of the network data classification method in the embodiment of the present invention.
  • FIG. 3 is a structural block diagram of an apparatus for classifying network data in an embodiment of the present invention.
  • FIG. 4 is a structural block diagram of a network data classification device in an embodiment of the present invention.
  • FIG. 1 is an implementation flowchart of a method for classifying network data in an embodiment of the present invention, and the method may include the following steps:
  • S101 Perform graph modeling on the network data according to the classification instruction to obtain a target graph.
  • a classification instruction is sent to the data classification system.
  • the data classification system receives the classification instruction, and performs graph modeling on the network data according to the classification instruction to obtain the target graph.
  • the network data can be regarded as graph vertices, and the dependencies between network data can be regarded as connecting edges between graph vertices, so as to obtain the target graph through graph modeling.
  • the network data that needs to be classified can be scientific citation data, protein data, graphic image data, etc.
  • S102 Obtain, according to the target graph, a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph.
  • the graph vertex set includes labeled graph vertices and unlabeled graph vertices.
  • the graph vertex set corresponding to the network data, and the adjacency matrix and vertex feature matrix corresponding to the target graph are obtained according to the target graph.
  • Each graph vertex in the graph vertex set has a one-to-one correspondence with the network data, and each element in the adjacency matrix represents the weight of the connecting edge between the two graph vertices.
  • the feature vector is constructed for each network data, and the feature vectors of all network data form a vertex feature matrix.
  • the graph vertex set includes labeled graph vertices and unlabeled graph vertices, labeled graph vertices correspond to labeled network data, and unlabeled graph vertices correspond to unlabeled network data.
  • the graph vertex label matrix is constructed according to the graph vertex set.
  • each row of the graph vertex label matrix corresponds to one graph vertex, and each column represents a label category.
  • S104 Construct a graph wavelet transform basis and a graph inverse wavelet transform basis by using an adjacency matrix, and construct a graph wavelet neural network according to the graph wavelet transform basis and the graph inverse wavelet transform basis.
  • after obtaining the adjacency matrix and the vertex feature matrix, the adjacency matrix is first used to construct the graph wavelet transform basis and the graph inverse wavelet transform basis, and the graph wavelet neural network is then constructed from these two bases.
  • S105 Input the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network.
  • the vertex feature matrix and the graph vertex label matrix are input into the graph wavelet neural network.
  • the graph vertex label matrix is updated by the graph wavelet neural network, and the classification labels corresponding to the unlabeled graph vertices are obtained, so as to realize the classification of network data.
  • the process of updating the graph vertex label matrix may include inputting the feature vector of each graph vertex in the graph vertex set into the graph wavelet neural network for forward propagation, using the constructed graph convolution layers and output layer to compute the output of each layer, and finally obtaining the predicted classification label information of each graph vertex.
  • the prediction error is calculated, the loss function value is back-propagated, and the network parameters of the graph wavelet neural network are optimized according to the adaptive moment estimation method.
  • the graph vertex label matrix is continuously updated by iterative training of the graph wavelet neural network. After the training, the category to which each unlabeled graph vertex belongs is obtained according to the obtained graph vertex label matrix, and then the category to which the unlabeled network data belongs is obtained.
  • graph modeling is performed on the network data according to the classification instruction to obtain a target graph; a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph, are obtained according to the target graph, wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices; a graph vertex label matrix is constructed according to the graph vertex set; a graph wavelet transform basis and a graph inverse wavelet transform basis are constructed by using the adjacency matrix, and a graph wavelet neural network is constructed according to the two bases; the vertex feature matrix and the graph vertex label matrix are input into the graph wavelet neural network; and the graph vertex label matrix is updated by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so that the network data can be classified.
  • the embodiment of the present invention also provides a corresponding improvement solution.
  • the same steps or corresponding steps in the above-mentioned first embodiment can be referred to each other, and corresponding beneficial effects can also be referred to each other, which will not be repeated in the following improved embodiments.
  • Embodiment 2:
  • FIG. 2 is another implementation flowchart of a network data classification method in an embodiment of the present invention, and the method may include the following steps:
  • S201 Perform graph modeling on the network data according to the classification instruction to obtain a target graph.
  • S202 Obtain, according to the target graph, a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph.
  • the graph vertex set includes labeled graph vertices and unlabeled graph vertices.
  • assume the target graph is G = (V, E), where V represents the graph vertex set and E represents the set of connecting edges; the graph vertex set includes the subset V_L composed of a small number of graph vertices with class labels and the subset V_U composed of the majority of graph vertices without class labels, i.e. V = V_L ∪ V_U.
  • A_ij represents the weight of the connecting edge between graph vertex i and graph vertex j.
  • an n*C-dimensional label matrix Y is constructed from the labeled vertex subset V_L, where n represents the number of graph vertices in the graph vertex set and C represents the number of label categories of all graph vertices; when graph vertex i has class label j, the element in the j-th column of its row is 1 and the elements in the other columns are 0, that is, Y_ij = 1 if vertex i carries class label j and Y_ij = 0 otherwise; when graph vertex i is unlabeled, each column element of the corresponding row is set to 0.
  • S204 Calculate the graph wavelet transform basis and the graph inverse wavelet transform basis according to the adjacency matrix using the Chebyshev polynomial, and construct a graph wavelet neural network according to the graph wavelet transform basis and the graph inverse wavelet transform basis.
  • the graph wavelet transform basis ψ_s and the graph inverse wavelet transform basis ψ_s^(-1) (with scaling scale s) are calculated from the adjacency matrix using Chebyshev polynomials.
  • the graph wavelet transform basis can be calculated by the formula ψ_s = U H_s U^T, where U is the eigenvector matrix obtained by eigendecomposition of the Laplacian matrix L = D - A of the target graph; D is a diagonal matrix whose n main-diagonal elements represent the degrees of the n graph vertices, the rest of the elements being zero; H_s = diag(h(sλ_1), ..., h(sλ_n)) is the scaling matrix with scale s, with λ_i (1≤i≤n) the eigenvalues of the Laplacian matrix; ψ_s^(-1) is obtained by replacing h(sλ_i) with h(-sλ_i); since the eigendecomposition is computationally expensive, the Chebyshev polynomials are used to approximate both bases.
  • the vertex feature matrix is X, and the input layer of the graph wavelet neural network consists of d neurons, which are responsible for sequentially reading the d-dimensional attribute values of each graph vertex of the target graph.
  • the graph convolution operation and graph convolution layer are defined according to the graph wavelet transform basis and the graph inverse wavelet transform basis, and the l-th (1≤l≤L) graph convolution layer is defined as X^(l+1)_[:,j] = σ(ψ_s Σ_i F^l_(i,j) ψ_s^(-1) X^l_[:,i]), where σ is the nonlinear activation function, X^l_[:,i] represents the i-th column of the n*I-dimensional input feature matrix of layer l, X^(l+1)_[:,j] represents the j-th column of the n*J-dimensional output feature matrix, and F^l_(i,j) is the convolution kernel diagonal matrix to be learned in the spectral domain.
  • Z_j is an n-dimensional column vector representing the probability that all vertices belong to category j; that is, its k-th element (1≤k≤n) represents the probability that vertex k belongs to category j, and the prediction result vectors of all categories form the n*C-dimensional prediction result matrix Z.
  • S205 Input the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network.
  • the loss function ls of the data classification system based on the graph wavelet neural network is predefined; it is composed of two parts, the supervised learning loss ls_L of the labeled vertices and the unsupervised learning loss ls_U of the unlabeled vertices, that is, ls = ls_L + α·ls_U, where α is a constant used to adjust the proportion of the unsupervised learning loss in the overall loss function, max_j Z_ij represents the maximum probability of graph vertex i belonging to a certain category, and max_j Z_kj represents the maximum probability of graph vertex k belonging to a certain category.
  • when the network error reaches a specified small value or the number of iterations reaches the specified maximum, the training ends; for each unlabeled graph vertex, the category j to which it belongs can then be obtained according to the vertex label matrix Y obtained by the final update.
  • besides the adaptive moment estimation algorithm, the method of training the graph wavelet neural network to update the graph vertex label matrix can also use Stochastic Gradient Descent (SGD) or Momentum Gradient Descent (MGD), which is not limited in this embodiment of the present invention.
  • by classifying the network data, the network data classification result is obtained.
  • the network data classification result is output and displayed, so that the user can clearly see the category to which the unlabeled network data belongs.
  • in a specific application, the papers in a downloaded citation network dataset are classified; the dataset contains 2708 papers divided into seven categories and 5429 citation relationships between papers; a corresponding feature vector x is constructed for each paper, and the feature vectors of all papers form the feature matrix X.
  • the adjacency matrix A is constructed according to the citation relationships between the papers.
  • the goal is to classify each paper accurately: 20 papers per category are randomly selected as labeled data, 1000 papers are used as test data, and the rest are used as unlabeled data; a graph vertex label matrix Y is constructed and updated, and the category to which each unlabeled paper belongs is obtained according to the finally updated graph vertex label matrix.
  • unlike Embodiment 1, which corresponds to the technical solution claimed in independent claim 1, this embodiment also adds the technical solutions claimed in dependent claims 2 to 4.
  • the technical solutions claimed in the dependent claims can be flexibly combined, without affecting the integrity of the solutions, so as to better meet the requirements of different usage scenarios.
  • this embodiment only provides the solution that combines the most sub-solutions with the best effect; because the situations are complicated, it is impossible to enumerate all possible solutions one by one.
  • those skilled in the art should be aware that there can be many examples based on the basic method principles provided in this application combined with the actual situation, and they should all fall within the scope of protection of this application.
  • the present invention further provides a network data classification device, and the network data classification device described below and the network data classification method described above can be referred to each other correspondingly.
  • FIG. 3 is a structural block diagram of an apparatus for classifying network data in an embodiment of the present invention, and the apparatus may include:
  • the graph modeling module 31 is configured to perform graph modeling on the network data according to the classification instruction to obtain a target graph;
  • the vertex and matrix obtaining module 32 is used to obtain, according to the target graph, the graph vertex set corresponding to the network data, and the adjacency matrix and the vertex feature matrix corresponding to the target graph; wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices;
  • the label matrix building module 33 is used to build a graph vertex label matrix according to the graph vertex set;
  • the network building module 34 is used to construct a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and to construct a graph wavelet neural network according to the graph wavelet transform basis and the graph inverse wavelet transform basis;
  • the matrix input module 35 is used to input the vertex feature matrix and the graph vertex label matrix to the graph wavelet neural network;
  • the data classification module 36 is used for updating the graph vertex label matrix by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data.
  • the network data classification device performs graph modeling on the network data according to the classification instruction to obtain a target graph; obtains, according to the target graph, a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph, wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices; constructs a graph vertex label matrix according to the graph vertex set; constructs a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and constructs a graph wavelet neural network according to the two bases; inputs the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network; and updates the graph vertex label matrix by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so that the network data can be classified.
  • the data classification module 36 includes a matrix update sub-module, which is specifically a module for updating the graph vertex label matrix using a graph wavelet neural network according to an adaptive moment estimation algorithm.
  • the network construction module 34 includes a basis calculation submodule, which is specifically a module for calculating the graph wavelet transform basis and the graph inverse wavelet transform basis using Chebyshev polynomials according to the adjacency matrix.
  • the device may further include:
  • the classification result obtaining module is used to obtain the classification result of the network data after classifying the network data
  • the result output module is used to output and display the classification result of the network data.
  • FIG. 4 is a schematic diagram of a network data classification device provided by the present invention, and the device may include:
  • memory 41 for storing computer programs
  • the processor 42 can implement the following steps when executing the computer program stored in the above-mentioned memory 41:
  • perform graph modeling on the network data according to the classification instruction to obtain a target graph; obtain, according to the target graph, the graph vertex set corresponding to the network data, and the adjacency matrix and vertex feature matrix corresponding to the target graph, wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices; construct a graph vertex label matrix according to the graph vertex set; construct a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and construct a graph wavelet neural network according to the two bases; input the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network; and update the graph vertex label matrix by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data.
  • the present invention also provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the following steps can be implemented:
  • perform graph modeling on the network data according to the classification instruction to obtain a target graph; obtain, according to the target graph, the graph vertex set corresponding to the network data, and the adjacency matrix and vertex feature matrix corresponding to the target graph, wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices; construct a graph vertex label matrix according to the graph vertex set; construct a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and construct a graph wavelet neural network according to the two bases; input the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network; and update the graph vertex label matrix by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data.
  • the computer-readable storage medium may include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A network data classification method, apparatus, device, and readable storage medium. The method comprises the following steps: performing graph modeling on network data according to a classification instruction to obtain a target graph; obtaining, according to the target graph, a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph, wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices; constructing a graph vertex label matrix according to the graph vertex set; constructing a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and constructing a graph wavelet neural network according to the graph wavelet transform basis and the graph inverse wavelet transform basis; and updating the graph vertex label matrix by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data. Applying the network data classification method provided by the present invention improves the efficiency of classifying network data.

Description

Network data classification method, apparatus, device, and readable storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on November 18, 2020 under application number 202011293669.6 and entitled "Network data classification method, apparatus, device, and readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the technical field of deep learning, and in particular to a network data classification method, apparatus, device, and computer-readable storage medium.
Background
Network applications continuously generate large amounts of network data. Graphs are used to model and analyze these data: graph vertices represent network data items, and connecting edges represent the relationships between them. Some network data carry labels, and so do their corresponding graph vertices; other network data carry no labels, and neither do their corresponding graph vertices. The unlabeled network data need to be classified according to the labeled network data.
Unlike traditional classification problems, network data classification cannot be solved by directly applying the classification methods of traditional machine learning (such as support vector machines, k-nearest neighbors, decision trees, and naive Bayes). This is because traditional classification methods usually assume that objects are independent, whereas most network data are related, with dependencies between different items.
Most existing techniques classify network data with graph convolutional neural networks based on spectral methods: the graph Fourier transform and its inverse are defined with the help of the graph's Laplacian matrix, and the graph convolution operation, graph convolution layers, and the graph convolutional neural network are defined from these two transforms to realize network data classification. However, current spectral graph convolutional neural networks have several shortcomings when performing network data classification tasks: the eigendecomposition of the Laplacian matrix is computationally expensive; the eigenvector matrix of the Laplacian matrix is relatively dense, which makes the graph Fourier transform inefficient; and the graph convolution operation defined through the Fourier transform has poor locality in the vertex domain, so network data classification is inefficient.
In summary, how to effectively solve the low classification efficiency of existing network data classification methods is a problem that those skilled in the art urgently need to solve.
Summary of the Invention
An object of the present invention is to provide a network data classification method that improves the classification efficiency for network data; another object of the present invention is to provide a network data classification apparatus, device, and computer-readable storage medium.
To solve the above technical problem, the present invention provides the following technical solutions:
A network data classification method, comprising:
performing graph modeling on network data according to a classification instruction to obtain a target graph;
obtaining, according to the target graph, a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph; wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices;
constructing a graph vertex label matrix according to the graph vertex set;
constructing a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and constructing a graph wavelet neural network according to the graph wavelet transform basis and the graph inverse wavelet transform basis;
inputting the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network; and
updating the graph vertex label matrix by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data.
In a specific embodiment of the present invention, updating the graph vertex label matrix by using the graph wavelet neural network includes:
updating the graph vertex label matrix with the graph wavelet neural network according to an adaptive moment estimation algorithm.
In a specific embodiment of the present invention, constructing the graph wavelet transform basis and the graph inverse wavelet transform basis by using the adjacency matrix includes:
calculating the graph wavelet transform basis and the graph inverse wavelet transform basis from the adjacency matrix by using Chebyshev polynomials.
In a specific embodiment of the present invention, after classifying the network data, the method further includes:
obtaining a network data classification result; and
outputting and displaying the network data classification result.
A network data classification apparatus, comprising:
a graph modeling module, configured to perform graph modeling on network data according to a classification instruction to obtain a target graph;
a vertex and matrix obtaining module, configured to obtain, according to the target graph, a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph; wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices;
a label matrix building module, configured to construct a graph vertex label matrix according to the graph vertex set;
a network building module, configured to construct a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and to construct a graph wavelet neural network according to the graph wavelet transform basis and the graph inverse wavelet transform basis;
a matrix input module, configured to input the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network; and
a data classification module, configured to update the graph vertex label matrix by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data.
In a specific embodiment of the present invention, the data classification module includes a matrix update sub-module, which is specifically a module for updating the graph vertex label matrix with the graph wavelet neural network according to an adaptive moment estimation algorithm.
In a specific embodiment of the present invention, the network building module includes a basis calculation sub-module, which is specifically a module for calculating the graph wavelet transform basis and the graph inverse wavelet transform basis from the adjacency matrix by using Chebyshev polynomials.
In a specific embodiment of the present invention, the apparatus further includes:
a classification result obtaining module, configured to obtain a network data classification result after the network data are classified; and
a result output module, configured to output and display the network data classification result.
A network data classification device, comprising:
a memory for storing a computer program; and
a processor configured to implement the steps of the network data classification method described above when executing the computer program.
A computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of the network data classification method described above.
With the network data classification method provided by the present invention, graph modeling is performed on the network data according to a classification instruction to obtain a target graph; a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph, are obtained according to the target graph, the graph vertex set including labeled graph vertices and unlabeled graph vertices; a graph vertex label matrix is constructed according to the graph vertex set; a graph wavelet transform basis and a graph inverse wavelet transform basis are constructed by using the adjacency matrix, and a graph wavelet neural network is constructed according to the two bases; the vertex feature matrix and the graph vertex label matrix are input into the graph wavelet neural network; and the graph vertex label matrix is updated by the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data. By constructing a graph wavelet neural network and using it to classify the unlabeled graph vertices, the classification of the network data is realized, the locality of the graph convolution computation is guaranteed, the computational complexity is reduced, and the classification efficiency for network data is improved.
Correspondingly, the present invention also provides a network data classification apparatus, device, and computer-readable storage medium corresponding to the above network data classification method, which have the above technical effects and are not described in detail here.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is an implementation flowchart of the network data classification method in an embodiment of the present invention;
Fig. 2 is another implementation flowchart of the network data classification method in an embodiment of the present invention;
Fig. 3 is a structural block diagram of a network data classification apparatus in an embodiment of the present invention;
Fig. 4 is a structural block diagram of a network data classification device in an embodiment of the present invention.
Detailed Description of the Embodiments
To enable those skilled in the art to better understand the solutions of the present invention, the present invention is further described in detail below with reference to the drawings and specific embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Embodiment 1:
Referring to Fig. 1, Fig. 1 is an implementation flowchart of the network data classification method in an embodiment of the present invention; the method may include the following steps:
S101: Perform graph modeling on the network data according to a classification instruction to obtain a target graph.
When the network data need to be classified, a classification instruction is sent to the data classification system. The data classification system receives the classification instruction and performs graph modeling on the network data according to it to obtain the target graph. For example, according to the dependencies between network data items, each item can be taken as a graph vertex and each dependency as a connecting edge between graph vertices, so that graph modeling yields the target graph.
The network data to be classified may be scientific citation data, protein data, graphic image data, and the like.
S102: Obtain, according to the target graph, the graph vertex set corresponding to the network data, and the adjacency matrix and vertex feature matrix corresponding to the target graph.
The graph vertex set includes labeled graph vertices and unlabeled graph vertices.
After the target graph is obtained by graph modeling, the graph vertex set corresponding to the network data, and the adjacency matrix and vertex feature matrix corresponding to the target graph, are obtained from it. The graph vertices in the graph vertex set correspond one-to-one with the network data items, and each element of the adjacency matrix represents the weight of the connecting edge between two graph vertices. A feature vector is constructed for each network data item, and the feature vectors of all items form the vertex feature matrix. Labeled graph vertices correspond to labeled network data, and unlabeled graph vertices correspond to unlabeled network data.
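As a minimal sketch of this modeling step (the vertex count and the edge list below are hypothetical, and an undirected weighted target graph is assumed; the invention does not prescribe a concrete data format):

```python
import numpy as np

def build_target_graph(n, edges):
    # Each network data item becomes a graph vertex; each dependency
    # (i, j, w) becomes a weighted connecting edge A[i, j] = w.
    A = np.zeros((n, n))
    for i, j, w in edges:
        A[i, j] = A[j, i] = w   # an undirected target graph is assumed here
    return A

# Hypothetical example: 4 data items with 3 pairwise dependencies.
A = build_target_graph(4, [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 0.5)])
```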
S103: Construct a graph vertex label matrix according to the graph vertex set.
After the graph vertex set corresponding to the network data is obtained, the graph vertex label matrix is constructed from it. Each row of the graph vertex label matrix corresponds to one graph vertex, and each column represents a label category.
S104: Construct a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and construct a graph wavelet neural network according to the two bases.
After the adjacency matrix and the vertex feature matrix are obtained, the adjacency matrix is first used to construct the graph wavelet transform basis and the graph inverse wavelet transform basis, and the graph wavelet neural network is then constructed from them.
S105: Input the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network.
After the vertex feature matrix is obtained and the graph vertex label matrix and the graph wavelet neural network are constructed, the vertex feature matrix and the graph vertex label matrix are input into the graph wavelet neural network.
S106: Update the graph vertex label matrix by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data.
After the graph wavelet neural network is constructed, it is used to update the graph vertex label matrix and obtain the classification label of each unlabeled graph vertex, thereby classifying the network data.
Updating the graph vertex label matrix may proceed as follows: the feature vector of each graph vertex in the graph vertex set is input into the graph wavelet neural network for forward propagation, and the constructed graph convolution layers and output layer are used to compute the output of each layer, finally yielding the predicted classification label information of every graph vertex. The prediction error is computed according to the predefined network loss function of the graph wavelet neural network, the loss value is back-propagated, and the network parameters of the graph wavelet neural network are optimized with the adaptive moment estimation method. The graph vertex label matrix is continuously updated through iterative training of the network. After training, the category of each unlabeled graph vertex is read from the resulting graph vertex label matrix, which in turn gives the category of the corresponding unlabeled network data.
With the network data classification method provided by the present invention, graph modeling is performed on the network data according to a classification instruction to obtain a target graph; a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph, are obtained according to the target graph, the graph vertex set including labeled graph vertices and unlabeled graph vertices; a graph vertex label matrix is constructed according to the graph vertex set; a graph wavelet transform basis and a graph inverse wavelet transform basis are constructed by using the adjacency matrix, and a graph wavelet neural network is constructed according to the two bases; the vertex feature matrix and the graph vertex label matrix are input into the graph wavelet neural network; and the graph vertex label matrix is updated by the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data. By constructing a graph wavelet neural network and using it to classify the unlabeled graph vertices, the classification of the network data is realized, the locality of the graph convolution computation is guaranteed, the computational complexity is reduced, and the classification efficiency for network data is improved.
It should be noted that, based on Embodiment 1 above, the embodiments of the present invention also provide corresponding improved solutions. Steps in the following embodiments that are the same as or correspond to those in Embodiment 1, and the corresponding beneficial effects, can be cross-referenced and are not repeated in the improved embodiments below.
Embodiment 2:
Referring to Fig. 2, Fig. 2 is another implementation flowchart of the network data classification method in an embodiment of the present invention; the method may include the following steps:
S201: Perform graph modeling on the network data according to a classification instruction to obtain a target graph.
S202: Obtain, according to the target graph, the graph vertex set corresponding to the network data, and the adjacency matrix and vertex feature matrix corresponding to the target graph.
The graph vertex set includes labeled graph vertices and unlabeled graph vertices.
Assume the target graph is G = (V, E), where V represents the graph vertex set and E represents the set of connecting edges. The graph vertex set includes the subset V_L composed of a small number of graph vertices that carry class labels and the subset V_U composed of the majority of graph vertices that carry none, i.e. V = V_L ∪ V_U.
Assume the adjacency matrix of the target graph G is A, where A_ij represents the weight of the connecting edge between graph vertex i and graph vertex j.
S203: Construct a graph vertex label matrix according to the graph vertex set.
From the labeled graph vertex subset V_L, an n*C-dimensional label matrix Y is constructed, where n = |V| is the number of graph vertices in the graph vertex set and C is the number of label categories over all graph vertices. Y_ij indicates whether the class label of graph vertex i is j (j = 1, 2, ..., C): when graph vertex i has a class label, the element in its j-th column is 1 and the elements in the other columns are 0, that is, Y_ij = 1 if graph vertex i carries class label j and Y_ij = 0 otherwise.
When graph vertex i is an unlabeled vertex, every element of the corresponding row is set to 0.
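A small illustration of constructing Y in this form (the vertex count, class count, and label assignments are hypothetical):

```python
import numpy as np

def build_label_matrix(n, C, known_labels):
    # Row i of Y is one-hot when vertex i carries class label j,
    # and all-zero when vertex i is unlabeled.
    Y = np.zeros((n, C))
    for i, j in known_labels.items():   # known_labels: vertex index -> class
        Y[i, j] = 1.0
    return Y

# Hypothetical example: 5 vertices, 3 classes, vertices 0 and 2 labeled.
Y = build_label_matrix(5, 3, {0: 1, 2: 0})
```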
S204: Calculate the graph wavelet transform basis and the graph inverse wavelet transform basis from the adjacency matrix by using Chebyshev polynomials, and construct a graph wavelet neural network according to the two bases.
After the adjacency matrix A is obtained, Chebyshev polynomials can be used to calculate the graph wavelet transform basis and the graph inverse wavelet transform basis from it. Let ψ_s denote the graph wavelet transform basis with scaling scale s and ψ_s^(-1) the graph inverse wavelet transform basis with scaling scale s; each of their column vectors corresponds to one graph wavelet function. The graph wavelet transform basis can be computed by the following formula:
ψ_s = U H_s U^T
where U is the eigenvector matrix obtained by eigendecomposition of the Laplacian matrix L (L = D - A) of the target graph G; D is a diagonal matrix whose n main-diagonal elements are the degrees of the n graph vertices, all other elements being zero; and H_s = diag(h(sλ_1), h(sλ_2), ..., h(sλ_n)) is the scaling matrix with scale s, with λ_i (1 ≤ i ≤ n) the eigenvalues obtained by eigendecomposition of the Laplacian matrix of G. ψ_s^(-1) can be obtained by replacing h(sλ_i) in ψ_s with h(-sλ_i).
Since the eigendecomposition of the matrix is computationally expensive, to avoid this cost the Chebyshev polynomials (T_k(x) = 2xT_(k-1)(x) - T_(k-2)(x), with T_0 = 1 and T_1 = x) are used to approximately compute ψ_s and ψ_s^(-1).
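A concrete sketch of this approximation follows. It assumes the common heat-kernel choice h(sλ) = e^(-sλ), which is consistent with the sign-flip relation between ψ_s and ψ_s^(-1) above but is not spelled out here; the truncation order K (≥ 1) and the number of fitting nodes are illustrative choices:

```python
import numpy as np

def chebyshev_wavelet_bases(A, s=1.0, K=4, n_fit=200):
    # Approximate psi_s and psi_s^{-1} with a degree-K Chebyshev expansion of
    # h(s*lam) = exp(-s*lam), so that no eigendecomposition of L is required.
    n = A.shape[0]
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                       # Laplacian L = D - A
    lam_max = 2.0 * deg.max()                  # Gershgorin bound on spec(L)
    Lt = (2.0 / lam_max) * L - np.eye(n)       # spectrum mapped into [-1, 1]
    x = np.cos(np.pi * (np.arange(n_fit) + 0.5) / n_fit)   # Chebyshev nodes
    lam = 0.5 * lam_max * (x + 1.0)            # nodes mapped back to [0, lam_max]
    bases = []
    for sign in (1.0, -1.0):                   # h(s*lam) for psi_s, h(-s*lam) for psi_s^{-1}
        c = np.polynomial.chebyshev.chebfit(x, np.exp(-sign * s * lam), K)
        T_prev, T_curr = np.eye(n), Lt         # T_0(Lt) and T_1(Lt)
        M = c[0] * T_prev + c[1] * T_curr
        for k in range(2, K + 1):              # recurrence T_k = 2*Lt*T_{k-1} - T_{k-2}
            T_prev, T_curr = T_curr, 2.0 * Lt @ T_curr - T_prev
            M = M + c[k] * T_curr
        bases.append(M)
    return bases[0], bases[1]                  # approximate psi_s, psi_s^{-1}
```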
Assume the vertex feature matrix is X, and assume the input layer of the graph wavelet neural network consists of d neurons responsible for sequentially reading the d-dimensional attribute values of each graph vertex of the target graph.
The graph convolution operation and graph convolution layer are defined from the graph wavelet transform basis and the graph inverse wavelet transform basis; the l-th (1 ≤ l ≤ L) graph convolution layer is defined as:
X^(l+1)_[:,j] = σ( ψ_s Σ_(i=1..I) F^l_(i,j) ψ_s^(-1) X^l_[:,i] ), j = 1, ..., J
where σ is the nonlinear activation function, X^l_[:,i] represents the i-th column of the n*I-dimensional input feature matrix of layer l, X^(l+1)_[:,j] represents the j-th column of the n*J-dimensional output feature matrix of layer l, and F^l_(i,j) is the convolution kernel diagonal matrix to be learned in the spectral domain.
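The forward pass of one such layer can be sketched as follows; the choice of tanh as the nonlinearity σ and the storage of each diagonal kernel as a length-n vector are assumptions made for illustration:

```python
import numpy as np

def gwnn_layer(X_in, psi, psi_inv, F, sigma=np.tanh):
    # One graph wavelet convolution layer following the definition above:
    # output column j is sigma(psi @ sum_i F[i, j] * (psi_inv @ X)[:, i]).
    # F has shape (I, J, n): one learned diagonal kernel, stored as a vector,
    # per (input feature, output feature) pair.
    n, I = X_in.shape
    J = F.shape[1]
    X_hat = psi_inv @ X_in                 # transform into the wavelet domain
    X_out = np.zeros((n, J))
    for j in range(J):
        acc = np.zeros(n)
        for i in range(I):
            acc += F[i, j] * X_hat[:, i]   # diagonal kernel acts elementwise
        X_out[:, j] = psi @ acc            # transform back to the vertex domain
    return sigma(X_out)
```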
The classification task layer, i.e. the output layer, is defined with the normalized exponential (softmax) function:
Z_kj = exp(X^(L+1)_kj) / Σ_(c=1..C) exp(X^(L+1)_kc)
where Z_j is an n-dimensional column vector representing the probabilities that all vertices belong to category j; that is, its k-th (1 ≤ k ≤ n) element represents the probability that vertex k belongs to category j, and the prediction result vectors of all categories form the n*C-dimensional prediction result matrix Z.
The graph wavelet neural network is defined from the graph convolution layers and the output layer.
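A sketch of this output layer as a numerically stabilized row-wise softmax over the C categories:

```python
import numpy as np

def output_layer(X_last):
    # Row-wise softmax: Z[k, j] is the predicted probability that
    # vertex k belongs to category j; rows of Z sum to 1.
    e = np.exp(X_last - X_last.max(axis=1, keepdims=True))  # stabilized exp
    return e / e.sum(axis=1, keepdims=True)
```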
S205: Input the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network.
S206: Update the graph vertex label matrix with the graph wavelet neural network according to the adaptive moment estimation algorithm to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data.
The graph vertex label matrix is updated with the graph wavelet neural network according to the adaptive moment estimation algorithm Adam (Adaptive Moment Estimation).
The loss function ls of the data classification system based on the graph wavelet neural network is predefined; it is composed of two parts, the supervised learning loss ls_L of the labeled vertices and the unsupervised learning loss ls_U of the unlabeled vertices, that is:
ls = ls_L + α · ls_U
where α is a constant used to adjust the proportion of the unsupervised learning loss in the overall loss function, max_(1≤j≤C) Z_ij represents the maximum probability of graph vertex i belonging to a certain category, and max_(1≤j≤C) Z_kj represents the maximum probability of graph vertex k belonging to a certain category.
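A hedged sketch of this loss follows. The split ls = ls_L + α·ls_U and the role of the maximum class probabilities are as described above, but the concrete form of ls_U below — a confidence penalty on the maximum class probability of each unlabeled vertex — is an assumed instantiation, not a form fixed by this description:

```python
import numpy as np

def total_loss(Z, Y, labeled, unlabeled, alpha=0.1):
    # ls = ls_L + alpha * ls_U: cross-entropy on labeled vertices plus an
    # assumed confidence term on unlabeled vertices via max class probability.
    eps = 1e-12
    ls_L = -np.sum(Y[labeled] * np.log(Z[labeled] + eps)) / len(labeled)
    ls_U = -np.mean(np.log(Z[unlabeled].max(axis=1) + eps))
    return ls_L + alpha * ls_U
```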
The network parameters of every layer are initialized. According to the definition of the graph convolution layer, combined with each layer's input feature matrix, the output feature matrix of every layer is computed; according to the definition of the output layer, the predicted probability Z_j (1 ≤ j ≤ C) that the graph vertices belong to each category j is computed, and the loss value is computed with the network loss function defined above. For each unlabeled graph vertex v_i ∈ V_U, the category with the largest probability is taken as the vertex's latest category, and the vertex label matrix Y is updated accordingly. The network parameters W_l (1 ≤ l ≤ L) of every layer of the graph wavelet neural network are corrected and updated with the adaptive moment estimation algorithm to optimize the loss value. When the network error reaches a specified small value or the number of iterations reaches the specified maximum, training ends. At that point, for an unlabeled graph vertex v_i ∈ V_U, the category j to which it belongs can be read from the finally updated vertex label matrix Y.
It should be noted that, besides the adaptive moment estimation algorithm, the graph wavelet neural network can also be trained to update the graph vertex label matrix with Stochastic Gradient Descent (SGD) or Momentum Gradient Descent (MGD), which is not limited in the embodiments of the present invention.
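For reference, one adaptive moment estimation (Adam) update step as it would be applied to each layer's parameters W_l; the hyperparameter values are the usual defaults, not values fixed by this description:

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update for a parameter array w with gradient g; m and v are
    # the running first/second moment estimates, and t >= 1 is the step count.
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)      # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)      # bias-corrected second moment
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```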
S207: Obtain the network data classification result.
By classifying the network data, the network data classification result is obtained.
S208: Output and display the network data classification result.
After the network data classification result is obtained, it is output and displayed, so that the user can clearly see the categories to which the unlabeled network data belong.
In a specific example application, the papers in a downloaded citation network dataset are classified. The dataset contains 2708 papers divided into seven categories and 5429 citation relationships between papers. A corresponding feature vector x is constructed for each paper, and the feature vectors of all papers form the feature matrix X. The adjacency matrix A is constructed according to the citation relationships between the papers. The goal is to classify each paper accurately: 20 papers are randomly drawn from each category as labeled data, 1000 papers are used as test data, and the rest serve as unlabeled data. The graph vertex label matrix Y is constructed and updated, and the category of each unlabeled paper is obtained from the finally updated graph vertex label matrix.
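A sketch of the data split described in this example (20 labeled papers per class, 1000 test papers, the rest unlabeled); the random seed and the function name are illustrative:

```python
import numpy as np

def split_citation_dataset(labels, n_classes=7, per_class=20, n_test=1000, seed=0):
    # labels: integer class array of length 2708 for the example above.
    rng = np.random.default_rng(seed)
    labeled = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), size=per_class, replace=False)
        for c in range(n_classes)
    ])                                                  # 20 per class
    rest = np.setdiff1d(np.arange(labels.size), labeled)
    test = rng.choice(rest, size=n_test, replace=False)  # 1000 test papers
    unlabeled = np.setdiff1d(rest, test)                 # remainder unlabeled
    return labeled, test, unlabeled
```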
Unlike Embodiment 1, which corresponds to the technical solution claimed in independent claim 1, this embodiment additionally includes the technical solutions claimed in dependent claims 2 to 4. Of course, according to different actual situations and requirements, the technical solutions claimed in the dependent claims can be flexibly combined without affecting the integrity of the solutions, so as to better meet the requirements of different usage scenarios. This embodiment only presents the option that combines the most sub-solutions with the best effect; since the situations are complicated, all possible options cannot be enumerated one by one. Those skilled in the art should realize that, based on the basic method principles provided in this application combined with actual situations, many examples can exist, and they should all fall within the scope of protection of this application.
Corresponding to the above method embodiments, the present invention also provides a network data classification apparatus; the network data classification apparatus described below and the network data classification method described above can be cross-referenced.
Referring to Fig. 3, Fig. 3 is a structural block diagram of a network data classification apparatus in an embodiment of the present invention; the apparatus may include:
a graph modeling module 31, configured to perform graph modeling on the network data according to a classification instruction to obtain a target graph;
a vertex and matrix obtaining module 32, configured to obtain, according to the target graph, the graph vertex set corresponding to the network data, and the adjacency matrix and vertex feature matrix corresponding to the target graph; wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices;
a label matrix building module 33, configured to construct a graph vertex label matrix according to the graph vertex set;
a network building module 34, configured to construct a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and to construct a graph wavelet neural network according to the two bases;
a matrix input module 35, configured to input the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network; and
a data classification module 36, configured to update the graph vertex label matrix by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data.
The network data classification apparatus provided by the present invention performs graph modeling on the network data according to a classification instruction to obtain a target graph; obtains, according to the target graph, a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph, the graph vertex set including labeled graph vertices and unlabeled graph vertices; constructs a graph vertex label matrix according to the graph vertex set; constructs a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and constructs a graph wavelet neural network according to the two bases; inputs the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network; and updates the graph vertex label matrix by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data. By constructing a graph wavelet neural network and using it to classify the unlabeled graph vertices, the classification of the network data is realized, the locality of the graph convolution computation is guaranteed, the computational complexity is reduced, and the classification efficiency for network data is improved.
In a specific embodiment of the present invention, the data classification module 36 includes a matrix update sub-module, which is specifically a module for updating the graph vertex label matrix with the graph wavelet neural network according to the adaptive moment estimation algorithm.
In a specific embodiment of the present invention, the network building module 34 includes a basis calculation sub-module, which is specifically a module for calculating the graph wavelet transform basis and the graph inverse wavelet transform basis from the adjacency matrix by using Chebyshev polynomials.
In a specific embodiment of the present invention, the apparatus may further include:
a classification result obtaining module, configured to obtain the network data classification result after the network data are classified; and
a result output module, configured to output and display the network data classification result.
Corresponding to the above method embodiments, referring to Fig. 4, Fig. 4 is a schematic diagram of the network data classification device provided by the present invention; the device may include:
a memory 41 for storing a computer program; and
a processor 42 which, when executing the computer program stored in the memory 41, can implement the following steps:
performing graph modeling on the network data according to a classification instruction to obtain a target graph; obtaining, according to the target graph, the graph vertex set corresponding to the network data, and the adjacency matrix and vertex feature matrix corresponding to the target graph, wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices; constructing a graph vertex label matrix according to the graph vertex set; constructing a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and constructing a graph wavelet neural network according to the two bases; inputting the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network; and updating the graph vertex label matrix by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data.
For an introduction to the device provided by the present invention, please refer to the above method embodiments; details are not repeated here.
Corresponding to the above method embodiments, the present invention also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program can implement the following steps:
performing graph modeling on the network data according to a classification instruction to obtain a target graph; obtaining, according to the target graph, the graph vertex set corresponding to the network data, and the adjacency matrix and vertex feature matrix corresponding to the target graph, wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices; constructing a graph vertex label matrix according to the graph vertex set; constructing a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and constructing a graph wavelet neural network according to the two bases; inputting the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network; and updating the graph vertex label matrix by using the graph wavelet neural network to obtain the classification label corresponding to each unlabeled graph vertex, so as to classify the network data.
The computer-readable storage medium may include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
For an introduction to the computer-readable storage medium provided by the present invention, please refer to the above method embodiments; details are not repeated here.
The embodiments in this specification are described in a progressive manner, each embodiment focusing on its differences from the others; for the same or similar parts of the embodiments, cross-reference suffices. Since the apparatus, device, and computer-readable storage medium disclosed in the embodiments correspond to the methods disclosed in the embodiments, their descriptions are relatively brief, and the relevant points can be found in the description of the method part.
Specific examples are used herein to explain the principles and implementations of the present invention; the descriptions of the above embodiments are only meant to help understand the technical solution of the present invention and its core idea. It should be pointed out that those of ordinary skill in the art can make several improvements and modifications to the present invention without departing from its principles, and these improvements and modifications also fall within the scope of protection of the claims of the present invention.

Claims (10)

  1. A network data classification method, characterized by comprising:
    performing graph modeling on network data according to a classification instruction to obtain a target graph;
    obtaining, according to the target graph, a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph; wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices;
    constructing a graph vertex label matrix according to the graph vertex set;
    constructing a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and constructing a graph wavelet neural network according to the graph wavelet transform basis and the graph inverse wavelet transform basis;
    inputting the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network; and
    updating the graph vertex label matrix by using the graph wavelet neural network to obtain a classification label corresponding to each of the unlabeled graph vertices, so as to classify the network data.
  2. The network data classification method according to claim 1, characterized in that updating the graph vertex label matrix by using the graph wavelet neural network comprises:
    updating the graph vertex label matrix with the graph wavelet neural network according to an adaptive moment estimation algorithm.
  3. The network data classification method according to claim 1 or 2, characterized in that constructing the graph wavelet transform basis and the graph inverse wavelet transform basis by using the adjacency matrix comprises:
    calculating the graph wavelet transform basis and the graph inverse wavelet transform basis from the adjacency matrix by using Chebyshev polynomials.
  4. The network data classification method according to claim 3, characterized by further comprising, after classifying the network data:
    obtaining a network data classification result; and
    outputting and displaying the network data classification result.
  5. A network data classification apparatus, characterized by comprising:
    a graph modeling module, configured to perform graph modeling on network data according to a classification instruction to obtain a target graph;
    a vertex and matrix obtaining module, configured to obtain, according to the target graph, a graph vertex set corresponding to the network data, and an adjacency matrix and a vertex feature matrix corresponding to the target graph; wherein the graph vertex set includes labeled graph vertices and unlabeled graph vertices;
    a label matrix building module, configured to construct a graph vertex label matrix according to the graph vertex set;
    a network building module, configured to construct a graph wavelet transform basis and a graph inverse wavelet transform basis by using the adjacency matrix, and to construct a graph wavelet neural network according to the graph wavelet transform basis and the graph inverse wavelet transform basis;
    a matrix input module, configured to input the vertex feature matrix and the graph vertex label matrix into the graph wavelet neural network; and
    a data classification module, configured to update the graph vertex label matrix by using the graph wavelet neural network to obtain a classification label corresponding to each of the unlabeled graph vertices, so as to classify the network data.
  6. The network data classification apparatus according to claim 5, characterized in that the data classification module comprises a matrix update sub-module, the matrix update sub-module being specifically a module for updating the graph vertex label matrix with the graph wavelet neural network according to an adaptive moment estimation algorithm.
  7. The network data classification apparatus according to claim 5 or 6, characterized in that the network building module comprises a basis calculation sub-module, the basis calculation sub-module being specifically a module for calculating the graph wavelet transform basis and the graph inverse wavelet transform basis from the adjacency matrix by using Chebyshev polynomials.
  8. The network data classification apparatus according to claim 7, characterized by further comprising:
    a classification result obtaining module, configured to obtain a network data classification result after the network data are classified; and
    a result output module, configured to output and display the network data classification result.
  9. A network data classification device, characterized by comprising:
    a memory for storing a computer program; and
    a processor configured to implement the steps of the network data classification method according to any one of claims 1 to 4 when executing the computer program.
  10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the steps of the network data classification method according to any one of claims 1 to 4.
PCT/CN2021/089913 2020-11-18 2021-04-26 Network data classification method, apparatus, device, and readable storage medium WO2022105108A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011293669.6A CN112464057A (zh) 2020-11-18 2020-11-18 Network data classification method, apparatus, device, and readable storage medium
CN202011293669.6 2020-11-18

Publications (1)

Publication Number Publication Date
WO2022105108A1 true WO2022105108A1 (zh) 2022-05-27

Family

ID=74836648

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/089913 WO2022105108A1 (zh) 2020-11-18 2021-04-26 Network data classification method, apparatus, device, and readable storage medium

Country Status (2)

Country Link
CN (1) CN112464057A (zh)
WO (1) WO2022105108A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115858725A (zh) * 2022-11-22 2023-03-28 广西壮族自治区通信产业服务有限公司技术服务分公司 Text noise screening method and system based on an unsupervised graph neural network

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112464057A (zh) 2020-11-18 2021-03-09 苏州浪潮智能科技有限公司 Network data classification method, apparatus, device, and readable storage medium
CN113284006A (zh) 2021-05-14 2021-08-20 杭州莱宸科技有限公司 Graph-convolution-based method for dividing a water supply network into independently metered districts
CN113255798A (zh) 2021-06-02 2021-08-13 苏州浪潮智能科技有限公司 Classification model training method, apparatus, device, and medium
CN113657171A (zh) 2021-07-20 2021-11-16 国网上海市电力公司 Topology identification method for low-voltage distribution network areas based on a graph wavelet neural network
CN113705772A (zh) 2021-07-21 2021-11-26 浪潮(北京)电子信息产业有限公司 Model training method, apparatus, device, and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109918542A (zh) * 2019-01-28 2019-06-21 华南理工大学 Convolution classification method and system for relational graph data
CN110929029A (zh) * 2019-11-04 2020-03-27 中国科学院信息工程研究所 Text classification method and system based on a graph convolutional neural network
CN111461258A (zh) * 2020-04-26 2020-07-28 武汉大学 Remote sensing image scene classification method coupling a convolutional neural network and a graph convolutional network
CN111552803A (zh) * 2020-04-08 2020-08-18 西安工程大学 Text classification method based on a graph wavelet network model
CN112464057A (zh) * 2020-11-18 2021-03-09 苏州浪潮智能科技有限公司 Network data classification method, apparatus, device, and readable storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111626119B (zh) * 2020-04-23 2023-09-01 北京百度网讯科技有限公司 Target recognition model training method, apparatus, device, and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109918542A (zh) * 2019-01-28 2019-06-21 华南理工大学 Convolution classification method and system for relational graph data
CN110929029A (zh) * 2019-11-04 2020-03-27 中国科学院信息工程研究所 Text classification method and system based on a graph convolutional neural network
CN111552803A (zh) * 2020-04-08 2020-08-18 西安工程大学 Text classification method based on a graph wavelet network model
CN111461258A (zh) * 2020-04-26 2020-07-28 武汉大学 Remote sensing image scene classification method coupling a convolutional neural network and a graph convolutional network
CN112464057A (zh) * 2020-11-18 2021-03-09 苏州浪潮智能科技有限公司 Network data classification method, apparatus, device, and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BINGBING XU; HUAWEI SHEN; QI CAO; YUNQI QIU; XUEQI CHENG: "Graph Wavelet Neural Network", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 12 April 2019 (2019-04-12), 201 Olin Library Cornell University Ithaca, NY 14853 , XP081170133 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115858725A (zh) * 2022-11-22 2023-03-28 广西壮族自治区通信产业服务有限公司技术服务分公司 Text noise screening method and system based on an unsupervised graph neural network

Also Published As

Publication number Publication date
CN112464057A (zh) 2021-03-09

Similar Documents

Publication Publication Date Title
WO2022105108A1 (zh) Network data classification method, apparatus, device, and readable storage medium
Sun et al. What and how: generalized lifelong spectral clustering via dual memory
WO2023000574A1 (zh) Model training method, apparatus, and device, and readable storage medium
US20230107574A1 (en) Generating trained neural networks with increased robustness against adversarial attacks
JP6574503B2 (ja) Machine learning method and apparatus
US10460230B2 (en) Reducing computations in a neural network
US20190354853A1 (en) System and method for generating explainable latent features of machine learning models
US10074054B2 (en) Systems and methods for Bayesian optimization using non-linear mapping of input
CN114048331A (zh) Knowledge graph recommendation method and system based on an improved KGAT model
WO2016062044A1 (zh) Model parameter training method, apparatus, and system
CN112966114B (zh) Document classification method and apparatus based on a symmetric graph convolutional neural network
WO2022252458A1 (zh) Classification model training method, apparatus, device, and medium
US20230185998A1 (en) System and method for ai-assisted system design
WO2020195940A1 (ja) Neural network model reduction device
CN116188941A (zh) Manifold-regularized broad learning method and system based on relaxed labeling
WO2022247092A1 (en) Methods and systems for congestion prediction in logic synthesis using graph neural networks
CN110717402B (zh) Pedestrian re-identification method based on hierarchically optimized metric learning
CN115392594B (zh) Electric load model training method based on a neural network and feature screening
CN109614581B (zh) Non-negative matrix factorization clustering method based on dual local learning
US11875263B2 (en) Method and apparatus for energy-aware deep neural network compression
Dong et al. Discriminative analysis dictionary learning with adaptively ordinal locality preserving
Graham et al. Applying Neural Networks to a Fractal Inverse Problem
JP2020030702A (ja) Learning device, learning method, and learning program
CN111626332B (zh) Fast semi-supervised classification method based on a graph convolutional extreme learning machine
US20230214425A1 (en) Node Embedding via Hash-Based Projection of Transformed Personalized PageRank

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21893265

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21893265

Country of ref document: EP

Kind code of ref document: A1