CN113111224A - Network embedding learning method based on topology perception text representation - Google Patents

Network embedding learning method based on topology perception text representation

Info

Publication number
CN113111224A
CN113111224A (application number CN202110287783.6A)
Authority
CN
China
Prior art keywords
text
topology
network
representation
aware
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110287783.6A
Other languages
Chinese (zh)
Other versions
CN113111224B (en)
Inventor
苏勤亮 (Su Qinliang)
陈佳星 (Chen Jiaxing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN202110287783.6A
Publication of CN113111224A
Application granted
Publication of CN113111224B
Legal status: Active (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/901 Indexing; Data structures therefor; Storage structures
    • G06F 16/9024 Graphs; Linked lists
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/34 Browsing; Visualisation therefor
    • G06F 16/345 Summarisation for human users
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/906 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a network embedding learning method based on topology-aware text representation. The local topological structure information of each node is used to adaptively generate a topology-aware filter for learning text representations, yielding topology-aware text representations and integrating topological structure information into text mining more effectively. In addition, the method can be combined with existing context-aware network embedding models, giving it a wider range of application; it achieves performance improvements on the link prediction and node classification tasks, demonstrating the effectiveness of the network node representations learned by the method.

Description

Network embedding learning method based on topology perception text representation
Technical Field
The invention relates to the field of network embedding, and in particular to a network embedding learning method based on topology-aware text representation.
Background
In the real world, data with a network structure is very common: for example, social networks based on platforms such as Weibo and WeChat, and paper citation networks. Such networks often contain massive amounts of information, and mining this information reasonably and effectively is very beneficial to downstream applications such as product recommendation in e-commerce systems and related-paper recommendation. In the current era of explosive information growth, these networks usually contain large numbers of nodes and edges and are very large in scale; processing them directly requires a great deal of time and storage space and is computationally inefficient. It is therefore of great significance to study how to efficiently mine the useful information in a network.
Among the many network research methods, network embedding, also called network representation learning or graph embedding, is widely applied and achieves good results. The goal of network embedding is to learn a low-dimensional representation for each node in the network such that this representation retains as much important information about the node as possible. Once node representations are learned, the rich information in the network can be exploited by processing only these low-dimensional representations rather than the original network, which greatly improves computational efficiency.
Traditional network embedding is mainly regarded as a dimensionality reduction process, with principal component analysis and multidimensional scaling as the main methods. Later, other methods, such as locally linear embedding, were proposed to preserve the global structure of nonlinear manifolds. These methods achieve good results on small networks, but the high complexity of the algorithms makes them unsuitable for large networks.
At present, many network embedding algorithms have been proposed. These algorithms mainly use three categories of information: network structure information, node attribute information, and node label information. Network structure information refers to information obtained from the topological structure of the network, such as the direct adjacency relationships of nodes. Node attribute information refers to the features and contents of nodes, such as the gender, age, and number of friends of each user in a social network, or the keywords and full text of each paper in a citation network. Node label information refers to the category each node belongs to when the nodes of the network are divided into several classes according to some standard.
Disclosure of Invention
The invention provides a network embedding learning method based on topology-aware text representation, which enables the representations of network nodes to contain richer information.
In order to achieve the technical effects, the technical scheme of the invention is as follows:
a network embedding learning method based on topology-aware text representation comprises the following steps:
s1: extracting local topological structure information of nodes in a text network by using a graph neural network, and acquiring topological structure representations of all the nodes;
s2: inputting the topological structure representation of the node obtained in the step S1 into a filter generation module to generate a topology-aware filter, and inputting the obtained topology-aware filter and a text into a convolutional neural network module to generate a topology-aware text representation;
s3: obtaining context-aware text representations through an existing network embedding model, and combining the context-aware text representations with the topology-aware text representations obtained in S2 to obtain final text representations of the network nodes; and combining the topological structure representation and the text representation to obtain the final network node representation.
Further, the specific process of step S1 is:
First, a topological structure representation is randomly initialized for each node in the network; the initial structure representation of node u is denoted h_u^(0).
According to the input network adjacency matrix, multi-hop neighbors are randomly sampled from the neighbor nodes of node u, with a fixed number of neighbors per hop; after sampling L hops of neighbors for node u, a local topological graph around node u is obtained.
After the local topological graph of node u is obtained by sampling, a graph neural network is used to learn the structure representation of the node layer by layer from the outside in, as in equations (1) and (2):
h_N(i)^l = Aggregate({h_j^(l-1) : j ∈ N_(l-1)(i)}) (1)
h_i^l = σ(W^l h_i^(l-1) + W′^l h_N(i)^l) (2)
where h_i^l is the layer-l representation of node i; W^l and W′^l are parameters of the graph neural network; N_(l-1)(i) denotes the neighborhood of the i-th node at level l-1; Aggregate(·) aggregates the vector representations of the neighbor nodes into a matrix; σ(·) is the activation function.
The local topological structure representation of node u is obtained after L layers, as in equation (3):
s_u = h_u^(L) (3)
further, the specific process of step S2 includes:
The topological structure representation s_u obtained in step S1 is input into the filter generation module to generate a topology-aware filter F_u, as in equation (4):
F_u = Generator(s_u) (4)
where Generator(·) denotes a deconvolutional neural network.
The input text X_u and the topology-aware filter F_u are then fed together into a convolutional neural network; through a nonlinear transformation, a text representation based on local topological structure information, referred to as the topology-aware text representation t_u^s, is obtained, as in equations (5) and (6):
M_u = Conv(F_u, X_u) + b (5)
t_u^s = Mean(tanh(M_u)) (6)
where Conv(·) is the convolutional neural network and b is the bias term of the convolutional layer; tanh(·) is a nonlinear activation function; Mean(·) denotes average pooling.
Further, the specific process of step S3 is:
The text is input into an existing context-aware network embedding model to obtain the context-aware text representation t_u^c.
The context-aware text representation t_u^c and the topology-aware text representation t_u^s obtained in step S2 are linearly weighted to obtain the final text representation t_u of the network node, as in equation (7):
t_u = γ t_u^c + (1 − γ) t_u^s (7)
where γ ∈ [0,1] is a parameter of the model, learned during training together with the other parameters of the model.
The topological structure representation s_u obtained in step S1 and the text representation t_u obtained in step S3 are concatenated to obtain the final network node representation h_u, as in equation (8):
h_u = [s_u; t_u] (8)
compared with the prior art, the technical scheme of the invention has the beneficial effects that:
Compared with network embedding methods that consider only topological structure information or only text information, the proposed method considers and fuses both, so the network node representations contain richer information. Compared with existing methods that consider structural and text information simultaneously, the method uses the local topological structure information of each node to adaptively generate a topology-aware filter for learning text representations, thereby obtaining topology-aware text representations and integrating topological structure information into text mining more effectively. In addition, the method can be combined with existing context-aware network embedding models, giving it a wider range of application; it achieves performance improvements on the link prediction and node classification tasks, demonstrating the effectiveness of the network node representations learned by the method.
Drawings
Fig. 1 is a schematic diagram of extracting local topology information of a node in step S1;
FIG. 2 is a schematic flow chart of the text representation for learning topology perception in steps S1 and S2.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product;
it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
As shown in fig. 1-2, a network embedding learning method based on topology-aware text characterization includes the following steps:
s1: extracting local topological structure information of nodes in a text network by using a graph neural network, and acquiring topological structure representations of all the nodes;
s2: inputting the topological structure representation of the node obtained in the step S1 into a filter generation module to generate a topology-aware filter, and inputting the obtained topology-aware filter and a text into a convolutional neural network module to generate a topology-aware text representation;
s3: obtaining context-aware text representations through an existing network embedding model, and combining the context-aware text representations with the topology-aware text representations obtained in S2 to obtain final text representations of the network nodes; and combining the topological structure representation and the text representation to obtain the final network node representation.
Further, the specific process of step S1 is:
First, a topological structure representation is randomly initialized for each node in the network; the initial structure representation of node u is denoted h_u^(0).
According to the input network adjacency matrix, multi-hop neighbors are randomly sampled from the neighbor nodes of node u, with a fixed number of neighbors per hop; after sampling L hops of neighbors for node u, a local topological graph around node u is obtained.
After the local topological graph of node u is obtained by sampling, a graph neural network is used to learn the structure representation of the node layer by layer from the outside in, as in equations (1) and (2):
h_N(i)^l = Aggregate({h_j^(l-1) : j ∈ N_(l-1)(i)}) (1)
h_i^l = σ(W^l h_i^(l-1) + W′^l h_N(i)^l) (2)
where h_i^l is the layer-l representation of node i; W^l and W′^l are parameters of the graph neural network; N_(l-1)(i) denotes the neighborhood of the i-th node at level l-1; Aggregate(·) aggregates the vector representations of the neighbor nodes into a matrix; σ(·) is the activation function.
The local topological structure representation of node u is obtained after L layers, as in equation (3):
s_u = h_u^(L) (3)
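The sampling and layer-by-layer aggregation of step S1 can be sketched as follows. This is a minimal NumPy illustration, not the patent's exact architecture: the mean aggregator, the tanh activation standing in for σ, the full-neighborhood aggregation (instead of aggregation restricted to the sampled subgraph), and all function and variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_local_graph(adj, u, num_hops, fanout):
    """Sample a fixed number (fanout) of neighbors per hop around node u,
    giving the local topological graph described in step S1."""
    layers = [[u]]
    for _ in range(num_hops):
        frontier = []
        for v in layers[-1]:
            neigh = np.flatnonzero(adj[v])
            if neigh.size:
                frontier.extend(rng.choice(neigh, size=fanout, replace=True))
        layers.append(frontier)
    return layers

def structure_representation(adj, H0, u, W, W_prime):
    """Equations (1)-(3): aggregate neighbor vectors (here with a mean, one
    assumed choice of Aggregate), transform with W^l and W'^l, apply the
    activation (here tanh), and return s_u = h_u^(L)."""
    H = H0.copy()
    for l in range(len(W)):                      # one layer per hop, l = 1..L
        H_new = H.copy()
        for i in range(adj.shape[0]):
            neigh = np.flatnonzero(adj[i])
            if neigh.size == 0:
                continue                         # isolated node keeps its vector
            h_agg = H[neigh].mean(axis=0)        # eq. (1): Aggregate over N_{l-1}(i)
            H_new[i] = np.tanh(W[l] @ H[i] + W_prime[l] @ h_agg)  # eq. (2)
        H = H_new
    return H[u]                                  # eq. (3): s_u = h_u^(L)
```

In a full implementation the aggregation would run only over the sampled local graph and the weight matrices would be trained; both are simplified here for brevity.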
further, the specific process of step S2 includes:
The topological structure representation s_u obtained in step S1 is input into the filter generation module to generate a topology-aware filter F_u, as in equation (4):
F_u = Generator(s_u) (4)
where Generator(·) denotes a deconvolutional neural network.
The input text X_u and the topology-aware filter F_u are then fed together into a convolutional neural network; through a nonlinear transformation, a text representation based on local topological structure information, referred to as the topology-aware text representation t_u^s, is obtained, as in equations (5) and (6):
M_u = Conv(F_u, X_u) + b (5)
t_u^s = Mean(tanh(M_u)) (6)
where Conv(·) is the convolutional neural network and b is the bias term of the convolutional layer; tanh(·) is a nonlinear activation function; Mean(·) denotes average pooling.
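A minimal sketch of equations (4) to (6): a single linear map stands in for the deconvolutional Generator, and the filter shapes, dimensions, and names are illustrative assumptions rather than the patent's specification.

```python
import numpy as np

def generate_filter(s_u, W_gen, num_filters, width, emb_dim):
    """Eq. (4): F_u = Generator(s_u). The patent uses a deconvolutional
    network; a single linear map is substituted here for illustration."""
    return (W_gen @ s_u).reshape(num_filters, width, emb_dim)

def topology_aware_text_repr(X_u, F_u, b):
    """Eqs. (5)-(6): convolve the text X_u (one word embedding per row)
    with the topology-aware filter bank F_u, add the bias, apply tanh,
    and average-pool over positions."""
    num_filters, width, _ = F_u.shape
    n_pos = X_u.shape[0] - width + 1
    M = np.empty((n_pos, num_filters))
    for t in range(n_pos):                       # valid 1-D convolution
        window = X_u[t:t + width]                # (width, emb_dim) slice of text
        M[t] = np.tensordot(F_u, window, axes=([1, 2], [0, 1])) + b
    return np.tanh(M).mean(axis=0)               # t_u^s = Mean(tanh(M_u))
```

Because the filters are generated from s_u, two nodes with different local topologies extract different features from the same text, which is the point of the topology-aware convolution.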
Further, the specific process of step S3 is:
The text is input into an existing context-aware network embedding model to obtain the context-aware text representation t_u^c.
The context-aware text representation t_u^c and the topology-aware text representation t_u^s obtained in step S2 are linearly weighted to obtain the final text representation t_u of the network node, as in equation (7):
t_u = γ t_u^c + (1 − γ) t_u^s (7)
where γ ∈ [0,1] is a parameter of the model, learned during training together with the other parameters of the model.
The topological structure representation s_u obtained in step S1 and the text representation t_u obtained in step S3 are concatenated to obtain the final network node representation h_u, as in equation (8):
h_u = [s_u; t_u] (8)
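Equations (7) and (8) amount to a convex combination followed by a concatenation. A sketch, with the caveat that γ is a learned parameter in the method but is passed in as a fixed value here:

```python
import numpy as np

def final_node_representation(t_context, t_topo, s_u, gamma):
    """Eq. (7): t_u = gamma * t_u^c + (1 - gamma) * t_u^s,
    then eq. (8): h_u = [s_u; t_u] (concatenation)."""
    t_u = gamma * t_context + (1.0 - gamma) * t_topo
    return np.concatenate([s_u, t_u])
```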
the present embodiment employs two papers referencing network data sets Cora and HepTh, and one social network data set Zhihu. Wherein the Cora data set comprises 2277 papers of 7 research fields in machine learning, and 5214 reference relations exist; the HepTh dataset contains 1038 articles and 1990 reference relations; the Zhihu data set comprises 10000 active users and related descriptions and interesting topics of the users, and also comprises 43896 connection relations.
The method comprises the following specific steps:
Step 1: build a graph neural network, randomly initialize a structure representation for each network node, and input the adjacency matrix of the network data together with the initialized structure representations into the graph neural network to obtain the local topological structure representation of each node.
Step 2: build a deconvolutional neural network and a convolutional neural network; input the local topological structure representation of a node into the deconvolutional network to generate the topology-aware filter, and input the text together with the topology-aware filter into the convolutional network to obtain the topology-aware text representation of the node.
Step 3: input the text into an existing context-aware network embedding model to obtain the context-aware text representation; linearly weight the context-aware and topology-aware text representations to obtain the final text representation; and concatenate the text representation with the structure representation of the node to obtain the final network node representation.
Step 4: for the link prediction task, randomly delete a certain proportion of edges in the network, compute the similarity between the generated node representations to predict whether an edge exists between two nodes, and verify the prediction results; for the node classification task, input the generated node representations into a linear SVM classifier and verify the classification results.
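The link-prediction step above scores candidate edges by the similarity of the learned node representations; a cosine-similarity sketch follows (the function name is an assumption, and the node-classification step, which would feed the same representations to a linear SVM, is omitted):

```python
import numpy as np

def link_scores(H, pairs):
    """Score each candidate edge (i, j) by the cosine similarity of the
    learned node representations h_i and h_j; a higher score predicts
    that an edge between the two nodes is more likely to exist."""
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)  # row-normalize
    return np.array([Hn[i] @ Hn[j] for i, j in pairs])
```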
The same or similar reference numerals correspond to the same or similar parts;
the positional relationships depicted in the drawings are for illustrative purposes only and are not to be construed as limiting the present patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A network embedding learning method based on topology-aware text representation is characterized by comprising the following steps:
s1: extracting local topological structure information of nodes in a text network by using a graph neural network, and acquiring topological structure representations of all the nodes;
s2: inputting the topological structure representation of the node obtained in the step S1 into a filter generation module to generate a topology-aware filter, and inputting the obtained topology-aware filter and a text into a convolutional neural network module to generate a topology-aware text representation;
s3: obtaining context-aware text representations through an existing network embedding model, and combining the context-aware text representations with the topology-aware text representations obtained in S2 to obtain final text representations of the network nodes; and combining the topological structure representation and the text representation to obtain the final network node representation.
2. The method for learning network embedding based on topology-aware text characterization according to claim 1, wherein the specific process of step S1 includes:
First, a topological structure representation is randomly initialized for each node in the network; the initial structure representation of node u is denoted h_u^(0). According to the input network adjacency matrix, multi-hop neighbors are randomly sampled from the neighbor nodes of node u, with a fixed number of neighbors per hop; after sampling L hops of neighbors for node u, a local topological graph around node u is obtained.
3. The method for learning network embedding based on topology-aware text characterization according to claim 1, wherein the specific process of step S1 further includes:
After the local topological graph of node u is obtained by sampling, a graph neural network is used to learn the structure representation of the node layer by layer from the outside in, as in equations (1) and (2):
h_N(i)^l = Aggregate({h_j^(l-1) : j ∈ N_(l-1)(i)}) (1)
h_i^l = σ(W^l h_i^(l-1) + W′^l h_N(i)^l) (2)
where h_i^l is the layer-l representation of node i; W^l and W′^l are parameters of the graph neural network; N_(l-1)(i) denotes the neighborhood of the i-th node at level l-1; Aggregate(·) aggregates the vector representations of the neighbor nodes into a matrix; σ(·) is the activation function.
4. The method for learning network embedding based on topology-aware text characterization according to claim 3, wherein the specific process of step S1 further includes:
The local topological structure representation of node u is obtained after L layers, as in equation (3):
s_u = h_u^(L) (3)
5. The method for learning network embedding based on topology-aware text characterization according to claim 4, wherein the specific process of step S2 includes:
the topological structure representation s_u obtained in step S1 is input into the filter generation module to generate a topology-aware filter F_u, as in equation (4):
F_u = Generator(s_u) (4)
where Generator(·) denotes a deconvolutional neural network.
6. The method for learning network embedding based on topology-aware text characterization according to claim 5, wherein the specific process of step S2 further includes: the input text X_u and the topology-aware filter F_u are fed together into a convolutional neural network, and through a nonlinear transformation a text representation based on local topological structure information, referred to as the topology-aware text representation t_u^s, is obtained, as in equations (5) and (6):
M_u = Conv(F_u, X_u) + b (5)
t_u^s = Mean(tanh(M_u)) (6)
where Conv(·) is the convolutional neural network and b is the bias term of the convolutional layer; tanh(·) is a nonlinear activation function; Mean(·) denotes average pooling.
7. The method for network embedded learning based on topology-aware text characterization according to claim 6, wherein the specific process of the step S3 includes:
the text is input into an existing context-aware network embedding model to obtain the context-aware text representation t_u^c.
8. The method for learning network embedding based on topology-aware text characterization according to claim 7, wherein the specific process of step S3 further includes:
the context-aware text representation t_u^c and the topology-aware text representation t_u^s obtained in step S2 are linearly weighted to obtain the final text representation t_u of the network node, as in equation (7):
t_u = γ t_u^c + (1 − γ) t_u^s (7)
where γ ∈ [0,1] is a parameter of the model.
9. The method for learning network embedding based on topology-aware text characterization according to claim 8, wherein the specific process of step S3 further includes:
the topological structure representation s_u obtained in step S1 and the text representation t_u obtained in step S3 are concatenated to obtain the final network node representation h_u, as in equation (8):
h_u = [s_u; t_u] (8)
10. The method as claimed in claim 9, wherein γ is a learnable parameter, learned in the training process together with the other parameters of the model.
CN202110287783.6A 2021-03-17 2021-03-17 Network embedded learning method based on topology perception text characterization Active CN113111224B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110287783.6A CN113111224B (en) 2021-03-17 2021-03-17 Network embedded learning method based on topology perception text characterization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110287783.6A CN113111224B (en) 2021-03-17 2021-03-17 Network embedded learning method based on topology perception text characterization

Publications (2)

Publication Number Publication Date
CN113111224A (en) 2021-07-13
CN113111224B (en) 2023-08-18

Family

ID=76711626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110287783.6A Active CN113111224B (en) 2021-03-17 2021-03-17 Network embedded learning method based on topology perception text characterization

Country Status (1)

Country Link
CN (1) CN113111224B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897254A (en) * 2015-12-18 2017-06-27 清华大学 A kind of network representation learning method
CN110781271A (en) * 2019-09-02 2020-02-11 国网天津市电力公司电力科学研究院 Semi-supervised network representation learning model based on hierarchical attention mechanism
CN110851620A (en) * 2019-10-29 2020-02-28 天津大学 Knowledge representation method based on combination of text embedding and structure embedding
CN110874392A (en) * 2019-11-20 2020-03-10 中山大学 Text network information fusion embedding method based on deep bidirectional attention mechanism
CN111368074A (en) * 2020-02-24 2020-07-03 西安电子科技大学 Link prediction method based on network structure and text information
CN111461348A (en) * 2020-04-07 2020-07-28 国家计算机网络与信息安全管理中心 Deep network embedded learning method based on graph core
CN111709518A (en) * 2020-06-16 2020-09-25 重庆大学 Method for enhancing network representation learning based on community perception and relationship attention
CN111709474A (en) * 2020-06-16 2020-09-25 重庆大学 Graph embedding link prediction method fusing topological structure and node attributes
CN111913702A (en) * 2020-08-11 2020-11-10 湖北大学 Method for identifying key classes in software system based on graph neural network
US20210026922A1 (en) * 2019-07-22 2021-01-28 International Business Machines Corporation Semantic parsing using encoded structured representation
CN112347268A (en) * 2020-11-06 2021-02-09 华中科技大学 Text-enhanced knowledge graph joint representation learning method and device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JIAXING CHEN et al.: "Attributed Network Embedding with Data Distribution Adaptation", 2018 5th International Conference on Behavioral, Economic, and Socio-Cultural Computing (BESC), pages 250-255 *
RUNXUAN CHEN et al.: "Extractive Adversarial Networks for Network Embedding", 2018 5th International Conference on Behavioral, Economic, and Socio-Cultural Computing (BESC), pages 162-167 *
ZENAN XU: "A Deep Neural Information Fusion Architecture for Textual Network Embeddings", arXiv.org, pages 1-9 *

Also Published As

Publication number Publication date
CN113111224B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN110263280B (en) Multi-view-based dynamic link prediction depth model and application
CN112257066B (en) Malicious behavior identification method and system for weighted heterogeneous graph and storage medium
CN113065974A (en) Link prediction method based on dynamic network representation learning
CN113486190A (en) Multi-mode knowledge representation method integrating entity image information and entity category information
CN112559764A (en) Content recommendation method based on domain knowledge graph
CN111985623A (en) Attribute graph group discovery method based on maximized mutual information and graph neural network
Sun et al. Graph force learning
Elwahsh et al. Modeling neutrosophic data by self-organizing feature map: MANETs data case study
CN110443574B (en) Recommendation method for multi-project convolutional neural network review experts
Li et al. Adaptive subgraph neural network with reinforced critical structure mining
CN111461348A (en) Deep network embedded learning method based on graph core
Paudel et al. Snapsketch: Graph representation approach for intrusion detection in a streaming graph
CN108614932B (en) Edge graph-based linear flow overlapping community discovery method, system and storage medium
Faheem et al. Multilayer cyberattacks identification and classification using machine learning in internet of blockchain (IoBC)-based energy networks
CN114842247B (en) Characteristic accumulation-based graph convolution network semi-supervised node classification method
Guo et al. A novel dynamic incremental rules extraction algorithm based on rough set theory
Sharma et al. Comparative analysis of different algorithms in link prediction on social networks
CN113111224A (en) Network embedding learning method based on topology perception text representation
CN115631057A (en) Social user classification method and system based on graph neural network
CN114596473A (en) Network embedding pre-training method based on graph neural network hierarchical loss function
Ngonmang et al. Toward community dynamic through interactions prediction in complex networks
CN117688425B (en) Multi-task graph classification model construction method and system for Non-IID graph data
Al Etaiwi et al. Learning graph representation: A comparative study
Beddar-Wiesing Using local activity encoding for dynamic graph pooling in stuctural-dynamic graphs: student research abstract
CN117993915A (en) Transaction behavior detection method based on metamulti-graph heterogeneous graph neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant