CN112069306B - Paper partner recommendation method based on author writing tree and graph neural network - Google Patents

Paper partner recommendation method based on author writing tree and graph neural network

Info

Publication number
CN112069306B
CN112069306B
Authority
CN
China
Prior art keywords
author
node
information
paper
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010710086.2A
Other languages
Chinese (zh)
Other versions
CN112069306A (en)
Inventor
杜一
乔子越
周园春
宁致远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Computer Network Information Center of CAS
Original Assignee
Computer Network Information Center of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Computer Network Information Center of CAS filed Critical Computer Network Information Center of CAS
Priority to CN202010710086.2A priority Critical patent/CN112069306B/en
Publication of CN112069306A publication Critical patent/CN112069306A/en
Application granted granted Critical
Publication of CN112069306B publication Critical patent/CN112069306B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/335 - Filtering based on additional data, e.g. user or group profiles
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 - Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 - Querying
    • G06F16/332 - Query formulation
    • G06F16/3329 - Natural language query formulation or dialogue systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a paper partner recommendation method based on an author writing tree and a graph neural network, which comprises the following steps: 1) for each author in the database, collecting the author's papers and extracting keyword information to construct the author's authoring tree; 2) for each authoring tree, constructing a corresponding information propagation model based on a graph neural network model, propagating the paper information and keyword information on the authoring tree into the author node, and encoding the initial characterization vector corresponding to the author; 3) extracting the set of all author pairs that have collaborated from the database, training the parameters of the information propagation model, and optimizing each initial characterization vector to obtain the final characterization vector of each author; 4) for a paper A whose author needs recommended collaborators, traversing the database to obtain the set N of authors who have not collaborated with the author of paper A; then comparing the cosine similarity between the final characterization vector of the author of paper A and that of each author in the set N, and recommending collaborators to the author of paper A according to the calculation result.

Description

Paper partner recommendation method based on author writing tree and graph neural network
Technical Field
The invention mainly relates to the technical fields of entity disambiguation, heterogeneous network embedding and word vector embedding, and in particular to a paper partner recommendation method based on an author writing tree and a graph neural network.
Background
Modern science is increasingly integrated and interdisciplinary. Scientific research that can be completed by a single person is becoming ever more difficult, and scientific collaboration is especially important: it promotes communication among researchers and enables the integration and efficient use of research resources. Seeking collaborators is therefore one of the important academic activities of researchers, because suitable collaborators help improve the efficiency, innovation and quality of scientific research. For researchers, quickly finding suitable, high-quality collaborators has become an urgent problem. At the same time, the continuing informatization of research management at Chinese universities has accumulated rich research data resources, and how to exploit these existing resources to solve this problem is a topic worth discussing. With the rapid development of information technology, research platforms have become widespread and attract large numbers of researchers to collaborate through virtual communities. On these platforms, researchers can publish their own results, browse the public information of other scholars, track the research progress of colleagues, and look for potential collaboration opportunities. However, as academic data grows, researchers often spend a great deal of time locating potential collaborators of interest among the massive data on digital research platforms, and they are often unsure which scholars with similar interests they should collaborate with. Developing an efficient collaborator recommendation system on top of the research databases of these platforms can therefore effectively promote academic collaboration and knowledge sharing. A collaborator recommendation system usually characterizes each author's research content and direction by analyzing the author's existing scientific output and mining the information in the author's published papers, and then uses the similarity of these characterizations to find scholars whose research direction matches that of the target author but who have not yet collaborated with the target author, and recommends them.
Disclosure of Invention
The invention aims to provide a paper partner recommendation scheme based on an author authoring tree structure and a graph neural network information propagation model. An author's authoring tree is constructed from the papers published by the author and the keyword information of those papers, and an initial characterization vector for each node in the authoring tree is obtained from the title, abstract and keyword text of the papers. An information propagation model based on a graph neural network then encodes and optimizes the final characterization vector of each author node. Finally, the closeness of two authors' research directions is measured by the cosine similarity between their final characterization vectors, and collaborators with high closeness are recommended to the target author.
The invention specifically comprises the following steps:
Step one: in the scientific research database, for each author, collect all papers written by that author together with their keyword information, and construct the author's authoring tree.
Step two: for the authoring tree constructed for each author in step one, construct an information propagation model on the basis of a graph neural network model, propagate the paper and keyword information on the authoring tree into the author node, and encode the final characterization vector of the author.
Step three: extract the set of all author pairs that have collaborated from the scientific research database, train the parameters of the information propagation model constructed in step two with a cross entropy loss function, and optimize the final characterization vector of each author.
Step four: given any author, recommend collaborators to that author according to the final author characterization vectors generated in steps one to three.
The technical scheme of the invention is as follows:
a paper partner recommendation method based on an author writing tree and a graph neural network comprises the following steps:
1) for each author in the database, collecting papers written by the author and extracting keyword information in the collected papers to construct a writing tree of the author;
2) for each authoring tree, constructing an information propagation model of the authoring tree based on a graph neural network model, propagating the paper information and keyword information on the authoring tree into the author node, and encoding an initial characterization vector corresponding to the author;
3) extracting the set of all author pairs that have collaborated from the database, then training the parameters of the information propagation model according to the information propagation model and the cross entropy loss function, and optimizing each initial characterization vector to obtain the final characterization vector of each author;
4) for a paper A whose author needs recommended collaborators, traversing the database to obtain the set N of authors who have not collaborated with the author of paper A; then comparing the cosine similarity between the final characterization vector of the author of paper A and the final characterization vector of each author in the set N, and recommending a paper collaborator to the author of paper A according to the cosine similarity calculation result.
The information propagation model is

p'_i = \mathrm{ReLU}\left( W_1 p_i + \sum_{t_j \in N(p_i)} W_2 t_j \right)

a'_i = \mathrm{ReLU}\left( W_3 a_i + \sum_{p'_j \in N(a_i)} W_4 p'_j \right)

wherein a_i denotes the characterization vector of the author in the authoring tree, p_i denotes the characterization vector of a paper in the authoring tree, and t_i denotes the characterization vector of a keyword in the authoring tree. The propagation proceeds as follows: first, every keyword node connected to the paper node whose characterization vector is p_i passes its own characterization vector to that paper node, and the paper node integrates its own information with the information of all its keywords to generate a new hidden-layer characterization vector p'_i; then, every paper node connected to the author node whose characterization vector is a_i passes its hidden-layer characterization vector p'_i to the author node, and the author node integrates its own information with the information of all its papers to generate the author's characterization vector a'_i. Here N(p_i) denotes the set of characterization vectors of the keyword nodes connected to the paper node whose characterization vector is p_i, W_1 is the propagation matrix used by a paper to propagate information to itself, W_2 is the propagation matrix used by keywords to propagate information to a paper, ReLU() is the activation function, N(a_i) denotes the set of hidden-layer characterization vectors of the paper nodes connected to the author node whose characterization vector is a_i, W_3 is the propagation matrix used by the author to propagate information to itself, and W_4 is the propagation matrix used by papers to propagate information to the author.
Further, a_i, p_i and t_i are all vectors of dimension d_1; W_1 is a matrix of dimension d_2 × d_1, W_2 is a matrix of dimension d_2 × d_1, W_3 is a matrix of dimension d_3 × d_1, and W_4 is a matrix of dimension d_3 × d_2, where d_1, d_2 and d_3 are all set values.
Further, a cross entropy loss function

\mathcal{L} = -\sum_{(a'_i, a'_j) \in P} \left[ \log \sigma\left( a_i'^{\top} a'_j \right) + \sum_{a'_k \in D(a'_i)} \log \sigma\left( -a_i'^{\top} a'_k \right) \right]

is adopted to train the parameters W_1, W_2, W_3 and W_4 of the information propagation model, where σ() is the sigmoid function, (a'_i, a'_j) ∈ P denotes that the final characterization vectors a'_i and a'_j belong to the author pair set P, and D(a'_i) denotes K randomly selected authors who have not collaborated with the author whose final characterization vector is a'_i, K being a set value.
Further, the method for constructing the authoring tree comprises: regarding the papers written by an author and the keywords of those papers as nodes, and constructing an authoring tree with three layers of nodes, wherein the root node in the first layer is the author node of the authoring tree, the second layer contains the paper nodes, each connected to the author node, and the third layer contains the keyword nodes, each keyword node being connected to the paper nodes containing that keyword.
Furthermore, each node on the authoring tree is assigned an initial characterization vector, and the characterization vectors of all nodes have the same dimension. For each keyword node, a Word2Vec model is used to convert the keyword into a semantic characterization vector, which serves as the initial characterization vector of that node. For each paper node, the text of its title and abstract is collected and concatenated into one long text, and a Doc2Vec model is used to convert the long text into a semantic characterization vector, which serves as the initial characterization vector of the paper node. For each author node, the mean of the initial characterization vectors of all its paper nodes is computed and serves as the initial characterization vector of the author node.
A computer-readable storage medium, characterized by storing a computer program comprising instructions for carrying out the steps of the above-mentioned method.
Compared with the prior art, the invention has the following positive effects:
1. By extracting the authoring tree, the unstructured text data describing each author's research content and research direction are condensed into tree-structured data, so that a neural network model characterized by information propagation between nodes can more easily mine the author's characterization information.
2. The invention provides an information propagation model based on a graph neural network that is aware of the structure of the author's authoring tree: it propagates the keyword and paper information on the authoring tree to the author node from bottom to top and trains the parameters with a cross entropy loss function, thereby encoding and optimizing effective author characterization vectors.
3. Compared with conventional collaborator recommendation models based on deep neural network methods, the method trains and recommends quickly, requires little storage space, and is easy to extend.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a diagram of an author tree structure.
Detailed Description
The invention will be further explained with reference to the drawings and the embodiments.
The invention aims to recommend new collaborators for any author in a database, mainly using the papers published by the author and the title, abstract and keyword information of those papers. The processing flow of the method of the invention is shown in FIG. 1.
Step one: in the scientific research database, for each author, collect all papers written by that author together with their keyword information, and construct the author's authoring tree.
The scientific research database contains a certain number of authors and a certain number of papers. For any author, the collection of papers written by that author can be retrieved; for any paper, information such as its author set, title, keyword set and abstract can be obtained.
For each author, the set of papers written by that author is retrieved first, and then the keyword set of each paper in that set. The author, the author's papers and the papers' keywords are then regarded as nodes, and an authoring tree with three layers of nodes is constructed: the root node in the first layer is the author node; the second layer contains the paper nodes written by the author, each connected to the author node; the third layer contains the keyword nodes of all the papers, and each keyword node is connected to the paper nodes that contain that keyword. This yields the author's authoring tree; a schematic is shown in FIG. 2.
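The construction just described can be illustrated with a short sketch. This is a minimal illustration only, assuming a hypothetical database interface `get_papers(author_id)` and `get_keywords(paper_id)`; it is not the patented implementation itself:

```python
import networkx as nx

def build_authoring_tree(author_id, get_papers, get_keywords):
    """Build the three-layer authoring tree of one author.

    get_papers(author_id)  -> iterable of paper ids       (hypothetical accessor)
    get_keywords(paper_id) -> iterable of keyword strings (hypothetical accessor)
    """
    tree = nx.Graph()
    root = ("author", author_id)
    tree.add_node(root)                                  # layer 1: author (root) node
    for paper_id in get_papers(author_id):
        paper = ("paper", paper_id)
        tree.add_node(paper)                             # layer 2: paper node
        tree.add_edge(root, paper)
        for kw in get_keywords(paper_id):
            keyword = ("keyword", kw)
            tree.add_node(keyword)                       # layer 3: keyword node
            tree.add_edge(paper, keyword)
    return tree
```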
Each node on the authoring tree is then assigned an initial characterization vector. For each keyword node, a Word2Vec model is used to convert the keyword into a semantic characterization vector, which serves as the initial characterization vector of that node. For each paper node, the text of the title and abstract is collected and concatenated into one long text, and a Doc2Vec model is used to convert the long text into a semantic characterization vector, which serves as the initial characterization vector of the paper node. For each author node, the initial characterization vectors of all its papers are averaged to give the initial characterization vector of the author node. The characterization vectors of all nodes have the same dimension.
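A minimal sketch of this initialization is given below, assuming gensim's Word2Vec and Doc2Vec implementations and hypothetical inputs `keyword_corpus` (lists of keywords per paper) and `paper_tokens` (tokenized title plus abstract per paper id); the preprocessing details are assumptions, not part of the patent:

```python
import numpy as np
from gensim.models import Word2Vec, Doc2Vec
from gensim.models.doc2vec import TaggedDocument

d1 = 128  # assumed common dimension of all initial characterization vectors

# keyword_corpus: list of keyword lists; paper_tokens: dict paper_id -> token list (assumptions)
w2v = Word2Vec(sentences=keyword_corpus, vector_size=d1, min_count=1)
d2v = Doc2Vec(documents=[TaggedDocument(toks, [pid]) for pid, toks in paper_tokens.items()],
              vector_size=d1, min_count=1)

def assign_initial_vectors(tree, paper_tokens):
    """Assign initial characterization vectors to every node of one authoring tree."""
    vec = {}
    for node in tree.nodes:
        kind, key = node
        if kind == "keyword":
            vec[node] = w2v.wv[key]                          # Word2Vec keyword vector
        elif kind == "paper":
            vec[node] = d2v.infer_vector(paper_tokens[key])  # Doc2Vec title+abstract vector
    author = next(n for n in tree.nodes if n[0] == "author")
    vec[author] = np.mean([vec[p] for p in tree.neighbors(author)], axis=0)  # mean of papers
    return vec
```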
Step two: according to the step, an information propagation model is constructed on the basis of a graph neural network model according to the author tree constructed by each author, the paper and keyword information on the author tree are propagated to author nodes, and a final characterization vector of the author is coded.
The graph neural network is a neural network architecture that has become popular in recent years; it combines traditional deep neural networks with the structural information of graphs, thereby generalizing neural networks to graph-structured data. Graph neural networks and their variants may be viewed as special cases of an information propagation model, whose purpose is to convert the neighborhood information around a node and the attribute information of the node itself into an embedded representation of the node.
Our goal is to pass the information of the paper nodes and keyword nodes on each author's authoring tree into the author node. For any authoring tree, a bottom-up propagation order is adopted: the information of the keyword nodes is first passed to the corresponding paper nodes, and the information of the paper nodes is then passed to the author node. We use an information propagation model based on the graph neural network model; the propagation formulas are defined as follows:
p'_i = \mathrm{ReLU}\left( W_1 p_i + \sum_{t_j \in N(p_i)} W_2 t_j \right)    (1)

a'_i = \mathrm{ReLU}\left( W_3 a_i + \sum_{p'_j \in N(a_i)} W_4 p'_j \right)    (2)
where a_i, p_i and t_i denote the characterization vectors of an author, a paper and a keyword in the authoring tree, respectively.
In formula (1), every keyword node connected to the paper node whose characterization vector is p_i passes its own characterization vector to that paper node, and the paper node integrates its own information with the information of all its keywords to generate a new hidden-layer characterization vector p'_i. Here N(p_i) denotes the set of characterization vectors of the keyword nodes connected to the paper node whose characterization vector is p_i, W_1 is the propagation matrix used by the paper to propagate information to itself, W_2 is the propagation matrix used by keywords to propagate information to the paper, and ReLU() is the activation function, which sets any element of the number, vector or matrix in parentheses that is less than 0 to 0.
In formula (2), every paper node connected to the author node whose characterization vector is a_i passes its hidden-layer characterization vector p'_i to the author node, and the author node integrates its own information with the information of all its papers to generate the final characterization vector a'_i of the author. Here N(a_i) denotes the set of hidden-layer characterization vectors of the paper nodes connected to the author node whose characterization vector is a_i, W_3 is the propagation matrix used by the author to propagate information to itself, and W_4 is the propagation matrix used by the papers to propagate information to the author.
In the propagation model, a_i, p_i and t_i are vectors of dimension d_1; W_1 and W_2 are matrices of dimension d_2 × d_1, W_3 is a matrix of dimension d_3 × d_1, and W_4 is a matrix of dimension d_3 × d_2, where d_1, d_2 and d_3 can be set according to the actual situation.
W_1, W_2, W_3 and W_4 are the parameter matrices to be trained; they are first randomly initialized, and their values are then updated as the training process progresses.
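As an illustration, the bottom-up pass defined by formulas (1) and (2) can be sketched as follows for a single authoring tree; this is a minimal NumPy sketch under the tree and vector conventions assumed above, not the patented implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def propagate(tree, vec, W1, W2, W3, W4):
    """One bottom-up pass of formulas (1) and (2) over a single authoring tree."""
    author = next(n for n in tree.nodes if n[0] == "author")
    hidden = {}
    for paper in tree.neighbors(author):                              # layer-2 paper nodes
        kw_sum = sum((W2 @ vec[kw] for kw in tree.neighbors(paper) if kw[0] == "keyword"),
                     np.zeros(W2.shape[0]))
        hidden[paper] = relu(W1 @ vec[paper] + kw_sum)                # formula (1): p'_i
    paper_sum = sum((W4 @ h for h in hidden.values()), np.zeros(W4.shape[0]))
    return relu(W3 @ vec[author] + paper_sum)                         # formula (2): a'_i
```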
Step three: extract the set of all author pairs that have collaborated, train the parameters of the information propagation model constructed in step two with the cross entropy loss function, and optimize the final characterization vectors of the authors.
Every pair of authors in the database is traversed in turn; if at least one common paper appears in the paper sets written by the two authors, i.e. the two authors have co-authored the same paper, the author pair formed by the two authors is added to the collaborated author pair set P.
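A minimal sketch of this extraction, assuming a hypothetical mapping `papers_of` from each author to the set of paper ids that author has written:

```python
from itertools import combinations

def collaborated_pairs(authors, papers_of):
    """Set P of author pairs that share at least one paper."""
    P = set()
    for a, b in combinations(authors, 2):
        if papers_of[a] & papers_of[b]:          # at least one co-authored paper
            P.add((a, b))
    return P
```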
According to the obtained set P of all collaborated author pairs, the parameters W_1, W_2, W_3 and W_4 are trained with a cross entropy loss function, which is defined as follows:

\mathcal{L} = -\sum_{(a'_i, a'_j) \in P} \left[ \log \sigma\left( a_i'^{\top} a'_j \right) + \sum_{a'_k \in D(a'_i)} \log \sigma\left( -a_i'^{\top} a'_k \right) \right]    (3)

where σ() is the sigmoid function, (a'_i, a'_j) ∈ P denotes that the final characterization vectors a'_i and a'_j belong to the set P, i.e. the two authors have co-authored a paper, and D(a'_i) denotes K randomly selected authors who have not collaborated with the author whose final characterization vector is a'_i; K can be set according to the actual situation and is generally set to 3.
Formulas (1) and (2) from step two are substituted into formula (3). The parameters W_1, W_2, W_3 and W_4 are first randomly initialized, and a mini-batch Adam optimizer is then used to minimize the cross entropy loss function L and train the parameters. The optimized final characterization vectors of the authors are then obtained from the information propagation model with the trained parameters.
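The training step can be sketched as follows with PyTorch. The loss below follows the negative-sampling reconstruction of formula (3) given above (an assumption about its exact form), and `forward_all_trees`, `mini_batches` and `sample_negatives` are hypothetical helpers; this is an illustrative sketch, not the patented implementation:

```python
import torch
import torch.nn.functional as F

def batch_loss(pos_pairs, neg_samples, final_vecs):
    """Cross entropy with negative sampling over one mini-batch of collaborated pairs.

    pos_pairs:   list of (i, j) author index pairs from P
    neg_samples: dict i -> list of K author indices that have not collaborated with i
    final_vecs:  tensor [num_authors, d3] of final characterization vectors a'
    """
    loss = final_vecs.new_zeros(())
    for i, j in pos_pairs:
        loss = loss - F.logsigmoid(final_vecs[i] @ final_vecs[j])          # positive pair term
        for k in neg_samples[i]:
            loss = loss - F.logsigmoid(-(final_vecs[i] @ final_vecs[k]))   # negative sample term
    return loss

# Hypothetical training loop (W1..W4 as torch tensors with requires_grad=True):
# optimizer = torch.optim.Adam([W1, W2, W3, W4], lr=1e-3)
# for batch in mini_batches(P):
#     final_vecs = forward_all_trees(W1, W2, W3, W4)      # recompute a' with current parameters
#     loss = batch_loss(batch, sample_negatives(batch, K=3), final_vecs)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```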
Step four: for any author, recommend collaborators to that author according to the final characterization vectors generated in steps one to three.
The final characterization vectors of all authors are obtained according to steps one to three. For an author for whom collaborators are to be recommended, all other authors in the data set are traversed; if no common paper appears in the paper sets written by this author and another author, the two have never collaborated, and the other author is added to the set of authors who have not collaborated with this author. After the traversal, the cosine similarity between the final characterization vector of this author and that of every author in the set is computed, the values are sorted from high to low, and the authors corresponding to the top M values are taken, yielding M recommended collaborators; the closer an author is to the top, the higher the recommendation degree.
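A minimal sketch of this recommendation step, assuming the final characterization vectors and the collaboration sets computed above are available as dictionaries; the helper names are hypothetical:

```python
import numpy as np

def recommend_collaborators(target, final_vecs, collaborated_with, all_authors, M=10):
    """Return the top-M never-collaborated authors ranked by cosine similarity to `target`."""
    a = final_vecs[target]
    candidates = [x for x in all_authors
                  if x != target and x not in collaborated_with[target]]
    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
    ranked = sorted(candidates, key=lambda x: cosine(a, final_vecs[x]), reverse=True)
    return ranked[:M]    # earlier in the list means a higher recommendation degree
```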
The above embodiments are only intended to illustrate the technical solution of the present invention and not to limit it; a person skilled in the art can modify the technical solution of the present invention or substitute equivalents without departing from the spirit and scope of the present invention, and the scope of protection of the present invention should be determined by the claims.

Claims (5)

1. A paper partner recommendation method based on an author writing tree and a graph neural network comprises the following steps:
1) for each author in the database, collecting papers written by the author and extracting keyword information from the collected papers to construct an authoring tree of the author; the method for constructing the authoring tree comprises: regarding the papers written by the author and the keywords of the papers as nodes, and constructing an authoring tree with three layers of nodes, wherein the root node in the first layer is the author node of the authoring tree, the second layer contains the paper nodes, each connected to the author node, and the third layer contains the keyword nodes, each keyword node being connected to the paper nodes containing that keyword;
2) for each authoring tree, constructing an information propagation model of the authoring tree based on a graph neural network model, propagating the paper information and keyword information on the authoring tree into the author node, and encoding an initial characterization vector corresponding to the author; wherein the information propagation model is
p'_i = \mathrm{ReLU}\left( W_1 p_i + \sum_{t_j \in N(p_i)} W_2 t_j \right)

a'_i = \mathrm{ReLU}\left( W_3 a_i + \sum_{p'_j \in N(a_i)} W_4 p'_j \right)
wherein a_i denotes the initial characterization vector of the author in the authoring tree, p_i denotes the initial characterization vector of a paper in the authoring tree, and t_i denotes the initial characterization vector of a keyword in the authoring tree; the propagation method of the information propagation model comprises: first, every keyword node connected to the paper node whose initial characterization vector is p_i passes its own initial characterization vector to that paper node, and the paper node integrates its own information with the information of all its keywords to generate a new hidden-layer characterization vector p'_i; then, every paper node connected to the author node whose initial characterization vector is a_i passes its hidden-layer characterization vector p'_i to the author node, and the author node integrates its own information with the information of all its papers to generate the final characterization vector a'_i of the author; wherein N(p_i) denotes the set of characterization vectors of the keyword nodes connected to the paper node whose initial characterization vector is p_i, W_1 is the propagation matrix used by a paper to propagate information to itself, W_2 is the propagation matrix used by keywords to propagate information to a paper, ReLU() is the activation function, N(a_i) denotes the set of hidden-layer characterization vectors of the paper nodes connected to the author node whose initial characterization vector is a_i, W_3 is the propagation matrix used by the author to propagate information to itself, and W_4 is the propagation matrix used by papers to propagate information to the author;
3) extracting the set of all author pairs that have collaborated from the database, then training the parameters W_1, W_2, W_3 and W_4 of the information propagation model according to the information propagation model and the cross entropy loss function, and optimizing the initial characterization vector of each author to obtain the final characterization vector of each author;
4) for a paper A whose author needs recommended collaborators, traversing the database to obtain the set N of authors who have not collaborated with the author of paper A; and then comparing the cosine similarity between the final characterization vector of the author of paper A and the final characterization vector of each author in the set N, and recommending a paper collaborator to the author of paper A according to the cosine similarity calculation result.
2. The method of claim 1, wherein a_i, p_i and t_i are all vectors of dimension d_1; W_1 is a matrix of dimension d_2 × d_1, W_2 is a matrix of dimension d_2 × d_1, W_3 is a matrix of dimension d_3 × d_1, and W_4 is a matrix of dimension d_3 × d_2, where d_1, d_2 and d_3 are all set values.
3. The method as claimed in claim 1 or 2, characterized in that a cross entropy loss function

\mathcal{L} = -\sum_{(a'_i, a'_j) \in P} \left[ \log \sigma\left( a_i'^{\top} a'_j \right) + \sum_{a'_k \in D(a'_i)} \log \sigma\left( -a_i'^{\top} a'_k \right) \right]

is used to train the parameters W_1, W_2, W_3 and W_4 of the information propagation model; wherein σ() is the sigmoid function, (a'_i, a'_j) ∈ P denotes that the final characterization vectors a'_i and a'_j belong to the author pair set P, and D(a'_i) denotes K randomly selected authors who have not collaborated with the author whose final characterization vector is a'_i, K being a set value.
4. The method of claim 1, wherein each node in the authoring tree is assigned a characterization vector and the characterization vectors of all nodes have the same dimension; for each keyword node, a Word2Vec model is used to convert the keyword into a semantic characterization vector, which serves as the characterization vector of that node; for each paper node, the text of its title and abstract is collected and concatenated into one long text, and a Doc2Vec model is used to convert the long text into a semantic characterization vector, which serves as the characterization vector of the paper node; for each author node, the mean of the characterization vectors of all its paper nodes is computed and serves as the characterization vector of the author node.
5. A computer-readable storage medium, in which a computer program is stored, the computer program comprising instructions for carrying out the steps of the method according to any one of claims 1 to 4.
CN202010710086.2A 2020-07-22 2020-07-22 Paper partner recommendation method based on author writing tree and graph neural network Active CN112069306B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010710086.2A CN112069306B (en) 2020-07-22 2020-07-22 Paper partner recommendation method based on author writing tree and graph neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010710086.2A CN112069306B (en) 2020-07-22 2020-07-22 Paper partner recommendation method based on author writing tree and graph neural network

Publications (2)

Publication Number Publication Date
CN112069306A CN112069306A (en) 2020-12-11
CN112069306B true CN112069306B (en) 2022-09-09

Family

ID=73656417

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010710086.2A Active CN112069306B (en) 2020-07-22 2020-07-22 Paper partner recommendation method based on author writing tree and graph neural network

Country Status (1)

Country Link
CN (1) CN112069306B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112905891B (en) * 2021-03-05 2021-12-10 中国科学院计算机网络信息中心 Scientific research knowledge map talent recommendation method and device based on graph neural network
CN112989199B (en) * 2021-03-30 2023-05-30 武汉大学 Cooperative network link prediction method based on multidimensional proximity attribute network

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833142A (en) * 2017-11-08 2018-03-23 广西师范大学 Academic social networks scientific research cooperative person recommends method
CN109145087A (en) * 2018-07-30 2019-01-04 大连理工大学 A kind of scholar's recommendation and collaborative forecasting method based on expression study and competition theory
CN110737837A (en) * 2019-10-16 2020-01-31 河海大学 Scientific research collaborator recommendation method based on multi-dimensional features under research gate platform

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180018709A1 (en) * 2016-05-31 2018-01-18 Ramot At Tel-Aviv University Ltd. Information spread in social networks through scheduling seeding methods

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107833142A (en) * 2017-11-08 2018-03-23 广西师范大学 Academic social networks scientific research cooperative person recommends method
CN109145087A (en) * 2018-07-30 2019-01-04 大连理工大学 A kind of scholar's recommendation and collaborative forecasting method based on expression study and competition theory
CN110737837A (en) * 2019-10-16 2020-01-31 河海大学 Scientific research collaborator recommendation method based on multi-dimensional features under research gate platform

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Meta-GNN: Metagraph Neural Network for Semi-supervised Learning in Attributed Heterogeneous Information Networks; Aravind Sankar et al.; 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining; 2020-04-23; pages 137-144 *
Research on Community Detection and Information Propagation Influence Based on Structural Analysis; Zhang Zhiwei; China Doctoral Dissertations Full-text Database; 2017-05-15; full text *

Also Published As

Publication number Publication date
CN112069306A (en) 2020-12-11

Similar Documents

Publication Publication Date Title
JP5530425B2 (en) Method, system, and computer program for dynamic generation of user-driven semantic networks and media integration
CN110347843A (en) A kind of Chinese tour field Knowledge Service Platform construction method of knowledge based map
Gu et al. Etree: Effective and efficient event modeling for real-time online social media networks
CN102945237B (en) Based on original user input suggestion and the system and method for refined user input
CN104298785B (en) Searching method for public searching resources
CN112069306B (en) Paper partner recommendation method based on author writing tree and graph neural network
CN104239513A (en) Semantic retrieval method oriented to field data
CN109447261B (en) Network representation learning method based on multi-order proximity similarity
CN102043793A (en) Knowledge-service-oriented recommendation method
Lubis et al. A framework of utilizing big data of social media to find out the habits of users using keyword
Ye et al. A web services classification method based on GCN
Wang et al. A novel blockchain oracle implementation scheme based on application specific knowledge engines
CN108038133A (en) Personalized recommendation method
CN109992784A (en) A kind of heterogeneous network building and distance metric method for merging multi-modal information
Shi et al. Heterogeneous graph representation learning and applications
CN116431825A (en) Construction method of 6G knowledge system for global full-scene on-demand service
Yang et al. Cascaded deep neural ranking models in linkedin people search
CN114722304A (en) Community search method based on theme on heterogeneous information network
Lu et al. Layer information similarity concerned network embedding
Guo Evaluation of the Emotion Model in Electronic Music Based on PSO‐BP
Fang et al. Meta-path based heterogeneous graph embedding for music recommendation
Fan et al. Web Service Applications and Consumer Environments Based on ICT‐Driven Optimization
Zhou et al. Towards a fully distributed p2p web search engine
Liu et al. Multi-modal Graph Attention Network for Video Recommendation
Kilfeather et al. An ontological application for archaeological narratives

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant