CN108647258A - Representation learning method based on entity relevance constraint

Representation learning method based on entity relevance constraint

Info

Publication number
CN108647258A
CN108647258A (application CN201810377516.6A); granted as CN108647258B
Authority
CN
China
Prior art keywords
entity
batch
strong
formula
positive sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810377516.6A
Other languages
Chinese (zh)
Other versions
CN108647258B (en)
Inventor
刘琼昕
马敬
龙航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT
Publication of CN108647258A
Application granted
Publication of CN108647258B
Legal status: Active
Anticipated expiration: legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

The present invention relates to a representation learning method based on entity relevance constraints, and belongs to the technical fields of natural language processing and knowledge graphs. The invention annotates the description text of each entity and divides entity relevance into grades, obtaining the strongly relevant entity set and the weakly relevant entity set of each entity. This relevance information is fused, as an auxiliary loss term, into a translation-based representation learning method. Through negative sampling and model training, the embedded representations of entities and relations are obtained; that is, the head entity h, the tail entity t, and the relation r between head and tail entity in a knowledge graph are embedded into vector h, vector t, and vector r respectively. In inference effectiveness the method of the invention outperforms both translation-based representation learning methods and text-based models.

Description

Representation learning method based on entity relevance constraint
Technical field
The present invention relates to a representation learning method based on entity relevance constraints, and belongs to the technical fields of natural language processing and knowledge graphs.
Background technology
A knowledge graph (Knowledge Graph) is a knowledge representation method based on semantic networks. It provides an efficient and concise structured representation and plays a key role in Web search and intelligent question answering. A knowledge graph expresses real-world data as entities and relations: knowledge is stored as (entity, relation, entity) triples, and entities linked by relations form a net-like knowledge structure. Although knowledge graphs contain a large number of entities and relations, not all facts can be extracted during the construction process, so knowledge graphs remain incomplete. Knowledge graph inference techniques can be applied to the automated completion of a graph, for example predicting relations that may exist between two entities; they can also be combined with open-domain information extraction to evaluate the quality of extraction results. Representation learning refers to embedding the entities and relations of a knowledge graph into a low-dimensional space and completing knowledge graph inference in that space.
The mainstream representation learning methods at present treat a relation as a translation process between entities, and are therefore called translation-based models. For example, the TransE model (Bordes A, Usunier N, Weston J, et al. Translating embeddings for modeling multi-relational data [C] in International Conference on Neural Information Processing Systems. Curran Associates Inc. 2013: 2787-2795) treats a relation as a translation operation between a head entity and a tail entity. Such models reason only from the structure of the graph and cannot incorporate additional auxiliary information, so their inference effectiveness has limited room for improvement. Scholars subsequently proposed representation learning methods that combine entity description text with graph structure, which belong to the class of text-based representation models. A representative example is the DKRL model (Xie, R., Liu, Z., Jia, J., Luan, H., & Sun, M. (2016, February). Representation Learning of Knowledge Graphs with Entity Descriptions. In AAAI. pp. 2659-2665), which jointly learns from the description text and the structural information of entities, encoding text with a continuous bag-of-words model (CBOW) and a convolutional neural network (CNN). However, it ignores the semantic associations that may exist between entities within the text, and therefore loses semantic information during training.
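The translation assumption underlying TransE can be illustrated with a minimal sketch (an illustration only, not the method of this patent; the toy embedding vectors are invented for the example): for a plausible triple the embeddings should satisfy h + r ≈ t, so the dissimilarity ||h + r - t|| should be small.

```python
import numpy as np

# Translation-based scoring: a triple (h, r, t) is plausible when h + r ≈ t,
# i.e. when the dissimilarity ||h + r - t|| is small (TransE's core idea).
def transe_score(h, r, t):
    return float(np.linalg.norm(h + r - t))

h = np.array([1.0, 2.0])          # head entity embedding (toy values)
r = np.array([0.5, -1.0])         # relation embedding
t_true = np.array([1.5, 1.0])     # tail that satisfies h + r = t exactly
t_false = np.array([3.0, 3.0])    # unrelated tail

print(transe_score(h, r, t_true))   # → 0.0 (plausible triple)
print(transe_score(h, r, t_false))  # → 2.5 (implausible triple)
```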
Although existing representation learning methods achieve good results in knowledge graph inference, they lack any mining of the semantic associations between entities in text, and there is still considerable room for improvement in inference performance. The purpose of the present invention is to address the technical defect that traditional representation learning methods lose the semantic association information between entities during training, for which a representation learning method based on entity relevance constraints is proposed.
Invention content
The object of the present invention is to address the fact that translation-based models do not exploit the rich semantic information in text, and that text-based representation models lose the semantic association information between entities during training; to this end a representation learning method based on entity relevance constraints is proposed.
The core idea of the invention is as follows: relevant entities are mined from entity description texts and the relevance is divided into grades, and the relevance is fused into a translation-based representation learning method as an auxiliary constraint. Specifically, the annotated entity description texts are used to obtain co-occurrence information between entities; this information serves as a standard for measuring the degree of semantic association between two entities, and the degree of association is directed. In concrete terms, the head entity h, the tail entity t, and the relation r between them in the knowledge graph are embedded into vector h, vector t, and vector r respectively.
The present invention is realized by the following steps:
Step 1: annotate the description text of each entity and perform relevance division, obtaining the strongly relevant entity set and the weakly relevant entity set of each entity. This specifically includes the following sub-steps:
Step 1.1: annotate the description text of each entity to obtain the entity annotation result.
Here an entity refers to an entity in the knowledge graph, denoted e. The description text of e, denoted Des_e, is an ordered sequence of words, expressed by formula (1):
Des_e = < w_1, ..., w_m >   (1)
where w_1, ..., w_m are words and m is the number of words in the description text. An entity extracted from the description text is composed of one or more words; when an entity is composed of two or more words, the extracted words need to be spliced together.
The process of extracting entities from a description text is called description text annotation. The entities extracted from the description text form a set, which is the entity annotation result:
Des_e' = < w_1, ..., w_{m'} >   (2)
where m' ≤ m, each w_i denotes an entity, and Des_e' is the entity annotation result of Des_e.
Step 1.2: relevance division.
Using the i-th and j-th entities in the entity annotation result output by step 1.1, the relevance degree value of entity j to entity i is obtained by formula (3) and denoted W_ij:
W_ij = 2, if e_j appears in Des_{e_i}' and e_i appears in Des_{e_j}'; W_ij = 1, if e_j appears in Des_{e_i}' but e_i does not appear in Des_{e_j}'; W_ij = 0, otherwise   (3)
If W_ij = 2, j is called a strongly relevant entity (Strong Relevant Entity) of i; if W_ij = 1, j is called a weakly relevant entity (Weak Relevant Entity) of i. That is, if two entities each occur in the other's description, the relevance becomes strong. From this the strongly relevant entity set and the weakly relevant entity set of entity e are obtained.
The relevance degree value W_ij is directed. Traversing all entities in the entity annotation results, the relevance degree values form the entity relevance matrix, denoted W ∈ R^{|E|×|E|}, where E is the entity set of the knowledge graph and |E| is the total number of entities in the knowledge graph.
The strongly relevant entity set of entity e is denoted S(e):
S(e) = { e_i | W_{e,e_i} = 2 }   (4)
where e_i denotes the i-th entity and W_{e,e_i} = 2 indicates that entity e and entity e_i are in a strongly relevant entity relationship.
The weakly relevant entity set of entity e is denoted W(e):
W(e) = { e_i | W_{e,e_i} = 1 }   (5)
where W_{e,e_i} = 1 indicates that entity e and entity e_i are in a weakly relevant entity relationship.
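The relevance division of step 1.2 can be sketched as follows (a non-authoritative illustration; the `descriptions` data and entity names are invented for the example, and real annotation would run an entity linker over the description texts of step 1.1):

```python
# Sketch of step 1.2: directed relevance degrees from annotated descriptions.
# `descriptions` maps each entity to the set of entities found in its
# description text (the annotation result Des_e' of step 1.1).
descriptions = {
    "Beijing": {"China"},
    "China": {"Beijing"},      # mutual mention -> strong relevance
    "Olympics": {"Beijing"},   # one-way mention -> weak relevance
}

def relevance(descriptions):
    """Return W[i][j]: 2 = strongly relevant, 1 = weakly relevant, 0 = none."""
    entities = sorted(descriptions)
    W = {i: {j: 0 for j in entities} for i in entities}
    for i in entities:
        for j in entities:
            if i == j:
                continue
            if j in descriptions[i] and i in descriptions[j]:
                W[i][j] = 2          # each appears in the other's description
            elif j in descriptions[i]:
                W[i][j] = 1          # one-directional co-occurrence
    return W

def strong_set(W, e):
    return {j for j, w in W[e].items() if w == 2}   # S(e)

def weak_set(W, e):
    return {j for j, w in W[e].items() if w == 1}   # W(e)

W = relevance(descriptions)
print(strong_set(W, "Beijing"))  # → {'China'}
print(weak_set(W, "Olympics"))   # → {'Beijing'}
```

Note that the matrix is directed: "Olympics" is weakly relevant to "Beijing" without the reverse holding.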
Step 2: perform negative sampling and model training to obtain the embedded representations of entities and relations.
Model training is carried out with a mini-batch stochastic gradient descent algorithm.
Step 2 specifically includes the following sub-steps:
Step 2.1: initialize the loop counter to 1 and set the loop counter maximum.
The loop counter is denoted k; the loop counter maximum is denoted iter.
Step 2.2: let S denote the set of triples in the knowledge graph. Each triple in the knowledge graph is a positive sample, i.e. S is the positive sample set. Randomly select B positive samples from S to obtain a subset S_batch, and let T_batch = ∅. The construction of T_batch includes the following sub-steps:
Step 2.2.1: traverse S_batch and perform negative sampling for each positive sample (h, r, t). The negative sampling method follows document 1 (Feng J. Knowledge Graph Embedding by Translating on Hyperplanes [C] in AAAI. 2014): given a relation r, the average number of tail entities per head entity described in document 1 (tph) corresponds to tph_r in this patent, and the average number of head entities per tail entity (hpt) corresponds to hpt_r.
Generate a random number p uniformly distributed in the interval [0, 1]. If p ≤ tph_r / (tph_r + hpt_r), draw an entity with equal probability from the entity set E of the knowledge graph to replace the head entity of the positive sample, ensuring that the replaced triple does not belong to S. If p > tph_r / (tph_r + hpt_r), draw an entity with equal probability from E to replace the tail entity of the positive sample, again ensuring that the replaced triple does not belong to S.
Step 2.2.2: after replacement, the negative sample (h', r, t') corresponding to each positive sample (h, r, t) in S_batch is obtained. Each positive sample and its negative sample are added to the set T_batch:
T_batch ← T_batch ∪ {(h, r, t), (h', r, t')}   (6)
After steps 2.2.1 and 2.2.2 the set T_batch is obtained; the set of entities appearing in T_batch is extracted and denoted E_batch.
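Steps 2.2.1 and 2.2.2 can be sketched as follows (an illustrative, non-authoritative sketch; the toy triple set and entity list are invented, and in practice tph_r and hpt_r would be precomputed per relation from the full graph):

```python
import random

def corrupt(triple, entities, tph_r, hpt_r, S, rng):
    """Bernoulli negative sampling: replace the head or the tail of (h, r, t)
    with probability proportional to tph_r vs. hpt_r, avoiding triples in S."""
    h, r, t = triple
    while True:
        p = rng.random()                           # p uniform in [0, 1]
        if p <= tph_r / (tph_r + hpt_r):
            cand = (rng.choice(entities), r, t)    # replace head entity
        else:
            cand = (h, r, rng.choice(entities))    # replace tail entity
        if cand not in S:                          # keep only true negatives
            return cand

S = {("Beijing", "capital_of", "China")}           # positive sample set
entities = ["Beijing", "China", "Tokyo", "Japan"]
rng = random.Random(0)
T_batch = set()
for pos in S:                                      # S_batch = S for this toy case
    neg = corrupt(pos, entities, tph_r=1.0, hpt_r=1.0, S=S, rng=rng)
    T_batch |= {(pos, neg)}                        # formula (6)
print(T_batch)
```

The `while` loop implements the requirement that the corrupted triple must not belong to S.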
Step 2.3: train the model with mini-batch stochastic gradient descent.
Step 2.3.1: compute the score of each triple (h, r, t) by the score function of formula (7), denoted f_r(h, t):
f_r(h, t) = ||h + r - t||_2^2   (7)
where ||h + r - t||_2^2 denotes the square of the 2-norm of the vector h + r - t.
Step 2.3.2: compute the entity-relevance loss term L_r by formula (8):
L_r = α · Σ_{e ∈ E_batch} Σ_{e' ∈ S(e)} max(0, ||e - e'||_2^2 - SC) + β · Σ_{e ∈ E_batch} Σ_{e' ∈ W(e)} max(0, ||e - e'||_2^2 - WC)   (8)
where α and β are the strong-relevance and weak-relevance weights: α determines the strength of the strong-relevance constraint and β the strength of the weak-relevance constraint. e denotes an entity in E_batch; in the left term of formula (8), e' ranges over the strongly relevant entity set of e, and in the right term, over the weakly relevant entity set of e. ||e - e'||_2^2 denotes the square of the 2-norm of the vector e - e'. SC and WC are user-specified strong-relevance and weak-relevance hyperparameters that bound the distance between two relevant entities; when the entities lie within the corresponding range the loss is 0. L_r thus keeps relevant entities from being farther apart than a certain range in the vector space, without simply forcing relevant entities to have minimal distance.
Step 2.3.3: compute the loss function value of the model by formula (9):
Loss = Σ_{((h,r,t),(h',r,t')) ∈ T_batch} max(0, γ + f_r(h, t) - f_r(h', t')) + L_r   (9)
where Loss denotes the loss function value of the model, f_r(h, t) is the score of the positive sample (h, r, t), and f_r(h', t') is the score of the negative sample (h', r, t'). Training drives positive sample scores toward low values and negative sample scores toward high values. γ is the loss margin, used to control the difference between f_r(h, t) and f_r(h', t').
Step 2.3.4: compute the derivative of formula (9) with respect to the independent variables and update them according to formula (10):
θ ← θ - rate · ∂Loss/∂θ   (10)
where θ denotes the independent variables, including all h, r, and t; rate is the learning rate; and ∂Loss/∂θ denotes the derivative of the model loss Loss with respect to θ.
Step 2.3.5: check whether the loop counter k has reached the maximum iter. If k = iter, the method is complete; otherwise set k = k + 1 and return to step 2.2.
At this point, through steps 1 and 2, the embedded representations of entities and relations have been obtained: vector h, vector t, and vector r. This completes the representation learning method based on entity relevance constraints.
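Steps 2.3.1–2.3.3 can be sketched numerically as follows (a minimal, non-authoritative illustration with invented toy embeddings; a real implementation would use an autodiff framework for the update of formula (10) rather than evaluating the loss alone):

```python
import numpy as np

def f_r(h, r, t):
    """Formula (7): squared 2-norm of h + r - t."""
    d = h + r - t
    return float(d @ d)

def relevance_loss(E, strong_pairs, weak_pairs, alpha, beta, SC, WC):
    """Formula (8): hinge on squared distance for strong/weak relevant pairs."""
    sq = lambda a, b: float((E[a] - E[b]) @ (E[a] - E[b]))
    strong = sum(max(0.0, sq(e, e2) - SC) for e, e2 in strong_pairs)
    weak = sum(max(0.0, sq(e, e2) - WC) for e, e2 in weak_pairs)
    return alpha * strong + beta * weak

def margin_loss(E, R, T_batch, gamma):
    """First term of formula (9): margin ranking loss over (pos, neg) pairs."""
    total = 0.0
    for (h, r, t), (h2, _, t2) in T_batch:
        total += max(0.0, gamma + f_r(E[h], R[r], E[t]) - f_r(E[h2], R[r], E[t2]))
    return total

# toy embeddings (entity and relation names invented for the example)
E = {"Beijing": np.array([0.0, 0.0]), "China": np.array([1.0, 1.0]),
     "Tokyo": np.array([5.0, 5.0])}
R = {"capital_of": np.array([1.0, 1.0])}
T_batch = [(("Beijing", "capital_of", "China"),
            ("Tokyo", "capital_of", "China"))]

# margin term is 0 (negative scores far worse than positive); the relevance
# term penalizes Beijing/China for lying farther apart than sqrt(SC).
loss = margin_loss(E, R, T_batch, gamma=1.0) \
     + relevance_loss(E, [("Beijing", "China")], [], alpha=1.0, beta=0.3, SC=1.0, WC=1.0)
print(loss)  # → 1.0
```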
Advantageous effect
Compared with conventional methods, the representation learning method based on entity relevance constraints has the following advantageous effects:
1. Traditional representation learning methods mostly rely on the structural information of the knowledge graph and do not make full use of the description text of entities. The present invention proposes a method for measuring semantic relevance between entities from entity description text, and uses it to construct a constraint term based on entity relevance, which is fused into a traditional representation learning method. Experimental results show that, on the link prediction task and the triple classification task on public data sets, the present invention achieves better inference results than conventional methods while remaining consistent with them in speed.
2. Text-based representation learning methods mostly vectorize the text as a whole and ignore the semantic associations between entities within it. The entity relevance constraint term proposed by the present invention is mined from the entities in the text and models the semantic associations between entities in a finer-grained way. Experimental results show that the present invention achieves better inference results than the text representation model DKRL.
Description of the drawings
Fig. 1 is a schematic flow chart of the representation learning method based on entity relevance constraints of the present invention and of embodiment 1.
Specific implementation mode
The present invention will be further described below in combination with the accompanying drawings and a specific embodiment.
Embodiment 1
This embodiment describes the specific implementation process of the representation learning method based on entity relevance constraints of the present invention; Fig. 1 is the implementation flow chart of this embodiment.
As can be seen from Fig. 1, the specific implementation steps of the present invention and of this embodiment are as follows:
Step A: annotate the description text of each entity and perform relevance division, obtaining the strongly relevant entity set and the weakly relevant entity set of each entity. This specifically includes the following sub-steps:
Step A.1: annotate the description text of each entity to obtain the entity annotation result.
Here an entity refers to an entity in the knowledge graph, denoted e. The description text of e, denoted Des_e, is an ordered sequence of words, expressed by formula (11):
Des_e = < w_1, ..., w_m >   (11)
where w_1, ..., w_m are words and m is the number of words in the description text. An entity extracted from the description text is composed of one or more words; when an entity is composed of two or more words, the extracted words need to be spliced together.
The process of extracting entities from a description text is called description text annotation. The entities extracted from the description text form a set, which is the entity annotation result:
Des_e' = < w_1, ..., w_{m'} >   (12)
where m' ≤ m, each w_i denotes an entity, and Des_e' is the entity annotation result of Des_e.
Step A.2: relevance division.
Using the i-th and j-th entities in the entity annotation result output by step A.1, the relevance degree value of entity j to entity i is obtained by formula (3) and denoted W_ij.
If W_ij = 2, j is called a strongly relevant entity (Strong Relevant Entity) of i; if W_ij = 1, j is called a weakly relevant entity (Weak Relevant Entity) of i. That is, if two entities each occur in the other's description, the relevance becomes strong. From this the strongly relevant entity set and the weakly relevant entity set of entity e are obtained.
The relevance degree value W_ij is directed. Traversing all entities in the entity annotation results yields the entity relevance matrix, denoted W ∈ R^{|E|×|E|}, where E is the entity set of the knowledge graph and |E| is the total number of entities in the knowledge graph.
The strongly relevant entity set of entity e is denoted S(e), and the weakly relevant entity set of entity e is denoted W(e), as defined in step 1.2.
Step B: perform negative sampling and model training to obtain the embedded representations of entities and relations.
Model training is carried out with a mini-batch stochastic gradient descent algorithm. Step B specifically includes the following sub-steps:
Step B.1: initialize the loop counter, denoted k, with k = 1.
Step B.2: let S denote the set of triples in the knowledge graph. Each triple in the knowledge graph is a positive sample, i.e. S is the positive sample set. Randomly select B positive samples from S to obtain a subset S_batch, where B is set to 100, and let T_batch = ∅. The construction of T_batch includes the following sub-steps:
Step B.2.1: traverse S_batch and perform negative sampling for each positive sample (h, r, t). The negative sampling method follows document 1 (Feng J. Knowledge Graph Embedding by Translating on Hyperplanes [C] in AAAI. 2014): given a relation r, the average number of tail entities per head entity described in document 1 (tph) corresponds to tph_r in this patent, and the average number of head entities per tail entity (hpt) corresponds to hpt_r.
Generate a random number p uniformly distributed in the interval [0, 1]. If p ≤ tph_r / (tph_r + hpt_r), draw an entity with equal probability from the entity set E of the knowledge graph to replace the head entity of the positive sample, ensuring that the replaced triple does not belong to S. If p > tph_r / (tph_r + hpt_r), draw an entity with equal probability from E to replace the tail entity of the positive sample, again ensuring that the replaced triple does not belong to S.
Step B.2.2: after replacement, the negative sample (h', r, t') corresponding to each positive sample (h, r, t) in S_batch is obtained. Each positive sample and its negative sample are added to the set T_batch:
T_batch ← T_batch ∪ {(h, r, t), (h', r, t')}   (16)
After steps B.2.1 and B.2.2 the set T_batch is obtained; the set of entities appearing in T_batch is extracted and denoted E_batch.
Step B.3: train the model with mini-batch stochastic gradient descent.
Step B.3.1: compute the score of each triple (h, r, t) by the score function of formula (7), denoted f_r(h, t), where ||h + r - t||_2^2 denotes the square of the 2-norm of the vector h + r - t.
Step B.3.2: compute the entity-relevance loss term L_r by formula (8), where α and β are the strong-relevance and weak-relevance weights, and SC and WC are the strong-relevance and weak-relevance ranges respectively; in this embodiment α = 1, β = 0.3, SC = 1, WC = 1.
Step B.3.3: compute the loss function value of the model by formula (9), where γ is the loss margin; γ is set to 1.
Step B.3.4: compute the derivative of formula (9) with respect to the independent variables and update them according to formula (10), where θ denotes the independent variables, including all h, r, and t, and rate is the learning rate; rate = 0.1.
Step B.3.5: check whether the loop counter k has reached the maximum iter, with iter = 500. If k = iter, the method is complete; otherwise set k = k + 1 and return to step B.2.
At this point, through steps A and B, the embedded representations of entities and relations have been obtained: vector h, vector t, and vector r. This completes the representation learning method based on entity relevance constraints.

Claims (2)

1. A representation learning method based on entity relevance constraints, characterized in that its core idea is: relevant entities are mined from entity description texts and the relevance is divided into grades, and the relevance is fused into a translation-based representation learning method as an auxiliary constraint; the annotated entity description texts are used to obtain co-occurrence information between entities, this information serving as a standard for measuring the degree of semantic association between two entities, the degree of association being directed; in concrete terms, the head entity h, the tail entity t, and the relation r between head and tail entity in the knowledge graph are embedded into vector h, vector t, and vector r respectively; the method is realized by the following steps:
Step 1: annotate the description text of each entity and perform relevance division, obtaining the strongly relevant entity set and the weakly relevant entity set of each entity; this specifically includes the following sub-steps:
Step 1.1: annotate the description text of each entity to obtain the entity annotation result;
wherein an entity refers to an entity in the knowledge graph, denoted e; the description text of e, denoted Des_e, is an ordered sequence of words, expressed by formula (1):
Des_e = < w_1, ..., w_m >   (1)
wherein w_1, ..., w_m are words and m is the number of words in the description text; an entity extracted from the description text is composed of one or more words, and when an entity is composed of two or more words the extracted words need to be spliced together;
the process of extracting entities from a description text is called description text annotation; the entities extracted from the description text form a set, which is the entity annotation result:
Des_e' = < w_1, ..., w_{m'} >   (2)
wherein m' ≤ m, each w_i denotes an entity, and Des_e' is the entity annotation result of Des_e;
Step 1.2: relevance division;
using the i-th and j-th entities in the entity annotation result output by step 1.1, the relevance degree value of entity j to entity i is obtained by formula (3) and denoted W_ij:
W_ij = 2, if e_j appears in Des_{e_i}' and e_i appears in Des_{e_j}'; W_ij = 1, if e_j appears in Des_{e_i}' but e_i does not appear in Des_{e_j}'; W_ij = 0, otherwise   (3)
if W_ij = 2, j is called a strongly relevant entity (Strong Relevant Entity) of i; if W_ij = 1, j is called a weakly relevant entity (Weak Relevant Entity) of i, that is, if two entities each occur in the other's description the relevance becomes strong; from this the strongly relevant entity set and the weakly relevant entity set of entity e are obtained;
traversing all entities in the entity annotation results, the relevance degree values form the entity relevance matrix, denoted W ∈ R^{|E|×|E|}, wherein E is the entity set of the knowledge graph and |E| is the total number of entities in the knowledge graph;
the strongly relevant entity set of entity e is denoted S(e):
S(e) = { e_i | W_{e,e_i} = 2 }   (4)
wherein e_i denotes the i-th entity and W_{e,e_i} = 2 indicates that entity e and entity e_i are in a strongly relevant entity relationship;
the weakly relevant entity set of entity e is denoted W(e):
W(e) = { e_i | W_{e,e_i} = 1 }   (5)
wherein W_{e,e_i} = 1 indicates that entity e and entity e_i are in a weakly relevant entity relationship;
Step 2: perform negative sampling and model training to obtain the embedded representations of entities and relations, specifically including the following sub-steps:
Step 2.1: initialize the loop counter to 1 and set the loop counter maximum;
wherein the loop counter is denoted k and the loop counter maximum is denoted iter;
Step 2.2: let S denote the set of triples in the knowledge graph, each triple in the knowledge graph being a positive sample, i.e. S is the positive sample set; randomly select B positive samples from S to obtain a subset S_batch, and let T_batch = ∅; the construction of T_batch includes the following sub-steps:
Step 2.2.1: traverse S_batch and perform negative sampling for each positive sample (h, r, t); the negative sampling method follows document 1 (Feng J. Knowledge Graph Embedding by Translating on Hyperplanes [C] in AAAI. 2014): given a relation r, the average number of tail entities per head entity described in document 1 (tph) corresponds to tph_r in this patent, and the average number of head entities per tail entity (hpt) corresponds to hpt_r;
generate a random number p uniformly distributed in the interval [0, 1]; if p ≤ tph_r / (tph_r + hpt_r), draw an entity with equal probability from the entity set E of the knowledge graph to replace the head entity of the positive sample, ensuring that the replaced triple does not belong to S; if p > tph_r / (tph_r + hpt_r), draw an entity with equal probability from E to replace the tail entity of the positive sample, again ensuring that the replaced triple does not belong to S;
Step 2.2.2: after replacement, the negative sample (h', r, t') corresponding to each positive sample (h, r, t) in S_batch is obtained, and each positive sample and its negative sample are added to the set T_batch:
T_batch ← T_batch ∪ {(h, r, t), (h', r, t')}   (6)
after steps 2.2.1 and 2.2.2 the set T_batch is obtained; the set of entities appearing in T_batch is extracted and denoted E_batch;
Step 2.3: train the model with mini-batch stochastic gradient descent;
Step 2.3.1: compute the score of each triple (h, r, t) by the score function of formula (7), denoted f_r(h, t):
f_r(h, t) = ||h + r - t||_2^2   (7)
wherein ||h + r - t||_2^2 denotes the square of the 2-norm of the vector h + r - t;
Step 2.3.2: compute the entity-relevance loss term L_r by formula (8):
L_r = α · Σ_{e ∈ E_batch} Σ_{e' ∈ S(e)} max(0, ||e - e'||_2^2 - SC) + β · Σ_{e ∈ E_batch} Σ_{e' ∈ W(e)} max(0, ||e - e'||_2^2 - WC)   (8)
wherein α and β are the strong-relevance and weak-relevance weights, α determining the strength of the strong-relevance constraint and β the strength of the weak-relevance constraint; e denotes an entity in E_batch; in the left term of formula (8), e' ranges over the strongly relevant entity set of e, and in the right term, over the weakly relevant entity set of e; ||e - e'||_2^2 denotes the square of the 2-norm of the vector e - e'; SC and WC are user-specified strong-relevance and weak-relevance hyperparameters that bound the distance between two relevant entities, the loss being 0 when the entities lie within the corresponding range; L_r thus keeps relevant entities from being farther apart than a certain range in the vector space, without simply forcing relevant entities to have minimal distance;
Step 2.3.3: compute the loss function value of the model by formula (9):
Loss = Σ_{((h,r,t),(h',r,t')) ∈ T_batch} max(0, γ + f_r(h, t) - f_r(h', t')) + L_r   (9)
wherein Loss denotes the loss function value of the model, f_r(h, t) is the score of the positive sample (h, r, t), and f_r(h', t') is the score of the negative sample (h', r, t'); training drives positive sample scores toward low values and negative sample scores toward high values; γ is the loss margin, used to control the difference between f_r(h, t) and f_r(h', t');
Step 2.3.4: compute the derivative of formula (9) with respect to the independent variables and update them according to formula (10):
θ ← θ - rate · ∂Loss/∂θ   (10)
wherein θ denotes the independent variables, including all h, r, and t, rate is the learning rate, and ∂Loss/∂θ denotes the derivative of the model loss Loss with respect to θ;
Step 2.3.5: check whether the loop counter k has reached the maximum iter; if k = iter, the method is complete; otherwise set k = k + 1 and return to step 2.2;
at this point, through steps 1 and 2, the embedded representations of entities and relations have been obtained: vector h, vector t, and vector r.
2. The representation learning method based on entity relevance constraints according to claim 1, characterized in that: the relevance degree value W_ij is directed.
CN201810377516.6A 2018-01-24 2018-04-25 Representation learning method based on entity relevance constraint Active CN108647258B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2018100675556 2018-01-24
CN201810067555 2018-01-24

Publications (2)

Publication Number Publication Date
CN108647258A true CN108647258A (en) 2018-10-12
CN108647258B CN108647258B (en) 2020-12-22

Family

ID=63747612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810377516.6A Active CN108647258B (en) 2018-01-24 2018-04-25 Representation learning method based on entity relevance constraint

Country Status (1)

Country Link
CN (1) CN108647258B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674637A (en) * 2019-09-06 2020-01-10 腾讯科技(深圳)有限公司 Character relation recognition model training method, device, equipment and medium
CN110909881A (en) * 2019-11-01 2020-03-24 中电科大数据研究院有限公司 Knowledge representation method for cross-media knowledge reasoning task
CN111428047A (en) * 2020-03-19 2020-07-17 东南大学 Knowledge graph construction method and device based on UC L semantic indexing
CN113220833A (en) * 2021-05-07 2021-08-06 支付宝(杭州)信息技术有限公司 Entity association degree identification method and device
CN114330323A (en) * 2022-03-08 2022-04-12 成都数联云算科技有限公司 Entity relationship joint extraction method and device, computer terminal and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160154876A1 (en) * 2010-06-07 2016-06-02 Microsoft Technology Licensing, Llc Using context to extract entities from a document collection
CN105630901A (en) * 2015-12-21 2016-06-01 清华大学 Knowledge graph representation learning method
CN107122399A (en) * 2017-03-16 2017-09-01 中国科学院自动化研究所 Combined recommendation system based on Public Culture knowledge mapping platform
CN107273349A (en) * 2017-05-09 2017-10-20 清华大学 A kind of entity relation extraction method and server based on multilingual

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
HAN XIAO et al.: "SSP: Semantic Space Projection for Knowledge Graph Embedding with Text Descriptions", Thirty-First AAAI Conference on Artificial Intelligence *
RUOBING XIE et al.: "Representation Learning of Knowledge Graphs with Entity Descriptions", Thirtieth AAAI Conference on Artificial Intelligence *
LIU Qiao et al.: "A Survey of Knowledge Graph Construction Techniques", Journal of Computer Research and Development *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674637A (en) * 2019-09-06 2020-01-10 腾讯科技(深圳)有限公司 Character relation recognition model training method, device, equipment and medium
CN110674637B (en) * 2019-09-06 2023-07-11 腾讯科技(深圳)有限公司 Character relationship recognition model training method, device, equipment and medium
CN110909881A (en) * 2019-11-01 2020-03-24 中电科大数据研究院有限公司 Knowledge representation method for cross-media knowledge reasoning task
CN110909881B (en) * 2019-11-01 2022-11-04 中电科大数据研究院有限公司 Knowledge representation method for cross-media knowledge reasoning task
CN111428047A (en) * 2020-03-19 2020-07-17 东南大学 Knowledge graph construction method and device based on UCL semantic indexing
CN111428047B (en) * 2020-03-19 2023-04-21 东南大学 Knowledge graph construction method and device based on UCL semantic indexing
CN113220833A (en) * 2021-05-07 2021-08-06 支付宝(杭州)信息技术有限公司 Entity association degree identification method and device
CN114330323A (en) * 2022-03-08 2022-04-12 成都数联云算科技有限公司 Entity relationship joint extraction method and device, computer terminal and storage medium

Also Published As

Publication number Publication date
CN108647258B (en) 2020-12-22

Similar Documents

Publication Publication Date Title
CN108647258A (en) Representation learning method based on entity relevance constraint
CN110825881B (en) Method for establishing an electric power knowledge graph
CN109558487A (en) Document classification method based on hierarchical multi-attention networks
CN109635291A (en) Recommendation method fusing rating information and item content based on co-training
CN108388651A (en) Text classification method based on graph kernels and convolutional neural networks
CN107526799A (en) Knowledge graph construction method based on deep learning
CN110245229A (en) Deep-learning topic sentiment classification method based on data augmentation
CN110232186A (en) Knowledge graph representation learning method fusing entity descriptions, hierarchical types and textual relation information
CN108509425A (en) Chinese new-word discovery method based on novelty
CN107766371A (en) Text information classification method and device
CN109033129A (en) Multi-source information fusion knowledge graph representation learning method based on adaptive weighting
CN108549658A (en) Deep-learning video question answering method and system based on an attention mechanism over syntactic parse trees
CN108509654A (en) Construction method of a dynamic knowledge graph
CN110415071B (en) Automobile competing-product comparison method based on opinion mining analysis
CN109165275B (en) Intelligent search and matching method for smart substation operation ticket information based on deep learning
CN110162631A (en) Chinese patent classification method, system and storage medium oriented to TRIZ inventive principles
CN108399241A (en) Emerging hot-topic detection system based on multi-class feature fusion
CN107145514A (en) Chinese sentence-pattern classification method based on hybrid decision tree and SVM models
CN113360582B (en) Relation classification method and system based on a BERT model fusing multi-entity information
CN109299248A (en) Business intelligence collection method based on natural language processing
CN109858008A (en) Method and device for court-verdict tendency analysis of documents based on deep learning
CN114741519A (en) Paper correlation analysis method based on graph convolutional neural networks and a knowledge base
CN108664652A (en) Representation learning method based on path selection in complex networks
CN113886562A (en) AI resume screening method, system, device and storage medium
CN110851733A (en) Community discovery and sentiment interpretation method based on network topology and document content

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant