CN107885759A - A knowledge graph representation learning method based on multi-objective optimization - Google Patents

A knowledge graph representation learning method based on multi-objective optimization

Info

Publication number
CN107885759A
Authority
CN
China
Prior art keywords
entity
relation
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611189474.0A
Other languages
Chinese (zh)
Inventor
常亮
祝曼丽
孙文平
古天龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology
Priority to CN201611189474.0A
Publication of CN107885759A
Current legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 - Querying
    • G06F16/245 - Query processing
    • G06F16/2452 - Query translation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 - Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28 - Databases characterised by their database models, e.g. relational or object models
    • G06F16/284 - Relational databases
    • G06F16/288 - Entity relationship models

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The present invention discloses a knowledge graph representation learning method based on multi-objective optimization. The method adopts a translation-based model between entity vectors and relation vectors, and uses the relation mapping types in the triple structure to define and accurately express the correlation between the entity vectors and the relation vector in a triple, without introducing additional parameters. The entity vectors and relation vectors are then associated through a loss function, and the loss function is optimized; when the optimization target is reached, the vector of each entity and relation in the knowledge graph is learned, so that the connection between entities and relations is better expressed and the method can be applied to large-scale knowledge graph completion. The present invention solves the problem that prior-art methods are either too simple or too complex, cannot represent the entities and relations in a knowledge graph well, and cannot be conveniently applied to large-scale knowledge graphs, and it has good practicality.

Description

A knowledge graph representation learning method based on multi-objective optimization
Technical field
The present invention relates to the technical fields of knowledge graphs and machine learning, and in particular to a knowledge graph representation learning method based on multi-objective optimization.
Background art
With the rapid development of technologies such as the mobile Internet and the Internet of Things, numerous new applications have produced and accumulated massive amounts of data at an unprecedented rate. How to filter effective information out of big data has become both an opportunity and a challenge faced by many fields, and this has given rise to knowledge graphs.
A knowledge graph models the world in a structured way: concrete things and abstract concepts in the world are expressed as entities, and a knowledge network is constructed using the relations between entities. In essence, a knowledge graph is a directed graph in which nodes represent entities and edges represent relations. Knowledge in a knowledge graph is therefore usually represented by a triple (h, r, t), i.e. (entity 1, relation, entity 2), which corresponds to one edge in the directed graph and the two entities it connects. For example, the fact that Ye Li is the wife of Yao Ming can be represented in a knowledge graph by the triple (Ye Li, is-wife-of, Yao Ming). A knowledge graph can establish precise internal links between entities and relations, and is currently widely used in fields such as intelligent question answering and information retrieval. Although existing knowledge graphs are very large in scale, they are still very sparse, and knowledge graph completion is an important current research focus.
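As a minimal illustration of this triple representation (the facts, relation names and data structure below are chosen for illustration only and are not part of the patent), a small knowledge graph can be stored as a list of (head entity, relation, tail entity) triples:

```python
# Hypothetical example: a tiny knowledge graph as (head, relation, tail) triples.
triples = [
    ("Ye Li", "is_wife_of", "Yao Ming"),
    ("Obama", "president_of", "U.S."),
    ("Bush", "president_of", "U.S."),
]

# Nodes of the directed graph are entities, edges are relations.
entities = sorted({e for h, _, t in triples for e in (h, t)})
relations = sorted({r for _, r, _ in triples})
print(len(entities), "entities,", len(relations), "relations")
```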
Like other kinds of large-scale data, large-scale knowledge graphs also face serious data sparsity and computational efficiency problems. In recent years, representation learning, with deep learning as its representative approach, has attracted wide attention in research on large-scale knowledge graphs and has made remarkable progress. Representation learning aims to map all entities and relations in a knowledge graph into a low-dimensional, continuous, real-valued vector space, which alleviates the data sparsity and computational efficiency problems previously faced in knowledge graph research. In existing knowledge representation learning techniques, the optimization target is relatively simple: the triple structure information of the knowledge graph is not fully considered and exploited, only the difference in semantic space between entities and relations is considered, the mapping type of the relation in the triple structure is not carefully taken into account, or too many parameters have to be learned. As a result, the model is either too simple to model complex relations or too complex to be applied to large-scale knowledge graphs.
Summary of the invention
The problem to be solved by the present invention is that existing knowledge representation learning methods have a single optimization target and cannot be applied to large-scale knowledge graphs. To this end, a knowledge graph representation learning method based on multi-objective optimization is provided.
To solve the above problems, the present invention is achieved by the following technical solutions:
A knowledge graph representation learning method based on multi-objective optimization comprises the following steps:
Step 1: using a translation-based model between entity vectors and relation vectors, establish the correlation function between the entity vectors and the relation vector in the triple structure of the knowledge graph;
Step 2: establish a loss function over the correlation functions of the entity vectors and relation vectors, and minimize the loss function to learn the vector representations of entities and relations, thereby reaching the optimization target.
In step 1, the translation model used is the TransE translation model or the TransH translation model.
In step 1, when the optimization target is t - r = αh,
the correlation function f(h, r, t) constructed with the TransE translation model is:
f(h, r, t) = ||αh + r - t||_{L1/L2}
and the correlation function f(h, r, t) constructed with the TransH translation model is:
f(h, r, t) = ||αl_{hr} + l_r - l_{tr}||_{L1/L2}
where (h, r, t) denotes a triple of the knowledge graph, h denotes the head entity, t denotes the tail entity, and r denotes the relation between the head entity h and the tail entity t; α is a parameter that depends on the relation mapping type; L1 denotes the L1 norm and L2 denotes the L2 norm; l_r is the relation vector in the relation space corresponding to relation r; l_{hr} denotes the head entity vector obtained by projecting the head entity h along the normal of the hyperplane into the corresponding relation space; and l_{tr} denotes the tail entity vector obtained by projecting the tail entity t along the normal of the hyperplane into the corresponding relation space.
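As a rough sketch of these two correlation functions (a minimal NumPy version under assumed conventions: vectors are NumPy arrays, the argument l_r plays the role of the relation vector, and the hyperplane projection follows the usual TransH formulation with a unit normal w_r; none of these names or defaults are prescribed by the patent):

```python
import numpy as np

def f_transE(h, r, t, alpha=1.0, norm=1):
    # f(h, r, t) = ||alpha*h + r - t|| under the L1 (norm=1) or L2 (norm=2) norm
    return np.linalg.norm(alpha * h + r - t, ord=norm)

def f_transH(h, l_r, t, w_r, alpha=1.0, norm=1):
    # Project h and t along the normal w_r of the relation-specific hyperplane,
    # then score f(h, r, t) = ||alpha*l_hr + l_r - l_tr||.
    w = w_r / np.linalg.norm(w_r)        # keep the normal a unit vector
    l_hr = h - np.dot(w, h) * w          # head entity projected into the relation space
    l_tr = t - np.dot(w, t) * w          # tail entity projected into the relation space
    return np.linalg.norm(alpha * l_hr + l_r - l_tr, ord=norm)
```

With alpha = 1 the first function reduces to the standard TransE score ||h + r - t||.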
In step 1, when the optimization target is αt - r = h,
the correlation function f(h, r, t) constructed with the TransE translation model is:
f(h, r, t) = ||αl_{hr} + l_r - l_{tr}||_{L1/L2}
and the correlation function f(h, r, t) constructed with the TransH translation model is:
f(h, r, t) = ||l_{hr} + l_r - αl_{tr}||_{L1/L2}
where (h, r, t) denotes a positive-example triple of the knowledge graph, h denotes the head entity, t denotes the tail entity, and r denotes the relation between the head entity h and the tail entity t; α is a parameter that depends on the relation mapping type; L1 denotes the L1 norm and L2 denotes the L2 norm; l_r is the relation vector in the relation space corresponding to relation r; l_{hr} denotes the head entity vector obtained by projecting the head entity h along the normal of the hyperplane into the corresponding relation space; and l_{tr} denotes the tail entity vector obtained by projecting the tail entity t along the normal of the hyperplane into the corresponding relation space.
In step 2, the established loss function L is:
L = Σ_{(h,r,t)∈S_{(h,r,t)}} Σ_{(h',r,t')∈S'_{(h,r,t)}} [f(h, r, t) + γ - f(h', r, t')]_+
where [f(h, r, t) + γ - f(h', r, t')]_+ = max(0, f(h, r, t) + γ - f(h', r, t')); γ is a preset margin; (h, r, t) denotes a positive-example triple of the knowledge graph, h denotes the head entity, t denotes the tail entity, r denotes the relation between the head entity h and the tail entity t, f(h, r, t) denotes the correlation function of the positive-example triple, and S_{(h,r,t)} denotes the set of positive-example triples; (h', r, t') denotes a negative-example triple constructed by randomly replacing the head entity h or the tail entity t, f(h', r, t') denotes the correlation function of the negative-example triple, and S'_{(h,r,t)} denotes the set of negative-example triples.
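A minimal sketch of this margin-based loss (assuming the embeddings are NumPy arrays and that each positive triple is paired with one corrupted negative triple; the pairing scheme and function names are assumptions rather than the patent's prescription):

```python
def margin_loss(pos_pairs, neg_pairs, f, gamma):
    """L = sum of max(0, f(h, r, t) + gamma - f(h', r, t')) over matched
    positive/negative triples, where f is one of the correlation functions
    and each triple is a tuple of embedding vectors (h, r, t)."""
    total = 0.0
    for (h, r, t), (h_n, r_n, t_n) in zip(pos_pairs, neg_pairs):
        total += max(0.0, f(h, r, t) + gamma - f(h_n, r_n, t_n))
    return total
```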
The mapping type of a relation r is one of four types: one-to-one (1-1), one-to-many (1-N), many-to-one (N-1) or many-to-many (N-N).
In step 2, in the process of minimizing the loss function:
when the type of relation r is many-to-one, h, t and r are continually adjusted so that t - r becomes as close to αh as possible;
when the type of relation r is one-to-many, h, t and r are continually adjusted so that h + r becomes as close to αt as possible;
when the type of relation r is one-to-one or many-to-many, h, t and r are continually adjusted so that h + r becomes as close to t as possible.
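The per-type choice of optimization target can be summarized as a residual vector whose norm the training drives toward zero. The sketch below restates the three cases above (the relation-type labels, the alpha handling and the function name are illustrative assumptions):

```python
def objective_residual(h, r, t, rel_type, alpha):
    # Residual vector whose norm is driven toward zero during training.
    if rel_type == "N-1":            # many-to-one: make t - r approach alpha * h
        return alpha * h + r - t
    if rel_type == "1-N":            # one-to-many: make h + r approach alpha * t
        return h + r - alpha * t
    return h + r - t                 # 1-1 and N-N: make h + r approach t
```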
The present invention takes relation mapping types into account, uses the relation mapping types in the triple structure of the knowledge graph to define a multi-objective-optimization knowledge graph representation learning method, and thereby accurately expresses the correlation between entities and relations. The present invention adopts a translation-based model between entity vectors and relation vectors, and uses the relation mapping types in the triple structure to define and accurately express the correlation between the entity vectors and the relation vector in a triple, without introducing additional parameters; the entity vectors and relation vectors are then associated through a loss function, and the loss function is optimized; when the optimization target is reached, the vector of each entity and relation in the knowledge graph can be learned, so that the connection between entities and relations is better expressed and the method can be better applied to large-scale knowledge graph completion. The present invention solves the problem that prior-art methods are either too simple or too complex, cannot represent the entities and relations in a knowledge graph well, and cannot be conveniently applied to large-scale knowledge graphs, and it has good practicality.
Brief description of the drawings
Fig. 1 is an example diagram of relation triples in a knowledge graph.
Fig. 2 is a flowchart of the knowledge graph representation learning method of the present invention.
Fig. 3a is an example diagram of knowledge represented by triples obtained by a prior-art knowledge graph representation learning method.
Fig. 3b is an example diagram of knowledge represented by triples obtained by the knowledge graph representation learning method of the present invention.
Detailed description of the embodiments
In order to make the object and technical solution of the present invention clearer, the present invention is further described below with reference to the drawings.
The present invention considers the triple structure information of the knowledge graph and represents knowledge in the typical (entity 1, relation, entity 2) triple form, in which the relation connects two entities and characterizes the association between them. Fig. 1 is an example diagram of typical triples in a knowledge graph. The nodes represented by circles, such as "Bush", "U.S." and "Obama", are entities, and the edges between two entities, such as "president", "birthplace" and "wife", are relations. It can also be seen that the "president" relation corresponds to multiple entity pairs, so its mapping type is N-N.
The present invention provides a knowledge graph representation learning method based on multi-objective optimization. Referring to Fig. 2, the method comprises: using a translation-based model between entity vectors and relation vectors, and using the relation mapping types in the triple structure (including the simple relation type 1-1 and the complex relation types 1-N, N-1 and N-N) to define the correlation between the entity vectors and the relation vector in a triple (h, r, t), without introducing additional parameters; and associating the entity vectors and relation vectors through a loss function and minimizing the loss function, so as to learn the vector representations of entities and relations and reach the optimization target. With the present invention, the connection between entities and relations can be better expressed, and the method can be applied to large-scale knowledge graphs.
Embodiment one:
A knowledge graph representation learning method based on multi-objective optimization comprises the following steps:
Step 1: using the translation-based model between entity vectors and relation vectors and the structure information of the triples in the knowledge graph, establish the correlation between the entity vectors and the relation vector in a triple (h, r, t).
Step 11: define the optimization target as:
t - r = αh
where α is the parameter used when the relation mapping type is N-1, and its value range is α ∈ (0, 1).
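One practical question this step raises is how to decide the mapping type of each relation in the first place. A common heuristic (the one usually applied to TransE/TransH benchmark data; the threshold value below is an assumption, not part of the patent) classifies a relation by the average number of tails per head and heads per tail:

```python
from collections import defaultdict

def relation_mapping_types(triples, threshold=1.5):
    """Classify each relation as '1-1', '1-N', 'N-1' or 'N-N'."""
    tails_per_head = defaultdict(set)   # (r, h) -> set of tails
    heads_per_tail = defaultdict(set)   # (r, t) -> set of heads
    for h, r, t in triples:
        tails_per_head[(r, h)].add(t)
        heads_per_tail[(r, t)].add(h)

    types = {}
    for r in {rel for _, rel, _ in triples}:
        tph = [len(ts) for (rr, _), ts in tails_per_head.items() if rr == r]
        hpt = [len(hs) for (rr, _), hs in heads_per_tail.items() if rr == r]
        many_tails = sum(tph) / len(tph) >= threshold
        many_heads = sum(hpt) / len(hpt) >= threshold
        if many_tails and many_heads:
            types[r] = "N-N"
        elif many_tails:
            types[r] = "1-N"
        elif many_heads:
            types[r] = "N-1"
        else:
            types[r] = "1-1"
    return types
```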
Step 12: using the translation-based model between entity vectors and relation vectors, define the correlation function f(h, r, t) that measures the correlation between the relation r and the entity pair (h, t).
There are various translation-based models between entity vectors and relation vectors, for example TransE and TransH.
If the TransE energy function is used, f(h, r, t) can be defined as:
f(h, r, t) = ||αh + r - t||_{L1/L2}
If the TransH energy function is used, f(h, r, t) can be defined as:
f(h, r, t) = ||αl_{hr} + l_r - l_{tr}||_{L1/L2}
where (h, r, t) denotes a triple of the knowledge graph, h denotes the head entity, t denotes the tail entity, and r denotes the relation between the head entity h and the tail entity t; α is a parameter that depends on the relation mapping type; L1 denotes the L1 norm and L2 denotes the L2 norm; l_r is the relation vector in the relation space corresponding to relation r; l_{hr} denotes the head entity vector obtained by projecting the head entity h along the normal of the hyperplane into the corresponding relation space; and l_{tr} denotes the tail entity vector obtained by projecting the tail entity t along the normal of the hyperplane into the corresponding relation space.
Step 2: associate the entity vectors with the relation vectors through a loss function, and minimize the loss function to obtain the entity vectors and relation vectors, thereby reaching the optimization target.
Step 21: define the loss function as:
L = Σ_{(h,r,t)∈S_{(h,r,t)}} Σ_{(h',r,t')∈S'_{(h,r,t)}} [f(h, r, t) + γ - f(h', r, t')]_+
where [f(h, r, t) + γ - f(h', r, t')]_+ = max(0, f(h, r, t) + γ - f(h', r, t')); γ is a margin used to separate positive-example triples from negative-example triples; (h, r, t) denotes a positive-example triple of the knowledge graph, h denotes the head entity, t denotes the tail entity, r denotes the relation between the head entity h and the tail entity t, f(h, r, t) denotes the correlation function of the positive-example triple, and S_{(h,r,t)} denotes the set of positive-example triples; (h', r, t') denotes a negative-example triple constructed by randomly replacing the head entity h or the tail entity t, f(h', r, t') denotes the correlation function of the negative-example triple, and S'_{(h,r,t)} denotes the set of negative-example triples.
Step 22: minimize the loss function to learn each entity vector and relation vector in the knowledge graph. The process of minimizing the loss function is exactly the process of reaching the optimization target.
In the process of minimizing the loss function:
when the type of relation r is N-1, h, t and r are continually adjusted so that t - r becomes as close to αh as possible, thereby reducing the association between t - r and h;
when the type of relation r is 1-N, h, t and r are continually adjusted so that h + r becomes as close to αt as possible;
when the relation r is of another type (1-1 or N-N), h, t and r are continually adjusted so that h + r becomes as close to t as possible.
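Putting the pieces of this embodiment together, a bare-bones stochastic gradient descent loop for the TransE-style variant might look as follows (the initialization, hyperparameter values, L1 norm and plain subgradient update are all assumptions of this sketch, not details prescribed by the embodiment):

```python
import numpy as np

def train(triples, rel_types, dim=50, alpha=0.5, gamma=1.0, lr=0.01, epochs=100):
    """Learn entity/relation vectors by minimizing the margin loss with SGD."""
    rng = np.random.default_rng(0)
    ents = sorted({e for h, _, t in triples for e in (h, t)})
    E = {e: rng.normal(scale=0.1, size=dim) for e in ents}
    R = {r: rng.normal(scale=0.1, size=dim) for _, r, _ in triples}

    def coeffs(r):
        # per-relation-type coefficients on the head and tail vectors
        if rel_types.get(r) == "N-1":
            return alpha, 1.0          # residual = alpha*h + r - t
        if rel_types.get(r) == "1-N":
            return 1.0, alpha          # residual = h + r - alpha*t
        return 1.0, 1.0                # residual = h + r - t

    for _ in range(epochs):
        for h, r, t in triples:
            ch, ct = coeffs(r)
            # corrupt the head or the tail to build a negative triple
            h_n, t_n = (rng.choice(ents), t) if rng.random() < 0.5 else (h, rng.choice(ents))
            d_pos = ch * E[h] + R[r] - ct * E[t]
            d_neg = ch * E[h_n] + R[r] - ct * E[t_n]
            if np.sum(np.abs(d_pos)) + gamma > np.sum(np.abs(d_neg)):  # margin violated
                g_pos, g_neg = np.sign(d_pos), np.sign(d_neg)          # L1 subgradients
                E[h] -= lr * ch * g_pos
                E[t] += lr * ct * g_pos
                R[r] -= lr * (g_pos - g_neg)
                E[h_n] += lr * ch * g_neg
                E[t_n] -= lr * ct * g_neg
    return E, R
```

The per-type coefficients mirror the three cases listed in step 22; rel_types can be produced, for example, by a heuristic like the one sketched after step 11.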
Embodiment two:
A knowledge graph representation learning method based on multi-objective optimization comprises the following steps:
Step 1: using the translation-based model between entity vectors and relation vectors and the structure information of the triples in the knowledge graph, establish the correlation between the entity vectors and the relation vector in a triple (h, r, t).
Step 11: define the optimization target as:
t - r = αh
where α is the parameter used when the relation mapping type is N-1, and its value range is α ∈ (0, 1).
Step 12: using the translation-based model between entity vectors and relation vectors, define the correlation function f(h, r, t) that measures the correlation between the relation r and the entity pair (h, t).
There are various translation-based models between entity vectors and relation vectors, for example TransE and TransH.
If the TransE energy function is used, f(h, r, t) can be defined as:
f(h, r, t) = ||αh + r - t||_{L1/L2}
If the TransH energy function is used, f(h, r, t) can be defined as:
f(h, r, t) = ||αl_{hr} + l_r - l_{tr}||_{L1/L2}
where (h, r, t) denotes a triple of the knowledge graph, h denotes the head entity, t denotes the tail entity, and r denotes the relation between the head entity h and the tail entity t; α is a parameter that depends on the relation mapping type; L1 denotes the L1 norm and L2 denotes the L2 norm; l_r is the relation vector in the relation space corresponding to relation r; l_{hr} denotes the head entity vector obtained by projecting the head entity h along the normal of the hyperplane into the corresponding relation space; and l_{tr} denotes the tail entity vector obtained by projecting the tail entity t along the normal of the hyperplane into the corresponding relation space.
Step 2: associate the entity vectors with the relation vectors through a loss function, and minimize the loss function to obtain the entity vectors and relation vectors, thereby reaching the optimization target.
Step 21: define the loss function as:
L = Σ_{(h,r,t)∈S_{(h,r,t)}} Σ_{(h',r,t')∈S'_{(h,r,t)}} [f(h, r, t) + γ - f(h', r, t')]_+
where [f(h, r, t) + γ - f(h', r, t')]_+ = max(0, f(h, r, t) + γ - f(h', r, t')); γ is a margin used to separate positive-example triples from negative-example triples; (h, r, t) denotes a positive-example triple of the knowledge graph, h denotes the head entity, t denotes the tail entity, r denotes the relation between the head entity h and the tail entity t, f(h, r, t) denotes the correlation function of the positive-example triple, and S_{(h,r,t)} denotes the set of positive-example triples; (h', r, t') denotes a negative-example triple constructed by randomly replacing the head entity h or the tail entity t, f(h', r, t') denotes the correlation function of the negative-example triple, and S'_{(h,r,t)} denotes the set of negative-example triples.
Step 22: minimize the loss function to learn each entity vector and relation vector in the knowledge graph. The process of minimizing the loss function is exactly the process of reaching the optimization target.
In the process of minimizing the loss function:
when the type of relation r is N-1, h, t and r are continually adjusted so that t - r becomes as close to αh as possible, thereby reducing the association between t - r and h;
when the type of relation r is 1-N, h, t and r are continually adjusted so that h + r becomes as close to αt as possible;
when the relation r is of another type (1-1 or N-N), h, t and r are continually adjusted so that h + r becomes as close to t as possible.
Simulation results:
Each entity vector and relation vector in the knowledge graph is learned by the above method. Fig. 3a is an example diagram of knowledge represented by triples obtained by a prior-art method. Fig. 3b is an example diagram of knowledge represented by triples obtained by the knowledge graph representation learning method based on multi-objective optimization of the present invention. In Fig. 3a, the relation mapping types in the triple structure of the knowledge graph are not considered: since U.S. + president = Obama and, at the same time, U.S. + president = Bush, Obama and Bush are finally made equal, whereas in fact Obama and Bush differ greatly in other respects. In Fig. 3b, the relation mapping types in the triple structure of the knowledge graph are considered: when the relation r is a complex relation, weakening the association between h + r and t makes it possible to express both that Obama is the president of the U.S. and that Bush is the president of the U.S., while still distinguishing Obama from Bush. It can therefore be seen that, compared with the prior art of Fig. 3a, the knowledge graph representation learning method of the present invention shown in Fig. 3b better expresses the interconnection between entities and relations, and the model that takes relation mapping types into account is more flexible.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention.

Claims (7)

1. A knowledge graph representation learning method based on multi-objective optimization, characterized by comprising the following steps:
Step 1: using a translation-based model between entity vectors and relation vectors, establish the correlation function between the entity vectors and the relation vector in the triple structure of the knowledge graph;
Step 2: establish a loss function over the correlation functions of the entity vectors and relation vectors, and minimize the loss function to learn the vector representations of entities and relations, thereby reaching the optimization target.
2. The knowledge graph representation learning method based on multi-objective optimization according to claim 1, characterized in that, in step 1, the translation model used is the TransE translation model or the TransH translation model.
3. The knowledge graph representation learning method based on multi-objective optimization according to claim 1 or 2, characterized in that, in step 1, when the optimization target is t - r = αh,
the correlation function f(h, r, t) constructed with the TransE translation model is:
f(h, r, t) = ||αh + r - t||_{L1/L2}
and the correlation function f(h, r, t) constructed with the TransH translation model is:
f(h, r, t) = ||αl_{hr} + l_r - l_{tr}||_{L1/L2}
where: (h, r, t) denotes a triple of the knowledge graph, h denotes the head entity, t denotes the tail entity, and r denotes the relation between the head entity h and the tail entity t; α is a parameter that depends on the relation mapping type; L1 denotes the L1 norm and L2 denotes the L2 norm; l_r is the relation vector in the relation space corresponding to relation r; l_{hr} denotes the head entity vector obtained by projecting the head entity h along the normal of the hyperplane into the corresponding relation space; and l_{tr} denotes the tail entity vector obtained by projecting the tail entity t along the normal of the hyperplane into the corresponding relation space.
4. The knowledge graph representation learning method based on multi-objective optimization according to claim 1 or 2, characterized in that, in step 1, when the optimization target is αt - r = h,
the correlation function f(h, r, t) constructed with the TransE translation model is:
f(h, r, t) = ||αl_{hr} + l_r - l_{tr}||_{L1/L2}
and the correlation function f(h, r, t) constructed with the TransH translation model is:
f(h, r, t) = ||l_{hr} + l_r - αl_{tr}||_{L1/L2}
where: (h, r, t) denotes a positive-example triple of the knowledge graph, h denotes the head entity, t denotes the tail entity, and r denotes the relation between the head entity h and the tail entity t; α is a parameter that depends on the relation mapping type; L1 denotes the L1 norm and L2 denotes the L2 norm; l_r is the relation vector in the relation space corresponding to relation r; l_{hr} denotes the head entity vector obtained by projecting the head entity h along the normal of the hyperplane into the corresponding relation space; and l_{tr} denotes the tail entity vector obtained by projecting the tail entity t along the normal of the hyperplane into the corresponding relation space.
5. The knowledge graph representation learning method based on multi-objective optimization according to claim 1, characterized in that, in step 2, the established loss function L is:
L = Σ_{(h,r,t)∈S_{(h,r,t)}} Σ_{(h',r,t')∈S'_{(h,r,t)}} [f(h, r, t) + γ - f(h', r, t')]_+
where: [f(h, r, t) + γ - f(h', r, t')]_+ = max(0, f(h, r, t) + γ - f(h', r, t')); γ is a preset margin; (h, r, t) denotes a positive-example triple of the knowledge graph, h denotes the head entity, t denotes the tail entity, r denotes the relation between the head entity h and the tail entity t, f(h, r, t) denotes the correlation function of the positive-example triple, and S_{(h,r,t)} denotes the set of positive-example triples; (h', r, t') denotes a negative-example triple constructed by randomly replacing the head entity h or the tail entity t, f(h', r, t') denotes the correlation function of the negative-example triple, and S'_{(h,r,t)} denotes the set of negative-example triples.
6. The knowledge graph representation learning method based on multi-objective optimization according to claim 1, characterized in that the mapping type of a relation r is one of four types: one-to-one, one-to-many, many-to-one or many-to-many.
7. The knowledge graph representation learning method based on multi-objective optimization according to claim 6, characterized in that, in step 2, in the process of minimizing the loss function:
when the type of relation r is many-to-one, h, t and r are continually adjusted so that t - r becomes as close to αh as possible;
when the type of relation r is one-to-many, h, t and r are continually adjusted so that h + r becomes as close to αt as possible;
when the type of relation r is one-to-one or many-to-many, h, t and r are continually adjusted so that h + r becomes as close to t as possible.
CN201611189474.0A 2016-12-21 2016-12-21 A knowledge graph representation learning method based on multi-objective optimization Pending CN107885759A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611189474.0A CN107885759A (en) 2016-12-21 2016-12-21 A knowledge graph representation learning method based on multi-objective optimization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611189474.0A CN107885759A (en) 2016-12-21 2016-12-21 A knowledge graph representation learning method based on multi-objective optimization

Publications (1)

Publication Number Publication Date
CN107885759A 2018-04-06

Family

ID=61770154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611189474.0A Pending CN107885759A (en) A knowledge graph representation learning method based on multi-objective optimization

Country Status (1)

Country Link
CN (1) CN107885759A (en)


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509654A (en) * 2018-04-18 2018-09-07 上海交通大学 The construction method of dynamic knowledge collection of illustrative plates
CN108959472A (en) * 2018-06-20 2018-12-07 桂林电子科技大学 Knowledge mapping based on multistep relation path indicates learning method
CN108959472B (en) * 2018-06-20 2021-11-19 桂林电子科技大学 Knowledge graph representation learning method based on multi-step relation path
CN108984745A (en) * 2018-07-16 2018-12-11 福州大学 A kind of neural network file classification method merging more knowledge mappings
CN109146078A (en) * 2018-07-19 2019-01-04 桂林电子科技大学 A kind of knowledge mapping expression learning method based on dynamic route
CN109146078B (en) * 2018-07-19 2021-04-30 桂林电子科技大学 Knowledge graph representation learning method based on dynamic path
CN109165278A (en) * 2018-09-07 2019-01-08 桂林电子科技大学 It is a kind of that learning method is indicated based on entity and the knowledge mapping of relational structure information
CN109165278B (en) * 2018-09-07 2021-11-09 桂林电子科技大学 Knowledge graph representation learning method based on entity and relation structure information
CN109255002B (en) * 2018-09-11 2021-08-27 浙江大学 Method for solving knowledge graph alignment task by utilizing relationship path mining
CN109255002A (en) * 2018-09-11 2019-01-22 浙江大学 A method of it is excavated using relation path and solves knowledge mapping alignment task
CN111914094A (en) * 2019-05-10 2020-11-10 中国人民大学 Knowledge graph representation learning method based on ternary interaction
CN111914094B (en) * 2019-05-10 2023-09-26 中国人民大学 Knowledge graph representation learning method based on ternary interaction
CN110888942A (en) * 2019-11-05 2020-03-17 天津大学 Ontology inclusion axiom learning method based on linear programming
CN111444181B (en) * 2020-03-20 2021-05-11 腾讯科技(深圳)有限公司 Knowledge graph updating method and device and electronic equipment
CN111444181A (en) * 2020-03-20 2020-07-24 腾讯科技(深圳)有限公司 Knowledge graph updating method and device and electronic equipment
CN112765398A (en) * 2021-01-04 2021-05-07 珠海格力电器股份有限公司 Information recommendation method and device and storage medium
CN113033914A (en) * 2021-04-16 2021-06-25 哈尔滨工业大学 Entity and relation prediction method for machining process knowledge graph
CN113033914B (en) * 2021-04-16 2022-03-25 哈尔滨工业大学 Entity and relation prediction method for machining process knowledge graph

Similar Documents

Publication Publication Date Title
CN107885759A (en) A knowledge graph representation learning method based on multi-objective optimization
CN108763376B (en) Knowledge representation learning method for integrating relationship path, type and entity description information
Diudea et al. Omega and related counting polynomials
CN109033129A (en) Multi-source Information Fusion knowledge mapping based on adaptive weighting indicates learning method
CN105512289A (en) Image retrieval method based on deep learning and Hash
CN109255002B (en) Method for solving knowledge graph alignment task by utilizing relationship path mining
CN105631037A (en) Image retrieval method
CN102842043B (en) Particle swarm classifying method based on automatic clustering
CN113177132A (en) Image retrieval method based on depth cross-modal hash of joint semantic matrix
CN107741568A (en) A kind of lithium battery SOC estimation method that optimization RBF neural is shifted based on state
CN113065974B (en) Link prediction method based on dynamic network representation learning
CN112000689B (en) Multi-knowledge graph fusion method based on text analysis
CN106649550A (en) Joint knowledge embedded method based on cost sensitive learning
CN104463208A (en) Multi-view semi-supervised collaboration classification algorithm with combination of agreement and disagreement label rules
CN108764295A (en) A kind of soft-measuring modeling method based on semi-supervised integrated study
CN108052683A (en) A kind of knowledge mapping based on cosine measurement rule represents learning method
CN113987203A (en) Knowledge graph reasoning method and system based on affine transformation and bias modeling
CN104504251A (en) Community dividing method based on PageRank algorithm
CN112581368A (en) Multi-robot grid map splicing method based on optimal map matching
CN113887698B (en) Integral knowledge distillation method and system based on graph neural network
CN107590237B (en) Knowledge graph representation learning method based on dynamic translation principle
CN109165278A (en) It is a kind of that learning method is indicated based on entity and the knowledge mapping of relational structure information
CN111693868A (en) Lithium battery state of charge estimation method based on density feature clustering integration
CN109472712A (en) A kind of efficient Markov random field Combo discovering method strengthened based on structure feature
CN114596473A (en) Network embedding pre-training method based on graph neural network hierarchical loss function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180406