CN107480125A - Relation linking method based on knowledge graph - Google Patents
Relation linking method based on knowledge graph
- Publication number
- CN107480125A (application CN201710543849.7A)
- Authority
- CN
- China
- Prior art keywords
- text
- relational
- word
- knowledge
- knowledge graph
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/211—Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
- G06F16/355—Class or cluster creation or modification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/36—Creation of semantic tools, e.g. ontology or thesauri
- G06F16/367—Ontology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Artificial Intelligence (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Machine Translation (AREA)
Abstract
A relation linking method based on knowledge graphs is claimed. First, SPARQL queries retrieve from the knowledge graph the list of <subject, relation, object> triples containing a given relation, and the corresponding relation texts are matched in unstructured text. A similarity matrix over the relation texts is computed with the LSWMD algorithm, and density peaks clustering is applied to group the relation texts into clusters. For each relation text cluster, the positions of all words are extracted and fitted with beta distributions, yielding the cluster's word distribution pattern. Candidate relation texts in open-domain unstructured text whose relation is undetermined are converted into vectors via the word distribution patterns, recognized with a GBDT classifier, and then linked to the corresponding relation in the knowledge graph. The invention addresses the insufficient linking between natural language and knowledge graphs and can help computers better understand natural language.
Description
Technical field
The invention belongs to the field of natural language processing, and in particular relates to a relation linking method based on knowledge graphs.
Background technology
Exploring and understanding the knowledge on the Internet is one of the long-term goals of the field of artificial intelligence. With the emergence of distributed systems, storing and using data on the Internet is no longer a problem, but enabling computers to understand natural language and communicate with people remains a major challenge. The advent of knowledge graphs has helped computers understand natural language. A knowledge graph is intended to describe the entities and concepts that exist in the real world. Each entity or concept is identified by a globally unique ID, called its identifier. Attribute-value pairs describe the intrinsic characteristics of an entity, while relations link two entities and describe the association between them. A knowledge graph can also be viewed as a huge graph, in which the nodes represent entities or concepts and the edges are formed by attributes or relations. Popular knowledge graphs include DBpedia, Wikidata, OpenCyc, and YAGO.
Knowledge graphs play a vital role in many fields, such as semantic search and question answering systems. One important difficulty is mapping natural text into the knowledge graph. Entity linking addresses part of this difficulty by mapping entity mentions in natural text into the knowledge graph. Entity linking is by now a relatively mature line of research, but relation linking has received little attention. Relation linking differs from relation extraction: relation extraction focuses on identifying the relation between two entities, whereas relation linking attempts to find the textual expressions of a target relation.
A relation linking system based on a knowledge graph uses the structured data of the knowledge graph to learn the common expressions of a given relation and to build a model of those common expressions. When processing unstructured text, the model is matched against the words of the text, so that the unstructured text is mapped to a relation in the knowledge graph and relation linking is achieved. At the same time, deeper relationships between entities can be inferred, enriching the knowledge graph.
Summary of the invention
The present invention seeks to address the above problems of the prior art. It proposes a relation linking method based on knowledge graphs that effectively remedies the insufficient linking between natural language and knowledge graphs and can help computers better understand natural language. The technical scheme is as follows:
A relation linking method based on knowledge graphs, comprising the following steps:
S1, collecting and preprocessing the knowledge graph and the unstructured text data set, annotating the unstructured text with the knowledge graph, and obtaining the relation texts in the unstructured text as the training set;
S2, using the location-sensitive word mover's distance (LSWMD) algorithm to obtain the pairwise similarity matrix of the relation texts, and clustering on the similarity matrix to obtain relation text clusters;
S3, fitting the positions of the words in each relation text cluster with a beta distribution to obtain the word distribution pattern;
S4, converting the training set into vectors using the word distribution patterns, and training a classifier with GBDT gradient boosted trees;
S5, for unstructured text that was not, or could not be, annotated with the knowledge graph, matching it against the relation text clusters and judging it with the GBDT classifier; if the judgment is true, linking it to the corresponding relation in the knowledge graph.
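As an illustration of the triple retrieval in step S1, a query of the following form could be issued against the knowledge graph; the endpoint-independent helper name, the relation URI, and the LIMIT value are assumptions for illustration, not part of the patent:

```python
# Hypothetical sketch of the step-S1 SPARQL query that collects
# <subject, relation, object> pairs for one relation.
def build_triple_query(relation_uri, limit=1000):
    # Returns a SPARQL SELECT query binding all subject/object pairs
    # connected by the given relation.
    return (
        "SELECT ?s ?o WHERE { "
        f"?s <{relation_uri}> ?o . "
        f"}} LIMIT {limit}"
    )

query = build_triple_query("http://dbpedia.org/ontology/birthPlace")
```

The resulting string can then be sent to any SPARQL endpoint to build the entity pair list described later in the data collection steps.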
Further, the step of performing rule-based noise reduction on the relation texts comprises: retaining relation texts whose length is greater than 3 and less than 11.
Further, in step S1, collecting and preprocessing the knowledge graph data set to obtain the relation texts specifically comprises: obtaining entity pairs from the knowledge graph using SPARQL and building an entity pair list; retrieving the corresponding Wikipedia article according to the subject; splitting sentences with the nltk toolkit; if a sentence contains the subject, an alias of the subject, or the main part of the subject, marking it as subj; if it contains the object, an alias of the object, or the main part of the object, marking it as obj; and cutting the word segment between subj and obj out of the sentence as the relation text.
Further, in step S2, clustering the relation texts with the LSWMD algorithm to obtain relation text clusters comprises: converting words into vectors with word2vec; computing the semantic distance matrix between relation texts from the word vectors; computing the syntactic distance matrix between relation texts from the word positions; taking the semantic distance matrix multiplied by a parameter α plus the syntactic distance matrix multiplied by (1-α) as the input to EMD, obtaining the pairwise similarities between relation texts and assembling them into a similarity matrix; and, with the similarity matrix as input, clustering with the density peaks algorithm to obtain the relation text clusters.
Further, in step S3, obtaining the word distribution pattern comprises: counting the positions of each word in the relation texts; fitting the positional information of the word with a beta distribution to obtain parameters α and β; counting the probability γ with which the word appears in the cluster; and representing the word as the triple (α, β, γ). The triples of all words in a cluster form the word distribution pattern of that cluster.
Further, the training of the classifier comprises: initializing a vector according to the cluster size; scanning each training sentence with a sliding window of 4 to 10 words to find the word sequence with the highest frequency; using the parameters α and β and the positions of the words in the word sequence to compute each word's matching degree and filling it into the corresponding position of the vector; concatenating the vectors of the word sequence for all clusters; and training a classifier on these vectors with GBDT.
Further, in step S5, judging the relation of unstructured text with the classifier and linking it to the corresponding attribute page of the knowledge graph comprises: converting the unstructured text into a vector using the word distribution patterns, identifying with the classifier whether the text contains the relation, and, if so, linking the relation text of the text to the corresponding attribute page of the knowledge graph.
Advantages and beneficial effects of the present invention:
1. A location-sensitive word mover's distance is proposed for computing the similarity of relation texts; it considers semantic distance and syntactic distance simultaneously, yielding better similarity results.
2. A word distribution bag model is proposed, which represents a relation text cluster as a series of beta distributions over its words, greatly reducing the data volume of the cluster while retaining the frequency and positional information of the words in the cluster.
3. A new vector representation of relation texts is proposed: for each relation text cluster, a relation text is represented as a vector over the words contained in that cluster, and for multiple clusters, as the concatenation of the corresponding vectors. This representation effectively expresses the association between a relation text and the clusters.
4. A relation linking framework based on knowledge graphs is proposed, which links word sequences in unstructured natural text to relations in the knowledge graph and can help computers better understand natural language.
Brief description of the drawings
Fig. 1 is the overall flowchart of a preferred embodiment of the present invention;
Fig. 2 is the flowchart of data collection and preprocessing.
Embodiment
The technical solutions in the embodiments of the present invention are described clearly and in detail below with reference to the accompanying drawings. The described embodiments are only some of the embodiments of the present invention.
The technical scheme by which the present invention solves the above technical problems is as follows:
The main design idea of the technical scheme is the AutoLink algorithm, designed for the relations in a knowledge graph. Using the structured data of the knowledge graph, a model of the common expressions of a given relation is trained. These common expressions are matched against unstructured text, and successfully matched unstructured text is mapped to the corresponding relation in the knowledge graph, thereby achieving relation linking and solving the problem of mapping relations between unstructured natural language text and the structured knowledge graph. At the same time, deeper relationships between entities can be inferred, enriching the knowledge graph.
The specific implementation of the technical scheme of the present invention is elaborated in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, which is the flowchart of an embodiment of the relation linking system based on a knowledge graph according to the present invention, the main implementation process is as follows:
Step S1, data collection and preprocessing, obtaining the corresponding relation texts.
Step S2, clustering the relation texts with the LSWMD algorithm to obtain relation text clusters. The detailed process is as follows:
In WMD, words are represented as vectors, so the distance between two words can be computed as the distance between their vectors. However, WMD captures only semantic information and contains no information about syntactic structure. The present invention improves WMD to address this limitation, yielding the Location-Sensitive Word Mover's Distance (LSWMD). A relation text is represented as {w1, w2, ..., wn}, where the position of wi is derived from its index in the sentence.
After adding position, the WMD objective is as follows:

min_{T≥0} Σ_{i,j} T_ij · D(s_i, s'_j)  subject to  Σ_j T_ij = d_i,  Σ_i T_ij = d'_j

where s_i is the i-th word of the relation text, d_i is the weight of the i-th word in the sentence, T is the transport matrix from s_i to s'_j, and D is the distance matrix from s_i to s'_j combining semantics and syntax.
The distance matrix is defined in detail as:

D_sem+loc(s_i, s'_j) = α·D_sem(s_i, s'_j) + (1-α)·D_loc(s_i, s'_j),  α ∈ [0,1]

where α is a hyperparameter.
The similarity matrix obtained in this way is clustered with the density peaks algorithm, which gathers texts expressing similar relations together and yields clusters of common relation texts.
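The combined distance D above can be sketched in NumPy as follows; the choice of Euclidean distance for the semantic part and the toy dimensions are assumptions (the EMD solver that consumes this matrix is a separate component):

```python
import numpy as np

def lswmd_cost_matrix(emb_a, pos_a, emb_b, pos_b, alpha=0.7):
    # emb_*: (n, d) word embeddings (e.g. from word2vec);
    # pos_*: (n,) relative word positions in [0, 1].
    # Semantic part: Euclidean distance between embeddings, as in WMD.
    d_sem = np.linalg.norm(emb_a[:, None, :] - emb_b[None, :, :], axis=-1)
    # Positional (syntactic) part: absolute difference of positions.
    d_loc = np.abs(pos_a[:, None] - pos_b[None, :])
    # D_sem+loc = alpha * D_sem + (1 - alpha) * D_loc; this matrix is then
    # fed to an EMD solver to obtain the LSWMD between the two texts.
    return alpha * d_sem + (1 - alpha) * d_loc
```

With alpha = 1 the matrix reduces to plain WMD's semantic cost; smaller alpha gives more weight to word order.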
Step S3, fitting the positions of words with a beta distribution to obtain the word distribution pattern. The detailed process is as follows:
Using the relation text clusters obtained in the previous step, the position of each word within the cluster is counted; the positional information of the word is fitted with a beta distribution, yielding parameters α and β; the probability γ with which the word appears in the cluster is then counted, and the word is represented as the triple (α, β, γ). The triples of all words in a cluster form the word distribution pattern of that cluster. To save computation, the word distribution pattern may also keep only the 20 most probable words and ignore the other low-probability words.
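The beta fitting of step S3 can be sketched as below; a method-of-moments estimator is used as a stand-in, since the patent does not specify the estimator, and the sample positions are synthetic:

```python
import numpy as np

def fit_beta(positions):
    # Method-of-moments fit of Beta(alpha, beta) to values in (0, 1):
    # alpha = m*c, beta = (1-m)*c with c = m(1-m)/v - 1.
    m, v = positions.mean(), positions.var()
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common  # (alpha, beta)

# Synthetic relative word positions drawn from Beta(2, 5), i.e. a word
# that tends to appear early in its relation texts.
rng = np.random.default_rng(0)
positions = rng.beta(2.0, 5.0, size=500)
a_hat, b_hat = fit_beta(positions)
```

The fitted (α, β), together with the word's cluster frequency γ, form the (α, β, γ) triple described above.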
Step S4, converting the training set into vectors using the word distribution patterns and training a classifier with GBDT. The detailed process is as follows:
During training, a sliding window of 4 to 10 words is used, and, using the frequencies in the word distribution pattern, the window whose word sequence has the largest frequency sum is chosen as the candidate relation sequence. The word distribution pattern and the positions of the words in the candidate relation sequence are then used to compute the matching degree of each word, yielding the entries of the vector. The vectors of the word sequence for all clusters are concatenated, and a classifier is trained on these vectors with GBDT.
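A minimal sketch of the per-cluster vector construction, assuming each word contributes γ times the beta density at its relative position; the example pattern and sentence are invented for illustration:

```python
import math

def beta_pdf(x, a, b):
    # Density of Beta(a, b), used as the positional match score of a word.
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * x ** (a - 1) * (1 - x) ** (b - 1)

# Word distribution pattern of one cluster: word -> (alpha, beta, gamma),
# where gamma is the word's frequency in the cluster (values made up).
pattern = {"born": (2.0, 5.0, 0.9), "in": (5.0, 2.0, 0.8)}

def sentence_vector(words, pattern):
    # One slot per word in the cluster's pattern; a word at relative
    # position p contributes gamma * beta_pdf(p, alpha, beta).
    vec = [0.0] * len(pattern)
    slots = {w: i for i, w in enumerate(pattern)}
    n = len(words)
    for i, w in enumerate(words):
        if w in pattern:
            a, b, g = pattern[w]
            p = (i + 0.5) / n  # relative position in (0, 1)
            vec[slots[w]] = g * beta_pdf(p, a, b)
    return vec

v = sentence_vector(["born", "in"], pattern)
```

Concatenating such vectors over all clusters gives the feature vector fed to the GBDT classifier.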
Step S5, for newly input unstructured text, converting it into a vector using the word distribution patterns, and identifying with the classifier trained in step S4 whether the text contains the relation. If it does, the relation text of the text is linked to the corresponding attribute page of the knowledge graph.
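The final linking decision of step S5 reduces to the sketch below, where each relation's trained GBDT classifier is stood in for by an arbitrary callable and the page identifiers are invented:

```python
def link(candidate_text, classifiers, graph_pages):
    # For each relation whose classifier judges the text positive,
    # attach the text to that relation's attribute page (step S5).
    links = []
    for relation, judge in classifiers.items():
        if judge(candidate_text):
            links.append((candidate_text, graph_pages[relation]))
    return links

links = link(
    "was born in",
    {"birthPlace": lambda text: "born" in text},  # GBDT stand-in
    {"birthPlace": "kg:birthPlace"},
)
```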
In the present invention, data collection and preprocessing are illustrated in Fig. 2, which is the flowchart of data collection and preprocessing; the main process is as follows:
Step S6, obtaining entity pairs from the knowledge graph using SPARQL and building an entity pair list.
Step S7, retrieving the corresponding Wikipedia article according to the subject.
Step S8, splitting sentences with the nltk toolkit.
Step S9, if a sentence contains the subject, an alias of the subject, or the main part of the subject, marking it as subj.
Step S10, if a sentence contains the object, an alias of the object, or the main part of the object, marking it as obj.
Step S11, further processing the sentence by cutting out the word segment between subj and obj as the relation text.
Step S12, retaining relation texts whose length is greater than 3 and less than 11.
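Steps S9 through S12 can be sketched as follows; the whitespace tokenization, the alias sets, and the sample sentence are simplifying assumptions (the patent uses nltk for sentence splitting, which is omitted here):

```python
import re

def extract_relation_text(sentence, subject_forms, object_forms):
    # Mark subject/object mentions as subj/obj, cut the span between
    # them (S11), and apply the 3 < length < 11 filter (S12).
    tagged = []
    for tok in sentence.split():
        w = re.sub(r"\W", "", tok).lower()
        if w in subject_forms:
            tagged.append("subj")
        elif w in object_forms:
            tagged.append("obj")
        else:
            tagged.append(tok)
    if "subj" not in tagged or "obj" not in tagged:
        return None
    i, j = tagged.index("subj"), tagged.index("obj")
    span = tagged[min(i, j) + 1 : max(i, j)]
    return span if 3 < len(span) < 11 else None

span = extract_relation_text(
    "Einstein was a physicist who was born in Ulm .",
    {"einstein"}, {"ulm"})
```

Here the retained span "was a physicist who was born in" becomes one relation text for the birthPlace-style relation.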
In summary, the present invention uses the data of the knowledge graph to find relation texts containing a given relation; computes the similarity matrix of the relation texts with the LSWMD algorithm and obtains relation text clusters with density peaks clustering; extracts the positions of all words in each cluster and fits them with beta distributions to obtain the cluster's word distribution pattern; and, for unstructured natural text, builds vectors from the word distribution patterns and identifies with GBDT whether the text contains the relation, which is then linked to the relation in the knowledge graph.
The above embodiments should be understood as merely illustrating the present invention rather than limiting its scope. After reading the present disclosure, a skilled person may make various changes or modifications to the present invention, and such equivalent changes and modifications likewise fall within the scope of the claims of the present invention.
Claims (8)
- 1. A relation linking method based on knowledge graphs, characterized by comprising the following steps: S1, collecting and preprocessing the knowledge graph and the unstructured text data set, annotating the unstructured text with the knowledge graph, and obtaining the relation texts in the unstructured text as the training set; S2, using the location-sensitive word mover's distance (LSWMD) algorithm to obtain the pairwise similarity matrix of the relation texts, and clustering on the similarity matrix to obtain relation text clusters; S3, fitting the positions of the words in each relation text cluster with a beta distribution to obtain the word distribution pattern; S4, converting the training set into vectors using the word distribution patterns, wherein the relation texts annotated with the knowledge graph serve as the training set, and training a classifier with GBDT gradient boosted trees; S5, for unstructured text that was not, or could not be, annotated with the knowledge graph, matching it against the relation text clusters and judging it with the classifier; if the judgment is true, linking it to the corresponding relation in the knowledge graph.
- 2. The relation linking method based on knowledge graphs according to claim 1, characterized in that, after obtaining the relation texts, the method further comprises a step of performing rule-based noise reduction on the relation texts.
- 3. The relation linking method based on knowledge graphs according to claim 2, characterized in that the step of performing rule-based noise reduction on the relation texts comprises: retaining relation texts whose length is greater than 3 and less than 11.
- 4. The relation linking method based on knowledge graphs according to any one of claims 1-3, characterized in that, in step S1, collecting and preprocessing the knowledge graph data set to obtain the relation texts specifically comprises: obtaining entity pairs from the knowledge graph using SPARQL and building an entity pair list; retrieving the corresponding Wikipedia article according to the subject; splitting sentences with the nltk toolkit; if a sentence contains the subject, an alias of the subject, or the main part of the subject, marking it as subj; if it contains the object, an alias of the object, or the main part of the object, marking it as obj; and cutting the word segment between subj and obj out of the sentence as the relation text.
- 5. The relation linking method based on knowledge graphs according to claim 4, characterized in that, in step S2, clustering the relation texts with the LSWMD algorithm to obtain relation text clusters comprises: converting words into vectors with word2vec; computing the semantic distance matrix between relation texts from the word vectors; computing the syntactic distance matrix between relation texts from the word positions; taking the semantic distance matrix multiplied by a parameter α plus the syntactic distance matrix multiplied by (1-α) as the input to EMD, obtaining the pairwise similarities between relation texts and assembling them into a similarity matrix; and, with the similarity matrix as input, clustering with the density peaks algorithm to obtain the relation text clusters.
- 6. The relation linking method based on knowledge graphs according to claim 5, characterized in that, in step S3, obtaining the word distribution pattern comprises: counting the positions of each word in the relation texts; fitting the positional information of the word with a beta distribution to obtain parameters α and β; counting the probability γ with which the word appears in the cluster; representing the word as the triple (α, β, γ); and forming the word distribution pattern of the cluster from the triples of all words in the cluster.
- 7. The relation linking method based on knowledge graphs according to claim 6, characterized in that the training of the classifier comprises: initializing a vector according to the cluster size; scanning each training sentence with a sliding window of 4 to 10 words to find the word sequence with the highest frequency; using the parameters α and β and the positions of the words in the word sequence to compute each word's matching degree and filling it into the corresponding position of the vector; concatenating the vectors of the word sequence for all clusters; and training a classifier on these vectors with GBDT.
- 8. The relation linking method based on knowledge graphs according to claim 6, characterized in that, in step S5, judging the relation of unstructured text with the classifier and linking it to the corresponding attribute page of the knowledge graph comprises: converting the unstructured text into a vector using the word distribution patterns, identifying with the classifier whether the text contains the relation, and, if so, linking the relation text of the text to the corresponding attribute page of the knowledge graph.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710543849.7A CN107480125B (en) | 2017-07-05 | 2017-07-05 | Relation linking method based on knowledge graph |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107480125A true CN107480125A (en) | 2017-12-15 |
CN107480125B CN107480125B (en) | 2020-08-04 |
Family
ID=60595600
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710543849.7A Active CN107480125B (en) | 2017-07-05 | 2017-07-05 | Relation linking method based on knowledge graph |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107480125B (en) |
-
2017
- 2017-07-05 CN CN201710543849.7A patent/CN107480125B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104035917A (en) * | 2014-06-10 | 2014-09-10 | 复旦大学 | Knowledge graph management method and system based on semantic space mapping |
CN106777232A (en) * | 2016-12-26 | 2017-05-31 | 上海智臻智能网络科技股份有限公司 | Question and answer abstracting method, device and terminal |
CN106777275A (en) * | 2016-12-29 | 2017-05-31 | 北京理工大学 | Entity attribute and property value extracting method based on many granularity semantic chunks |
Non-Patent Citations (3)
Title |
---|
MATT J. KUSNER ET AL.: "From Word Embeddings to Document Distances", Proceedings of the 32nd International Conference on Machine Learning *
WEI SHEN ET AL.: "Entity Linking with a Knowledge Base: Issues, Techniques, and Solutions", IEEE Transactions on Knowledge and Data Engineering *
WANG FEI ET AL.: "A grid-based density peaks clustering algorithm", 《小型微型计算机***》 *
Cited By (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109993381A (en) * | 2017-12-29 | 2019-07-09 | ***通信集团湖北有限公司 | Knowledge graph-based demand management application method, device, equipment and medium |
CN109993381B (en) * | 2017-12-29 | 2021-11-30 | ***通信集团湖北有限公司 | Demand management application method, device, equipment and medium based on knowledge graph |
CN109800317A (en) * | 2018-03-19 | 2019-05-24 | 中山大学 | An image query answering method based on image scene graph alignment |
CN110297868A (en) * | 2018-03-22 | 2019-10-01 | 奥多比公司 | Constructing an enterprise-specific knowledge graph |
CN110019843B (en) * | 2018-09-30 | 2020-11-06 | 北京国双科技有限公司 | Knowledge graph processing method and device |
CN110019843A (en) * | 2018-09-30 | 2019-07-16 | 北京国双科技有限公司 | Knowledge graph processing method and device |
CN110147421A (en) * | 2019-05-10 | 2019-08-20 | 腾讯科技(深圳)有限公司 | Target entity linking method, device, equipment and storage medium |
CN110147421B (en) * | 2019-05-10 | 2022-06-21 | 腾讯科技(深圳)有限公司 | Target entity linking method, device, equipment and storage medium |
CN110489560A (en) * | 2019-06-19 | 2019-11-22 | 民生科技有限责任公司 | Knowledge graph-based micro and small enterprise profile generation method and device |
CN110909174A (en) * | 2019-11-19 | 2020-03-24 | 南京航空航天大学 | Knowledge graph-based method for improving entity link in simple question answering |
CN110909174B (en) * | 2019-11-19 | 2022-01-04 | 南京航空航天大学 | Knowledge graph-based method for improving entity link in simple question answering |
CN111259653A (en) * | 2020-01-15 | 2020-06-09 | 重庆邮电大学 | Knowledge graph question-answering method, system and terminal based on entity relationship disambiguation |
CN111259653B (en) * | 2020-01-15 | 2022-06-24 | 重庆邮电大学 | Knowledge graph question-answering method, system and terminal based on entity relationship disambiguation |
CN111309922B (en) * | 2020-01-19 | 2023-11-17 | 清华大学 | Map construction method, accident classification device, computer equipment and medium |
CN111309922A (en) * | 2020-01-19 | 2020-06-19 | 清华大学 | Map construction method, accident classification method, device, computer equipment and medium |
CN111737456A (en) * | 2020-05-15 | 2020-10-02 | 恩亿科(北京)数据科技有限公司 | Corpus information processing method and apparatus |
CN111611405A (en) * | 2020-05-22 | 2020-09-01 | 北京明略软件***有限公司 | Knowledge graph construction method and device, electronic equipment and storage medium |
CN111611405B (en) * | 2020-05-22 | 2023-03-21 | 北京明略软件***有限公司 | Knowledge graph construction method and device, electronic equipment and storage medium |
CN111813914A (en) * | 2020-07-13 | 2020-10-23 | 龙马智芯(珠海横琴)科技有限公司 | Question-answering method and device based on dictionary tree, recognition equipment and readable storage medium |
CN111813914B (en) * | 2020-07-13 | 2021-07-06 | 龙马智芯(珠海横琴)科技有限公司 | Question-answering method and device based on dictionary tree, recognition equipment and readable storage medium |
CN111914568A (en) * | 2020-07-31 | 2020-11-10 | 平安科技(深圳)有限公司 | Method, device and equipment for generating text-modifying sentences, and readable storage medium |
CN111914568B (en) * | 2020-07-31 | 2024-02-06 | 平安科技(深圳)有限公司 | Method, device and equipment for generating text-modifying sentences, and readable storage medium |
CN111709528B (en) * | 2020-08-18 | 2021-01-05 | 北京工业大数据创新中心有限公司 | Expert rule protection method and device |
CN111709528A (en) * | 2020-08-18 | 2020-09-25 | 北京工业大数据创新中心有限公司 | Expert rule protection method and device |
CN112417456A (en) * | 2020-11-16 | 2021-02-26 | 中国电子科技集团公司第三十研究所 | Structured sensitive data reduction detection method based on big data |
CN112417456B (en) * | 2020-11-16 | 2022-02-08 | 中国电子科技集团公司第三十研究所 | Structured sensitive data reduction detection method based on big data |
CN113449084A (en) * | 2021-09-01 | 2021-09-28 | 中国科学院自动化研究所 | Relationship extraction method based on graph convolution |
CN113821647A (en) * | 2021-11-22 | 2021-12-21 | 山东捷瑞数字科技股份有限公司 | Construction method and system of knowledge graph in engineering machinery industry |
CN117150050A (en) * | 2023-10-31 | 2023-12-01 | 卓世科技(海南)有限公司 | Knowledge graph construction method and system based on large language model |
CN117150050B (en) * | 2023-10-31 | 2024-01-26 | 卓世科技(海南)有限公司 | Knowledge graph construction method and system based on large language model |
Also Published As
Publication number | Publication date |
---|---|
CN107480125B (en) | 2020-08-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107480125A (en) | Knowledge graph-based relation linking method | |
Yang et al. | Label-driven reconstruction for domain adaptation in semantic segmentation | |
CN106682411B (en) | Method for converting physical examination diagnostic data into disease labels | |
CN111914558B (en) | Course knowledge relation extraction method and system based on sentence bag attention remote supervision | |
CN107679580B (en) | Heterogeneous transfer image sentiment polarity analysis method based on multi-modal deep latent correlation | |
CN105760507B (en) | Cross-modal topic correlation modeling method based on deep learning | |
CN107526799A (en) | Knowledge graph construction method based on deep learning | |
CN106776711A (en) | Chinese medical knowledge graph construction method based on deep learning | |
CN107944559B (en) | Method and system for automatically identifying entity relationship | |
CN104462066B (en) | Semantic character labeling method and device | |
Wu et al. | Dynamic graph convolutional network for multi-video summarization | |
CN107301171A (en) | Text sentiment analysis method and system based on sentiment dictionary learning | |
Gao et al. | Multi‐dimensional data modelling of video image action recognition and motion capture in deep learning framework | |
CN106909537B (en) | Polysemy analysis method based on topic model and vector space | |
CN110413999A (en) | Entity relation extraction method, model training method and relevant apparatus | |
CN107679110A (en) | Method and device for improving a knowledge graph by combining text classification and image attribute extraction | |
Chen et al. | Recursive context routing for object detection | |
CN107463658A (en) | Text classification method and device | |
CN109710769A (en) | Water army comment detection system and method based on capsule networks | |
CN103034726B (en) | Text filtering system and method | |
CN114528411B (en) | Automatic construction method, device and medium for Chinese medicine knowledge graph | |
CN106897572A (en) | Lung nodule case matching assisted detection system based on manifold learning and working method thereof | |
CN110297888A (en) | Domain classification method based on prefix trees and recurrent neural networks | |
CN111858940A (en) | Multi-head attention-based legal case similarity calculation method and system | |
CN104537280B (en) | Protein interaction relation recognition method based on textual relation similarity | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||