CN109165299A - A scientific and technological domain ontology construction method based on Gspan and TextRank - Google Patents

A scientific and technological domain ontology construction method based on Gspan and TextRank

Info

Publication number
CN109165299A
Authority
CN
China
Prior art keywords
term
vertex
document graph
matrix
document
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810998966.7A
Other languages
Chinese (zh)
Inventor
徐小良
陈学圣
王宇翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201810998966.7A
Publication of CN109165299A
Legal status: Pending

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a scientific and technological domain ontology construction method based on Gspan and TextRank. The method comprises the following steps: the data in a scientific and technological domain corpus are preprocessed to obtain standardized terms, term relations and their respective weights, from which a document graph model is built; a document graph information model is computed over the document graph with the TextRank algorithm; Markov clustering is applied to the document graph information model to obtain a candidate concept set; Gspan frequent subgraph mining is applied to the document graph information model to obtain an ontology term relation graph; and the candidate concept set is combined with the ontology term relation graph to form the scientific and technological domain ontology. During ontology construction the method takes the information content of each term into account in subgraph mining and uses it to improve Gspan frequent subgraph mining, so that the constructed ontology is more complete and accurate, thereby improving the reliability and validity of ontology construction.

Description

A scientific and technological domain ontology construction method based on Gspan and TextRank
Technical field
The invention belongs to the field of big-data text classification and relates to frequent-object mining; specifically, it is a scientific and technological domain ontology construction method based on Gspan and TextRank.
Background technique
With the arrival of the Semantic Web and the explosion of information, the systematic study of extracting and representing information on a large scale has become increasingly important. In recent years ontology learning has gradually become known to researchers, because it is relatively simple and can provide a domain ontology structure for acquiring information. Moreover, because an ontology can conceptually describe the features of things and establish logical relations between them, this structured, shareable information is widely used, currently mainly in fields such as information retrieval, artificial intelligence, information extraction, heterogeneous information system integration and the Semantic Web. However, as a rather abstract way of expressing concepts, ontologies face several challenges in concrete applications: describing huge amounts of information and conceptualizing it is difficult, and as the entities in ontology application domains become more diverse, ontology description languages correspondingly need to become more compatible.
At present, ontology construction methods can be divided into manual construction methods and methods built with automatic or semi-automatic techniques. Manual ontology construction generally requires an ontology expert to participate in the whole building process and suffers from high construction cost, low efficiency, strong subjectivity and poor portability. Such methods are therefore gradually being replaced by construction methods largely based on automatic and semi-automatic techniques; in fields such as science and technology in particular, many industries need ontologies built with automatic or semi-automatic methods, but the existing automatic and semi-automatic methods still have limitations and their classification accuracy is not high.
Summary of the invention
The technical problem solved by the invention is to provide a scientific and technological domain ontology construction method based on Gspan and TextRank that improves the completeness and accuracy of the ontology.
To achieve the above technical effect, the technical scheme of the invention is as follows:
1. Construction of the document graph semantic model
For a corpus of scientific and technological achievements in a given field, each achievement is preprocessed and normalized separately. Through sentence segmentation, stop-word filtering, part-of-speech tagging and word segmentation, the terms and their relation patterns with other terms (term-verb-term) are obtained; these become the term vertices and the edges of the document graph, respectively. The term occurrence frequencies and the term co-occurrence frequencies are then normalized and used as the weights of the document graph's vertices and edges, respectively.
G = (V, E, α, β)
V is the set of term vertices; E is the set of edges, i.e., the relations between term vertices; α: V → Σ_V assigns each term vertex its weight; β: E → Σ_E assigns each edge its weight; V_i denotes the id of vertex i and E_i denotes the id of edge i. A minimal code sketch of this model follows.
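As a concrete illustration of the model G = (V, E, α, β), the following is a minimal Python sketch of a document graph, assuming the term frequencies and term-verb-term co-occurrence counts have already been extracted in step 1. The class and method names (DocumentGraph, build, neighbors, weight) are illustrative choices, not names from the patent.

```python
from collections import defaultdict

class DocumentGraph:
    """Minimal document graph G = (V, E, alpha, beta): term vertices with
    normalized frequency weights (alpha) and term-relation edges with
    normalized co-occurrence weights (beta)."""

    def __init__(self):
        self.vertex_weight = {}                 # alpha: term -> weight
        self.edge_weight = defaultdict(float)   # beta: (term_a, term_b) -> weight

    def build(self, term_counts, cooccur_counts):
        # Normalize raw term frequencies to vertex weights.
        total_terms = sum(term_counts.values()) or 1
        for term, count in term_counts.items():
            self.vertex_weight[term] = count / total_terms
        # Normalize raw co-occurrence counts to edge weights.
        total_pairs = sum(cooccur_counts.values()) or 1
        for pair, count in cooccur_counts.items():
            self.edge_weight[pair] = count / total_pairs
        return self

    def neighbors(self, term):
        # Vertices connected to `term` by at least one relation edge.
        return {b for (a, b) in self.edge_weight if a == term} | \
               {a for (a, b) in self.edge_weight if b == term}

    def weight(self, a, b):
        # Edge weight regardless of direction; 0.0 if the vertices are not connected.
        return self.edge_weight.get((a, b), 0.0) or self.edge_weight.get((b, a), 0.0)

# Toy counts in the spirit of the "artificial intelligence" example in the description:
g = DocumentGraph().build(
    term_counts={"pattern recognition": 3, "neural network": 7, "clustering": 5},
    cooccur_counts={("clustering", "neural network"): 2,
                    ("pattern recognition", "neural network"): 1},
)
```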
2. Constructing the document graph domain information model
2.1 The document graph updates each term vertex weight by iterating the TextRank algorithm
1) Sort the term vertices in descending order of their initial weights;
2) In that order, apply the TextRank formula to update the weight of each term;
WS(V_i) = (1 - d) + d · Σ_{V_j ∈ Neigh(V_i)} [ WE(e_ji) / Σ_{V_k ∈ Neigh(V_j)} WE(e_jk) ] · WS(V_j)
Here WS(V_i) denotes the weight of term vertex V_i; a higher iteratively computed vertex weight indicates that the vertex carries more information for the domain corpus. WE(e_ij) denotes the weight of edge e_ij (the edge between vertices V_i and V_j), d is a given parameter, and Neigh(V_i) is the set of vertices adjacent to V_i. Each term vertex weight is updated by this computation; by propagating weight between a term vertex and its adjacent and second-order adjacent term vertices, the algorithm judges the degree of connectivity and measures the information content of each term vertex;
3) When the number of iterations reaches the preset number, stop iterating and fix the term vertex weights.
2.2 The document graph updates each term edge weight by iterating an edge-weight algorithm
1) The document graph edge weights are updated from the vertex weights;
As before, a higher edge weight indicates that the edge carries more information for the domain corpus. A code sketch of both updates is given below.
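The sketch below implements the two updates of section 2 on the DocumentGraph sketch above. The vertex update uses the standard weighted TextRank rule, which matches the symbols WS, WE, d and Neigh defined in the text; the edge update, blending the old co-occurrence weight with the mean of the endpoint weights, is an assumption, since the patent only states that edge weights are recomputed from the vertex weights.

```python
def textrank_vertex_update(graph, ws=None, d=0.25, iterations=30):
    """Weighted TextRank over the document graph: returns updated WS(Vi) values."""
    ws = dict(ws if ws is not None else graph.vertex_weight)
    order = sorted(ws, key=ws.get, reverse=True)        # step 1): descending initial weight
    for _ in range(iterations):
        for vi in order:                                # step 2): update each vertex in order
            acc = 0.0
            for vj in graph.neighbors(vi):
                out_j = sum(graph.weight(vj, vk) for vk in graph.neighbors(vj))
                if out_j:
                    acc += (graph.weight(vj, vi) / out_j) * ws[vj]
            ws[vi] = (1 - d) + d * acc                  # weighted TextRank update
    return ws

def edge_weight_update(graph, ws):
    """Assumed rule for subsection 2.2: blend the original co-occurrence weight of an edge
    with the mean of its updated endpoint weights; the patent only states that edge weights
    are recomputed from vertex weights, not the exact formula."""
    return {(a, b): 0.5 * w + 0.5 * (ws[a] + ws[b]) / 2.0
            for (a, b), w in graph.edge_weight.items()}
```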
3. Extracting document graph candidate concepts
1) Build an adjacency matrix (number of term vertices × number of term vertices) from the document graph's vertices and edge weights; each entry of the matrix holds the weight of the edge connecting the corresponding pair of term vertices, and is set to 0 if the two vertices are not connected.
M_G = A_G × D^{-1}
M_G denotes the normalized adjacency matrix of document graph A_G, and D is the diagonal matrix;
2) Set the entries on the main diagonal of the document graph matrix to 1. (The main diagonal represents vertex self-loops; when the matrix is raised to an odd power in subsequent operations, diagonal values of 0 would distort the result, so they are uniformly set to 1, i.e., every vertex gains a self-loop edge.)
3) Normalize each document graph's adjacency matrix from 2) column by column:
E_ab ← E_ab / Σ_{k=1}^{K} E_kb
where E_ab denotes the entry in row a, column b of the matrix and K is the order of the matrix;
4) Expansion operation: compute the e-th power of the document graph matrix, M^e, according to the set parameter e, where M denotes the document graph matrix and e the power; the Expansion operation increases the connectivity of the whole document graph.
5) Inflation operation: raise every element of the matrix to the r-th power and repeat operation 3); the Inflation operation increases the contrast between edge information amounts.
6) Iterate operations 4) and 5) until the state of the matrix is stable, i.e., it converges;
7) For the converged matrix, loop over every row and check whether it contains an entry greater than the given threshold min; if so, the row is taken as a candidate concept C_k, otherwise it is skipped (a code sketch of this clustering loop is given below).
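A sketch of the Markov clustering loop of section 3 using numpy. The parameters e, r and min are those named above; the convergence tolerance and the default values are illustrative assumptions.

```python
import numpy as np

def markov_cluster(adjacency, e=2, r=2, min_threshold=0.1, max_iter=100):
    """Markov clustering on a document-graph adjacency matrix (steps 1-7 above)."""
    A = np.array(adjacency, dtype=float)
    np.fill_diagonal(A, 1.0)                              # step 2: self-loops on the main diagonal
    M = A / A.sum(axis=0, keepdims=True)                  # step 3: column normalization, M_G = A_G D^-1
    for _ in range(max_iter):
        expanded = np.linalg.matrix_power(M, e)           # step 4: Expansion, the e-th matrix power
        inflated = expanded ** r                          # step 5: Inflation, element-wise r-th power
        inflated /= inflated.sum(axis=0, keepdims=True)   # re-normalize columns (repeat step 3)
        if np.allclose(inflated, M, atol=1e-9):           # step 6: stop when the matrix is stable
            break
        M = inflated
    # Step 7: every row containing an entry above the threshold becomes a candidate concept.
    clusters = []
    for row in M:
        members = np.nonzero(row > min_threshold)[0]
        if members.size:
            clusters.append(set(members.tolist()))
    return clusters
```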
4. Constructing the ontology domain based on frequent subgraph mining
The n document graphs preprocessed by steps 1 and 2 are used to search for frequent subgraphs.
The candidate concepts obtained in step 3 form the candidate concept set C = {C_1, C_2, C_3, ..., C_k}.
1) Sort the vertices and edges of the n document graphs in descending order of their weights, filter out the vertices and edges whose weights are below the minimum support minsup, sort again in descending order, and assign each an index value lab according to its rank;
2) Using the label order from operation 1), input the information of the n document graphs:
Vertex: input vertex label lab_v and vertex id V_i;
Edge: input vertex A label lab_A, vertex B label lab_B and edge id E_i;
3) Construct the DFS code of each edge:
E = (V_0, V_1, A, B, a)
V_0 and V_1 denote the vertex ids, A and B are the labels of vertices V_0 and V_1, and a denotes the edge id.
G = {E_1, E_2, ..., E_n}
A graph is formed by the edge codes constructed above.
4) Construct the constraint function for frequent subgraphs;
Constraint function formula:
I(g) = Σ_{v∈V(g)} i_v(v) + Σ_{e∈E(g)} i_e(e)
I(g) denotes the information content of graph g, i_v(v) the information content of a single vertex, and i_e(e) the information content of a single edge;
D' and D'' denote subsets of the graph database, and d' and d'' denote subgraphs.
5) Gspan frequent subgraph mining (SubMining);
a) Choose the edge E_min with the smallest label, add the graphs containing that edge to the graph set, and for each graph in the set iteratively check whether E_min satisfies the minimum DFS code in that graph (the five-tuples are compared from left to right; the edge that is less than another edge is the smaller DFS code);
b) If it does, mark the edge, extend the subgraph with the potentially feasible edges of the graph set on the basis of this edge, and keep mining; iterate until the code is no longer a minimum DFS code, at which point the mining of this subgraph ends;
c) Check whether the subgraph satisfies the threshold of the constraint function from operation 4); if so, add it to the result set, then start mining again from the smallest unmarked edge label.
6) Combine the concept set C from operation 3 with the result set to form the concept relation graph, i.e., the scientific and technological domain ontology. A code sketch of the DFS codes, the constraint function and the mining outline follows.
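A compact sketch of the structures used in section 4: the five-tuple DFS edge code, its left-to-right ("minimum DFS code") comparison, and the information-content constraint I(g). The mining routine is only an outline of steps a)-c); the growth of subgraphs along feasible edges is left as a comment, and all names are illustrative.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class DFSEdge:
    """Five-tuple DFS code of one edge: (V0, V1, label A, label B, edge id)."""
    v0: int
    v1: int
    label_a: int   # rank of vertex V0 after sorting by weight
    label_b: int   # rank of vertex V1
    edge_id: int

def is_smaller_dfs(e1: DFSEdge, e2: DFSEdge) -> bool:
    # Five-tuples are compared field by field from left to right.
    return (e1.v0, e1.v1, e1.label_a, e1.label_b, e1.edge_id) < \
           (e2.v0, e2.v1, e2.label_a, e2.label_b, e2.edge_id)

def information_content(vertices, edge_ids, ws, we):
    """Constraint function I(g): sum of vertex information plus sum of edge information."""
    return sum(ws[v] for v in vertices) + sum(we[e] for e in edge_ids)

def mine_frequent_subgraphs(graphs: List[List[DFSEdge]], ws, we, threshold):
    """Outline of steps a)-c): grow subgraphs from the smallest edge label while the
    edge sequence stays a minimum DFS code; keep subgraphs whose I(g) passes the threshold."""
    results = []
    for edges in graphs:
        for start in sorted(edges, key=lambda e: (e.label_a, e.label_b)):
            subgraph = [start]
            # ... extend `subgraph` with feasible edges while the code remains minimal ...
            vertices = {e.v0 for e in subgraph} | {e.v1 for e in subgraph}
            if information_content(vertices, [e.edge_id for e in subgraph], ws, we) >= threshold:
                results.append(subgraph)
    return results
```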
Beneficial effects of the invention: the invention proposes a scientific and technological domain ontology construction method based on Gspan and TextRank to meet the specific demands of the scientific and technological domain. The method comprises four steps: constructing the document graph based on a semantic model; constructing the document graph domain information model, i.e., measuring, for each document graph, the information content of every term and term relation with respect to the domain corpus; extracting document graph candidate concepts, i.e., clustering the document graph; and constructing the ontology domain by frequent subgraph mining, i.e., for the n preprocessed input document graphs, finding the subgraph structures that satisfy the global ontology constraints and combining them with the candidate concepts to form the scientific and technological domain ontology. The invention makes the constructed ontology more complete and accurate, thereby improving the reliability and validity of ontology construction.
Detailed description of the invention
Fig. 1 is a flow diagram of the invention;
Fig. 2 shows the results of a specific implementation example of the invention.
Specific embodiment
The invention is further explained below with reference to the accompanying drawings.
Fig. 1 shows the flow chart of the invention; a detailed description is provided below with reference to Fig. 1.
Step 1: preprocessing and standardization of the scientific and technological domain corpus
As shown in Fig. 1, before the document graphs are built, each achievement in the corpus is preprocessed and standardized separately: sentence segmentation, stop-word filtering, part-of-speech tagging and word segmentation yield the terms and their relation patterns with other terms (term-verb-term), i.e., the edges; the term occurrence frequencies and term co-occurrence frequencies are then normalized and used as the corresponding weights. A code sketch of this pipeline follows the example below.
Example: for the field of "artificial intelligence", a set of papers in the field is extracted and processed:
Terms (the value is the vertex weight): pattern recognition - 0.0002324; feature extraction - 0.00069735; genetic algorithm - 0.0009298; data mining - 0.001859......
Term relation edges (the value is the edge weight): clustering - neural network - 0.0002345; robot - neural network - 0.0001323; artificial intelligence - robot - 0.0002286......
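A sketch of the step-1 pipeline (sentence segmentation, stop-word filtering, part-of-speech tagging, word segmentation, term-verb-term extraction). jieba's part-of-speech segmenter is used here only as one possible toolkit; the patent does not prescribe a particular segmenter, and the stop-word list is illustrative. The resulting counts can feed the DocumentGraph sketch given in the summary.

```python
import re
from collections import Counter
import jieba.posseg as pseg   # one possible segmenter/POS tagger; not prescribed by the patent

STOP_WORDS = {"的", "了", "和", "在", "是"}   # illustrative stop-word list

def extract_terms_and_relations(text):
    """Step 1 sketch: sentence split, stop-word filtering, POS tagging/segmentation,
    then term (noun) counts and term-verb-term relation counts."""
    term_counts, pair_counts = Counter(), Counter()
    for sentence in re.split(r"[。！？.!?]", text):
        tagged = [(w.word, w.flag) for w in pseg.cut(sentence) if w.word not in STOP_WORDS]
        for word, flag in tagged:
            if flag.startswith("n"):                    # nouns as candidate terms
                term_counts[word] += 1
        for i in range(len(tagged) - 2):                # term - verb - term pattern becomes an edge
            (t1, f1), (_verb, fv), (t2, f2) = tagged[i:i + 3]
            if f1.startswith("n") and fv.startswith("v") and f2.startswith("n"):
                pair_counts[(t1, t2)] += 1
    return term_counts, pair_counts
```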
Step 2: constructing the document graph domain information model based on the TextRank algorithm
Step 2.1: building the document graph
G = (V, E, α, β)
V is the set of term vertices; E is the set of edges, i.e., the relations between term vertices; α: V → Σ_V assigns each term vertex its weight; β: E → Σ_E assigns each edge its weight; V_i denotes the id of vertex i and E_i denotes the id of edge i.
Step 2.2: the document graph domain information model
The document graph updates each term vertex weight by iterating the TextRank algorithm, thereby measuring the information content of each term for the document graph; the iteration starts from the vertex with the largest initial weight and applies the TextRank formula given in the summary.
Here WS(V_i) denotes the weight of term vertex V_i; a higher iteratively computed vertex weight indicates that the vertex carries more information for the domain corpus. WE(e_ij) denotes the weight of edge e_ij (the edge between vertices V_i and V_j), d denotes the given parameter, set to 0.25 here, and Neigh(V_i) is the set of vertices adjacent to V_i; each term vertex weight is updated by this computation.
After the iteration reaches the given threshold, the iteration stops and each term connection edge weight is updated (a short convergence sketch follows the example below);
As before, a higher edge weight indicates that the edge carries more information for the domain corpus.
Example: processing the results of step 1:
Terms (the value is the vertex weight): pattern recognition - 0.036481; feature extraction - 0.082345; genetic algorithm - 0.0212721; data mining - 0.052312......
Term relation edges (the value is the edge weight): clustering - neural network - 0.032084; robot - neural network - 0.023124; artificial intelligence - robot - 0.031041......
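A short convergence wrapper tying this step to the TextRank sketch in the summary: the update is iterated with d = 0.25 until no weight changes by more than a tolerance, which plays the role of "iteration reaches the given threshold". The tolerance value is an assumption.

```python
def run_textrank_until_stable(graph, d=0.25, tol=1e-6, max_iter=200):
    """Iterate the vertex update with d = 0.25 until no weight changes by more than tol,
    then recompute the edge weights from the stabilized vertex weights."""
    ws = dict(graph.vertex_weight)
    for _ in range(max_iter):
        new_ws = textrank_vertex_update(graph, ws=ws, d=d, iterations=1)
        if max(abs(new_ws[v] - ws[v]) for v in ws) < tol:
            ws = new_ws
            break
        ws = new_ws
    return ws, edge_weight_update(graph, ws)
```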
Step 3: extracting domain candidate concepts based on Markov clustering
Step 3.1: building the document graph adjacency matrix
1) Build an adjacency matrix (number of term vertices × number of term vertices) from the document graph's vertices and edge weights; each entry of the matrix holds the weight of the edge connecting the corresponding pair of term vertices, and is set to 0 if the two vertices are not connected.
M_G = A_G × D^{-1}
M_G denotes the normalized adjacency matrix of document graph A_G, and D is the diagonal matrix;
2) Set the entries on the main diagonal of the document graph matrix to 1. (The main diagonal represents vertex self-loops; when the matrix is raised to an odd power in subsequent operations, diagonal values of 0 would distort the result, so they are uniformly set to 1, i.e., every vertex gains a self-loop edge.)
3) Normalize each document graph's adjacency matrix from 2) column by column:
E_ab ← E_ab / Σ_{k=1}^{K} E_kb, where E_ab denotes the entry in row a, column b of the matrix and K is the order of the matrix.
Step 3.2: Expansion operation (increasing the connectivity of the graph)
The Expansion operation computes the e-th power of the document graph matrix, M^e, according to the set parameter e;
M denotes the document graph matrix and e denotes the power to which it is raised;
Step 3.3: Inflation operation (increasing the contrast between edge information amounts)
Raise every element of the matrix to the r-th power and repeat 3) of step 3.1;
E_ab denotes the entry in row a, column b of the matrix and K is the order of the matrix;
Step 3.4: iterating the Expansion and Inflation operations
Iterate until the matrix converges, i.e., its values no longer change;
Step 3.5: the candidate concept set
For the converged matrix, loop over every row and check whether it contains an entry greater than the given threshold min; if so, the row is taken as a candidate concept C_k, otherwise it is skipped.
C = {C_1, C_2, C_3, ..., C_k}
Example: the candidate concept set obtained from the information model of step 2:
C1 = { neural network; convolution; neuron; pulse; signal processing; clustering ... };
C2 = { artificial intelligence; robot; sensor; information processing; genetic algorithm; recognition ... };
Ck = ...
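A short sketch of steps 3.4 and 3.5 on top of the markov_cluster sketch in the summary: the adjacency matrix is built in a fixed vertex order, clustering is run, and surviving rows are translated back into sets of term names such as C1 and C2. The fixed alphabetical vertex order is an illustrative assumption.

```python
def candidate_concepts(graph, e=2, r=2, min_threshold=0.1):
    """Build the adjacency matrix from the document graph, run Markov clustering,
    and translate the surviving rows into sets of term names."""
    terms = sorted(graph.vertex_weight)                 # fixed vertex order for the matrix
    index = {t: i for i, t in enumerate(terms)}
    n = len(terms)
    adjacency = [[0.0] * n for _ in range(n)]
    for (a, b), w in graph.edge_weight.items():
        adjacency[index[a]][index[b]] = w               # connected vertex pairs get the edge weight
        adjacency[index[b]][index[a]] = w
    clusters = markov_cluster(adjacency, e=e, r=r, min_threshold=min_threshold)
    return [{terms[i] for i in members} for members in clusters]
```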
Step 4: extracting the ontology domain relations based on Gspan
Step 4.1: constructing the DFS codes
E = (V_0, V_1, A, B, a)
V_0 and V_1 denote the vertex ids, A and B are the labels of vertices V_0 and V_1, and a denotes the edge id.
G = {E_1, E_2, ..., E_n}
A graph is formed by the edge codes constructed above.
Step 4.2: constructing the frequent-subgraph constraint function
Constraint function formula:
I(g) = Σ_{v∈V(g)} i_v(v) + Σ_{e∈E(g)} i_e(e)
I(g) denotes the information content of graph g, i_v(v) the information content of a single vertex, and i_e(e) the information content of a single edge;
D' and D'' denote subsets of the graph database, and d' and d'' denote subgraphs;
Step 4.3: frequent subgraph mining (SubMining)
a) Choose the edge E_min with the smallest label, add the graphs containing that edge to the graph set, and for each graph in the set iteratively check whether E_min satisfies the minimum DFS code in that graph (the five-tuples are compared from left to right; the edge that is less than another edge is the smaller DFS code);
b) If it does, mark the edge, extend the subgraph with the potentially feasible edges of the graph set on the basis of this edge, and keep mining; iterate until the code is no longer a minimum DFS code, at which point the mining of this subgraph ends;
c) Check whether the subgraph satisfies the threshold of the constraint function from step 4.2; if so, add it to the result set, then start mining again from the smallest unmarked edge label;
d) Combine the result set with the concept set C from step 3 to form the concept relation graph, i.e., the scientific and technological domain ontology.
Example: the "artificial intelligence" ontology is obtained by the subgraph mining of step 4 on the basis of steps 2 and 3; the medical-field content of this ontology is displayed in Fig. 2, where the black lines represent the associations between concepts (a sketch of this final combination step is given below).
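A final sketch of step 4.3 d): the edges of the mined frequent subgraphs are merged and each endpoint is annotated with the candidate concept it belongs to, giving a concept relation graph of the kind shown in Fig. 2. The dictionary representation and the id_to_term mapping are illustrative assumptions, not structures mandated by the patent.

```python
def build_domain_ontology(frequent_subgraphs, concept_sets, id_to_term):
    """Combine mined frequent subgraphs with the candidate concept sets C = {C1, ..., Ck}."""
    def concept_of(term):
        # Return the label of the first candidate concept set containing the term, if any.
        for k, concept in enumerate(concept_sets, start=1):
            if term in concept:
                return f"C{k}"
        return None

    ontology_edges = {}
    for subgraph in frequent_subgraphs:
        for edge in subgraph:                       # DFSEdge five-tuples from the gSpan sketch
            a, b = id_to_term[edge.v0], id_to_term[edge.v1]
            ontology_edges[(a, b)] = (concept_of(a), concept_of(b))
    return ontology_edges
```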

Claims (2)

1. A scientific and technological domain ontology construction method based on Gspan and TextRank, characterized in that the method comprises the following steps:
Step 1: preprocessing and standardizing the scientific and technological corpus
The scientific documents in the corpus are preprocessed to obtain the required basic document information, and the corpus is then standardized to obtain the term vertices and term relations needed to build the document graphs, specifically:
a) Each scientific document is first preprocessed separately: sentence segmentation, stop-word filtering, part-of-speech tagging and word segmentation yield the terms and the relation patterns between them, which become the vertices and edges of the document graph;
b) The term occurrence frequencies and term co-occurrence frequencies are then normalized and used as the weights of the term vertices and of the edges in the document graph, respectively;
Step 2: constructing the document graph information model based on TextRank
The document graph is built from the term vertices and term relations obtained in step 1, and the document graph information model is then constructed with the TextRank algorithm, specifically:
a) The terms, term relations and their weights are first assembled into the document graph;
b) The document graph information model is then constructed:
the document graph updates each term vertex weight by iterating the TextRank algorithm, thereby measuring the information content of each term for the document graph; the iteration starts from the vertex with the largest initial weight;
WS(V_i) denotes the term vertex weight and WE(e_ij) the weight of edge e_ij; a higher iteratively computed vertex weight indicates that the vertex carries more information for the domain corpus; d denotes a given parameter and Neigh(V_i) the set of vertices adjacent to V_i; each term vertex weight is updated by this computation, and after the iteration reaches the given threshold the iteration stops and each term connection edge weight is updated;
Step 3: building the candidate concept set based on Markov clustering
The document graph matrix is built from the document graph information model of step 2, and Markov clustering groups the term information of the matrix into a candidate concept set over the term vertices, specifically:
a) The document graph matrix is first built from the document graph model above;
b) The matrix is then standardized, i.e., each value is normalized within its column, and the main-diagonal elements are set to 1;
c) The Expansion operation computes the e-th power of the matrix, i.e., the matrix multiplied by itself e times;
d) The Inflation operation then raises each element of the matrix to the r-th power and re-normalizes the matrix column by column;
e) Operations c) and d) are iterated until the state of the matrix is stable;
f) Finally, the candidate concept set is built from the resulting matrix, i.e., term vertices with similar meaning and their term relation edges are combined into candidate concepts;
Step 4: constructing the ontology domain based on Gspan frequent subgraph mining
From the document graph information model of step 2, the DFS codes of the term relation edges needed for subgraph mining are first constructed, and the document graph constraint function is built by computing the information content of the term vertices and term relation edges of each document graph; the two are then combined to mine the frequent subgraphs, specifically:
a) The DFS code of each edge is constructed first:
E = (V_0, V_1, A, B, a)
V_0 and V_1 denote the vertex ids, A and B are the labels of vertices V_0 and V_1, i.e., their ranks after sorting the vertices by weight, and a denotes the edge id; a graph is formed by the edge codes constructed in this way:
G = {E_1, E_2, ..., E_n}
b) The document graph constraint function is then constructed:
I(g) = Σ_{v∈V(g)} i_v(v) + Σ_{e∈E(g)} i_e(e)
I(g) denotes the information content of graph g, i_v(v) the information content of a single vertex, and i_e(e) the information content of a single edge;
D' and D'' denote subsets of the graph database, d' and d'' denote subgraphs, and WE(e) denotes the term relation edge weight;
c) Frequent subgraph mining (SubMining):
the ontology relation graph is mined according to the DFS codes and the subset constraint function;
d) The ontology relation graph is combined with the candidate concept set of step 3 to form the scientific and technological domain concept ontology.
2. The scientific and technological domain ontology construction method based on Gspan and TextRank according to claim 1, characterized in that:
the edges are sorted in descending order of the information content obtained in step 2, and edges are chosen in turn for subgraph mining; when an edge satisfies the minimum DFS code, the subgraph is extended on the basis of this edge with the potentially feasible edges of the graph set and mining continues, iterating until the code is no longer a minimum DFS code, at which point the mining of that subgraph ends; during mining it is checked whether the information value of the subgraph satisfies the constraint-function threshold, and if it does, the subgraph is added to the result set; mining then restarts from the remaining unmined edges.
CN201810998966.7A 2018-08-30 2018-08-30 A scientific and technological domain ontology construction method based on Gspan and TextRank Pending CN109165299A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810998966.7A CN109165299A (en) 2018-08-30 2018-08-30 A scientific and technological domain ontology construction method based on Gspan and TextRank

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810998966.7A CN109165299A (en) 2018-08-30 2018-08-30 A scientific and technological domain ontology construction method based on Gspan and TextRank

Publications (1)

Publication Number Publication Date
CN109165299A true CN109165299A (en) 2019-01-08

Family

ID=64893271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810998966.7A Pending CN109165299A (en) A scientific and technological domain ontology construction method based on Gspan and TextRank

Country Status (1)

Country Link
CN (1) CN109165299A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101710343A (en) * 2009-12-11 2010-05-19 北京中机科海科技发展有限公司 Body automatic build system and method based on text mining

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
XIN HOU et al.: "GRAONTO: A graph-based approach for automatic construction of domain ontology", Expert Systems with Applications *
Hou Xin: "Research on ontology-based design reuse technology and its application in CAFD", China Doctoral Dissertations Full-text Database, Information Science and Technology *
Hou Xin et al.: "Automatic domain ontology construction algorithm for knowledge and information management", Computer Integrated Manufacturing Systems *
Zheng Xuewei: "Research on automatic ontology construction algorithms based on knowledge management", Computer Technology and Development *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113657627A (en) * 2021-08-17 2021-11-16 国网江苏省电力有限公司信息通信分公司 Defect list generation method and system in power communication network
CN113657627B (en) * 2021-08-17 2024-01-12 国网江苏省电力有限公司信息通信分公司 Defect list generation method and system in power communication network
CN114742071A (en) * 2022-05-12 2022-07-12 昆明理工大学 Chinese cross-language viewpoint object recognition and analysis method based on graph neural network
CN114742071B (en) * 2022-05-12 2024-04-23 昆明理工大学 Cross-language ideas object recognition analysis method based on graph neural network

Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
WD01  Invention patent application deemed withdrawn after publication (application publication date: 20190108)