CN113221576B - Named entity identification method based on sequence-to-sequence architecture - Google Patents
- Publication number
- CN113221576B (application CN202110608812.4A)
- Authority
- CN
- China
- Prior art keywords
- named
- sequence
- entity
- named entity
- text
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
- G06F40/295—Named entity recognition
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/24—Classification techniques
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/35—Clustering; Classification
Abstract
The invention relates to the technical field of recognition, and provides a named entity recognition method based on a sequence-to-sequence architecture.
Description
Technical Field
The invention relates to the technical field of identification, in particular to a named entity identification method based on a sequence-to-sequence architecture.
Background
The named entity recognition task is the task of capturing text segments of specific types from a given text, such as extracting persons, places, symptoms and the like. For example, for the sentence "Zhang San will take up a post in 2021", two tuples need to be extracted: (Zhang San, person) and (2021, time). The first element of each tuple is the content in the sentence, and the second element indicates what type of named entity that content is.
Named entity recognition is one of the basic technologies of information extraction and is widely applied in question answering systems, dialogue systems, translation systems and the like in natural language processing. In the most common named entity tasks, different entities do not intersect, and each entity is a contiguous text segment. However, in some specific application scenarios there may be nesting between entities; for example, the phrase "Lu Xun Memorial Hall" contains at least the following entities: (Lu Xun, person) and (Lu Xun Memorial Hall, venue), and these two entities are nested. Furthermore, named entity recognition in the medical field may also involve discontinuous entities; for example, when extracting patient symptoms, both (muscle pain, symptom) and (muscle soreness, symptom) need to be extracted from "the patient has muscle pain and soreness", where "muscle soreness" is not a contiguous text segment in the original sentence.
At present, common named entity recognition is generally solved by sequence labeling, but for nested named entity recognition and discontinuous named entity recognition, complicated labeling schemes must be designed under the sequence labeling approach. Moreover, sequence labeling is limited in that different types of named entity recognition must be handled with different model structures, so its application range is narrow.
Disclosure of Invention
The present invention is made to solve the above problems, and an object of the present invention is to provide a named entity recognition method based on a sequence-to-sequence architecture.
The invention provides a named entity identification method based on a sequence-to-sequence architecture, which has the characteristics that the method comprises the following steps: S1, constructing a named entity recognition model; S2, training the named entity recognition model through preset samples, wherein the entity sequence of each preset sample is obtained according to a predetermined ordering rule; S3, inputting the text to be tested into the named entity recognition model to obtain a recognition result sequence; and S4, decoding the recognition result sequence output by the named entity recognition model to obtain a plurality of named entities and the text labels corresponding to the named entities. The named entity recognition model comprises an encoder and a decoder, and the output of the decoder is named entity positions and text labels. In the training process, the decoder outputs the named entity positions and the output labels as sample labels according to a preset sample, obtains the corresponding named entities from the preset sample according to the named entity positions as sample entities, and is trained on the sample entities and the sample labels. The named entity sequence consists of the named entity positions and text labels output by the named entity recognition model for the text to be tested.
The named entity identification method based on the sequence-to-sequence architecture provided by the invention can also have the following characteristics: the input of the encoder is the text to be recognized, and the output of the encoder is a high-dimensional vector for each word.
In the named entity recognition method based on the sequence-to-sequence architecture provided by the invention, the method can also have the following characteristics: wherein the input of the decoder is the output of the encoder and the output of the decoder is the named entity sequence.
In the named entity recognition method based on the sequence-to-sequence architecture provided by the invention, the method can also have the following characteristics: in the named entity sequence, the named entity position is used for indicating the position of the named entity in the text to be identified, and the text label is the category corresponding to the named entity.
In the named entity recognition method based on the sequence-to-sequence architecture provided by the invention, the method can also have the following characteristics: wherein the predetermined ordering rule is: and sequencing the named entities according to the starting positions of the named entities in sequence, and sequencing the named entities with the same starting positions according to the entity lengths corresponding to the named entities.
In the named entity recognition method based on the sequence-to-sequence architecture provided by the invention, the method can also have the following characteristics: the named entity position is a pointer pointing to the sequence number of the character in the text.
Action and Effect of the invention
According to the named entity recognition method based on the sequence-to-sequence architecture, the constructed named entity recognition model comprises an encoder and a decoder, and the output of the decoder is named entity positions and text labels. After the named entity recognition model is trained through preset samples, the text to be tested is input into the named entity recognition model to obtain a recognition result sequence, and the recognition result sequence output by the named entity recognition model is decoded to obtain a plurality of named entities and the text labels corresponding to the named entities.
In addition, in the training process, the decoder outputs the named entity positions and the output labels as sample labels according to the preset sample, acquires the corresponding named entities from the preset sample as sample entities according to the named entity positions, and is trained on the sample entities and sample labels, so that feeding in named entity positions, which contain no semantic information, is prevented from degrading the training effect.
Drawings
FIG. 1 is a flow diagram of a named entity recognition method based on a sequence-to-sequence architecture in an embodiment of the present invention;
FIG. 2 is a diagram of a named entity recognition model in an embodiment of the invention.
Detailed Description
In order to make the technical means, creative features, objectives and effects of the invention easy to understand, the following embodiment specifically describes the named entity recognition method based on the sequence-to-sequence architecture with reference to the drawings.
< example >
This embodiment details the named entity recognition method based on the sequence-to-sequence architecture.
Fig. 1 is a flowchart of a named entity identification method based on a sequence-to-sequence architecture in this embodiment.
As shown in fig. 1, the named entity identification method based on sequence-to-sequence architecture includes the following steps:
and S1, constructing a named entity recognition model.
The named entity recognition model comprises an encoder and a decoder, wherein the input of the encoder is a text to be recognized, and the output of the encoder is a high-dimensional vector of words. The input of the decoder is the output of the encoder and the output of the decoder is the named entity sequence.
And S2, training the named entity recognition model through a preset sample, wherein an entity sequence of the preset sample is obtained according to a preset sequencing rule.
In the training process, the decoder outputs the named entity positions and the output labels as sample labels according to a preset sample, acquires the corresponding named entities from the preset sample as sample entities according to the named entity positions, and is trained on the sample entities and the sample labels.
and S3, inputting the text to be detected into the named entity recognition model to obtain a recognition result sequence.
And S4, decoding the recognition result sequence output by the named entity recognition model to obtain a plurality of named entities and text labels corresponding to the named entities.
In the named entity sequence, the named entity position is used for indicating the position of the named entity in the text to be identified, and the text label is the category corresponding to the named entity. The named entity location is a pointer to the sequence number of the character in the text.
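As a reading aid for step S4, the decoding of a recognition result sequence can be sketched in Python. The helper `decode_sequence` below uses assumed names and is not from the patent; it walks the flat sequence, collecting pointer tokens until a text label closes the current entity, which also handles discontinuous entities naturally:

```python
def decode_sequence(result_seq, text, label_set):
    """Recover (entity_text, label) pairs from a flat result sequence.

    Tokens found in `label_set` are text labels; any other token is a
    1-based pointer into `text`.  A label token closes the current entity,
    so discontinuous entities fall out of their pointer lists directly.
    """
    entities, pointers = [], []
    for token in result_seq:
        if token in label_set:
            # join the pointed-at characters into the entity's text
            entities.append(("".join(text[p - 1] for p in pointers), token))
            pointers = []
        else:
            pointers.append(token)
    return entities
```

For instance, `decode_sequence([1, 2, "e1", 5, 6, "e2"], "abcdefg", {"e1", "e2"})` yields `[("ab", "e1"), ("ef", "e2")]`.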
The predetermined ordering rule is: and sequencing the named entities according to the starting positions of the named entities in sequence, and sequencing the named entities with the same starting positions according to the entity lengths corresponding to the named entities.
FIG. 2 is a diagram of a named entity recognition model in this embodiment.
As shown in fig. 2, the named entity recognition model includes an encoder and a decoder.
The conversion mode after the text to be tested is input into the encoder is as follows:
When the named entities in the text to be tested are conventional named entities, the named entities are arranged in the order in which they appear in the text. For example, for the text to be tested [x_1, x_2, x_3, x_4, x_5, x_6, x_7], suppose [x_1, x_2] and [x_5, x_6] are entities of classes e_1 and e_2 respectively; then the named entity sequence of the text to be tested is represented as [1, 2, e_1, 5, 6, e_2]. In this embodiment, an entity is expressed by its named entity positions in the text to be tested instead of by its text segment, so as to avoid the ambiguity caused when the same text segment occurs more than once in the text. For example, the entity sequence corresponding to "Zhang San was born in Hunan" is [1, 2, person, 5, 6, place], the positions being counted over the characters of the original sentence.
When the named entities in the text to be tested are nested named entities, the conversion rule for the named entity sequence is that the entity which starts first is ranked first, and among entities starting at the same position the shorter one is ranked first. For example, if in the sentence [x_1, x_2, x_3, x_4, x_5, x_6, x_7] the segments [x_1, x_2], [x_1, x_2, x_3] and [x_5, x_6] are entities of classes e_1, e_2 and e_3, then the corresponding entity sequence is [1, 2, e_1, 1, 2, 3, e_2, 5, 6, e_3], each named entity being expressed by its positions in the text to be tested.
When the named entities in the text to be tested are discontinuous named entities, the conversion rule for the named entity sequence is the same: the entity which starts first is ranked first, and entities starting at the same position are ordered by entity length, the shorter one first. For example, if in the sentence [x_1, x_2, x_3, x_4, x_5, x_6, x_7] the segments [x_1, x_3], [x_1, x_2, x_3, x_5] and [x_5, x_6] are entities of classes e_1, e_2 and e_3, then the corresponding entity sequence is [1, 3, e_1, 1, 2, 3, 5, e_2, 5, 6, e_3], each named entity being expressed by its positions in the text to be tested. Here x_1 to x_n is the text to be identified, n > 1.
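The conversion rules above can be sketched in Python. The helper `encode_entities` below is illustrative rather than part of the patent; it takes hypothetical `(positions, label)` pairs, with 1-based character positions, and produces the flat entity sequence under the predetermined ordering rule:

```python
def encode_entities(entities):
    """Flatten (positions, label) pairs into a target entity sequence.

    `positions` is the list of 1-based character indices an entity covers,
    which uniformly handles conventional, nested, and discontinuous
    entities.  Entities are ordered by start position; ties are broken by
    entity length, shorter first, per the predetermined ordering rule.
    """
    ordered = sorted(entities, key=lambda ent: (ent[0][0], len(ent[0])))
    sequence = []
    for positions, label in ordered:
        sequence.extend(positions)   # pointer tokens
        sequence.append(label)       # class token closes the entity
    return sequence
```

For the nested example, `encode_entities([([1, 2, 3], "e2"), ([1, 2], "e1"), ([5, 6], "e3")])` reproduces the sequence `[1, 2, "e1", 1, 2, 3, "e2", 5, 6, "e3"]`.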
The calculation process of the encoder is as follows:
H_e = Encoder([x_1, ..., x_n]),
where H_e is the latent vector of each word after encoding.
The calculation process of the decoder is as follows:
E_e = TokenEmbed(X),
Ĥ_e = α·H_e + (1 − α)·E_e,
C_d = TokenEmbed(C),
h_t^d = Decoder(Ĥ_e, Ŷ_{<t}),
P_t = Softmax([Ĥ_e ⊗ h_t^d ; C_d ⊗ h_t^d]),
where Ŷ_{<t} is the content already generated by the decoder, α is a hyper-parameter, C is the collection of entity classes, ⊗ is the dot product, P_t is the distribution of the output tokens at the current moment, X is the input text, h_t^d is the decoder hidden state at time t, E_e is the input word embedding vector, and C_d is the class embedding vector.
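The computation of the decoder's output distribution can be sketched with NumPy. This is an illustrative reconstruction from the symbols above (α, E_e, C_d, h_t^d, P_t), not the patented implementation; the function and argument names are assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def output_distribution(H_e, E_e, C_d, h_t, alpha=0.5):
    """Distribution P_t over the n input positions plus k entity classes.

    H_e: (n, d) encoder hidden states; E_e: (n, d) input token embeddings;
    C_d: (k, d) class embeddings; h_t: (d,) decoder hidden state at step t;
    alpha: hyper-parameter blending hidden states with input embeddings.
    """
    H_hat = alpha * H_e + (1.0 - alpha) * E_e          # (n, d)
    # dot each candidate (input position or class) against the decoder state
    scores = np.concatenate([H_hat @ h_t, C_d @ h_t])  # (n + k,)
    return softmax(scores)
```

The model thus scores pointers and class labels in a single vocabulary, which is what lets one architecture emit the mixed position/label sequences described above.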
In this embodiment, at decoding time, the output obtained through P_t is a pointer index or a class index; when it becomes the decoder input at the next moment, it needs to be converted into the corresponding word. Since the target sequence may contain word indices of the input text, and an index itself contains no semantic information, the word index cannot be passed directly to the decoder as input during autoregressive generation; it must first be mapped back to the concrete word of the input.
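The index-to-word restoration described here can be sketched as a small mapping (an illustrative helper with assumed names, not from the patent):

```python
def restore_token(token, text):
    """Map a generated token back to decoder-input form.

    A pointer (1-based integer index) carries no semantics of its own, so
    it is replaced by the character it points at before being fed to the
    decoder at the next step; class-label tokens pass through unchanged.
    """
    return text[token - 1] if isinstance(token, int) else token
```

For example, `restore_token(1, "abc")` yields `"a"`, while a class token such as `"person"` is returned as-is.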
Effects and effects of the embodiments
According to the named entity recognition method based on the sequence-to-sequence architecture of this embodiment, the constructed named entity recognition model comprises the encoder and the decoder, and the output of the decoder is the named entity positions and the text labels. After the named entity recognition model is trained with the preset samples, the text to be tested is input into the model to obtain the recognition result, and the recognition result is then ordered according to the predetermined entity ordering rule to obtain the named entity sequence.
In addition, in the training process, the decoder outputs the named entity positions and the output labels as sample labels according to the preset sample, acquires the corresponding named entities from the preset sample as sample entities according to the named entity positions, and is trained on the sample entities and sample labels, so that feeding in named entity positions, which contain no semantic information, is prevented from degrading the training effect.
The above embodiments are preferred examples of the present invention, and are not intended to limit the scope of the present invention.
Claims (3)
1. A named entity identification method based on sequence-to-sequence architecture is characterized by comprising the following steps:
s1, constructing a named entity recognition model;
s2, training the named entity recognition model through a preset sample, wherein an entity sequence of the preset sample is obtained according to a preset sequencing rule;
s3, inputting the text to be detected into the named entity recognition model to obtain a recognition result sequence;
s4, decoding the recognition result sequence output by the named entity recognition model to obtain a plurality of named entities and text labels corresponding to the named entities,
wherein the named entity recognition model comprises an encoder and a decoder,
the output of the decoder is a named entity location and a text label,
in the training process, the decoder outputs a named entity position and an output label as a sample label according to the preset sample, acquires a corresponding named entity from the preset sample as a sample entity according to the named entity position, and trains the decoder according to the sample entity and the sample label,
the named entity sequence consists of the named entity position and the text label which are output by the named entity recognition model according to the text to be tested,
the named entity position is a pointer pointing to a sequence number of a character in the text to be tested, in the named entity sequence, the named entity position is used for indicating the position of the named entity in the text to be tested, the text label is a category corresponding to the named entity,
the predetermined ordering rule is:
ordering the named entities by their starting positions, and ordering named entities with the same starting position according to their entity lengths,
when the named entities are conventional named entities, the named entities are sequentially arranged according to the appearance sequence of the named entities in the text;
when the named entities are nested named entities, the conversion mode of the named entity sequence is that the named entity which starts first is ranked first, and among named entities starting at the same position the shorter one is ranked first, wherein for the text to be tested [x_1, x_2, x_3, x_4, x_5, x_6, x_7], in which [x_1, x_2], [x_1, x_2, x_3] and [x_5, x_6] are of entity classes e_1, e_2 and e_3 respectively, the named entity sequence of the text to be tested is represented as [1, 2, e_1, 1, 2, 3, e_2, 5, 6, e_3];
when the named entities are discontinuous named entities, the conversion rule of the named entity sequence is that the named entity which starts first is ranked first, named entities starting at the same position are ordered by entity length with the shorter one first, wherein for the text to be tested [x_1, x_2, x_3, x_4, x_5, x_6, x_7], in which [x_1, x_3], [x_1, x_2, x_3, x_5] and [x_5, x_6] are of entity classes e_1, e_2 and e_3 respectively, the named entity sequence of the text to be tested is represented as [1, 3, e_1, 1, 2, 3, 5, e_2, 5, 6, e_3],
The calculation process of the encoder is as follows:
H_e = Encoder([x_1, ..., x_n]),
where H_e is the latent vector of each word after encoding, and
the calculation process of the decoder is as follows:
E_e = TokenEmbed(X),
C_d = TokenEmbed(C).
2. The named entity recognition method based on sequence-to-sequence architecture as claimed in claim 1, wherein:
the input of the encoder is a text to be detected, and the output of the encoder is a high-dimensional vector of words.
3. The named entity recognition method based on sequence-to-sequence architecture as claimed in claim 1, wherein:
wherein the input of the decoder is the output of the encoder and the output of the decoder is the named entity sequence.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110608812.4A CN113221576B (en) | 2021-06-01 | 2021-06-01 | Named entity identification method based on sequence-to-sequence architecture |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113221576A CN113221576A (en) | 2021-08-06 |
CN113221576B true CN113221576B (en) | 2023-01-13 |
Family
ID=77082195
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113886522B (en) * | 2021-09-13 | 2022-12-02 | 苏州空天信息研究院 | Discontinuous entity identification method based on path expansion |
CN115983271B (en) * | 2022-12-12 | 2024-04-02 | 北京百度网讯科技有限公司 | Named entity recognition method and named entity recognition model training method |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107705784B (en) * | 2017-09-28 | 2020-09-29 | 百度在线网络技术(北京)有限公司 | Text regularization model training method and device, and text regularization method and device |
CN107680580B (en) * | 2017-09-28 | 2020-08-18 | 百度在线网络技术(北京)有限公司 | Text conversion model training method and device, and text conversion method and device |
CN109543667B (en) * | 2018-11-14 | 2023-05-23 | 北京工业大学 | Text recognition method based on attention mechanism |
CN109684452A (en) * | 2018-12-25 | 2019-04-26 | 中科国力(镇江)智能技术有限公司 | A kind of neural network problem generation method based on answer Yu answer location information |
CN111539229A (en) * | 2019-01-21 | 2020-08-14 | 波音公司 | Neural machine translation model training method, neural machine translation method and device |
CN110162795A (en) * | 2019-05-30 | 2019-08-23 | 重庆大学 | A kind of adaptive cross-cutting name entity recognition method and system |
CN110362823B (en) * | 2019-06-21 | 2023-07-28 | 北京百度网讯科技有限公司 | Training method and device for descriptive text generation model |
CN110704633B (en) * | 2019-09-04 | 2023-07-21 | 平安科技(深圳)有限公司 | Named entity recognition method, named entity recognition device, named entity recognition computer equipment and named entity recognition storage medium |
CN111310485B (en) * | 2020-03-12 | 2022-06-21 | 南京大学 | Machine translation method, device and storage medium |
CN111581361B (en) * | 2020-04-22 | 2023-09-15 | 腾讯科技(深圳)有限公司 | Intention recognition method and device |
CN112069328B (en) * | 2020-09-08 | 2022-06-24 | 中国人民解放军国防科技大学 | Method for establishing entity relation joint extraction model based on multi-label classification |
CN112784602B (en) * | 2020-12-03 | 2024-06-14 | 南京理工大学 | News emotion entity extraction method based on remote supervision |
CN112417902A (en) * | 2020-12-04 | 2021-02-26 | 北京有竹居网络技术有限公司 | Text translation method, device, equipment and storage medium |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |