CN113468433B - Target event extraction data processing system - Google Patents


Info

Publication number
CN113468433B
CN113468433B (granted from application CN202111024291.4A)
Authority
CN
China
Prior art keywords
text, word, trigger, cos, argument
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111024291.4A
Other languages
Chinese (zh)
Other versions
CN113468433A
Inventor
张正义
傅晓航
林方
常宏宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongke Yuchen Technology Co Ltd
Original Assignee
Zhongke Yuchen Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongke Yuchen Technology Co Ltd
Priority to CN202111024291.4A
Publication of CN113468433A
Application granted
Publication of CN113468433B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G06F16/30 Information retrieval of unstructured textual data
    • G06F16/35 Clustering; Classification
    • G06F40/00 Handling natural language data
    • G06F40/20 Natural language analysis
    • G06F40/253 Grammatical analysis; Style critique
    • G06F40/279 Recognition of textual entities
    • G06F40/289 Phrasal analysis, e.g. finite state techniques or chunking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to a target event extraction data processing system, which comprises a preconfigured event argument role configuration table, a preset target event data structure, a memory for storing a computer program and a processor, wherein the event argument role configuration table is used for storing event argument role information records, and the event argument role information records comprise an event type field, an argument role field and an argument role priority field; the target event data structure comprises a target trigger word data segment, a target event type data segment and a target argument role data segment. The invention improves the integrity and accuracy of the target event extraction result.

Description

Target event extraction data processing system
Technical Field
The invention relates to the technical field of data processing, in particular to a target event extraction data processing system.
Background
With the rapid popularization of the internet, a huge amount of data is generated and propagated online, and finding the needed information in massive natural language text in a timely and accurate way has become increasingly urgent. Massive natural language documents are characterized by large data volume, non-uniform structure, high redundancy and rapid update. In the prior art, an event extraction model is usually trained by machine learning to extract the events a user cares about from unstructured information and present them to the user in a structured form. However, directly extracting events with such a model depends on the corpus: if the corpus is small, incomplete or ill-suited, the extraction result suffers greatly, and in particular for event types never learned from training samples, extraction accuracy is low and the extracted event information is incomplete. How to improve the integrity and accuracy of the event extraction result is therefore a technical problem to be solved urgently.
Disclosure of Invention
The invention aims to provide a target event extraction data processing system, which improves the integrity and accuracy of target event extraction results.
According to one aspect of the present invention, a target event extraction data processing system is provided, which includes a preconfigured event argument role configuration table, a preset target event data structure, a memory storing a computer program, and a processor, wherein the event argument role configuration table is used to store event argument role information records, and the event argument role information records include an event type field, an argument role field, and an argument role priority field; the target event data structure comprises a target trigger word data segment, a target event type data segment and a target argument role data segment;
when the processor executes the computer program, the following steps are realized:
Step S1: extract candidate trigger words from the text to be processed and construct a candidate trigger word list {A1, A2, …, AN}, where An is the nth candidate trigger word, the value range of n is 1 to N, and N is the number of candidate trigger words in the text to be processed;
Step S2: obtain the event type corresponding to each candidate trigger word; if a preset target event type exists among them, determine the candidate trigger word corresponding to the target event type as the target trigger word An0, store the target trigger word into the target trigger word data segment and the target event type into the target event type data segment, and execute step S3; otherwise, determine that no target event exists in the text to be processed and end the process;
Step S3: determine the target argument role list {B1, B2, …, BM} corresponding to the target event type according to the event argument role configuration table, where B1, B2, …, BM are sorted in decreasing order of priority, Bm is the mth target argument role, the value range of m is 1 to M, and M is the number of target argument roles corresponding to the target event type; initialize m = 1 and initialize the history information hm = An0;
Step S4: based on An0, Bm and hm, extract the mth argument information Cm from the text to be processed;
Step S5: compare m and M; if m < M, set m = m + 1 and hm = An0 + C1 + C2 + … + C(m−1), and return to step S4; if m = M, store {C1, C2, …, CM} into the target argument role data segment to generate the target event data.
Compared with the prior art, the invention has obvious advantages and beneficial effects. By means of the technical scheme, the target event extraction data processing system provided by the invention can achieve considerable technical progress and practicability, has wide industrial utilization value and at least has the following advantages:
according to the invention, the trigger words, the event types and the argument information are sequentially extracted, and in the argument extraction process, the argument priority is set and the historical information is fused, so that the accuracy of argument information extraction is improved, and the integrity and accuracy of the target event extraction result are further improved.
The foregoing description is only an overview of the technical solutions of the present invention, and in order to make the technical means of the present invention more clearly understood, the present invention may be implemented in accordance with the content of the description, and in order to make the above and other objects, features, and advantages of the present invention more clearly understood, the following preferred embodiments are described in detail with reference to the accompanying drawings.
Drawings
Fig. 1 is a schematic diagram of a target event extraction data processing system according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description will be given to a specific implementation and effects of a target event extraction data processing system according to the present invention with reference to the accompanying drawings and preferred embodiments.
The embodiment of the invention provides a target event extraction data processing system, which comprises a pre-configured event argument role configuration table, a preset target event data structure, a memory storing a computer program and a processor, wherein the event argument role configuration table is used for storing event argument role information records, and the event argument role information records comprise an event type field, an argument role field and an argument role priority field; the target event data structure comprises a target trigger word data segment, a target event type data segment and a target argument role data segment;
when the processor executes the computer program, the following steps are realized:
Step S1: extract candidate trigger words from the text to be processed and construct a candidate trigger word list {A1, A2, …, AN}, where An is the nth candidate trigger word, the value range of n is 1 to N, and N is the number of candidate trigger words in the text to be processed;
Step S2: obtain the event type corresponding to each candidate trigger word; if a preset target event type exists among them, determine the candidate trigger word corresponding to the target event type as the target trigger word An0, store the target trigger word into the target trigger word data segment and the target event type into the target event type data segment, and execute step S3; otherwise, determine that no target event exists in the text to be processed and end the process;
Step S3: determine the target argument role list {B1, B2, …, BM} corresponding to the target event type according to the event argument role configuration table, where B1, B2, …, BM are sorted in decreasing order of priority, Bm is the mth target argument role, the value range of m is 1 to M, and M is the number of target argument roles corresponding to the target event type; initialize m = 1 and initialize the history information hm = An0;
Step S4: based on An0, Bm and hm, extract the mth argument information Cm from the text to be processed;
Step S5: compare m and M; if m < M, set m = m + 1 and hm = An0 + C1 + C2 + … + C(m−1), and return to step S4; if m = M, store {C1, C2, …, CM} into the target argument role data segment to generate the target event data.
The embodiment of the invention sequentially extracts the trigger word, the event type and the argument information, and during argument extraction sets argument priorities and fuses history information, improving the accuracy of argument information extraction and hence the integrity and accuracy of the target event extraction result.
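Put together, the control flow of steps S1 to S5 can be sketched as follows. This is a minimal illustration, not the patented implementation: the three callables (extract_triggers, classify_event, extract_argument) are hypothetical stand-ins for the trigger word discovery, event type classification and argument information extraction models described below, and role_config stands in for the event argument role configuration table.

```python
def extract_target_event(text, target_event_type, role_config,
                         extract_triggers, classify_event, extract_argument):
    """Sketch of steps S1-S5; the callables stand in for the models."""
    # Step S1: build the candidate trigger word list {A1, ..., AN}
    candidates = extract_triggers(text)
    # Step S2: find a candidate whose event type is the target type
    target_trigger = next((a for a in candidates
                           if classify_event(text, a) == target_event_type), None)
    if target_trigger is None:
        return None  # no target event in the text to be processed
    # Step S3: argument roles {B1, ..., BM}, sorted by decreasing priority
    roles = sorted(role_config[target_event_type],
                   key=lambda r: r["priority"], reverse=True)
    # Steps S4-S5: extract each argument, fusing history information
    history = target_trigger  # hm is initialised to the trigger word
    arguments = []
    for role in roles:
        c = extract_argument(text, target_trigger, role["name"], history)
        arguments.append(c)
        history = target_trigger + "".join(arguments)  # hm = An0 + C1 + ... + Cm-1
    return {"trigger": target_trigger,
            "event_type": target_event_type,
            "arguments": dict(zip((r["name"] for r in roles), arguments))}
```

The history string grows with each extracted argument, mirroring the fusion of history information in step S5.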
In step S1, the candidate trigger word list may be constructed by pre-training a trigger word discovery model and extracting trigger words from the text to be processed, or by setting a trigger word list for extraction. The methods for constructing the trigger word discovery model are described in detail in the following embodiments:
the first embodiment,
The trigger word discovery model is obtained by training based on a preset first text sample training set and a first neural network model architecture; the first text sample training set comprises first text samples and corresponding trigger words, and the first neural network model architecture is a sequence labeling architecture;
when the processor executes the computer program, the following steps are also realized:
Step S10: obtain a first text sample from the first text sample training set, splice a preset trigger word question with the first text sample through a preset separator to obtain a first spliced text sample, encode the first spliced text sample based on a preset encoder, and set the first actual output labeling sequence corresponding to the first spliced text sample, in which all positions corresponding to the trigger word question are labeled 1, positions of trigger words in the first text sample are labeled 1, and non-trigger-word positions are labeled 0;
In an embodiment, the preset separator is [SEP], and the system is further configured with a preset mask algorithm. The mask algorithm masks the input part before [SEP] so that the masked part is only encoded, not predicted; it thus makes the first neural network model architecture label only the first text sample after [SEP] when performing sequence labeling.
Step S20: take the encoded first spliced text sample as the input of the preset first neural network model architecture to obtain a first predicted output labeling sequence, adjust the parameters of the first neural network model architecture based on the first predicted output labeling sequence and the first actual output labeling sequence of the first spliced text sample, and train to obtain the trigger word discovery model.
It can be understood that adjusting the first neural network model architecture parameters based on the first predicted output labeling sequence and the first actual output labeling sequence may directly adopt an existing model training manner, for example solving the cross entropy and ending training when the cross entropy is minimal; the description is not expanded here.
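The splicing and labeling of step S10 can be sketched character by character as below. The label of 0 on the separator itself and the character-level granularity are illustrative assumptions; the actual encoder and question wording are not specified here.

```python
def build_training_pair(question, text, trigger, sep="[SEP]"):
    """Splice a trigger word question and a text sample with a separator
    and build the actual output labeling sequence of step S10: question
    positions are labeled 1, trigger word positions in the text are
    labeled 1, and other text positions are labeled 0."""
    spliced = question + sep + text
    labels = [1] * len(question)    # question positions -> 1
    labels += [0] * len(sep)        # separator (assumed label 0; masked part)
    start = text.find(trigger)      # first occurrence of the trigger word
    for i in range(len(text)):
        labels.append(1 if start <= i < start + len(trigger) else 0)
    return spliced, labels
```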
The second embodiment,
The trigger word discovery model is obtained by training based on a preset first text sample training set and a binary classification model architecture. It should be noted that the binary classification model architecture may specifically be an SVM (support vector machine), a decision tree or the like, and may also be a sequence labeling model in which each position of the output sequence is labeled with a binary classification result; the first text sample training set comprises first text samples and corresponding trigger words;
when the processor executes the computer program, the following steps are also realized:
Step S101: obtain a first text sample from the first text sample training set, take the trigger words in the first text sample as positive sample words, slice the first text sample to obtain slice segments, and randomly extract slice segments to form non-trigger words as negative sample words;
It should be noted that, as time goes on, new trigger words appear. If non-trigger words were directly extracted from the current text as negative samples, and some of them later became trigger words, the accuracy of the model would be greatly affected. Therefore, the first text sample is sliced into slice segments, where a slice segment may be one character or several consecutive characters of the first text sample, and slice segments are randomly extracted and combined into non-trigger words as negative sample words. A randomly combined negative sample word is a true negative with high probability and turns out to be positive only with low probability, which dilutes the negative samples and improves the accuracy and reliability of the trigger word discovery model.
Step S102: encode the positive sample words and negative sample words respectively based on a preset encoder, input the encoded positive and negative sample words into the preset binary classification model architecture for classification prediction, adjust the parameters of the binary classification model architecture based on the predicted and actual classification results, and generate the trigger word discovery model.
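The negative sampling of step S101 can be sketched as below. This is one simple reading: slice segments of 1 to max_len consecutive characters are drawn at random and known trigger words are filtered out; the length bound and the filtering detail are assumptions for illustration.

```python
import random

def sample_negative_words(text, trigger_words, num_negatives, max_len=3, seed=0):
    """Slice a text sample into segments of 1..max_len consecutive
    characters and randomly draw segments that are not trigger words,
    to serve as negative sample words (step S101)."""
    rng = random.Random(seed)
    # every contiguous slice segment of length 1..max_len
    slices = [text[i:i + k]
              for k in range(1, max_len + 1)
              for i in range(len(text) - k + 1)]
    pool = [s for s in slices if s not in trigger_words]
    return [rng.choice(pool) for _ in range(num_negatives)]
```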
The third embodiment,
The system includes a preset trigger word list, a pre-trained part-of-speech analysis model and a grammar analysis model; the trigger word list includes trigger words, trigger word grammar information and/or trigger word part-of-speech information. In step S1, extracting candidate trigger words from the text to be processed includes:
Step S11: perform word segmentation and stop-word removal on the text to be processed to obtain a word segmentation list, and match the word segmentation list against the trigger words in the trigger word list to obtain a candidate word segmentation list;
Step S12: input the text to be processed into the grammar analysis model to obtain grammar information of the candidate participles, and/or input the word segmentation list and the text to be processed into the part-of-speech analysis model to obtain part-of-speech information of each candidate participle;
Step S13: filter out the candidate participles in the candidate participle list whose part-of-speech information and/or grammar information is inconsistent with that of the corresponding trigger words in the trigger word list, obtaining the candidate trigger words.
In the third embodiment, trigger words can be added to the trigger word list so that the system recognizes newly added trigger words, which makes it applicable to zero-shot scenarios for new event information; and through steps S12 and S13, wrongly extracted trigger words can be filtered out based on part of speech and grammar, improving the accuracy of trigger word extraction.
Step S13 is followed by:
Step S104: perform word segmentation and stop-word removal on the text to be processed to obtain a word segmentation list, encode each participle based on the encoder, input each encoded participle into the trigger word discovery model, and determine the participles whose classification result is trigger word as candidate trigger words;
Step S105: merge the candidate trigger words obtained in step S13 and step S104 to generate the candidate trigger word list.
The fourth embodiment,
In order to extract the trigger words in the text to be processed more comprehensively and further improve the accuracy and reliability of trigger word extraction, the third embodiment may be combined with at least one trigger word discovery model from the first and second embodiments, and the candidate trigger words obtained by the different embodiments are merged to obtain the candidate trigger word list. The implementation of event type determination is described in detail in the following embodiments:
the first embodiment,
The pre-trained event type classification model is obtained by training based on a preset second text sample training set and a second neural network model architecture. The second text sample training set comprises second text samples, the trigger word corresponding to each second text sample and the event type corresponding to each second text sample. The second neural network model architecture is a multi-classification model architecture whose output vector is {d1, d2, …, dR}, where R is the number of event type names and dr is the probability that the input trigger word belongs to the rth event type;
when the processor executes the computer program, the following steps are realized:
Step S201: obtain a second text sample from the preset second text sample training set, generate the corresponding trigger-word event type question based on the trigger word corresponding to the second text sample, splice the question with the second text sample through a preset separator to obtain a second spliced text sample, encode the second spliced text sample based on a preset encoder, and set the second actual output vector corresponding to the second spliced text sample, in which the probability value of the event type that the trigger word corresponding to the second text sample actually belongs to is 1 and the other probability values are 0;
step S202, inputting the coded second spliced text sample into the second neural network model architecture to obtain a second prediction output vector, adjusting parameters of the second neural network model architecture based on the second prediction output vector and a second actual output vector, and generating the event type classification model.
It can be understood that, the parameters of the second neural network model architecture are adjusted based on the second predicted output vector and the second actual output vector, and an existing model training manner may be directly adopted, for example, the cross entropy is solved, so that the model training is ended when the cross entropy is minimum, and the like, and the description is not repeated here.
The second embodiment,
The system further comprises an event type name list {D1, D2, …, DR}, where Dr is the rth event type name, the value range of r is 1 to R, and R is the number of event type names. In step S2, obtaining the event type corresponding to each candidate trigger word includes:
Step S21: input Dr into a preset encoder for encoding, and pool the encoding result to obtain the rth event type name pooled encoding Dr′;
The pooling process may specifically be averaging the parameters of each column, or obtaining a maximum value of the parameters of each column.
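The column-wise pooling just described can be sketched as below. A token-level encoding is represented here as a plain list of equal-length vectors; this is an illustration, not the patented encoder.

```python
def pool_encoding(encoding, mode="mean"):
    """Pool a token-level encoding into a single vector by averaging
    each column ("mean") or taking each column's maximum ("max"),
    as described for steps S21 and S22."""
    columns = list(zip(*encoding))  # one tuple per vector dimension
    if mode == "mean":
        return [sum(col) / len(col) for col in columns]
    return [max(col) for col in columns]
```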
Step S22, AnInputting the code into the coder, coding and pooling the coding result to obtain the n-th candidate trigger word poolingCode An’,Dr' and An' vector dimensions are the same;
Step S23: judge whether there exists r such that r = argmax cos(An′, Dr′) and cos(An′, Dr′) > D1, where cos(An′, Dr′) represents the cosine similarity of An′ and Dr′ and D1 is a preset first similarity threshold; if such r exists, determine the rth event type as the event type corresponding to the nth candidate trigger word.
In step S23, if no r satisfies r = argmax cos(An′, Dr′) with cos(An′, Dr′) > D1, step S24 is executed:
Step S24: obtain the preset top G values of cos(An′, Dr′) sorted from large to small, {cos1, cos2, …, cosG}, where cosg is the gth cos(An′, Dr′) value and the value range of g is 1 to G; if g satisfies cosg+1 − cosg < D2, where D2 is a preset error threshold, execute step S25, otherwise determine that the event type corresponding to the nth candidate trigger word does not exist in the event type name list;
Step S25: match the candidate trigger word corresponding to cosg against the trigger word list; if that candidate trigger word does not exist in the trigger word list, delete the corresponding cosg from {cos1, cos2, …, cosG};
Step S26: if {cos1, cos2, …, cosG} after the operation of step S25 is an empty set, determine that the event type corresponding to the nth candidate trigger word does not exist in the event type name list; otherwise, determine the event type corresponding to the largest cosg in {cos1, cos2, …, cosG} after the operation of step S25 as the event type corresponding to the nth candidate trigger word.
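The similarity test of step S23 can be sketched as follows, with vectors as plain lists; the fallback of steps S24 to S26 is signaled here simply by returning None.

```python
import math

def cosine_similarity(u, v):
    """cos(u, v) = (u . v) / (|u| |v|)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def match_event_type(trigger_pooled, type_pooled, first_threshold):
    """Step S23: return the index r maximising cos(An', Dr') if that
    maximum exceeds the first similarity threshold; otherwise return
    None, handing control to the fallback of steps S24-S26."""
    sims = [cosine_similarity(trigger_pooled, d) for d in type_pooled]
    best = max(range(len(sims)), key=sims.__getitem__)
    return best if sims[best] > first_threshold else None
```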
It should be noted that the first embodiment can quickly and accurately identify event types on which the model has been trained, while the second embodiment allows event types to be added to the event type name list, giving better extensibility, and can be applied to zero-shot scenarios, that is, event data on which the model has not been trained can also be extracted quickly and accurately.
As an embodiment, the argument information extraction model is obtained by training based on a preset third text sample training set and a third neural network model architecture. The third text sample training set includes Y third text samples {E1, E2, …, EY}, where Ey is the yth third text sample; the sample trigger word corresponding to Ey is EAy; the sample argument roles corresponding to Ey are {BE1, BE2, …, BEyM} and the corresponding sample argument information is {CE1, CE2, …, CEyM}, where the value range of y is 1 to Y, BE1, BE2, …, BEyM are sorted in decreasing order of priority, BEi is the ith sample argument role corresponding to Ey, CEi is the ith sample argument information corresponding to Ey, BEi corresponds to CEi, and the value range of i is 1 to yM; the third neural network model architecture is a sequence labeling model architecture;
when the processor executes the computer program, the following steps are also realized:
Step S100: initialize y = 1;
Step S200: initialize i = 1 and obtain the sample history information Bhy = EAy;
Step S300 based on BEi、EAyGenerating corresponding sample argument role question text BFi
Step S400: input BFi, Ey and Bhy into a preset encoder, encode Ey and BFi to obtain ELy, and input ELy into the third neural network model architecture to obtain the corresponding second predicted output labeling sequence LCi; in LCi, the positions corresponding to BFi are labeled 0;
In step S400, each piece of argument information is extracted with history information fused in, that is, in the current round of extraction the argument information extraction model already knows the sample trigger word and the previously extracted argument information; these known positions are necessarily not target labeling positions, i.e. they are inevitably labeled 0. In addition, the argument roles are ordered by preset priority, so the model first extracts the argument information that is easy to extract; as the difficulty of extraction increases, the accumulated history information grows, and the added history information guides the model to extract the next piece of argument information more quickly and accurately.
In step S400, BFi and Ey are also spliced through the preset separator, and the encoder then encodes the spliced BFi and Ey based on Bhy and the character position information corresponding to BFi and Ey. The preset separator is [SEP], and the mask algorithm makes the third neural network model architecture label only Ey after [SEP] when performing sequence labeling.
Step S500 based on Ey、CEiGenerating a second actual output labeling sequence LDiIn the second actual output tag sequence, EyCorresponding CEiPosition marked 1, non-CEiPosition is labeled 0;
Step S600: based on LCi and LDi, judge whether the currently trained third neural network model architecture reaches the preset model precision; if so, determine the currently trained third neural network model architecture as the argument information extraction model, otherwise execute step S700;
Step S700: based on LCi and LDi, adjust the current third neural network model architecture parameters; compare i and yM; if i < yM, set i = i + 1 and Bhy = EAy + CE1 + CE2 + … + CE(i−1), and return to step S300; if i = yM, execute step S800;
Step S800: compare y and Y; if y < Y, set y = y + 1 and return to step S200; if y = Y, return to step S100.
It should be noted that the questions set in the trigger word discovery model and the event type classification model serve to maintain consistency with the argument extraction model when the system adopts a cascade of models, improving the accuracy of the system. After the model parameters are determined, in actual use the corresponding question need not be input when extracting trigger words with the trigger word discovery model or obtaining the event type with the event type classification model. The question of the argument extraction model, however, still needs to be input, because it also guides the argument extraction model to label the corresponding argument information.
As an example, the step S4 includes:
Step S41: based on An0 and Bm, generate the mth argument role question text Fm; input the text to be processed, Fm and hm into a preset encoder, encode the text to be processed and Fm to obtain Lm, and input Lm into the argument information extraction model to obtain the corresponding second predicted output labeling sequence LCm;
It should be noted that step S41 corresponds to step S400: the text to be processed and Fm are spliced through the preset separator, and then encoded based on the character position information of the text to be processed and Fm together with the current history information.
Step S42: based on LCm and Lm, extract the mth argument information Cm from the text to be processed.
It should be noted that, since the argument information extraction model only labels the information corresponding to the text to be processed, while the actually input encoded text is the spliced text to be processed and Fm, the corresponding mth argument information Cm is determined from the positional relationship between the original characters of the text to be processed and Fm, combined with the sequence labeling result output by the argument information extraction model.
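The offset bookkeeping described here can be sketched as follows, at character level with hypothetical question and separator lengths; contiguity of the labeled span is not enforced in this illustration.

```python
def decode_argument(predicted_labels, question_len, sep_len, text):
    """Step S42 sketch: the model labels the spliced "question [SEP] text"
    sequence, so positions labeled 1 must be mapped back through the
    question/separator offset onto the original text characters."""
    offset = question_len + sep_len
    text_labels = predicted_labels[offset:offset + len(text)]
    return "".join(ch for ch, lab in zip(text, text_labels) if lab == 1)
```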
It should be noted that the argument role priorities may be determined directly from historical experience, may be determined from input, or may be determined from the sample argument role distribution. As an embodiment, when the processor executes the computer program, the following steps are further implemented:
Step S301: determine the priorities of the argument roles corresponding to each event type whose argument role priorities are to be determined, based on the sample argument role set formed by all sample argument roles in the preset third text sample training set. The sample argument role set is {BEX1, BEX2, …, BEXZ}, where BEXz is the zth sample argument role, the value range of z is 1 to Z, and Z is the number of sample argument roles in the set; the argument role set corresponding to the event type whose argument role priorities are to be determined is {BX1, BX2, …, BXW}, where BXw is the wth argument role corresponding to that event type, the value range of w is 1 to W, and W is the number of argument roles corresponding to that event type;
the step S301 specifically includes:
step S302, inputting BXw into a preset encoder for encoding, and pooling the encoding result to obtain the argument role pooled encoding BXw' to be judged;
Step S303, inputting BEXz into the preset encoder for encoding, and pooling the encoding result to obtain the sample argument role pooled encoding BEXz', where BXw' and BEXz' have the same vector dimensions, so that the cosine similarity cos(BXw', BEXz') can be computed;
Step S304, obtaining the priority weight Pw corresponding to BXw:

Pw = (formula image not reproduced in the source);
Step S305, generating the argument role priority corresponding to the event type whose argument role priority is to be judged, from large to small according to the priority weight Pw corresponding to each BXw.
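Since the formula image for Pw is not reproduced in this extraction, the following sketch assumes one plausible reading — Pw as the mean cosine similarity between BXw' and all sample role pooled encodings BEXz' — and then sorts roles by that weight in descending order. All names are illustrative, and the aggregation is an assumption, not the patent's formula:

```python
import math

def cos(u, v):
    # plain cosine similarity between two equal-length vectors
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def priority_order(role_vecs, sample_vecs):
    """role_vecs: {role_name: pooled vector BXw'};
    sample_vecs: list of pooled vectors BEXz'.
    P_w is taken here as the mean cosine similarity to all sample
    roles -- an assumed stand-in for the unreproduced formula."""
    weights = {w: sum(cos(v, s) for s in sample_vecs) / len(sample_vecs)
               for w, v in role_vecs.items()}
    return sorted(weights, key=weights.get, reverse=True)
```

Roles that resemble frequent sample roles thus come first, matching the large-to-small generation of step S305.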
It should be noted that all encoders involved in the embodiments of the present invention are the same encoder. As an embodiment, the system further includes a pre-configured word sequence number mapping table storing the mapping relationship between words and sequence numbers, each word corresponding to a unique sequence number. The encoder converts each word of the text to be encoded into its corresponding sequence number based on the word sequence number mapping table, then encodes each sequence number into a vector of a preset dimension based on the position information of each sequence number in the text to be encoded; if the encoder also receives history information, it encodes each sequence number into a vector of the preset dimension based on both the history information and the position information of each sequence number in the text to be encoded. Specifically, the encoder is a pre-trained language model, such as a BERT model, a RoBERTa model, or an ALBERT model.
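A toy illustration of this word-to-sequence-number encoding scheme follows. A real system would delegate to BERT/RoBERTa/ALBERT; the vector construction below is a hypothetical stand-in that only shows how the sequence number and position both feed the vector:

```python
def build_vocab(words):
    # each word gets a unique sequence number (the mapping table)
    return {w: i for i, w in enumerate(words, start=1)}

def encode(text_words, vocab, dim=4):
    """Toy stand-in for the patent's encoder: map each word to its
    sequence number, then to a vector mixing the number with its
    position in the text (illustrative arithmetic only)."""
    vectors = []
    for pos, w in enumerate(text_words):
        sid = vocab[w]
        vectors.append([float(sid + pos * k) for k in range(dim)])
    return vectors
```

Two occurrences of the same word at different positions thus receive different vectors, which is the property the position information is meant to provide.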
It should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of some of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A target event extraction data processing system is characterized in that,
the system comprises a preconfigured event argument role configuration table, a preset target event data structure, a memory storing a computer program and a processor, wherein the event argument role configuration table is used for storing event argument role information records, and the event argument role information records comprise an event type field, an argument role field and an argument role priority field; the target event data structure comprises a target trigger word data segment, a target event type data segment and a target argument role data segment;
when the processor executes the computer program, the following steps are realized:
step S1, extracting candidate trigger words from the text to be processed, and constructing a candidate trigger word list {A1, A2, …, AN}, where An is the nth candidate trigger word, n ranges from 1 to N, and N is the number of candidate trigger words in the text to be processed;
step S2, obtaining the event type corresponding to each candidate trigger word; if a preset target event type exists, determining the candidate trigger word corresponding to the target event type as the target trigger word An0, storing the target trigger word into the target trigger word data segment and the target event type into the target event type data segment, and executing step S3; otherwise, determining that no target event exists in the text to be processed, and ending the process;
step S3, determining a target argument role list {B1, B2, …, BM} corresponding to the target event type according to the event argument role configuration table, where B1, B2, …, BM are in order of decreasing priority, Bm is the mth target argument role, m ranges from 1 to M, and M is the number of target argument roles corresponding to the target event type; initializing m = 1 and initializing the history information hm = An0;
Step S4, extracting the mth argument information Cm from the text to be processed based on An0, Bm, and hm;
Step S5, comparing m and M: if m < M, setting m = m + 1 and hm = An0 + (formula image not reproduced in the source), and returning to step S4; if m = M, storing {C1, C2, …, CM} into the target argument role data segment to generate the target event data;
the system further comprises a pre-trained argument information extraction model, the argument information extraction model being obtained by training based on a preset third text sample training set and a third neural network model architecture, the third text sample training set comprising Y third text samples {E1, E2, …, EY}, where Ey is the yth third text sample, the sample trigger word corresponding to Ey is EAy, the sample argument roles corresponding to Ey are {BE1, BE2, …, BEyM}, and the sample argument information corresponding to Ey is {CE1, CE2, …, CEyM}; y ranges from 1 to Y, BE1, BE2, …, BEyM are in order of decreasing priority, BEi is the ith sample argument role corresponding to Ey, CEi is the ith sample argument information corresponding to Ey, BEi corresponds to CEi, and i ranges from 1 to yM; the third neural network model architecture is a sequence labeling model architecture;
when the processor executes the computer program, the following steps are also realized:
step S100, initializing y = 1;
step S200, initializing i =1, and obtaining sample history information Bhy=EAy
Step S300, generating the corresponding sample argument role question text BFi based on BEi and EAy;
Step S400, inputting BFi, Ey, and Bhy into a preset encoder, encoding Ey and BFi to obtain ELy, and inputting ELy into the third neural network model architecture to obtain a corresponding second prediction output labeling sequence LCi, where the positions in LCi corresponding to Bhy are labeled 0;
step S500, generating a second actual output labeling sequence LDi based on Ey and CEi, where in the second actual output labeling sequence the positions of Ey corresponding to CEi are labeled 1 and non-CEi positions are labeled 0;
step S600, judging based on LCi and LDi whether the currently trained third neural network model architecture reaches a preset model precision; if so, determining the currently trained third neural network model architecture as the argument information extraction model; otherwise, executing step S700;
step S700, adjusting the current third neural network model architecture parameters based on LCi and LDi, and comparing i with yM: if i < yM, setting i = i + 1 and Bhy = EAy + (formula image not reproduced in the source), and returning to step S300; if i = yM, executing step S800;
step S800, comparing y with Y: if y < Y, setting y = y + 1 and returning to step S200; if y = Y, returning to step S100;
the step S4 includes:
step S41, generating the mth argument role question text Fm based on An0 and Bm; inputting the text to be processed, Fm, and hm into a preset encoder, encoding the text to be processed and Fm to obtain Lm, and inputting Lm into the argument information extraction model to obtain a corresponding second prediction output labeling sequence LCm;
step S42, extracting the mth argument information Cm from the text to be processed based on LCm and Lm.
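The control flow of steps S1 through S5 can be sketched as follows, with the trigger extractor, event type classifier, role list, and argument extractor abstracted as callables. All names are placeholders, and since the history-update formula is an image not reproduced in the source, this sketch simply assumes the history accumulates each extracted argument:

```python
def extract_target_event(text, find_triggers, event_type_of, target_type,
                         roles_by_priority, extract_argument):
    """Orchestrate steps S1-S5 with the models abstracted as callables."""
    for trigger in find_triggers(text):            # S1: candidate triggers
        if event_type_of(trigger) == target_type:  # S2: target type check
            break
    else:
        return None  # no target event in the text to be processed
    record = {"trigger": trigger, "type": target_type, "args": []}
    history = trigger  # h_1 = A_n0
    for role in roles_by_priority(target_type):    # S3: roles by priority
        arg = extract_argument(text, trigger, role, history)  # S4
        record["args"].append(arg)
        history = trigger + arg  # S5: history update (assumed form)
    return record
```

Extracting higher-priority roles first means later, harder roles see the earlier answers through the history, which is the point of the priority ordering.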
2. The system of claim 1,
further comprising an event type name list {D1, D2, …, DR}, where Dr is the rth event type name, r ranges from 1 to R, and R is the number of event type names; in step S2, obtaining the event type corresponding to each candidate trigger word comprises:
step S21, inputting Dr into a preset encoder for encoding, and pooling the encoding result to obtain the rth event type name pooled encoding Dr';
Step S22, inputting An into the encoder, encoding and pooling the encoding result to obtain the nth candidate trigger word pooled encoding An', where Dr' and An' have the same vector dimensions;
step S23, judging whether there exists r such that r satisfies argmax cos(An', Dr') and cos(An', Dr') > D1, where cos(An', Dr') denotes the cosine similarity between An' and Dr' and D1 is a preset first similarity threshold; if such r exists, determining the rth event type as the event type corresponding to the nth candidate trigger word.
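The argmax-plus-threshold matching of step S23 can be sketched as follows (illustrative names; the pooled encodings are assumed to be plain vectors, and the real thresholds would come from the configuration):

```python
import math

def cosine(u, v):
    # cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def match_event_type(trigger_vec, type_vecs, d1):
    """Step S23 sketch: pick the event type name whose pooled encoding
    is most similar to the trigger's, but only if that best similarity
    exceeds the threshold D1; otherwise report no match."""
    best_r, best_sim = None, -1.0
    for r, tv in enumerate(type_vecs):
        sim = cosine(trigger_vec, tv)
        if sim > best_sim:
            best_r, best_sim = r, sim
    return best_r if best_sim > d1 else None
```

The threshold prevents a trigger word that is weakly similar to every type name from being assigned an arbitrary type.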
3. The system of claim 2,
in step S23, if no r exists such that r satisfies argmax cos(An', Dr') and cos(An', Dr') > D1, step S24 is executed:
step S24, obtaining the preset first G cos(An', Dr') values sorted from large to small, {cos1, cos2, …, cosG}, where cosg is the gth cos(An', Dr') value and g ranges from 1 to G; if any g satisfies cosg − cosg+1 < D2, where D2 is a preset error threshold, executing step S25; otherwise, determining that the event type corresponding to the nth candidate trigger word does not exist in the event type name list;
step S25, matching the candidate trigger word corresponding to each cosg against the trigger word list, and if the candidate trigger word does not exist in the trigger word list, deleting the corresponding cosg from {cos1, cos2, …, cosG};
step S26, if {cos1, cos2, …, cosG} after the operation of step S25 is an empty set, determining that the event type corresponding to the nth candidate trigger word does not exist in the event type name list; otherwise, determining the event type corresponding to the largest cosg in {cos1, cos2, …, cosG} after the operation of step S25 as the event type corresponding to the nth candidate trigger word.
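A sketch of the fallback logic of steps S24 through S26 follows, under the simplifying assumption that each top-G similarity value is tied to one candidate trigger word and that the adjacent-gap test uses the descending order of similarities. Names and data shapes are illustrative:

```python
def fallback_event_type(scored, trigger_words, trigger_list, g, d2):
    """scored: descending list of (similarity, event_type_index);
    trigger_words[i]: candidate trigger tied to scored[i] (assumed pairing);
    trigger_list: set of known trigger words; g, d2: preset G and D2."""
    top = scored[:g]
    # S24: require some adjacent pair of similarities to be close
    gaps_ok = any(top[i][0] - top[i + 1][0] < d2 for i in range(len(top) - 1))
    if not gaps_ok:
        return None
    # S25: drop entries whose candidate trigger is not in the trigger list
    kept = [(s, r) for (s, r), w in zip(top, trigger_words) if w in trigger_list]
    # S26: empty set -> no event type; otherwise take the largest similarity
    return max(kept)[1] if kept else None
```

The gap test only admits cases where the best candidates are nearly tied, i.e. where the argmax alone was not decisive.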
4. The system of claim 1,
the system comprises a preset trigger word list, a pre-trained part-of-speech analysis model, and a grammar analysis model, the trigger word list comprising trigger words and trigger word grammar information and/or trigger word part-of-speech information; in step S1, extracting candidate trigger words from the text to be processed comprises:
step S11, performing word segmentation and stop word removal on the text to be processed to obtain a word segmentation list, and matching the word segmentation list with the trigger words in the trigger word list to obtain a candidate word segmentation list;
step S12, inputting the text to be processed into the grammar analysis model to obtain grammar information of candidate participles, and/or inputting the participle list and the text to be processed into the part-of-speech analysis model to obtain part-of-speech information of each candidate participle;
and step S13, filtering out candidate participles in the candidate participle list whose part-of-speech information and/or grammatical information is inconsistent with that of the corresponding trigger words in the trigger word list, to obtain the candidate trigger words.
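The filtering of steps S11 and S13 reduces to keeping participles that both appear in the trigger word list and agree with the stored part-of-speech information; a minimal sketch (function and variable names are hypothetical, and `pos_of` stands in for the part-of-speech analysis model):

```python
def filter_candidates(participles, pos_of, trigger_info):
    """trigger_info: {trigger word: stored part-of-speech tag}.
    S11: keep participles that match an entry in the trigger word list.
    S13: drop those whose analyzed POS disagrees with the stored POS."""
    candidates = [w for w in participles if w in trigger_info]   # S11
    return [w for w in candidates
            if pos_of(w) == trigger_info[w]]                     # S13
```

The POS check removes, for example, a noun use of a word whose trigger entry requires a verb.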
5. The system of claim 4,
the system further comprises a trigger word discovery model for extracting candidate trigger words from the text to be processed, the trigger word discovery model being obtained by training based on a preset first text training set and a binary classification model architecture, the first text training set comprising first text samples and corresponding trigger words;
when the processor executes the computer program, the following steps are also realized:
step S101, obtaining a first text sample from the first text sample training set, taking the trigger words in the first text sample as positive sample words, slicing the first text sample into sliced words, and randomly extracting sliced words that do not form trigger words as negative sample words;
step S102, encoding the positive sample words and negative sample words respectively based on a preset encoder, inputting the encoded positive and negative samples into the preset binary classification model architecture for classification prediction, adjusting the parameters of the binary classification model architecture based on the predicted and actual classification results of the samples, and generating the trigger word discovery model.
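Negative sampling as described in step S101 can be sketched as follows. The slice lengths and the sampling scheme are assumptions for illustration, not the patent's specification:

```python
import random

def build_training_pairs(sample_text, triggers, n_negative, seed=0):
    """Step S101 sketch: the sample's trigger words become positive
    words; random character slices of the sample that are not trigger
    words become negative words (slice length capped at 5, assumed)."""
    rng = random.Random(seed)
    positives = list(triggers)
    negatives = []
    while len(negatives) < n_negative:
        i = rng.randrange(len(sample_text))
        j = rng.randrange(i + 1, min(i + 5, len(sample_text)) + 1)
        piece = sample_text[i:j]
        if piece not in triggers and piece.strip():
            negatives.append(piece)
    return positives, negatives
```

The resulting positive and negative words would then be encoded and fed to the binary classifier of step S102.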
6. The system of claim 5,
the step S13 is followed by:
step S104, performing word segmentation and stop word processing on the text to be processed based on the encoder to obtain a word segmentation list, encoding each word segmentation, inputting each word segmentation into the trigger word discovery model, and determining the word segmentation of which the classification result is the trigger word as a candidate trigger word;
step S105, merging the candidate trigger words obtained in step S13 and step S104 to generate the candidate trigger word list.
7. The system of any one of claims 1, 2, 5 or 6,
the system further comprises a pre-configured word sequence number mapping table storing the mapping relationship between words and sequence numbers, each word corresponding to a unique sequence number; the encoder converts each word of the text to be encoded into its corresponding sequence number based on the word sequence number mapping table, then encodes each sequence number into a vector of a preset dimension based on the position information of each sequence number in the text to be encoded; if the encoder also receives history information, it encodes each sequence number into a vector of the preset dimension based on both the history information and the position information of each sequence number in the text to be encoded.
8. The system of claim 7,
the encoder is a pre-trained language model, the pre-trained language model comprising a BERT model, a RoBERTa model, or an ALBERT model.
CN202111024291.4A 2021-09-02 2021-09-02 Target event extraction data processing system Active CN113468433B (en)

Publications (2)

Publication Number Publication Date
CN113468433A CN113468433A (en) 2021-10-01
CN113468433B true CN113468433B (en) 2021-12-07





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant