CN111191010A - Movie scenario multivariate information extraction method - Google Patents

Movie scenario multivariate information extraction method

Info

Publication number
CN111191010A
Authority
CN
China
Prior art keywords
scene
event
determining
type
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911416307.9A
Other languages
Chinese (zh)
Other versions
CN111191010B (en)
Inventor
刘宏伟
刘宏蕊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Foreign Studies University
Guangdong University of Technology
Original Assignee
Tianjin Foreign Studies University
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Foreign Studies University and Guangdong University of Technology
Priority to CN201911416307.9A
Publication of CN111191010A
Application granted
Publication of CN111191010B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/34Browsing; Visualisation therefor
    • G06F16/345Summarisation for human users
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/901Indexing; Data structures therefor; Storage structures
    • G06F16/9024Graphs; Linked lists
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure provides a movie scenario multivariate information extraction method. The method comprises the following steps: extracting one or more scenes from the text; determining the events contained in each scene and the event information of those events; determining the episode type of the scene according to the events it contains; and correspondingly storing the scene, the event information, and the episode type in a graph database. The method can extract multivariate information containing the semantic level from the text, so that readers can better preview the text content.

Description

Movie scenario multivariate information extraction method
Technical Field
The disclosure relates to the field of computer software, in particular to a movie scenario multivariate information extraction method.
Background
To extract the main information from a long text so that a reader can quickly preview its content, text information is generally extracted on the basis of the text format, using rules or regular expressions. This approach still has shortcomings: it ignores information at the semantic level of the text and has difficulty extracting multivariate information from it. How to extract multivariate information containing the semantic level from a text, so that readers can better preview the text content, has therefore become an urgent technical problem.
Disclosure of Invention
The embodiments of the disclosure aim to provide a movie scenario multivariate information extraction method, so as to extract multivariate information containing the semantic level from a text and thereby enable readers to better preview the text content.
In order to achieve the above object, an embodiment of the present disclosure provides a movie scenario multivariate information extraction method, where the method includes:
extracting one or more scenes from the text;
determining events contained in the scene and event information of the events;
determining the episode type of the scene according to the events contained in the scene;
and correspondingly storing the scene, the event information and the episode type into a graph database.
The embodiment of the present disclosure further provides a movie scenario multivariate information extraction device, where the device includes:
the scene extraction module is used for extracting one or more scenes from the text;
the event determining module is used for determining events contained in the scene and event information of the events;
the episode type determining module is used for determining the episode type of the scene according to the events contained in the scene;
and the data storage module is used for correspondingly storing the scene, the event information and the episode type into a graph database.
The embodiment of the present disclosure further provides a computer device, which includes a processor and a memory for storing processor-executable instructions, where the processor, when executing the instructions, implements the steps of the movie scenario multivariate information extraction method in any of the above embodiments.
The disclosed embodiment also provides a computer readable storage medium, on which computer instructions are stored, and when the instructions are executed, the steps of the movie scenario multivariate information extraction method described in any embodiment above are implemented.
According to the technical solution provided by the embodiments of the disclosure, the events contained in each scene of the text and the event information of those events are determined, and the episode type of each scene is then determined from the events it contains; multivariate information containing the semantic level is thereby extracted, so that readers can preview the text content better.
Drawings
Fig. 1 is a flowchart of the movie scenario multivariate information extraction method provided in an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a screenplay format provided by an embodiment of the disclosure;
FIG. 3 is a schematic diagram of a data storage structure provided by an embodiment of the present disclosure;
fig. 4 is a block diagram of a movie scenario multivariate information extraction device provided in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a computer device provided by embodiments of the present disclosure;
fig. 6 is a schematic diagram of a computer-readable storage medium provided by an embodiment of the disclosure.
Detailed Description
The embodiment of the disclosure provides a movie scenario multivariate information extraction method.
In order to make those skilled in the art better understand the technical solutions in the present disclosure, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present disclosure without any inventive step should fall within the scope of protection of the present disclosure.
Referring to fig. 1, the movie scenario multivariate information extraction method provided in an embodiment of the present disclosure may include the following steps:
s1: one or more scenes are extracted from the text.
In this embodiment, the scene information may be extracted by using a regular expression.
In some embodiments, the text is a movie scenario, and scene information in a movie scenario usually starts with "EXT." or "INT."; a sentence starting with "EXT." or "INT." can therefore be located with a regular expression to determine the scene information.
For example, referring to the movie scenario shown in fig. 2, since the scene heading begins with the string "EXT", the sentence can be located by the regular expression and the scene information extracted: "We're flying once again over ROBIN HOOD TRAIL, ascending slowly."
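The embodiment does not fix a particular implementation of this step. The following Python sketch shows one way such scene extraction could look, assuming standard screenplay formatting; the pattern, function name, and dictionary keys are illustrative assumptions, not the exact expression used in the embodiment.

```python
import re

# A scene heading line in standard screenplay format begins with "INT." or "EXT.".
SCENE_HEADING = re.compile(r"^(?:INT\.|EXT\.).*$", re.IGNORECASE | re.MULTILINE)

def extract_scenes(script_text: str) -> list[dict]:
    """Split a screenplay into scenes at each INT./EXT. heading line."""
    headings = list(SCENE_HEADING.finditer(script_text))
    scenes = []
    for i, heading in enumerate(headings):
        start = heading.end()
        end = headings[i + 1].start() if i + 1 < len(headings) else len(script_text)
        scenes.append({
            "heading": heading.group().strip(),  # e.g. "EXT. ROBIN HOOD TRAIL - DAY"
            "body": script_text[start:end].strip(),
        })
    return scenes
```

Each returned scene pairs the heading line, which carries the "INT."/"EXT." marker, with the body text that the later steps analyze.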
S2: and determining events contained in the scene and event information of the events.
In some embodiments, in order to determine the events contained in the scene, part-of-speech tagging may be performed on each sentence to identify its verbs; each verb is then matched against the event types in a pre-established ACE (Automatic Content Extraction) event library to obtain the event type and event subtype matched by the verb.
For example, Table 1 (reproduced only as an image in the original filing and not recoverable here) shows part of the event types and event subtypes in ACE and the trigger words they match.
Specifically, the sentences may be part-of-speech tagged using spaCy or StanfordNLP.
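A minimal sketch of this verb matching, assuming spaCy for part-of-speech tagging; the trigger table below is a toy stand-in for the full pre-established ACE inventory, and its lemmas and type pairs are illustrative assumptions:

```python
import spacy

# Toy trigger-word table in the spirit of the ACE library:
# verb lemma -> (event type, event subtype).
TRIGGER_TABLE = {
    "marry": ("Life", "Marry"),
    "attack": ("Conflict", "Attack"),
    "meet": ("Contact", "Meet"),
    "fly": ("Movement", "Transport"),
}

# Requires the model once: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def match_events(sentence: str) -> list[tuple[str, str, str]]:
    """Part-of-speech tag a sentence and match verb lemmas to event types."""
    events = []
    for token in nlp(sentence):
        if token.pos_ == "VERB":
            hit = TRIGGER_TABLE.get(token.lemma_)
            if hit:
                events.append((token.text, *hit))  # (trigger word, type, subtype)
    return events

print(match_events("Lester meets Angela after the game."))
# -> [('meets', 'Contact', 'Meet')]
```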
In this embodiment, the event information may include person, time, and location, and may also include other content; this disclosure does not limit it. Event information can be determined by deep learning models, such as the RNN-CRF, CNN-CRF, maximum entropy, and BiLSTM-CRF models.
The BiLSTM-CRF model is taken as an example below to illustrate how event information is obtained.
The first step: a word embedding layer maps the words in the text into word vectors.
The second step: the word vectors obtained in the first step are input into a BiLSTM layer, which outputs a predicted BIO label for each word together with a score for each candidate label.
The third step: using a pre-trained CRF model, a legal BIO label sequence is output based on the label scores from the second step, under the learned constraints. The learned constraints include: the first word of a sentence starts with a "B-" or "O" label, and in a pattern "B-label1 I-label2 I-label3 ...", label1, label2, and label3 must be of the same entity type.
For example, one specific instance of BIO labeling, for a sentence glossed as "Dabao helped the Chinese team win" and labeled character by character in the original Chinese, is:
B-PER I-PER I-PER O O B-ORG I-ORG I-ORG O O
where the first three characters (the person name "Dabao") are tagged PER, the three characters of the organization "Chinese team" are tagged ORG, and "O" marks characters outside any entity.
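A minimal PyTorch sketch of such a tagger follows. It produces the per-word label scores of the second step, while the CRF decoding of the third step is elided (libraries such as pytorch-crf provide it); all dimensions, names, and the example tensor are illustrative assumptions, not the embodiment's exact model.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size: int, num_tags: int,
                 embed_dim: int = 100, hidden_dim: int = 128):
        super().__init__()
        # First step: the embedding layer maps word ids to word vectors.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Second step: a bidirectional LSTM over the word vectors.
        self.bilstm = nn.LSTM(embed_dim, hidden_dim // 2,
                              bidirectional=True, batch_first=True)
        # Emission scores: one score per candidate BIO tag per word;
        # the third step feeds these into a CRF decoder.
        self.emissions = nn.Linear(hidden_dim, num_tags)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> scores: (batch, seq_len, num_tags)
        lstm_out, _ = self.bilstm(self.embedding(token_ids))
        return self.emissions(lstm_out)

# Example: score a batch of one 6-token sentence over 5 BIO tags.
model = BiLSTMTagger(vocab_size=1000, num_tags=5)
scores = model(torch.randint(0, 1000, (1, 6)))
print(scores.shape)  # torch.Size([1, 6, 5])
```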
S3: and determining the scene type according to the events contained in the scene.
To further integrate the obtained events, a plurality of events are summarized into one episode type; the episode type of the scene can be determined with an LDA (Latent Dirichlet Allocation) model or a clustering algorithm.
The following describes the relationship between event types and episode types by taking a specific scenario as an example.
For example, at home, a person packs luggage, takes the purchased ticket, and calls a taxi. When this series of actions occurs in the same scene, events such as "packing", "carrying", and "calling" may be triggered, and these events belong to the "travel" episode type.
The LDA model is taken as an example below to show how the episode type is determined from a topic model of the event description paragraphs.
Specifically, the text d of each event in the event set D, together with the topic set T, is treated as follows: d is regarded as a word sequence <w1, w2, ..., wn>, where wi denotes the i-th word and d contains n words. All the distinct words involved in D form a large set VOCABULARY (VOC for short). LDA takes the set D as input and trains two result vectors: for each text d in D, the probability vector θd = <pt1, ..., ptk> of d corresponding to the different topics; and for each topic t in T, the probability vector φt = <pw1, ..., pwm> of t generating the different words.
In θd = <pt1, ..., ptk>, pti denotes the probability that d corresponds to the i-th topic in T, with pti = nti / n, where nti is the number of words in d assigned to the i-th topic and n is the total number of words in d.
In φt = <pw1, ..., pwm>, pwi denotes the probability that t generates the i-th word in the VOC, with pwi = Nwi / N, where Nwi is the number of occurrences of the i-th VOC word assigned to topic t and N is the total number of words assigned to topic t.
Further, using the formula p(w|d) = p(w|t) × p(t|d), with topics as the intermediate layer, the probability of a word w occurring in a text d is obtained from the current θd and φt, where p(t|d) is computed from θd and p(w|t) is computed from φt.
Using the current θd and φt, p(w|d) can be computed for a word in a text under each candidate topic, and the topic assigned to that word is then updated according to the results. If the update changes the word's topic assignment, θd and φt are in turn affected.
When the LDA model starts, θd and φt are first assigned random values. The above process is then repeated, and the final converged result is the output of LDA.
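A minimal sketch of this procedure using gensim's LDA implementation, under the assumption that each event description paragraph has already been tokenized; the sample texts, topic count, and variable names are illustrative:

```python
from gensim import corpora, models

# Toy corpus of event-description paragraphs.
event_texts = [
    "pack the luggage and take the purchased ticket",
    "call a taxi and head to the airport",
    "the army attacks the castle at dawn",
    "soldiers defend the walls during the siege",
]
tokenized = [text.lower().split() for text in event_texts]

vocabulary = corpora.Dictionary(tokenized)              # the VOC set
corpus = [vocabulary.doc2bow(doc) for doc in tokenized]

# Train LDA; θd and φt from the description correspond to the
# document-topic and topic-word distributions learned here.
lda = models.LdaModel(corpus, id2word=vocabulary,
                      num_topics=2, passes=20, random_state=0)

for doc_bow in corpus:                # θd for each event text
    print(lda.get_document_topics(doc_bow))
print(lda.show_topics(num_words=4))   # φt for each topic
```

Each event text's dominant topic can then be mapped to an episode type such as "travel".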
S4: and correspondingly storing the scene, the event information and the plot type into a graph database.
Referring to fig. 3, the scenes, the event information, and the episode types may be stored to the graph database in the form of triples.
For example, scene: (Beauty, Scene, FITTS HOUSE); time: (FITTS HOUSE, Time, NIGHT); person: (JANE, Appear, FITTS HOUSE).
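A minimal sketch of writing such triples to a Neo4j graph database with the official Python driver; the connection URI, credentials, node label, and Cypher pattern are illustrative assumptions, not mandated by the embodiment:

```python
from neo4j import GraphDatabase

# Triples mirroring the example above.
triples = [
    ("Beauty", "SCENE", "FITTS HOUSE"),
    ("FITTS HOUSE", "TIME", "NIGHT"),
    ("JANE", "APPEAR", "FITTS HOUSE"),
]

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))

def write_triple(tx, head: str, relation: str, tail: str):
    # MERGE keeps nodes unique across triples. Cypher cannot parameterize a
    # relationship type, so it is interpolated; here `relation` comes from a
    # fixed whitelist, never from user input.
    tx.run(f"MERGE (h:Entity {{name: $head}}) "
           f"MERGE (t:Entity {{name: $tail}}) "
           f"MERGE (h)-[:{relation}]->(t)",
           head=head, tail=tail)

with driver.session() as session:
    for head, relation, tail in triples:
        session.execute_write(write_triple, head, relation, tail)
driver.close()
```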
Referring to fig. 4, the present disclosure also provides a movie scenario multivariate information extraction device, the device comprising:
a scene extraction module 100, configured to extract one or more scenes from a text;
an event determining module 200, configured to determine an event included in the scene and event information of the event;
an episode type determining module 300, configured to determine an episode type of the scene according to an event included in the scene;
a data storage module 400, configured to store the scene, the event information, and the episode type in a graph database.
Referring to fig. 5, the present disclosure also provides a computer device, including a processor and a memory for storing processor-executable instructions, where the processor, when executing the instructions, implements the steps of the movie scenario multivariate information extraction method in any of the above embodiments.
Referring to fig. 6, an embodiment of the present disclosure further provides a computer-readable storage medium, on which computer instructions are stored; when executed, the instructions implement the steps of the movie scenario multivariate information extraction method in any of the above embodiments.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (for example, an improvement to a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement to a method flow). However, as technology develops, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer "integrates" a digital system onto a PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, while the original code to be compiled must be written in a specific programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It should also be clear to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may thus be considered a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
The apparatuses and modules illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions.
For convenience of description, the above device is described as being divided into various modules by function, each described separately. Of course, when implementing the present disclosure, the functionality of the various modules may be implemented in one and the same piece, or in multiple pieces, of software and/or hardware.
From the above description of the embodiments, it is clear to those skilled in the art that the present disclosure can be implemented by software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present disclosure, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. A typical computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The computer software product may include instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in the embodiments, or parts of embodiments, of the present disclosure. The computer software product may be stored in memory, which may include volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium. Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The disclosure is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The disclosure may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The disclosure may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention and are not intended to limit the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A method for extracting multivariate information of movie scripts is characterized by comprising the following steps:
extracting one or more scenes from the text;
determining events contained in the scene and event information of the events;
determining the episode type of the scene according to the events contained in the scene;
and correspondingly storing the scene, the event information and the episode type into a graph database.
2. The method of claim 1, wherein the determining the events included in the scene comprises:
respectively carrying out part-of-speech tagging on the sentences in the scene, and determining verbs in the sentences;
and matching the verb with the event type in a pre-established automatic content extraction library to obtain the event type matched with the verb.
3. The method of claim 1, wherein the event information for the event is determined by a deep learning model.
4. The method of claim 1, wherein the episode type of the scene is determined by a latent Dirichlet allocation model or a clustering algorithm.
5. The method of claim 1, wherein the scenes, the event information, and the episode types are stored to the graph database as triples.
6. The method of claim 1, wherein the event information includes people, time, and location.
7. A movie scenario multivariate information extraction device, comprising:
the scene extraction module is used for extracting one or more scenes from the text;
the event determining module is used for determining events contained in the scene and event information of the events;
the episode type determining module is used for determining the episode type of the scene according to the events contained in the scene;
and the data storage module is used for correspondingly storing the scene, the event information and the episode type into a graph database.
8. The apparatus of claim 7, wherein the event determining module comprises:
the part-of-speech tagging unit is used for respectively performing part-of-speech tagging on the sentences in the scene and determining verbs in the sentences;
and the event matching unit is used for matching the verb with an event type in a pre-established automatic content extraction library to obtain the event type matched with the verb.
9. A computer device comprising a processor and a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the method of any one of claims 1 to 6.
10. A computer readable storage medium having stored thereon computer instructions which, when executed, implement the steps of the method of any one of claims 1-6.
CN201911416307.9A 2019-12-31 2019-12-31 Movie script multi-element information extraction method Active CN111191010B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911416307.9A CN111191010B (en) 2019-12-31 2019-12-31 Movie script multi-element information extraction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911416307.9A CN111191010B (en) 2019-12-31 2019-12-31 Movie script multi-element information extraction method

Publications (2)

Publication Number Publication Date
CN111191010A 2020-05-22
CN111191010B 2023-08-08

Family

ID=70709722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911416307.9A Active CN111191010B (en) 2019-12-31 2019-12-31 Movie script multi-element information extraction method

Country Status (1)

Country Link
CN (1) CN111191010B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101316362A (en) * 2007-05-29 2008-12-03 中国科学院计算技术研究所 Movie action scene detection method based on story line development model analysis
CN102005231A (en) * 2010-09-08 2011-04-06 东莞电子科技大学电子信息工程研究院 Storage method of rich-media scene flows
CN102207948A (en) * 2010-07-13 2011-10-05 天津海量信息技术有限公司 Method for generating incident statement sentence material base
US20130166303A1 (en) * 2009-11-13 2013-06-27 Adobe Systems Incorporated Accessing media data using metadata repository
CN105389304A (en) * 2015-10-27 2016-03-09 小米科技有限责任公司 Event extraction method and apparatus
CN107977359A (en) * 2017-11-27 2018-05-01 西安影视数据评估中心有限公司 A kind of extracting method of video display drama scene information


Also Published As

Publication number Publication date
CN111191010B (en) 2023-08-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant