CN109241255A - Intent recognition method based on deep learning - Google Patents

Intent recognition method based on deep learning (Download PDF)

Info

Publication number
CN109241255A
Authority
CN
China
Prior art keywords
vector
word
deep learning
sentence
blstm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810945991.9A
Other languages
Chinese (zh)
Other versions
CN109241255B (en)
Inventor
何婷婷
潘敏
汤丽
王逾凡
孙博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central China Normal University
Original Assignee
Central China Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central China Normal University filed Critical Central China Normal University
Priority to CN201810945991.9A priority Critical patent/CN109241255B/en
Publication of CN109241255A publication Critical patent/CN109241255A/en
Application granted granted Critical
Publication of CN109241255B publication Critical patent/CN109241255B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Machine Translation (AREA)

Abstract

A dialogue-system intent recognition method based on deep learning. First, keywords are extracted from a dialogue corpus by word-frequency weight and used as rules for intent recognition; the dialogue D to be recognized is matched against these rules to obtain an intent classification result P_A. Second, a deep learning model CNN-BLSTM, which combines a convolutional neural network CNN with a bidirectional long short-term memory network BLSTM, is trained on the dialogue corpus, and the trained CNN-BLSTM model is used to recognize the dialogue D and obtain an intent classification result P_B. Finally, P_A and P_B are linearly fused to obtain the final intent of dialogue D. The invention can effectively improve the accuracy of user intent recognition.

Description

Intent recognition method based on deep learning
Technical field
The invention belongs to the technical field of human-computer dialogue systems, and in particular relates to an intent recognition method based on deep learning.
Background art
Human-computer dialogue systems are one of the core technologies of artificial intelligence and are set to become a new mode of human-computer interaction, so they have great research value. People have long pursued the ability to communicate with computers in natural language, and this matters for a simple reason: users could interact with a computer in the language they know best, without spending large amounts of time learning and adapting to computer languages. With the arrival of the Internet era, demand for dialogue systems has grown sharply. For example, the intelligent customer-service agents widely used in online shopping not only greatly improve the efficiency of communication between people and computers but also make everyday life and work more convenient. Major technology companies have joined the research effort and released related products, such as Apple's Siri, Microsoft's Cortana, and Baidu's Xiaodu. In the near future, today's mainstream input devices may no longer be needed, and natural language may become the most widely used mode of human-computer interaction. The main steps of human-computer natural-language interaction are: speech recognition, natural language understanding, dialogue state tracking, natural language generation, and speech synthesis.
Natural language understanding is a key module in a dialogue system. Its role is to convert the natural language spoken by the user into a semantic representation the computer can understand, so that the user's natural language is understood. To understand what the user says, the system must know the domain involved in the user's utterance, that is, the intent the user wants to express; user intent recognition achieves this through classification. Improving the accuracy of user intent recognition greatly helps the dialogue system generate reasonable replies.
In a dialogue system, correctly identifying the user's intent is the basis for generating a reasonable reply. If the user's intent cannot be judged correctly, the dialogue system will produce irrelevant replies, which are of no use. Therefore, to improve system performance and the user experience, accurately identifying the user's intent is particularly important. In addition, when user intent is judged accurately, a commercial intelligent dialogue system can offer users useful recommendations for consumption, entertainment, products, and so on, which has considerable commercial value. In summary, user intent recognition has important research value and significance.
Summary of the invention
The problem to be solved by the invention is to use deep learning techniques to improve the accuracy of user intent recognition.
The technical solution of the invention provides a dialogue-system intent recognition method based on deep learning. First, keywords are extracted from a dialogue corpus by word-frequency weight and used as rules for intent recognition, and the dialogue D to be recognized is matched against these rules to obtain an intent classification result P_A. Then a deep learning model CNN-BLSTM, which combines a convolutional neural network CNN with a bidirectional long short-term memory network BLSTM, is trained on the dialogue corpus, and the trained CNN-BLSTM model is used to recognize the dialogue D and obtain an intent classification result P_B. Finally, P_A and P_B are linearly fused to obtain the final intent of dialogue D.
Moreover, extracting keywords from the dialogue corpus by word-frequency weight as rules for intent recognition includes performing the following processing for each category:
Perform word segmentation and count the number N of distinct entries under the category; form all entries into a vocabulary; and count, within the category, the number of occurrences W_i of the i-th word, the total number of sentences M, and the length L_j of the j-th sentence, where i = 1, 2, 3, ... N and j = 1, 2, 3, ... M;
Calculate the average sentence length AveL of the category, AveL = (L_1 + L_2 + ... + L_M) / M;
Calculate the word-frequency weight F_i of the i-th word from W_i, N, AveL, and ∑S,
where ∑S denotes the cumulative sum of the lengths S of the sentences in the category that contain the i-th word;
After the word-frequency weight of every word under the category has been obtained, sort the entries of the vocabulary by word-frequency weight in descending order and select the top-ranked entries as keywords, which serve as the rules of the category.
Moreover, training the deep learning model CNN-BLSTM on the dialogue corpus is implemented as follows.
After word segmentation of the training corpus, the vector representation x_b of each word in a sentence is combined into the sentence's vector representation X = [x_1, x_2, x_3, ... x_l], where l is the sequence length and b = 1, 2, 3, ... l;
The sentence vector X is fed into the convolutional layer of the convolutional neural network, and all resulting feature maps s_a are combined into the output S = [s_1, s_2, s_3, ... s_n], where n is the total number of feature maps and a = 1, 2, 3, ... n;
The structure of S is rearranged, and the resulting vector V is fed into the BLSTM neural network model; the BLSTM is a bidirectional long short-term memory network consisting of a forward LSTM and a backward LSTM. For each time step t, the forward LSTM outputs a forward hidden state and the backward LSTM outputs a backward hidden state; the two hidden states are combined into a vector h_t. The vectors of all time steps form the vector H of the entire sentence, which carries the contextual semantic information;
Applying max pooling to S yields a vector O, which contains the most important semantic features and category-feature information of the sentence;
Vector H and vector O are concatenated into a vector T;
T is used as the final feature vector of the dialogue sentence; a fully connected layer over T produces an intermediate quantity y_c, the probability of each category is obtained from y_c, and the category with the largest probability is chosen as the intent recognition result P_B.
Moreover, the intent classification results P_A and P_B are linearly fused to obtain the final intent-category probability distribution P, and the intent category with the largest probability in P is chosen as the final intent recognition result.
The present invention provides an intent recognition method based on deep learning. On one hand, keywords are extracted from the dialogue corpus by word-frequency weight and used as rules for intent recognition, and the dialogue D to be recognized is matched against these rules to obtain an intent classification result P_A. On the other hand, a deep learning model (CNN-BLSTM) that combines a convolutional neural network (CNN) with a bidirectional long short-term memory network (BLSTM) is trained on the dialogue corpus, and the trained model is used to recognize dialogue D and obtain an intent classification result P_B. Finally, P_A and P_B are linearly fused to obtain the final intent of dialogue D. The intent recognition method based on the combined CNN-BLSTM model overcomes the shortcoming of basic deep learning models, which consider only sequence information in intent recognition and ignore important local information. In addition, incorporating sentence length into the word-frequency weight calculation used to derive the rules measures the importance of a word more effectively, so more representative words are selected as rules. Finally, combining the classification result of the CNN-BLSTM model with the rule-matching result yields the user's intent more accurately. Comparative experiments against the best existing models on several public intent-recognition data sets show that the combined deep learning model together with rule matching that incorporates sentence-length information achieves a clear improvement in recognition accuracy. The invention can effectively improve the accuracy of user intent recognition; correctly identifying user intent is the basis for an intelligent dialogue system to generate reasonable replies, improves system performance, and enhances the user experience, so it has great value and research significance.
Brief description of the drawings
Fig. 1 is a flow chart of intent recognition in an embodiment of the present invention.
Detailed description of the embodiments
The present invention proposes a combined deep learning model that incorporates a convolutional neural network (CNN) into a bidirectional long short-term memory network (BLSTM) to perform intent classification, incorporates the sentence-length information of the corpus as an influencing factor into rule extraction, and further improves the accuracy of intent recognition by combining the classification result of rule matching.
The combined deep learning model proposed by the invention is called the CNN-BLSTM model and is used for intent recognition. Traditional deep learning models for intent recognition usually adopt recurrent neural networks (RNN) and their variants such as the long short-term memory network (LSTM). Such networks capture the sequential information of a sentence well but miss important local information. On this basis, the invention incorporates a convolutional neural network (CNN) into the traditional model to obtain the important local semantic information of the sentence, so the combined model can use more information to capture the user's intent.
To address the unreasonable rule-extraction methods of traditional matching approaches, the invention takes the sentence-length information of the corpus into account as an impact factor in the word-frequency weight calculation, so that more reasonable keywords are obtained as rules. Generally, the same word is more important in a short sentence than in a long one (for example, the word "song" is more important in "I want to listen to a song" than in "Singing now would probably disturb my sleeping roommate"). Making good use of sentence-length information allows the user's true intent to be captured better.
The invention combines the result of rule matching with that of the combined deep learning model, so that the traditional method and the deep learning method complement each other and further improve the accuracy of user intent recognition. First, keywords are extracted from the dialogue corpus by word-frequency weight as rules for intent recognition, and the dialogue D to be recognized is matched against these rules to obtain an intent classification result P_A. Then a deep learning model (CNN-BLSTM) combining a convolutional neural network (CNN) and a bidirectional long short-term memory network (BLSTM) is trained on the dialogue corpus, and the trained model is used to recognize dialogue D and obtain an intent classification result P_B. Finally, P_A and P_B are linearly fused to obtain the final intent of dialogue D.
Referring to Fig. 1, the specific implementation process of the embodiment is as follows:
Step 1. For the dialogue corpus (training corpus) with labeled categories, perform the following processing for each category to obtain the rules of each category:
Perform word segmentation with the Jieba segmentation toolkit and count the number N of distinct entries under the category; form all entries into a vocabulary. Also count, within the category, the number of occurrences W_i of the i-th word (i = 1, 2, 3, ... N), the total number of sentences M, and the length L_j of the j-th sentence (j = 1, 2, 3, ... M). The Jieba toolkit is existing software and is not described further here.
Then the average sentence length of the category is calculated: the lengths of all sentences of the category are summed and divided by the total number of sentences, giving the average length AveL, i.e.

AveL = (L_1 + L_2 + ... + L_M) / M    formula (1)
The word-frequency weight F_i of the i-th word is then calculated from the result of formula (1) as shown in formula (2).
In formula (2), F_i denotes the word-frequency weight of the i-th word of the category, W_i the number of occurrences of word i in the category, N the number of distinct entries under the category, AveL the average sentence length of the category, S the length of a sentence of the category that contains the i-th word, and ∑S the cumulative sum of the lengths of the sentences that contain the i-th word;
The word-frequency weight of every word under the category is obtained according to formula (2), and the entries of the vocabulary are sorted by word-frequency weight in descending order. The embodiment selects the top 1% of entries as keywords, and these keywords serve as the rules of the category. In a specific implementation, the user can preset the selection ratio; the embodiment uses the preferred ratio of 1%.
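By way of illustration only, the following Python sketch mirrors Step 1 under stated assumptions: sentence length is counted in tokens, the Jieba toolkit is used for segmentation as in the embodiment, and, because formula (2) itself is not reproduced in the text above, the weighting used here is merely an assumed stand-in built from the same quantities W_i, N, AveL and ∑S (it favours frequent words that occur in sentences shorter than the category average).

```python
from collections import defaultdict

import jieba  # Jieba word segmentation toolkit, as used in the embodiment


def extract_rules(sentences_by_category, top_ratio=0.01):
    """Extract keyword rules per category by word-frequency weight.

    sentences_by_category: dict mapping category name -> list of sentences (str).
    Returns a dict mapping category name -> set of keyword rules.
    """
    rules = {}
    for category, sentences in sentences_by_category.items():
        tokenized = [list(jieba.cut(s)) for s in sentences]
        M = len(tokenized)                          # total number of sentences
        lengths = [len(t) for t in tokenized]
        ave_len = sum(lengths) / M                  # AveL, formula (1)

        counts = defaultdict(int)    # W_i: occurrences of word i in the category
        sum_s = defaultdict(int)     # sum of lengths of sentences containing word i
        for tokens, length in zip(tokenized, lengths):
            for w in tokens:
                counts[w] += 1
            for w in set(tokens):
                sum_s[w] += length
        N = len(counts)              # number of distinct entries in the category

        # Assumed stand-in for formula (2): favour frequent words that occur in
        # sentences shorter than the category average (same ingredients, not the
        # exact formula of the patent).
        weights = {
            w: (counts[w] / N) * (ave_len * counts[w] / sum_s[w])
            for w in counts
        }

        ranked = sorted(weights, key=weights.get, reverse=True)
        k = max(1, int(len(ranked) * top_ratio))    # keep the top 1% by default
        rules[category] = set(ranked[:k])
    return rules
```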
Step 2. Based on the rules obtained for each category in Step 1, delete the rules that appear in more than one category; the remaining rules are the result of the final rule extraction.
Step 3. Match the dialogue D to be recognized against the rules obtained in Step 2 one by one (matching stops as soon as a rule is matched). If D contains some rule, the probability of the intent category corresponding to that rule is marked as 1 and the probabilities of the other intent categories are marked as 0, giving the intent-category probability distribution of D, P_A = [p_1, p_2, p_3, ... p_d], where d is the total number of intent categories and p_1, p_2, p_3, ... p_d are the probabilities of the 1st to d-th intent categories. For ease of understanding, an example: if the word "song" is a rule of the music intent category, then when the dialogue sentence D is "What kind of songs do you like to listen to?", D contains the rule "song", so the probability of the music intent of D is judged to be 1 and the probabilities of the other intents are 0.
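A minimal sketch of Steps 2 and 3, assuming the hypothetical extract_rules helper above and a fixed ordering of the d intent categories:

```python
def dedupe_rules(rules):
    """Step 2: drop any keyword that is a rule of more than one category."""
    seen, duplicates = set(), set()
    for keywords in rules.values():
        for w in keywords:
            (duplicates if w in seen else seen).add(w)
    return {cat: kws - duplicates for cat, kws in rules.items()}


def match_rules(dialogue, rules, categories):
    """Step 3: return the one-hot distribution P_A over the d intent categories."""
    p_a = [0.0] * len(categories)
    for idx, cat in enumerate(categories):
        if any(keyword in dialogue for keyword in rules.get(cat, ())):
            p_a[idx] = 1.0     # the matched rule decides the category
            break              # stop matching once a rule is hit
    return p_a


# Usage: if "song" is a rule of the "music" category,
# match_rules("What kind of songs do you like to listen to?", rules, categories)
# returns 1.0 at the index of "music" and 0.0 elsewhere.
```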
Step 4. The embodiment of the invention trains a word-vector set on a large-scale Chinese Wikipedia corpus with the word2vec tool. The word-vector set contains the vector representations of a large number of words. Word2vec is an existing software tool that converts words into vector form and is not described further here.
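The text names word2vec but no particular toolkit implementation; as one possible realisation, a sketch with the gensim library (4.x API) on an already tokenised corpus, with the vector dimensionality chosen arbitrarily:

```python
from gensim.models import Word2Vec

# tokenized_corpus: list of tokenised sentences, e.g. from Chinese Wikipedia (toy data here)
tokenized_corpus = [["我", "想", "听", "歌"], ["今天", "天气", "不错"]]

w2v = Word2Vec(
    sentences=tokenized_corpus,
    vector_size=300,   # dimensionality of the word vectors (assumed value)
    window=5,
    min_count=1,
    sg=1,              # skip-gram
    workers=4,
)

song_vector = w2v.wv["歌"]   # look up the vector of a word from the trained set
```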
Step 5. After the training corpus is segmented with the Jieba toolkit, the vector representation x_b of each word of a sentence (looked up in the word-vector set obtained in Step 4) is combined into the sentence's vector representation X = [x_1, x_2, x_3, ... x_l], where l is the sequence length (the embodiment selects l = 40 based on statistics of sentence lengths in the training corpus); when a sentence is longer than l it is truncated to length l, and when it is shorter it is zero-padded; b = 1, 2, 3, ... l. In a specific implementation, the user can preset the value of l.
The sentence vector X is fed into the convolutional layer of the convolutional neural network. During convolution, a sliding window moves along the sequence dimension of the sentence; the sliding-window lengths, i.e. the convolution kernel sizes, are set to 2, 3 and 5, with 128 kernels of each size. Each kernel slides over the sentence vector and performs a convolution operation, producing feature maps s_a of different granularities (each kernel size produces 128 feature maps). All s_a (a = 1, 2, 3, ... n) are combined into the output S = [s_1, s_2, s_3, ... s_n], where n is the total number of feature maps (in this embodiment, n = 3 × 128).
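As one possible realisation of this convolution stage, a PyTorch sketch with the kernel sizes 2, 3, 5 and 128 kernels per size stated above; "same" padding is an assumption (the text does not specify the padding scheme) so that every feature map keeps the length l needed by the rearrangement in the next step:

```python
import torch
import torch.nn as nn

EMB_DIM, SEQ_LEN = 300, 40                    # word-vector size (assumed) and l = 40
KERNEL_SIZES, NUM_FILTERS = (2, 3, 5), 128    # kernel sizes and 128 kernels per size

convs = nn.ModuleList(
    nn.Conv1d(EMB_DIM, NUM_FILTERS, kernel_size=k, padding="same")
    for k in KERNEL_SIZES
)

X = torch.randn(1, SEQ_LEN, EMB_DIM)          # one padded/truncated sentence of word vectors
x = X.transpose(1, 2)                         # Conv1d expects (batch, channels, length)
feature_maps = [conv(x) for conv in convs]    # each: (1, 128, l)
S = torch.cat(feature_maps, dim=1)            # S: (1, n, l) with n = 3 * 128 = 384
```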
To preserve the relative order of the sentence, the structure of S is rearranged into a result V; the rearrangement is as follows:
S = [s_1, s_2, ... s_n]    formula (3)
In formula (3), S is the convolution result composed of several feature maps, and s_a (a = 1, 2, 3, ... n) denotes the a-th feature map obtained after the convolution operation; each feature map is a vector. In formulas (4) and (5), s_a^b denotes the value of the b-th element (b = 1, 2, 3, ... l) of the a-th feature map in the convolution result, and v_b is the vector obtained by rearranging the s_a accordingly, i.e. v_b collects the b-th element of every feature map; combining the v_b gives the final rearrangement result V.
V = [v_1, v_2, ... v_l]    formula (6)
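Continuing with the same assumed shapes, the rearrangement of formulas (3) to (6) amounts to regrouping the convolution output by sequence position, so that each v_b stacks the b-th element of every feature map:

```python
import torch

n, l = 384, 40
S = torch.randn(1, n, l)   # n feature maps, each of length l, from the convolution layer

V = S.transpose(1, 2)      # (1, l, n): v_b = [s_1^b, s_2^b, ..., s_n^b], formulas (4)-(6)
```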
The rearranged vector V is fed into the BLSTM neural network model. The BLSTM is a bidirectional long short-term memory network consisting of a forward LSTM and a backward LSTM. For each time step t (t = 1, 2, 3, ... l; the input of one word is one time step), the forward LSTM outputs a forward hidden state and the backward LSTM outputs a backward hidden state; concatenating the two hidden states gives the vector h_t (formula (7)).
H = [h_1, h_2, h_3, ... h_l]    formula (8)
Here H is formed from the vectors of all time steps, i.e. it is the vector representation of the entire sentence, and it carries the contextual semantic information.
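A sketch of the BLSTM stage under the same assumptions (the hidden size of 128 is an arbitrary choice); PyTorch's bidirectional LSTM already concatenates the forward and backward hidden state at every time step, which corresponds to the h_t of formula (7) and the sequence H of formula (8):

```python
import torch
import torch.nn as nn

n, l, hidden = 384, 40, 128      # n feature maps, l time steps, hidden size (assumed)
V = torch.randn(1, l, n)         # rearranged convolution output as in formula (6)

blstm = nn.LSTM(input_size=n, hidden_size=hidden, batch_first=True, bidirectional=True)
H, _ = blstm(V)                  # H: (1, l, 2*hidden); row t concatenates the forward
                                 # and backward hidden states, i.e. h_t of formula (7)
```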
Step 6. The convolution result S obtained in Step 5 is subjected to a pooling operation; pooling samples the features output by the convolutional layer and can merge the features extracted by convolution windows of different sizes. The invention uses the max-pooling method, which keeps the most salient feature, i.e. the maximum value, of each extracted feature vector. The output of max pooling is defined as follows,
s_max = max s_a, a ∈ [1, n]    formula (9)
O=h (Wsmax+ b) formula (10)
In formula (9), s_a denotes a feature map output by the convolutional layer and s_max denotes the maximum feature selected from the s_a. In formula (10), h(·) is a nonlinear activation function; the invention uses the LeakyReLU function as the activation function, and W and b are parameters of the convolutional network, initialised to random values between 0 and 1. The max-pooling operation yields the vector O, which contains the most important semantic features and category-feature information of the sentence;
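An illustrative sketch of the max-pooling stage of Step 6, formulas (9) and (10); treating W and b as a randomly initialised linear layer is an assumption, and the shapes follow the running example of n = 384 feature maps of length l = 40:

```python
import torch
import torch.nn as nn

n, l = 384, 40
S = torch.randn(1, n, l)            # convolution output from Step 5

s_max = S.max(dim=2).values         # formula (9): the largest value of each feature map
proj = nn.Linear(n, n)              # W and b of formula (10), randomly initialised
O = nn.LeakyReLU()(proj(s_max))     # formula (10) with the LeakyReLU activation h(.)
```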
Step 7. The vector H obtained in Step 5 and the vector O obtained in Step 6 are concatenated into a new vector T; the concatenation of H and O is as follows:
T = [O, H]    formula (11)
T is used as the final feature vector of the dialogue sentence and is passed through a fully connected layer, which connects all sentence features T to obtain y_c; the connection is as follows,
yc=h (Wc×T+bc) formula (12)
y_c is fed into the softmax function to obtain the probability of each category; the calculation is as follows,

p_c = e^(y_c) / (e^(y_1) + e^(y_2) + ... + e^(y_d))    formula (13)
In formulas (12) and (13), c (c = 1, 2, 3, ..., d) denotes the c-th intent category; y_c is an intermediate quantity; W_c is a parameter of the fully connected layer of the convolutional neural network and b_c is a bias parameter; d is the total number of intent categories; h(·) is a nonlinear activation function, and the embodiment of the invention uses the tanh function as the activation function; e is the base of the natural logarithm; and p_c denotes the probability that the user's sentence belongs to the c-th intent category. This yields the intent-category probabilities P_B = [p_1, p_2, p_3, ... p_d], where p_1, p_2, p_3, ... p_d are the probabilities of the 1st to d-th intent categories.
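A self-contained sketch of Step 7 and formulas (11) to (13) under the same running assumptions (n = 384 feature maps, l = 40, a BLSTM hidden size of 128 and d = 5 intent categories, all assumed values); how the sequence H is collapsed into a single vector before the splice is not fixed by the text, so flattening it is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n, l, hidden, d = 384, 40, 128, 5        # d intent categories (assumed)
O = torch.randn(1, n)                    # pooled vector from Step 6
H = torch.randn(1, l, 2 * hidden)        # BLSTM output from Step 5

T = torch.cat([O, H.reshape(1, -1)], dim=1)   # formula (11): T = [O, H] (H flattened, an assumption)
fc = nn.Linear(T.size(1), d)                  # W_c and b_c of formula (12)
y = torch.tanh(fc(T))                         # formula (12) with the tanh activation
P_B = F.softmax(y, dim=1)                     # formula (13): probability of each intent category
```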
Step 8. Steps 5 to 7 are repeated for every sentence in the training corpus to obtain its intent-category probability distribution; the intent category with the largest probability is chosen as the predicted intent and compared with the true intent (the corpus provides the true intent of every sentence), and in this way the CNN-BLSTM model is trained and its parameters are iteratively optimised. The trained CNN-BLSTM model is then used to process the dialogue D to be recognized (calculated in the manner of Steps 5 to 7), giving the intent-category probability distribution P_B of D.
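Step 8 amounts to ordinary supervised training of the model against the labeled true intents; the loop below is a minimal sketch with toy data and a stand-in classifier (the real model would be the CNN-BLSTM of Steps 5 to 7, which likewise outputs the probabilities P_B):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in classifier so the loop runs on its own; in practice this would be the
# CNN-BLSTM model of Steps 5-7, which likewise outputs the probabilities P_B.
model = nn.Sequential(nn.Flatten(), nn.Linear(40 * 300, 5), nn.Softmax(dim=1))
optimizer = torch.optim.Adam(model.parameters())   # optimiser choice is an assumption

X_batch = torch.randn(8, 40, 300)    # 8 training sentences of l = 40 word vectors (toy data)
labels = torch.randint(0, 5, (8,))   # their labeled true intents, d = 5 categories (assumed)

for step in range(100):              # iteratively optimise the model parameters
    probs = model(X_batch)           # predicted intent distribution for each sentence
    loss = F.nll_loss(torch.log(probs + 1e-9), labels)   # compare prediction with the true intent
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```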
Step 9. For the dialogue D to be recognized, the intent-category probability distribution P_A obtained in Step 3 and the intent-category probability distribution P_B obtained in Step 8 are linearly fused to obtain the final intent-category probability distribution P; the fusion is as follows,
P=α × PA+β×PBFormula (14)
In formula (14), α + β = 1, and α and β can take values in (0.0, 0.1, 0.2, ... 1.0); α and β are preset as needed, for example α = 0.5, β = 0.5.
Finally, the intent category with the largest probability in P is chosen as the final intent recognition result.
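A minimal sketch of the fusion of Step 9, assuming P_A and P_B are given as plain Python lists over the same d intent categories and that α and β have been preset as described above:

```python
def fuse_intents(p_a, p_b, alpha=0.5, beta=0.5):
    """Formula (14): P = alpha * P_A + beta * P_B, with alpha + beta = 1."""
    p = [alpha * a + beta * b for a, b in zip(p_a, p_b)]
    best = max(range(len(p)), key=p.__getitem__)   # index of the largest probability in P
    return p, best                                 # `best` is the final recognised intent


# Example: a rule match on category 0 and a model that prefers category 2.
P, final_intent = fuse_intents([1.0, 0.0, 0.0], [0.2, 0.1, 0.7])
```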
Steps 1 to 3 implement the rule-matching-based classification, Steps 4 to 8 implement the classification based on the CNN-BLSTM model, and Step 9 combines the two.
In a specific implementation, those skilled in the art can use software technology to run the above process automatically. Accordingly, a system providing the intent recognition method based on deep learning, comprising a computer or server on which the above process of the CNN-BLSTM model combined with rule matching is executed to perform intent recognition, also falls within the protection scope of the invention.

Claims (4)

1. A dialogue-system intent recognition method based on deep learning, characterised in that: first, keywords are extracted from a dialogue corpus by word-frequency weight and used as rules for intent recognition, and the dialogue D to be recognized is matched against the rules to obtain an intent classification result P_A; a deep learning model CNN-BLSTM, which combines a convolutional neural network CNN with a bidirectional long short-term memory network BLSTM, is trained on the dialogue corpus, and the trained CNN-BLSTM model is used to recognize the dialogue D and obtain an intent classification result P_B; finally, P_A and P_B are linearly fused to obtain the final intent of dialogue D.
2. The dialogue-system intent recognition method based on deep learning according to claim 1, characterised in that: extracting keywords from the dialogue corpus by word-frequency weight as rules for intent recognition includes performing the following processing for each category,
performing word segmentation and counting the number N of distinct entries under the category; forming all entries into a vocabulary; and counting, within the category, the number of occurrences W_i of the i-th word, the total number of sentences M, and the length L_j of the j-th sentence, where i = 1, 2, 3, ... N and j = 1, 2, 3, ... M;
calculating the average sentence length AveL of the category, AveL = (L_1 + L_2 + ... + L_M) / M;
calculating the word-frequency weight F_i of the i-th word from W_i, N, AveL, and ∑S,
where ∑S denotes the cumulative sum of the lengths S of the sentences in the category that contain the i-th word;
after the word-frequency weight of every word under the category has been obtained, sorting the entries of the vocabulary by word-frequency weight in descending order and selecting the top-ranked entries as keywords, which serve as the rules of the category.
3. The dialogue-system intent recognition method based on deep learning according to claim 1, characterised in that: training the deep learning model CNN-BLSTM on the dialogue corpus is implemented as follows,
after word segmentation of the training corpus, the vector representation x_b of each word in a sentence is combined into the sentence's vector representation X = [x_1, x_2, x_3, ... x_l], where l is the sequence length and b = 1, 2, 3, ... l;
the sentence vector X is fed into the convolutional layer of the convolutional neural network, and all resulting feature maps s_a are combined into the output S = [s_1, s_2, s_3, ... s_n], where n is the total number of feature maps and a = 1, 2, 3, ... n;
the structure of S is rearranged and the resulting vector V is fed into the BLSTM neural network model; the BLSTM is a bidirectional long short-term memory network consisting of a forward LSTM and a backward LSTM; for each time step t, the forward LSTM outputs a forward hidden state and the backward LSTM outputs a backward hidden state, and the two hidden states are combined into a vector h_t; the vectors of all time steps form the vector H of the entire sentence, which carries the contextual semantic information;
applying max pooling to S yields a vector O, which contains the most important semantic features and category-feature information of the sentence;
vector H and vector O are concatenated into a vector T;
T is used as the final feature vector of the dialogue sentence; a fully connected layer over T produces an intermediate quantity y_c, the probability of each category is obtained from y_c, and the category with the largest probability is chosen as the intent recognition result P_B.
4. The dialogue-system intent recognition method based on deep learning according to claim 1, 2 or 3, characterised in that: the intent classification results P_A and P_B are linearly fused to obtain the final intent-category probability distribution P, and the intent category with the largest probability in P is chosen as the final intent recognition result.
CN201810945991.9A 2018-08-20 2018-08-20 Intention identification method based on deep learning Active CN109241255B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810945991.9A CN109241255B (en) 2018-08-20 2018-08-20 Intention identification method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810945991.9A CN109241255B (en) 2018-08-20 2018-08-20 Intention identification method based on deep learning

Publications (2)

Publication Number Publication Date
CN109241255A true CN109241255A (en) 2019-01-18
CN109241255B CN109241255B (en) 2021-05-18

Family

ID=65071796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810945991.9A Active CN109241255B (en) 2018-08-20 2018-08-20 Intention identification method based on deep learning

Country Status (1)

Country Link
CN (1) CN109241255B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090012778A1 (en) * 2007-07-05 2009-01-08 Nec (China) Co., Ltd. Apparatus and method for expanding natural language query requirement
US20180157638A1 (en) * 2016-12-02 2018-06-07 Microsoft Technology Licensing, Llc Joint language understanding and dialogue management
US20180173999A1 (en) * 2016-12-21 2018-06-21 XBrain, Inc. Natural Transfer of Knowledge Between Human and Artificial Intelligence
CN107679199A (en) * 2017-10-11 2018-02-09 北京邮电大学 A kind of external the Chinese text readability analysis method based on depth local feature
CN108415923A (en) * 2017-10-18 2018-08-17 北京邮电大学 The intelligent interactive system of closed domain
CN107679234A (en) * 2017-10-24 2018-02-09 上海携程国际旅行社有限公司 Customer service information providing method, device, electronic equipment, storage medium

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020155766A1 (en) * 2019-01-31 2020-08-06 平安科技(深圳)有限公司 Method, device and apparatus for identification rejection in intention identification, and storage medium
CN109893095A (en) * 2019-03-11 2019-06-18 常州市贝叶斯智能科技有限公司 A kind of intelligent robot system of body composition detection and analysis
CN110162775A (en) * 2019-03-11 2019-08-23 腾讯科技(深圳)有限公司 Determine the method, apparatus and computer equipment of intention assessment accuracy
CN113678133A (en) * 2019-04-05 2021-11-19 三星电子株式会社 System and method for context-rich attention memory network with global and local encoding for dialog break detection
CN110334340A (en) * 2019-05-06 2019-10-15 北京泰迪熊移动科技有限公司 Semantic analysis, device and the readable storage medium storing program for executing of rule-based fusion
CN110232114A (en) * 2019-05-06 2019-09-13 平安科技(深圳)有限公司 Sentence intension recognizing method, device and computer readable storage medium
CN110334340B (en) * 2019-05-06 2021-08-03 北京泰迪熊移动科技有限公司 Semantic analysis method and device based on rule fusion and readable storage medium
CN110245348A (en) * 2019-05-17 2019-09-17 北京百度网讯科技有限公司 A kind of intension recognizing method and system
CN110245348B (en) * 2019-05-17 2023-11-24 北京百度网讯科技有限公司 Intention recognition method and system
CN110209791A (en) * 2019-06-12 2019-09-06 百融云创科技股份有限公司 It is a kind of to take turns dialogue intelligent speech interactive system and device more
CN110209791B (en) * 2019-06-12 2021-03-26 百融云创科技股份有限公司 Multi-round dialogue intelligent voice interaction system and device
CN110321564A (en) * 2019-07-05 2019-10-11 浙江工业大学 A kind of more wheel dialogue intension recognizing methods
CN110321564B (en) * 2019-07-05 2023-07-14 浙江工业大学 Multi-round dialogue intention recognition method
CN110377911A (en) * 2019-07-23 2019-10-25 中国工商银行股份有限公司 Intension recognizing method and device under dialogue frame
CN110377911B (en) * 2019-07-23 2023-07-21 中国工商银行股份有限公司 Method and device for identifying intention under dialog framework
US10916242B1 (en) 2019-08-07 2021-02-09 Nanjing Silicon Intelligence Technology Co., Ltd. Intent recognition method based on deep learning network
WO2021022816A1 (en) * 2019-08-07 2021-02-11 南京硅基智能科技有限公司 Intent identification method based on deep learning network
CN111639152A (en) * 2019-08-29 2020-09-08 上海卓繁信息技术股份有限公司 Intention recognition method
CN111639152B (en) * 2019-08-29 2021-04-13 上海卓繁信息技术股份有限公司 Intention recognition method
CN110795944A (en) * 2019-10-11 2020-02-14 腾讯科技(深圳)有限公司 Recommended content processing method and device, and emotion attribute determining method and device
CN110928997A (en) * 2019-12-04 2020-03-27 北京文思海辉金信软件有限公司 Intention recognition method and device, electronic equipment and readable storage medium
CN111400440A (en) * 2020-02-28 2020-07-10 深圳市华海同创科技有限公司 Intention identification method and device
CN111462752B (en) * 2020-04-01 2023-10-13 北京思特奇信息技术股份有限公司 Attention mechanism, feature embedding and BI-LSTM (business-to-business) based customer intention recognition method
CN111462752A (en) * 2020-04-01 2020-07-28 北京思特奇信息技术股份有限公司 Client intention identification method based on attention mechanism, feature embedding and BI-L STM
CN111737544A (en) * 2020-05-13 2020-10-02 北京三快在线科技有限公司 Search intention recognition method and device, electronic equipment and storage medium
CN111597320A (en) * 2020-05-26 2020-08-28 成都晓多科技有限公司 Intention recognition device, method, equipment and storage medium based on hierarchical classification
CN114547435A (en) * 2020-11-24 2022-05-27 腾讯科技(深圳)有限公司 Content quality identification method, device, equipment and readable storage medium
CN112667816A (en) * 2020-12-31 2021-04-16 华中师范大学 Deep learning-based aspect level emotion analysis method and system
CN112667816B (en) * 2020-12-31 2022-07-05 华中师范大学 Deep learning-based aspect level emotion analysis method and system
CN112989003B (en) * 2021-04-01 2023-04-18 网易(杭州)网络有限公司 Intention recognition method, device, processing equipment and medium
CN112989003A (en) * 2021-04-01 2021-06-18 网易(杭州)网络有限公司 Intention recognition method, device, processing equipment and medium
CN113158062A (en) * 2021-05-08 2021-07-23 清华大学深圳国际研究生院 User intention identification method and device based on heterogeneous graph neural network
CN113094475A (en) * 2021-06-08 2021-07-09 成都晓多科技有限公司 Dialog intention recognition system and method based on context attention flow

Also Published As

Publication number Publication date
CN109241255B (en) 2021-05-18

Similar Documents

Publication Publication Date Title
CN109241255A (en) A kind of intension recognizing method based on deep learning
CN109446331B (en) Text emotion classification model establishing method and text emotion classification method
CN106529503B (en) A kind of integrated convolutional neural networks face emotion identification method
CN108363790A (en) For the method, apparatus, equipment and storage medium to being assessed
CN109460737A (en) A kind of multi-modal speech-emotion recognition method based on enhanced residual error neural network
CN108363753A (en) Comment text sentiment classification model is trained and sensibility classification method, device and equipment
CN111414461B (en) Intelligent question-answering method and system fusing knowledge base and user modeling
CN107918782A (en) A kind of method and system for the natural language for generating description picture material
CN104598611B (en) The method and system being ranked up to search entry
CN108829662A (en) A kind of conversation activity recognition methods and system based on condition random field structuring attention network
CN110083700A (en) A kind of enterprise's public sentiment sensibility classification method and system based on convolutional neural networks
CN110532379B (en) Electronic information recommendation method based on LSTM (least Square TM) user comment sentiment analysis
CN111966917A (en) Event detection and summarization method based on pre-training language model
CN107704482A (en) Method, apparatus and program
CN107247703A (en) Microblog emotional analysis method based on convolutional neural networks and integrated study
CN110717843A (en) Reusable law strip recommendation framework
CN110992988B (en) Speech emotion recognition method and device based on domain confrontation
CN106682089A (en) RNNs-based method for automatic safety checking of short message
CN110415071A (en) A kind of competing product control methods of automobile based on opining mining analysis
CN109325780A (en) A kind of exchange method of the intelligent customer service system in E-Governance Oriented field
CN107145514A (en) Chinese sentence pattern sorting technique based on decision tree and SVM mixed models
CN112561718A (en) Case microblog evaluation object emotion tendency analysis method based on BilSTM weight sharing
CN117236338B (en) Named entity recognition model of dense entity text and training method thereof
CN108875034A (en) A kind of Chinese Text Categorization based on stratification shot and long term memory network
CN113934835B (en) Retrieval type reply dialogue method and system combining keywords and semantic understanding representation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant