CN112559719A - Intention recognition method and device, electronic equipment and storage medium - Google Patents

Intention recognition method and device, electronic equipment and storage medium

Info

Publication number
CN112559719A
Authority
CN
China
Prior art keywords
intention
result
data
recognition
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011556347.6A
Other languages
Chinese (zh)
Inventor
章翔
顾孙炎
张俊杰
罗红
孟越涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Hangzhou Information Technology Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202011556347.6A priority Critical patent/CN112559719A/en
Publication of CN112559719A publication Critical patent/CN112559719A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/332Query formulation
    • G06F16/3329Natural language query formulation or dialogue systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3343Query execution using phonetics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/338Presentation of query results

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Machine Translation (AREA)

Abstract

The embodiment of the invention discloses an intention recognition method and apparatus, an electronic device, and a storage medium, and relates to the field of natural language processing. The intention recognition method comprises the following steps: processing voice data to be recognized to obtain a user object in the voice data to be recognized; acquiring at least two pieces of historical recognition data of the user object; performing intention recognition on the voice data to be recognized to obtain intention results, and scoring the intention results to obtain intention scores; and acquiring an intention recognition result according to the historical recognition data, the intention results, and the intention scores. When applied to an intelligent device, the method and apparatus improve the accuracy of the recognition result produced during intention recognition.

Description

Intention recognition method and device, electronic equipment and storage medium
Technical Field
Embodiments of the present invention relate to the field of natural language processing, and in particular to an intention recognition method and apparatus, an electronic device, and a storage medium.
Background
Intention recognition is an important research area of natural language processing. It refers to natural-language-processing techniques that recognize the intention of an interactive text and extract some of its key information, and it can be applied to a variety of intelligent devices. An existing intention recognition method converts the voice input by a user into text data, matches the text data against preset rules, scores the matched intention results, and selects the intention result with the highest score for output.
However, such a method distinguishes the user's interaction intention well only when a single intention result is obtained or when the highest intention score differs greatly from the other scores. When the highest score is close to the scores of other intention results, the results are not well separated: only the highest-scoring result can be output, and intention results with similar scores cannot be analyzed in a personalized way. As a consequence, the wrong intention recognition result may be output, degrading the user experience.
Disclosure of Invention
An object of embodiments of the present invention is to provide an intention recognition method, an intention recognition apparatus, an electronic device, and a storage medium that can improve the accuracy of the intention recognition result and improve the user experience during intention recognition.
In order to solve the above technical problem, an embodiment of the present invention provides an intention recognition method, comprising: processing voice data to be recognized to obtain a user object in the voice data to be recognized; acquiring at least two pieces of historical recognition data of the user object; performing intention recognition on the voice data to be recognized to obtain intention results, and scoring the intention results to obtain intention scores; and acquiring an intention recognition result according to the historical recognition data, the intention results, and the intention scores.
An embodiment of the present invention also provides an intention recognition apparatus, comprising:
a processing module: configured to process voice data to be recognized and acquire a user object in the voice data to be recognized;
an acquisition module: configured to acquire at least two pieces of historical recognition data of the user object;
an intention recognition module: configured to perform intention recognition according to the voice data to be recognized to obtain an intention result, and to score the intention result to obtain an intention score;
a weighting module: configured to acquire an intention recognition result according to the historical recognition data, the intention result and the intention score;
a storage module: configured to save the intention recognition result.
An embodiment of the present invention also provides an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the intention recognition method described above.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the intent recognition method described above.
Compared with the prior art, embodiments of the present invention process the voice data to be recognized to obtain the user object in the voice data, acquire at least two pieces of historical recognition data of the user object, perform intention recognition on the voice data to obtain intention results, score the intention results to obtain intention scores, and finally acquire the intention recognition result according to the historical recognition data, the intention results, and the intention scores. The historical recognition data is thus used when obtaining the intention recognition result, which addresses the low accuracy of intention recognition results in the prior art.
In addition, in the intention recognition method provided by an embodiment of the present invention, after the intention scores are obtained, the method further comprises: judging whether the number of intention results is greater than a preset number; if the number of intention results is less than or equal to the preset number, acquiring the intention recognition result directly; and if the number of intention results is greater than the preset number, acquiring the intention recognition result according to the historical recognition data, the intention results, and the intention scores. After the intention results are obtained, their number is judged first, so recognition can either end early or continue to the subsequent steps depending on that number, which gives the technical solution provided by the embodiment of the invention greater flexibility.
In addition, in an intention recognition method according to an embodiment of the present invention, obtaining the intention recognition result from the historical recognition data, the intention results, and the intention scores comprises: sorting the intention results by their intention scores and obtaining the difference between the top two intention scores; judging whether the difference is greater than a preset threshold; if the difference is greater than the preset threshold, acquiring the intention recognition result directly; and if the difference is less than or equal to the preset threshold, acquiring the intention recognition result according to the historical recognition data and the intention scores. When obtaining the intention recognition result, recognition can either end early or continue to the subsequent steps depending on the difference between the two highest intention scores, which gives the technical solution provided by the embodiment of the invention greater flexibility.
In addition, in an intention recognition method according to an embodiment of the present invention, obtaining the intention recognition result from the historical recognition data and the intention scores comprises: setting a weighting coefficient for each piece of historical recognition data according to its time; classifying the historical recognition data to obtain a data classification result; obtaining domain proportion data of the historical recognition data according to the historical recognition data, the weighting coefficients, and the data classification result; obtaining a composite score for each intention result according to the domain proportion data and the intention score; and obtaining the intention recognition result according to the composite scores. When the intention recognition result is obtained, the domain proportion data is computed by weighting the historical recognition data and dividing it by domain, and the composite score of each intention result is then obtained from the domain proportion data and the intention score, which makes the technical solution provided by the embodiment of the invention more accurate.
In addition, the intention recognition method provided by an embodiment of the present invention further comprises saving the intention recognition result after it is obtained. The saved intention recognition result serves as historical recognition data for subsequent intention recognition, which makes the technical solution provided by the embodiment of the invention more reliable.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals denote similar elements; the drawings are not to scale unless otherwise specified.
Fig. 1 is a flowchart of an intention recognition method provided by a first embodiment of the present invention;
FIG. 2 is a flow chart of step 103 of the intent recognition method provided by the first embodiment of the present invention shown in FIG. 1;
FIG. 3 is a flow chart of an intent recognition method provided by a second embodiment of the present invention;
FIG. 4 is a flow chart of an intent recognition method provided by a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of an intention identifying apparatus provided in a fourth embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, embodiments of the present invention will be described in detail below with reference to the accompanying drawings. It will be appreciated by those of ordinary skill in the art that numerous technical details are set forth in the embodiments in order to provide a better understanding of the present application; however, the technical solutions claimed in the present application can be implemented without these technical details, and with various changes and modifications based on the following embodiments.
The following embodiments are divided for convenience of description and should not limit the specific implementation of the present invention; the embodiments may be combined with and refer to each other where they do not contradict.
A first embodiment of the present invention relates to an intention identifying method. The specific flow is shown in figure 1:
step 101, processing voice data to be recognized, and acquiring a user object in the voice data to be recognized.
Specifically, after the voice data of a user is received, the original speech is processed to eliminate some noise and the influence of different speakers; acoustic features and a language model of the user's speech are extracted and used as feature vectors for template matching against a standard library, and the recognition result with the highest similarity is taken as the voice data to be recognized, i.e., the text data. The producer of the received voice data, i.e., the user object, is identified from the acoustic features of the received speech. If the intelligent device is currently playing other content when the user's speech is received, echo cancellation is used to remove the audio of the content being played before the received speech is recognized. The processing of the voice data to be recognized may use Automatic Speech Recognition (ASR) and a voiceprint recognition algorithm; the above processing methods are only examples, and other methods may be used in practical applications to achieve the same purpose, which are not described in detail here.
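As a rough illustration of step 101, the following Python sketch chains noise suppression, transcription, and speaker identification into one call. The helper functions and the RecognizedInput container are hypothetical placeholders standing in for a real ASR engine and voiceprint algorithm, which the patent does not prescribe.

    from dataclasses import dataclass

    @dataclass
    class RecognizedInput:
        """Output of step 101: the text to be recognized plus the speaker (user object)."""
        text: str
        user_object: str

    def denoise(audio: bytes) -> bytes:
        """Placeholder for noise suppression / echo cancellation of the raw audio."""
        return audio  # a real system would filter the signal here

    def transcribe(audio: bytes) -> str:
        """Placeholder for ASR: match acoustic features against a standard library
        and return the most similar transcription as text data."""
        return "please help me put a western note"  # stand-in transcription

    def identify_speaker(audio: bytes) -> str:
        """Placeholder for voiceprint recognition: map acoustic features to a user id."""
        return "user_001"

    def process_voice(audio: bytes) -> RecognizedInput:
        cleaned = denoise(audio)
        return RecognizedInput(text=transcribe(cleaned), user_object=identify_speaker(cleaned))

    print(process_voice(b"\x00\x01"))  # RecognizedInput(text='please help me ...', user_object='user_001')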
In step 102, at least two pieces of historical recognition data of the user object are obtained.
Specifically, each time intention recognition is performed, the finally output intention recognition result is stored and used as historical recognition data. When the historical data is acquired, the data within a specified time period of the user's historical recognition data is retrieved. The length of this time period is fixed: it may be set by the user before intention recognition is performed, or preset by the system.
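A minimal sketch of step 102, assuming the stored results carry the fields listed later in step 4012 (user, time, domain, intention); the 30-day window and the sample records are illustrative values, not values fixed by the patent.

    from datetime import datetime, timedelta

    # Illustrative store of previously saved intention recognition results.
    HISTORY = [
        {"user": "user_001", "time": datetime(2020, 12, 20), "domain": "audiobook", "intention": "PlayByAlbum"},
        {"user": "user_001", "time": datetime(2020, 12, 1),  "domain": "alarm",     "intention": "SetAlarm"},
        {"user": "user_002", "time": datetime(2020, 12, 21), "domain": "video",     "intention": "PlayVideoName"},
    ]

    def get_history(user: str, now: datetime, window_days: int = 30):
        """Step 102: return the user's stored results inside the specified time period."""
        cutoff = now - timedelta(days=window_days)
        return [r for r in HISTORY if r["user"] == user and r["time"] >= cutoff]

    records = get_history("user_001", now=datetime(2020, 12, 23))
    assert len(records) >= 2  # at least two pieces of historical recognition data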
Step 103, performing intention recognition according to the voice data to be recognized, acquiring intention results, scoring the intention results, and acquiring intention scores.
Specifically, this step operates on the text data obtained by processing the speech data to be recognized. As shown in fig. 2, step 103 further includes:
step 201, matching the voice data to be recognized with a preset intention rule, and acquiring an intention result and a matching result corresponding to the intention result.
Specifically, the preset intention rules change with the application scenario, and the number of preset intention rules affects how many intention results are obtained. An intention rule is composed of a number of word slots, and the text data is actually matched against these word slots, so the matching result corresponding to each intention result is obtained together with the intention result itself. Taking an intelligent device as an example, assume the user utters the speech "please help me put a western note"; two intention cases may be hit, as follows. Intention result 1 is PlayVideoName, and the corresponding intention rule is: (me)? (want)? (please)? (you)? (help)? (me2)? (play)? (num)? VideoName. The matching result corresponding to intention result 1 (i.e., the hits between the text and the word slots of the rule) is: please and please, help and help, me2 and me, play and put, num and one, shorthand and VideoName. Intention result 2 is PlayByAlbum (here the same title also names an audiobook album), and the corresponding intention rule is: (please)? (help)? (me)? (play)? (num)? AlbumName (ok)?. The matching result corresponding to intention result 2 is: please and please, help and help, me and me, play and put, num and one, western and AlbumName, good and ok. In an intention rule, a bracketed item denotes a word slot; a question mark after a word slot marks it as optional during matching, i.e., it may or may not be matched, while a word slot without a question mark is a mandatory slot.
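The word-slot matching of step 201 can be sketched with regular expressions, as below. The rule format (ordered slot triples), the English slot patterns, and the PLAY_BY_ALBUM_RULE contents are simplified stand-ins for the patent's word slots, chosen only to mirror the optional/mandatory distinction described above.

    import re

    # Hypothetical rule format: ordered (slot_name, pattern, optional) triples.
    # Optional slots correspond to the "(slot)?" items in the example rules above.
    PLAY_BY_ALBUM_RULE = [
        ("please",    r"please\s*",    True),
        ("help",      r"help\s*",      True),
        ("me",        r"me\s*",        True),
        ("play",      r"put\s*",       True),
        ("num",       r"a\s*",         True),
        ("AlbumName", r"western note", False),  # mandatory slot: the album title
        ("ok",        r"\s*ok",        True),
    ]

    def match_rule(text: str, rule):
        """Step 201: return {slot_name: hit_text} if the rule matches the text, else None."""
        pattern = "".join(
            f"(?P<{name}>{pat})" + ("?" if optional else "")
            for name, pat, optional in rule
        )
        m = re.fullmatch(pattern, text)
        if m is None:
            return None
        return {name: hit for name, hit in m.groupdict().items() if hit}

    hits = match_rule("please help me put a western note", PLAY_BY_ALBUM_RULE)
    print(hits)  # {'please': 'please ', ..., 'AlbumName': 'western note'}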
Step 202, scoring the intention result according to the intention result and the matching result to obtain an intention score.
Specifically, the intention score of an intention result equals the ratio of the text covered by the matching result of that intention result (the hits with the word slots in the intention rule) to the full text data of the speech to be recognized. For example, when the intention result is PlayVideoName, the corresponding word-slot matching result covers 9 characters (please and please, help and help, me2 and me, play and put, num and one, shorthand and VideoName), the text data of the speech to be recognized contains 11 characters, and the intention score is 9/11; when the intention result is PlayByAlbum, the corresponding word-slot matching result covers 11 characters (please and please, help and help, me and me, play and put, num and one, western and AlbumName, good and ok), the text data of the speech to be recognized contains 11 characters, and the intention score is 11/11.
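A sketch of the scoring rule in step 202, reusing the hit dictionary from the previous sketch. The patent counts characters of the original text (giving 9/11 and 11/11); here the same ratio is simply applied to the English stand-in text, with whitespace ignored so spacing does not distort the counts.

    def intention_score(hits: dict, text: str) -> float:
        """Step 202: score = text covered by the word-slot hits / full text length."""
        covered = sum(len(h.replace(" ", "")) for h in hits.values())
        total = len(text.replace(" ", ""))
        return covered / total

    text = "please help me put a western note"
    full_hits = {"please": "please ", "help": "help ", "me": "me ", "play": "put ",
                 "num": "a ", "AlbumName": "western note"}
    partial_hits = {k: v for k, v in full_hits.items() if k != "AlbumName"}  # title slot missed
    print(intention_score(full_hits, text))     # 1.0   (analogous to 11/11)
    print(intention_score(partial_hits, text))  # ~0.59 (analogous to 9/11)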
Step 105, acquiring an intention recognition result according to the historical recognition data, the intention results and the intention scores.
Specifically, the historical recognition data is first sorted by time, and a weighting coefficient is set for the historical recognition data in each time period. The historical recognition data is then divided by domain, and the proportion occupied by each domain is obtained from the weighting coefficients, the historical recognition data, and the domains. Corresponding weighting coefficients are then set for the domain proportion data and the intention scores, each intention score is weighted together with its corresponding domain proportion data to obtain a composite score for every intention result, and the intention result with the highest composite score is selected as the final intention recognition result.
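The whole of step 105 can be compressed into a few lines once the history records already carry their recency weights; the detailed weighting, classification, and proportion computation are spelled out in the third embodiment. The sketch below is a minimal illustration, with the weights, domains, and coefficients a = 9, b = 1 borrowed from the worked example later in the text.

    def pick_intention(intents: dict, scores: dict, history: list, a: float = 9, b: float = 1):
        """Step 105: blend each intention score with the share its domain holds in the
        weighted history, then return the intention with the highest composite score."""
        total = sum(r["weight"] for r in history)
        domain_share: dict = {}
        for r in history:
            domain_share[r["domain"]] = domain_share.get(r["domain"], 0.0) + r["weight"] / total
        composite = {
            intent: a * scores[intent] + b * domain_share.get(domain, 0.0)
            for intent, domain in intents.items()
        }
        return max(composite, key=composite.get), composite

    history = [{"domain": "audiobook", "weight": 4}, {"domain": "video", "weight": 3},
               {"domain": "alarm", "weight": 1}]
    intents = {"PlayByAlbum": "audiobook", "PlayVideoName": "video"}
    scores = {"PlayByAlbum": 11 / 11, "PlayVideoName": 9 / 11}
    best, detail = pick_intention(intents, scores, history)
    print(best)  # PlayByAlbum wins once its domain share is blended in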
Compared with the prior art, this embodiment processes the voice data to be recognized to obtain the user object in the voice data, acquires at least two pieces of historical recognition data of the user object, performs intention recognition on the voice data to obtain intention results, scores the intention results to obtain intention scores, and finally acquires the intention recognition result according to the historical recognition data, the intention results, and the intention scores. The historical recognition data is thus used when obtaining the intention recognition result, which addresses the low accuracy of intention recognition results and the poor user experience in the prior art.
A second embodiment of the present invention relates to an intention recognition method and is substantially the same as the first embodiment; it mainly differs in that, after step 103 of the first embodiment, the method further includes judging the number of intention results. The specific flow is shown in fig. 3:
step 301, processing the voice data to be recognized, and acquiring a user object in the voice data to be recognized.
Specifically, this step is substantially the same as step 101 in the first embodiment, and is not repeated here.
In step 302, at least two pieces of historical recognition data of the user object are obtained.
Specifically, this step is substantially the same as step 102 in the first embodiment, and is not repeated here.
Step 303, performing intention recognition according to the voice data to be recognized, acquiring intention results, scoring the intention results, and acquiring intention scores.
Specifically, this step is substantially the same as step 103 in the first embodiment, and is not repeated here.
In step 305, it is determined whether the number of intent results is greater than a predetermined number.
Specifically, during intention recognition, the intention results obtained are determined by the speech uttered by the user and the preset intention rules. When the number of intention results obtained is less than or equal to the preset number, step 306 is performed; otherwise, step 307 is performed.
Step 306, an intention recognition result is obtained.
Specifically, when the acquired intention results are less than or equal to the preset number, the acquired intention results are directly regarded as final intention recognition results, and subsequent steps are not performed.
Step 307, acquiring the intention recognition result according to the historical recognition data, the intention results and the intention scores.
Specifically, this step is substantially the same as step 105 in the first embodiment, and is not repeated here.
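A minimal sketch of the gate added by this embodiment (steps 305 to 307). The preset number of 1 and the weighted_selection stand-in are assumptions for illustration; the patent leaves the preset number open and defers the downstream selection to the first and third embodiments.

    PRESET_NUMBER = 1  # assumed value: a single hit needs no further analysis

    def weighted_selection(intent_scores: dict, history: list) -> str:
        """Stand-in for the history-weighted selection of steps 105 / 407-4011."""
        return max(intent_scores, key=intent_scores.get)

    def recognize(intent_scores: dict, history: list) -> str:
        """Steps 305-307: end early when few intention results were hit,
        otherwise fall back to the history-based selection."""
        if len(intent_scores) <= PRESET_NUMBER:
            return max(intent_scores, key=intent_scores.get)  # step 306: output directly
        return weighted_selection(intent_scores, history)     # step 307: use history

    print(recognize({"PlayByAlbum": 1.0}, history=[]))                           # early exit
    print(recognize({"PlayByAlbum": 1.0, "PlayVideoName": 9 / 11}, history=[]))  # full path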
Compared with the prior art, on the basis of the first embodiment, recognition can be ended early or the subsequent steps of obtaining the recognition result can be continued according to the number of intention results, so the technical scheme provided by this embodiment of the invention has greater flexibility.
A third embodiment of the present invention relates to an intention recognition method and is substantially the same as the first embodiment; it mainly differs in that step 105 of the first embodiment is detailed, and the intention recognition result is also saved after it is obtained. The specific flow is shown in fig. 4:
step 401, processing the voice data to be recognized, and acquiring a user object in the voice data to be recognized.
Specifically, this step is substantially the same as step 101 in the first embodiment, and is not repeated here.
In step 402, at least two pieces of historical recognition data of the user object are obtained.
Specifically, this step is substantially the same as step 102 in the first embodiment, and is not repeated here.
Step 403, performing intention recognition according to the voice data to be recognized, acquiring intention results, scoring the intention results, and acquiring intention scores.
Specifically, this step is substantially the same as step 103 in the first embodiment, and is not repeated here.
Step 404, sorting the intention results according to the intention scores, and acquiring the difference between the top two intention scores.
Specifically, each intention result has a corresponding intention score. Taking the intention results and intention scores given in step 202 of the first embodiment as an example, the intention score corresponding to PlayVideoName is 9/11 and the intention score corresponding to PlayByAlbum is 11/11; sorted, the intention scores are 11/11 and 9/11, and the difference between the top two intention scores is about 0.18 (rounding may be used in the calculation).
Step 405, determine whether the difference is greater than a preset threshold.
Specifically, after the difference between the top two intention scores is obtained, it is compared with a preset threshold. If the difference is greater than the preset threshold, step 406 is executed; if the difference is less than or equal to the preset threshold, step 407 is executed. The threshold is a statistical value based on the user's historical data; factors to consider when setting it include the user's interaction habits, interaction preferences, interaction time points, the relevance between intentions, and so on.
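A sketch of steps 404 to 406, reproducing the 0.18 gap from the example above. The threshold value of 0.2 is only an assumed placeholder; the patent derives the real threshold from statistics over the user's historical data.

    PRESET_THRESHOLD = 0.2  # assumed placeholder value

    def gate_by_score_gap(intent_scores: dict):
        """Steps 404-406: sort the scores, take the gap between the top two (rounded to
        two decimals as in the example), and decide whether historical data is needed."""
        ranked = sorted(intent_scores.items(), key=lambda kv: kv[1], reverse=True)
        gap = round(ranked[0][1] - ranked[1][1], 2)
        if gap > PRESET_THRESHOLD:
            return ranked[0][0], gap  # step 406: the highest score wins outright
        return None, gap              # steps 407 onward: defer to historical data

    print(gate_by_score_gap({"PlayByAlbum": 11 / 11, "PlayVideoName": 9 / 11}))  # (None, 0.18)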
Step 406, the intention recognition result is obtained.
Specifically, when the difference is greater than the preset threshold, there is little correlation between the top two intention results, and the intention result with the highest intention score can be output as the final intention recognition result. The preset threshold may be set by the user or may be a fixed value; the user's usage habits, the relevance between intentions, and similar factors are mainly considered when setting it.
Step 407, setting a weighting coefficient for the historical recognition data according to the time of the historical recognition data.
Specifically, when the difference is less than or equal to the preset threshold, there is a certain correlation between the top two intention results, or the two intention results are very similar (for example, the intention result "play a song" and the intention result "play a song album" are very similar). In this case, the user's historical recognition data is needed to choose an appropriate intention result to output. To make this judgment, the historical recognition data is first sorted by time, and a weighting coefficient is then set for each piece of historical recognition data. When setting the weighting coefficients, the same coefficient may be assigned to all data within a given time period, or a different coefficient may be assigned to each piece of historical data. For example, assigning the same coefficient to all data within a time period may work as follows: historical recognition data from the last month is retrieved; intention recognition results from the last week are given a weighting coefficient of 4, results from one to two weeks ago are given a weighting coefficient of 3, and so on, with the coefficient decreasing by one for each additional week. In this example, the intention recognition results from the past week are "cancel all alarm clocks", "listen to A Dream of Red Mansions", and "play the audiobook Heaven Sword and Dragon Sabre", each with a weighting coefficient of 4; the results from two weeks ago are "play a Jay Chou MV" and "listen to a book", with coefficients of 3; the results from three weeks ago are "play a short video on the phone" and "play a book by Jin Yong", with coefficients of 2; and the results from four weeks ago are "set an alarm for eight tomorrow morning" and "I want to listen to The Legend of the Condor Heroes audiobook", with coefficients of 1.
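A sketch of the recency weighting just described, assuming the scheme from the example (weight 4 for the last week, dropping by one per additional week, data older than a month discarded); the sample records are abbreviated from the example above.

    from datetime import datetime, timedelta

    def weight_by_recency(history: list, now: datetime, max_weight: int = 4) -> list:
        """Step 407: attach a weighting coefficient that decreases by one per week of age."""
        weighted = []
        for record in history:
            weeks_old = int((now - record["time"]) / timedelta(weeks=1))
            weight = max_weight - weeks_old
            if weight >= 1:  # keep only the last month of historical recognition data
                weighted.append({**record, "weight": weight})
        return weighted

    now = datetime(2020, 12, 23)
    history = [
        {"intention": "cancel all alarm clocks", "domain": "alarm",     "time": now - timedelta(days=2)},
        {"intention": "play a Jay Chou MV",      "domain": "video",     "time": now - timedelta(days=10)},
        {"intention": "play a book by Jin Yong", "domain": "audiobook", "time": now - timedelta(days=17)},
    ]
    print([(r["intention"], r["weight"]) for r in weight_by_recency(history, now)])
    # [('cancel all alarm clocks', 4), ('play a Jay Chou MV', 3), ('play a book by Jin Yong', 2)]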
Step 408, classifying the historical recognition data to obtain a data classification result of the historical recognition data.
Specifically, the user's intentions are classified according to the acquired historical recognition data. Taking the historical recognition data listed in step 407 as an example, it can be divided into the following categories: audiobook listening, video playing, and alarm clock. The stored historical recognition data already contains the intention and the corresponding domain information for each entry, so this classification only needs to count the historical data rather than classify it from scratch.
Step 409, acquiring the domain proportion data of the historical recognition data according to the historical recognition data, the weighting coefficients and the data classification result.
Specifically, the total weight of the historical recognition data is first obtained (i.e., the weighting coefficients of all the historical recognition data are summed). To obtain the proportion data of a given domain, the weighting coefficients of every intention result belonging to that domain are summed, and the ratio of that domain's weight sum to the weight sum of all historical recognition data is computed. Taking the historical recognition data listed in step 407 and the domains divided in step 408 as an example: the sum of the weighting coefficients of all the historical recognition data is 4 × 3 + 3 × 2 + 2 × 2 + 1 × 2 = 24; the sum of the weighting coefficients of the audiobook domain is 4 × 2 + 3 × 1 + 2 × 1 + 1 × 1 = 14, so the audiobook domain proportion data is 7/12; the sum of the weighting coefficients of the video playing domain is 3 × 1 + 2 × 1 = 5, so the video playing domain proportion data is 5/24; and the sum of the weighting coefficients of the alarm clock domain is 4 × 1 + 1 × 1 = 5, so the alarm clock domain proportion data is 5/24.
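The domain proportion computation of steps 408 and 409 reduces to a weighted group-by; the sketch below reproduces the 7/12, 5/24, and 5/24 figures from the worked example, with the weights and domains copied from steps 407 and 408.

    from collections import defaultdict
    from fractions import Fraction

    # (weighting coefficient, domain) pairs from the worked example.
    weighted_history = [
        (4, "audiobook"), (4, "audiobook"), (4, "alarm"),  # last week
        (3, "audiobook"), (3, "video"),                    # one to two weeks ago
        (2, "audiobook"), (2, "video"),                    # two to three weeks ago
        (1, "audiobook"), (1, "alarm"),                    # three to four weeks ago
    ]

    def domain_proportions(history: list) -> dict:
        """Steps 408-409: sum the weights per domain and divide by the total weight."""
        per_domain = defaultdict(int)
        for weight, domain in history:
            per_domain[domain] += weight
        total = sum(per_domain.values())
        return {domain: Fraction(w, total) for domain, w in per_domain.items()}

    print(domain_proportions(weighted_history))
    # {'audiobook': Fraction(7, 12), 'alarm': Fraction(5, 24), 'video': Fraction(5, 24)}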
Step 4010, obtaining a composite score for each intention result according to the domain proportion data and the intention score.
Specifically, the composite score is calculated by the following formula:
Composite score = a × S(i) + b × H(j),  i, j = 1, 2, …, n
where S(i) is the intention score, H(j) is the domain proportion data corresponding to S(i), and a and b are preset coefficients. Taking the two intention results and their intention scores obtained in step 202, together with the domain proportion data obtained in step 409, as an example, and assuming the preset coefficient a is 9 and the coefficient b is 1:
the result of intention PlayVideoName corresponds to a composite score of 9 × (9/11) +1 × (5/24) ═ 7.57
The result of intention PlayByAlbum corresponds to a composite score of 9 × (11/11) +1 × (7/12) ═ 9.58
The coefficient a of the intention score and the coefficient b of the domain proportion data are preset values based on statistics over historical recognition data. Because these values directly affect intention recognition, the user cannot modify them directly; the algorithm developer may re-estimate them from updated user history, but once determined they cannot be changed by the user. The values are obtained from statistical data, taking into account the historical interaction data of all users, mainly factors such as interaction preferences and the amount of interaction under different intentions. By default, two decimal places are kept when computing the intention scores and the composite scores, although other precisions are also possible.
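A short worked check of the composite score formula, using the coefficients a = 9 and b = 1 and the scores and proportions from the example; rounding to two decimals reproduces the 7.57 and 9.58 above.

    A, B = 9, 1  # preset coefficients from the worked example

    def composite_score(intent_score: float, domain_share: float) -> float:
        """Step 4010: composite score = a * S(i) + b * H(j), kept to two decimal places."""
        return round(A * intent_score + B * domain_share, 2)

    print(composite_score(9 / 11, 5 / 24))   # 7.57  -> PlayVideoName
    print(composite_score(11 / 11, 7 / 12))  # 9.58  -> PlayByAlbum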
Step 4011, obtaining the intention recognition result according to the composite scores.
Specifically, the intention result with the higher composite score is selected as the final intention recognition result. Taking the composite scores calculated in step 4010 as an example, the final output intention recognition result is PlayByAlbum.
Step 4012, save the intention recognition result.
Specifically, after the intention recognition result is obtained, it is saved and used as historical recognition data during subsequent use of the intelligent device. When the intention recognition result is stored, some related information is saved as well, such as: the text data of the speech to be recognized corresponding to the intention recognition result, the time of the intention recognition result, the domain corresponding to the intention recognition result, and the user information corresponding to the intention recognition result. Stored intention recognition results are not overwritten; when historical recognition data is acquired, the stored data is filtered by user and time, and the historical recognition data meeting the conditions is selected.
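A minimal sketch of the record written in step 4012, assuming a simple append-only list as the store; the field names mirror the related information listed above and are not mandated by the patent.

    from datetime import datetime

    SAVED_RESULTS = []  # append-only: earlier intention recognition results are never overwritten

    def save_result(user: str, text: str, intention: str, domain: str, when: datetime) -> None:
        """Step 4012: store the chosen intention together with its related information
        so it can serve as historical recognition data for later recognitions."""
        SAVED_RESULTS.append({
            "user": user, "text": text, "intention": intention,
            "domain": domain, "time": when,
        })

    save_result("user_001", "please help me put a western note",
                "PlayByAlbum", "audiobook", datetime(2020, 12, 23))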
Compared with the prior art, on the basis of the first embodiment, when the intention recognition result is obtained, recognition either ends early or continues to the subsequent steps depending on the difference between the two highest intention scores; the domain proportion data is obtained by setting weighting coefficients for the historical recognition data and dividing it by domain, the composite score of each intention result is obtained from the domain proportion data and the intention score, and the intention recognition result is obtained from the composite scores. The technical scheme provided by this embodiment of the invention therefore offers both higher accuracy and greater flexibility.
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into one step, or a step may be split into multiple steps, and all such variants fall within the protection scope of this patent as long as they contain the same logical relationship. Adding insignificant modifications to the algorithms or processes, or introducing insignificant design changes, without changing their core design also falls within the protection scope of this patent.
A fourth embodiment of the present invention relates to an intention recognition apparatus, as shown in fig. 5, comprising:
the processing module 501: configured to process the voice data to be recognized and acquire the user object in the voice data to be recognized;
the acquisition module 502: configured to acquire at least two pieces of historical recognition data of the user object;
the intention recognition module 503: configured to perform intention recognition according to the voice data to be recognized to obtain intention results, and to score the intention results to obtain intention scores;
the weighting module 504: configured to acquire the intention recognition result according to the historical recognition data, the intention results and the intention scores;
the storage module 505: configured to save the intention recognition result.
It should be understood that the modules 501 to 504 in this embodiment are system examples corresponding to the first embodiment, and therefore this embodiment can be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the first embodiment.
It should be noted that each module referred to in this embodiment is a logical module, and in practical applications, one logical unit may be one physical unit, may be a part of one physical unit, and may be implemented by a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, elements that are not so closely related to solving the technical problems proposed by the present invention are not introduced in the present embodiment, but this does not indicate that other elements are not present in the present embodiment.
A fifth embodiment of the present invention relates to an electronic device, as shown in fig. 6, comprising:
at least one processor 601; and
a memory 602 communicatively coupled to the at least one processor 601; wherein
the memory 602 stores instructions executable by the at least one processor 601, and the instructions are executed by the at least one processor 601 to enable the at least one processor 601 to perform the intention recognition method described in the above embodiments of the present invention.
The memory and the processor are connected by a bus, which may include any number of interconnected buses and bridges, linking together one or more of the various circuits of the processor and the memory. The bus may also link various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory may be used to store data used by the processor in performing operations.
Those skilled in the art can understand that all or part of the steps in the methods of the foregoing embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (10)

1. An intent recognition method, comprising:
processing voice data to be recognized to obtain a user object in the voice data to be recognized;
acquiring at least two pieces of historical recognition data of the user object;
performing intention recognition on the voice data to be recognized to obtain an intention result, and scoring the intention result to obtain an intention score;
and acquiring an intention recognition result according to the historical recognition data, the intention result and the intention score.
2. The intention recognition method according to claim 1, wherein the performing intention recognition on the voice data to be recognized, obtaining an intention result, and scoring the intention result, obtaining an intention score comprises:
matching the voice data to be recognized with a preset intention rule to obtain an intention result and a matching result corresponding to the intention result;
and scoring the intention result according to the intention result and the matching result to obtain the intention score.
3. The intention recognition method according to claim 1, wherein after the obtaining of the intention score, the method further comprises:
judging whether the number of the intention results is larger than a preset number or not;
if the number of the intention results is smaller than or equal to the preset number, acquiring the intention recognition result;
and if the number of the intention results is larger than the preset number, acquiring the intention recognition result according to the historical recognition data, the intention results and the intention score.
4. The intent recognition method according to claim 1, wherein said obtaining an intent recognition result from the historical recognition data, the intent result, and the intent score comprises:
sorting the intention results according to the intention scores, and acquiring a difference value between the first two intention scores;
judging whether the difference value is larger than a preset threshold value or not;
if the difference value is larger than the preset threshold value, acquiring the intention recognition result;
and if the difference value is smaller than or equal to the preset threshold value, acquiring the intention recognition result according to the historical recognition data and the intention score.
5. The intent recognition method according to claim 4, wherein said obtaining the intent recognition result according to the historical recognition data and the intent score comprises:
setting a weighting coefficient for the historical recognition data according to the time of the historical recognition data;
classifying the historical recognition data to obtain a data classification result of the historical recognition data;
obtaining the domain proportion data of the historical recognition data according to the historical recognition data, the weighting coefficient and the data classification result;
obtaining a composite score of the intention result according to the domain proportion data and the intention score;
and acquiring the intention recognition result according to the composite score.
6. The intention recognition method according to claim 5, wherein the composite score of the intention result is obtained from the domain proportion data and the intention score by the following formula:
Composite score = a × S(i) + b × H(j),  i, j = 1, 2, …, n
wherein S(i) is the intention score, H(j) is the domain proportion data corresponding to S(i), and a and b are preset coefficients.
7. The intention recognition method according to claim 1, further comprising, after the obtaining of the intention recognition result, saving the intention recognition result.
8. An intention recognition apparatus, comprising:
a processing module: configured to process voice data to be recognized and acquire a user object in the voice data to be recognized;
an acquisition module: configured to acquire at least two pieces of historical recognition data of the user object;
an intention recognition module: configured to perform intention recognition according to the voice data to be recognized to obtain an intention result, and to score the intention result to obtain an intention score;
a weighting module: configured to acquire an intention recognition result according to the historical recognition data, the intention result and the intention score;
a storage module: configured to save the intention recognition result.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the intention recognition method of any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, implements the intention recognition method of any one of claims 1 to 7.
CN202011556347.6A 2020-12-23 2020-12-23 Intention recognition method and device, electronic equipment and storage medium Pending CN112559719A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011556347.6A CN112559719A (en) 2020-12-23 2020-12-23 Intention recognition method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011556347.6A CN112559719A (en) 2020-12-23 2020-12-23 Intention recognition method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112559719A true CN112559719A (en) 2021-03-26

Family

ID=75033947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011556347.6A Pending CN112559719A (en) 2020-12-23 2020-12-23 Intention recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112559719A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110334201A (en) * 2019-07-18 2019-10-15 中国工商银行股份有限公司 A kind of intension recognizing method, apparatus and system
CN110992167A (en) * 2019-11-28 2020-04-10 中国银行股份有限公司 Bank client business intention identification method and device
CN111241814A (en) * 2019-12-31 2020-06-05 中移(杭州)信息技术有限公司 Error correction method and device for voice recognition text, electronic equipment and storage medium
CN111933127A (en) * 2020-07-31 2020-11-13 升智信息科技(南京)有限公司 Intention recognition method and intention recognition system with self-learning capability


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20210326