CN109102809B - Dialogue method and system for intelligent robot - Google Patents

Dialogue method and system for intelligent robot

Info

Publication number
CN109102809B
CN109102809B (application CN201810650049.XA)
Authority
CN
China
Prior art keywords
dialogue
sentence
standard
information
sentences
Prior art date
Legal status
Active
Application number
CN201810650049.XA
Other languages
Chinese (zh)
Other versions
CN109102809A (en)
Inventor
喻凯东 (Yu Kaidong)
Current Assignee
Beijing Guangnian Wuxian Technology Co Ltd
Original Assignee
Beijing Guangnian Wuxian Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Guangnian Wuxian Technology Co Ltd filed Critical Beijing Guangnian Wuxian Technology Co Ltd
Priority to CN201810650049.XA priority Critical patent/CN109102809B/en
Publication of CN109102809A publication Critical patent/CN109102809A/en
Application granted granted Critical
Publication of CN109102809B publication Critical patent/CN109102809B/en

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/26 - Speech to text systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/30 - Semantic analysis
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/063 - Training
    • G10L 2015/0638 - Interactive procedures

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

A dialogue method and system for an intelligent robot are provided, wherein the method comprises the following steps: step one, obtaining dialogue information input by a user; step two, respectively calculating semantic similarity between the dialogue information and a plurality of standard dialogue sentences, and determining the standard dialogue sentence corresponding to the dialogue information according to the semantic similarity; and step three, generating voice feedback information by using a preset knowledge graph according to the semantic understanding result of the standard dialogue sentence. Based on semantic similarity, the method converts non-standardized dialogue sentences input by the user, which cannot be used with the knowledge graph, into standardized dialogue sentences that can, so that the user can interact with the intelligent robot in a more natural way and the user experience of the intelligent robot is improved.

Description

Dialogue method and system for intelligent robot
Technical Field
The invention relates to the technical field of voice interaction, in particular to a dialogue method and system for an intelligent robot.
Background
With the continuous development of science and technology and the introduction of information technology, computer technology and artificial intelligence technology, robot research has gradually moved beyond the industrial field and expanded into medical care, health care, the family, entertainment, the service industry and other fields. People's requirements for robots have likewise risen from simple, repetitive mechanical actions to intelligent robots capable of anthropomorphic question answering, autonomy and interaction with other robots, and human-computer interaction has become an important factor in the development of intelligent robots.
Therefore, how to enable the intelligent robot to interact with the user more accurately and effectively is a technical problem to be solved urgently in the robot field.
Disclosure of Invention
In order to solve the above problems, the present invention provides a dialogue method for an intelligent robot, the method including:
step one, obtaining dialogue information input by a user;
step two, respectively calculating semantic similarity between the dialogue information and a plurality of standard dialogue sentences, and determining the standard dialogue sentence corresponding to the dialogue information according to the semantic similarity;
and step three, generating voice feedback information by using a preset knowledge graph according to the semantic understanding result of the standard dialogue sentence.
According to an embodiment of the present invention, in the second step, a sentence with the largest semantic similarity value is selected from the plurality of standard dialogue sentences as the standard dialogue sentence corresponding to the dialogue information.
According to an embodiment of the present invention, the plurality of standard conversational sentences are stored in a preset sentence import knowledge base, and in step two, the standard conversational sentences corresponding to the conversational information are retrieved from the preset sentence import knowledge base according to the semantic similarity.
According to an embodiment of the invention, the preset sentence import knowledge base is generated according to the entities of the preset knowledge graph and the relationships between the entities.
According to an embodiment of the present invention, similar dialogue sentences associated with the standard dialogue sentences are further stored in the preset sentence import knowledge base. In the second step, the semantic similarity between the dialogue information and each dialogue sentence in the preset sentence import knowledge base is respectively calculated, and if a specific similar dialogue sentence is selected according to the semantic similarity, the corresponding standard dialogue sentence is determined according to that specific similar dialogue sentence.
The invention also provides a dialogue system for an intelligent robot, which comprises:
the conversation information acquisition module is used for acquiring conversation information input by a user;
the standard dialogue sentence generation module is connected with the dialogue information acquisition module and used for respectively calculating semantic similarity between the dialogue information and a plurality of standard dialogue sentences and determining the standard dialogue sentences corresponding to the dialogue information according to the semantic similarity;
and the feedback information generation module is connected with the standard dialogue statement generation module and used for generating voice feedback information by utilizing a preset knowledge graph according to the semantic understanding result of the standard dialogue statement.
According to an embodiment of the present invention, the standard dialogue statement generation module is configured to select a statement with the largest semantic similarity value from the plurality of standard dialogue statements as the standard dialogue statement corresponding to the dialogue information.
According to an embodiment of the present invention, the plurality of standard conversational sentences are stored in a preset sentence import knowledge base, and the standard conversational sentence generating module is configured to retrieve the standard conversational sentence corresponding to the conversational information from the preset sentence import knowledge base according to the semantic similarity.
According to an embodiment of the present invention, the preset sentence import knowledge base is generated according to the entities of the preset knowledge graph and the relationships between the entities.
According to an embodiment of the present invention, similar dialogue sentences associated with the standard dialogue sentences are further stored in the preset sentence import knowledge base. The standard dialogue sentence generation module is configured to calculate the semantic similarity between the dialogue information and each dialogue sentence in the preset sentence import knowledge base, and if a specific similar dialogue sentence is selected according to the semantic similarity, the corresponding standard dialogue sentence is determined according to that specific similar dialogue sentence.
The dialogue method for the intelligent robot converts the non-standardized dialogue sentences which are input by the user and cannot be used by the knowledge graph into the standardized dialogue sentences which can be used by the knowledge graph based on the semantic similarity, so that the user can more naturally perform man-machine interaction with the intelligent robot, and the user experience of the intelligent robot is improved.
Meanwhile, because the method provided by the invention converts non-standard dialogue sentences into standard dialogue sentences, it does not need to normalize entities and entity relationships, unlike existing methods. For example, an existing man-machine dialogue method may need to normalize synonymous expressions of an entity relationship (for example, different colloquial ways of saying "wife") into a single canonical term such as "wife" during application, whereas the method provided by the present invention does not need this.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following briefly introduces the drawings required in the description of the embodiments or the prior art:
FIG. 1 is a schematic diagram of a human-machine interaction scenario for an intelligent robot, according to one embodiment of the present invention;
FIG. 2 is a flow chart illustrating an implementation of a dialogue method for an intelligent robot according to an embodiment of the invention;
FIG. 3 is a flow chart illustrating an implementation of a dialogue method for an intelligent robot according to another embodiment of the invention;
fig. 4 is a schematic structural diagram of a dialogue system for an intelligent robot according to an embodiment of the present invention.
Detailed Description
The following detailed description of the embodiments of the present invention will be provided with reference to the drawings and examples, so that how to apply the technical means to solve the technical problems and achieve the technical effects can be fully understood and implemented. It should be noted that, as long as there is no conflict, the embodiments and the features of the embodiments of the present invention may be combined with each other, and the technical solutions formed are within the scope of the present invention.
In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without some of these specific details or with other methods described herein.
Additionally, the steps illustrated in the flow charts of the figures may be performed in a computer system, for example as a set of computer-executable instructions, and, although a logical order is illustrated in the flow charts, in some cases the steps illustrated or described may be performed in an order different from the one here.
Most dialogue systems currently used by intelligent robots implement human-computer interaction based on knowledge graphs. However, when an existing dialogue system generates feedback speech using a knowledge graph, the user is required to query the dialogue system in a specific format. If the voice information input by the user does not satisfy the preset format (i.e., is not in the standard format), the dialogue system cannot use the knowledge graph to generate feedback speech.
In view of the above problems in the prior art, the present invention provides a new dialog method for an intelligent robot, which no longer requires a user to have to dialog with the intelligent robot according to a specific standard dialog sentence format, and thus, the interaction experience of the intelligent robot is improved.
As shown in fig. 1, the dialogue method provided by the present invention is preferably configured in an intelligent robot and can be executed by a robot operating system built into the intelligent robot. When the built-in operating system of the intelligent robot implements the method provided by the invention, the user 100 can input corresponding dialogue information to the intelligent robot 101 according to his or her own habits, and the intelligent robot 101 can generate reasonable feedback information based on the knowledge graph according to the dialogue information input by the user, so that a more anthropomorphic man-machine dialogue process is realized.
It should be noted that in different embodiments of the present invention, the intelligent robot 101 may be a different form of system with man-machine interaction capability. For example, in one embodiment of the present invention, the intelligent robot 101 may be a humanoid robot equipped with an intelligent operating system, while in another embodiment of the present invention, the intelligent robot 101 may be a specific software or application capable of performing the man-machine interaction method provided by the present invention.
In order to more clearly illustrate the implementation principle, implementation process and advantages of the dialog method for the intelligent robot provided by the invention, the specific contents of the dialog method are further described below in conjunction with different embodiments.
The first embodiment is as follows:
fig. 2 shows a schematic implementation flow diagram of the dialogue method for the intelligent robot provided by the embodiment.
As shown in fig. 2, the dialog method for the intelligent robot provided by the present embodiment preferably obtains dialog information input by the user in step S201. Specifically, in the present embodiment, the dialog information input by the user may be voice information, and the dialog method preferably acquires the voice information input by the user through a voice collecting device (e.g., a microphone, etc.) equipped with the intelligent robot in step S201.
Of course, in other embodiments of the present invention, the dialog information input by the user and acquired in step S201 by the dialog method may also be other forms of information according to actual needs, and the present invention is not limited thereto. For example, in an embodiment of the present invention, the dialog information acquired in step S201 by the dialog method may also be text information, where the text information may be text information obtained by performing Optical Character Recognition (OCR) on the acquired image, or text information input by a user through a corresponding device (for example, a virtual keyboard or a physical keyboard).
After obtaining the dialog information input by the user, the method calculates semantic similarities between the dialog information and a plurality of standard dialog sentences in step S202. In this embodiment, in step S202, the method preferably uses a multi-feature sentence semantic similarity calculation algorithm that comprehensively considers factors such as the weight of a word, the semantics of the word in a sentence, and the sentence structure to calculate the semantic similarity between the dialog information and each standard dialog sentence.
Existing sentence similarity calculation methods can be roughly classified into five categories: literal matching methods, Term Frequency-Inverse Document Frequency (TF-IDF) vector methods, probability methods, sentence structure methods and semantic expansion methods. A literal matching method mainly calculates the similarity of two sentences according to the number of identical words they contain. A TF-IDF vector method represents each sentence as a TF-IDF vector and takes the cosine of the two vectors as their similarity. A probability method obtains the similarity of sentences by introducing a language-model framework and using probabilistic calculation. A sentence structure method usually divides a sentence into different components by matching it against sentence templates, and then calculates similarity according to the structural components of the sentence.
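For concreteness, the TF-IDF vector method mentioned above can be sketched as follows. This is a minimal sketch, assuming scikit-learn is available; the library choice and the example sentences are illustrative only, and the example also shows why such methods ignore word order.

```python
# Minimal sketch of the TF-IDF vector method: each sentence becomes a TF-IDF
# vector and the cosine of the two vectors is taken as their similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_similarity(sentence_a: str, sentence_b: str) -> float:
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform([sentence_a, sentence_b])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])

print(tfidf_similarity("flights from beijing to shanghai",
                       "flights from shanghai to beijing"))  # 1.0, because word order is ignored
```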
The first four of these methods consider only the literal values of words in sentences. Because words can be ambiguous, relying on surface forms alone easily leads to mismatches in sentence similarity. Through research and analysis, the inventor found that sentences are composed of words, that the part of speech of a word and its position in the sentence affect the sentence semantics to different degrees, and that the concept a word expresses is constrained by its context. At the same time, the order in which words appear in a sentence (i.e., the sentence structure) also influences the meaning of the sentence. The above methods fail to consider these factors comprehensively. For this reason, the method provided by the present embodiment preferably employs a multi-feature sentence semantic similarity calculation algorithm that comprehensively considers factors such as the weight of a word, the semantics of the word in the sentence, and the sentence structure to calculate the semantic similarity between the dialogue information and each standard dialogue sentence.
Words with different parts of speech and at different positions in a sentence contribute differently to distinguishing whether two sentences are similar. The importance of a word for distinguishing sentence similarity can be represented by a word weight, which mainly comprises the frequency of the word in the sentence, the position weight of the word and the part-of-speech weight.
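A minimal sketch of how such a word weight might be composed from these three factors is given below; the jieba tokenizer, the part-of-speech weight table and the position formula are assumptions made here for illustration, since the patent does not fix concrete values.

```python
# Illustrative word-weight computation combining term frequency in the
# sentence, position weight and part-of-speech weight. Weight tables and
# the jieba tokenizer are assumptions.
import jieba.posseg as pseg

POS_WEIGHT = {"n": 1.0, "v": 0.8, "a": 0.6}   # nouns > verbs > adjectives (assumed values)
DEFAULT_POS_WEIGHT = 0.3

def word_weights(sentence: str) -> dict:
    tokens = list(pseg.cut(sentence))          # (word, part-of-speech) pairs
    n = len(tokens)
    weights = {}
    for i, token in enumerate(tokens):
        word, flag = token.word, token.flag
        tf = sum(1 for t in tokens if t.word == word) / n   # frequency of the word in the sentence
        position = 1.0 - i / (2 * n)                        # earlier words weighted slightly higher (assumed)
        pos = POS_WEIGHT.get(flag[0], DEFAULT_POS_WEIGHT)   # part-of-speech weight
        weights[word] = tf * position * pos
    return weights
```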
Word order also has an important influence on semantics. For example, S1: "flights from Beijing to Shanghai"; S2: "flights from Shanghai to Beijing". If the similarity of these two sentences is determined by a literal matching method, the conclusion is that they are completely similar; the reason for this mismatch is that the structural information of the sentences is not considered. Word order is a kind of basic structural information that can effectively distinguish two sentences having the same word set. Specifically, in this embodiment, the method preferably uses a normalized word-order similarity calculation over the two sentences to determine their sentence structure similarity. When the word sequences of the two sentences are completely the same, their sentence structure similarity takes the maximum value 1.
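The patent does not give the normalized word-order similarity in closed form, so the following sketch shows one common formulation based on counting order inversions among the shared words; it reaches 1 exactly when the shared words appear in the same order.

```python
# Sketch of a normalized word-order similarity: only words shared by both
# sentences are considered, and the number of order inversions between them
# is normalized into [0, 1]. The exact formula is an assumption.
def word_order_similarity(words_a: list, words_b: list) -> float:
    shared = [w for w in words_a if w in words_b]
    if len(shared) < 2:
        return 1.0 if words_a == words_b else 0.0
    order_in_b = [words_b.index(w) for w in shared]   # positions of shared words in sentence B
    inversions = sum(1 for i in range(len(order_in_b))
                       for j in range(i + 1, len(order_in_b))
                       if order_in_b[i] > order_in_b[j])
    max_inversions = len(order_in_b) * (len(order_in_b) - 1) / 2
    return 1.0 - inversions / max_inversions

print(word_order_similarity(["flights", "from", "beijing", "to", "shanghai"],
                            ["flights", "from", "shanghai", "to", "beijing"]))  # 0.7, not 1.0
```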
After obtaining the word weight, the semantic similarity of the words, and the sentence structure similarity, the method may respectively calculate the semantic similarity between the dialogue information acquired in step S201 and each standard dialogue sentence in a multi-feature weighting manner.
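Combining the sketches above, the multi-feature weighting could look like the following; the blending coefficients and the placeholder word-level similarity function are assumptions, since the patent does not specify them. In practice the word-level measure could be embedding- or thesaurus-based.

```python
# Sketch of the multi-feature weighted combination: a word-weighted
# lexical/semantic match score is blended with the word-order similarity
# from the previous sketch. Coefficients are assumed values.
def word_sim(w1: str, w2: str) -> float:
    return 1.0 if w1 == w2 else 0.0            # stand-in for a real word-semantic measure

def multi_feature_similarity(words_a: list, words_b: list, weights_a: dict,
                             alpha: float = 0.7, beta: float = 0.3) -> float:
    # weights_a: word weights for sentence A, e.g. produced by word_weights() above
    total = sum(weights_a.values()) or 1.0
    matched = sum(weight * max((word_sim(w, v) for v in words_b), default=0.0)
                  for w, weight in weights_a.items())
    lexical_semantic = matched / total
    structure = word_order_similarity(words_a, words_b)   # defined in the sketch above
    return alpha * lexical_semantic + beta * structure
```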
Of course, in other embodiments of the present invention, the method may also use other reasonable manners to calculate the semantic similarity between the above dialog information and each standard dialog statement, and the present invention is not limited thereto.
As shown in fig. 2, after obtaining the semantic similarity between the dialogue information acquired in step S201 and each standard dialogue sentence, the method determines the standard dialogue sentence corresponding to the dialogue information according to the semantic similarity in step S203.
Specifically, in this embodiment, in step S203, the method preferably selects a sentence with the largest semantic similarity from the plurality of standard conversational sentences as the standard conversational sentence corresponding to the conversational information acquired in step S201.
In this embodiment, the plurality of standard dialogue sentences are preferably stored in a preset sentence import knowledge base. The method may retrieve a standard dialog sentence corresponding to the dialog information from the preset sentence import knowledge base based on the semantic similarity in step S203.
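A minimal sketch of this retrieval step, assuming the preset sentence import knowledge base is available as a simple list of standard sentences and that a sentence similarity function such as the one sketched earlier is passed in; both assumptions are made here for illustration.

```python
# Sketch of step S203: score the user's dialogue information against every
# standard sentence in the preset sentence import knowledge base and return
# the highest-scoring one.
from typing import Callable, List

def retrieve_standard_sentence(dialogue_info: str,
                               standard_sentences: List[str],
                               similarity_fn: Callable[[str, str], float]) -> str:
    return max(standard_sentences, key=lambda s: similarity_fn(dialogue_info, s))
```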
It should be noted that, in this embodiment, the preset sentence import knowledge base is preferably generated according to the entities of the preset knowledge graph and the relationships between the entities. Of course, in other embodiments of the present invention, the method may also generate the preset sentence import knowledge base in other reasonable ways, and the present invention is not limited thereto.
For example, for dialogue information input by the user such as "who is the wife of Zhou Jielun", the corresponding standard dialogue sentence stored in the preset sentence import knowledge base may be "who is the wife of ${person}". After receiving the dialogue information "who is the wife of Zhou Jielun" input by the user, the method can convert it, based on semantic similarity, into the standard dialogue sentence "who is the wife of Zhou Jielun", which the knowledge graph can use.
However, if the dialogue information input by the user is "who did Zhou Jielun marry", an existing knowledge-graph-based dialogue method can only recognize specific standard dialogue sentences, so it cannot correctly understand, on the basis of the knowledge graph, the semantics of "who did Zhou Jielun marry", even though this is actually the same question as "who is the wife of Zhou Jielun". By calculating semantic similarity, the method provided by this embodiment can determine that the standard dialogue sentence semantically most similar to "who did Zhou Jielun marry" is "who is the wife of Zhou Jielun", and can therefore convert the non-standard dialogue sentence into a standard dialogue sentence that the knowledge graph can use.
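By way of illustration, generating standard dialogue sentences from the graph's entities and relations could be sketched as follows; the triple, the template and the helper names are assumptions and are not taken from the patent.

```python
# Sketch of generating the preset sentence import knowledge base from the
# knowledge graph's entities and relations.
TRIPLES = [("Zhou Jielun", "wife", "Kun Ling")]       # (head entity, relation, tail entity)
TEMPLATES = {"wife": "who is the wife of {person}"}   # one template per relation

def build_sentence_import_kb(triples, templates):
    standard_sentences = []
    for head, relation, _tail in triples:
        template = templates.get(relation)
        if template:
            standard_sentences.append(template.format(person=head))
    return standard_sentences

print(build_sentence_import_kb(TRIPLES, TEMPLATES))   # ['who is the wife of Zhou Jielun']
```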
As shown in fig. 2, in this embodiment, after obtaining the standard dialogue sentence corresponding to the dialogue information, the method performs semantic understanding on the standard dialogue sentence in step S204, and generates and outputs corresponding voice feedback information by using the preset knowledge graph according to the semantic understanding result.
Since the knowledge graph stores entities and the relationships between them, by semantically understanding the standard dialogue sentence determined in step S203, the method can obtain in step S204 the entity "Zhou Jielun" and the entity relationship "wife", and then search the knowledge graph to obtain the entity "Kun Ling", which has the relationship "wife" with the entity "Zhou Jielun". This yields the answer to the dialogue information input by the user, and feedback information such as "It is Kun Ling" can be generated from that answer.
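A minimal sketch of this lookup, assuming the knowledge graph is available as a simple mapping from (entity, relation) pairs to entities; the reply wording is likewise an assumption.

```python
# Sketch of step S204: look up the (entity, relation) pair obtained from the
# standard dialogue sentence and wrap the result into spoken feedback.
KNOWLEDGE_GRAPH = {("Zhou Jielun", "wife"): "Kun Ling"}

def generate_feedback(entity: str, relation: str) -> str:
    answer = KNOWLEDGE_GRAPH.get((entity, relation))
    return f"It is {answer}." if answer else "Sorry, I don't know that yet."

print(generate_feedback("Zhou Jielun", "wife"))   # It is Kun Ling.
```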
As can be seen from the above description, the dialogue method for the intelligent robot provided by this embodiment converts the non-standardized dialogue sentences that cannot be used by the knowledge graph and are input by the user into the standardized dialogue sentences that can be used by the knowledge graph based on semantic similarity, so that the user can more naturally perform human-computer interaction with the intelligent robot, and the user experience of the intelligent robot is improved.
Meanwhile, because the method provided by this embodiment converts non-standard dialogue sentences into standard dialogue sentences, it does not need to normalize entities and entity relationships, unlike existing methods. For example, an existing man-machine dialogue method may need to normalize synonymous expressions of an entity relationship (for example, different colloquial ways of saying "wife") into a single canonical term such as "wife" during application, whereas the method provided by this embodiment does not need that process.
Example two:
fig. 3 shows a schematic implementation flow diagram of the dialogue method for the intelligent robot provided by the embodiment.
As shown in fig. 3, the dialog method for the intelligent robot provided by the present embodiment preferably obtains dialog information input by the user in step S301. Specifically, in the present embodiment, the dialog information input by the user may be voice information, and the dialog method preferably acquires the voice information input by the user through a voice collecting device (e.g., a microphone, etc.) equipped with the intelligent robot in step S301.
Of course, in other embodiments of the present invention, the dialog information input by the user, acquired in step S301 by the dialog method, may also be other forms of information according to actual needs, and the present invention is not limited thereto.
As shown in fig. 3, in the present embodiment, in step S302 the method respectively calculates the semantic similarity between the dialogue information and each dialogue sentence in the preset sentence import knowledge base. In this embodiment, the preset sentence import knowledge base stores not only the standard dialogue sentences but also similar dialogue sentences associated with the standard dialogue sentences. In this way, step S302 calculates not only the similarity between the dialogue information and each standard dialogue sentence in the preset sentence import knowledge base, but also its similarity to the similar dialogue sentences associated with each standard dialogue sentence.
If the sentence with the highest semantic similarity to the dialog information is the standard dialog sentence, the method may also generate and output corresponding feedback information by using the preset knowledge graph based on the same principle and process as step S204 in the first embodiment.
If the sentence with the highest semantic similarity to the dialog information is a non-standard dialog sentence (i.e., a similar dialog sentence associated with a standard sentence), in this embodiment, the method determines the standard dialog sentence corresponding to the determined specific similar dialog sentence in step S303, then performs semantic understanding on the standard dialog sentence determined in step S303 in step S304, and generates and outputs corresponding voice feedback information using the preset knowledge graph according to the semantic understanding result.
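A minimal sketch of this resolution step, assuming the preset sentence import knowledge base is stored as a mapping from every dialogue sentence (standard or similar) to its standard dialogue sentence; the data layout and names are assumptions made for illustration.

```python
# Sketch of steps S302-S303: find the best-matching sentence in the
# sentence import knowledge base, then resolve it to its standard dialogue
# sentence before the knowledge graph is consulted.
from typing import Callable, Dict

SENTENCE_KB: Dict[str, str] = {
    "who is the wife of Zhou Jielun": "who is the wife of Zhou Jielun",  # standard sentence maps to itself
    "who did Zhou Jielun marry":      "who is the wife of Zhou Jielun",  # similar sentence -> standard sentence
}

def resolve_standard_sentence(dialogue_info: str,
                              similarity_fn: Callable[[str, str], float]) -> str:
    best_match = max(SENTENCE_KB, key=lambda s: similarity_fn(dialogue_info, s))
    return SENTENCE_KB[best_match]
```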
As can be seen from the above description, the dialogue method for the intelligent robot provided in this embodiment builds on the method of the first embodiment by further adding, to the preset sentence import knowledge base, similar dialogue sentences associated with the standard dialogue sentences (for example, other question forms similar to the standard questions, i.e., question forms the user is likely to use). This helps improve the coverage of the standard dialogue sentences and thus further improves the user experience of the intelligent robot.
The invention also provides a dialogue system for the intelligent robot, wherein fig. 4 shows a schematic structural diagram of the dialogue system in the embodiment.
As shown in fig. 4, the dialogue system for an intelligent robot provided by the present embodiment preferably includes: a dialogue information acquisition module 401, a standard dialogue sentence generation module 402, and a feedback information generation module 403. The session information acquiring module 401 is configured to acquire session information input by a user. In this embodiment, the dialog information obtaining module 401 may be a voice collecting device (e.g., a microphone) equipped in the intelligent robot, and the dialog system may also obtain the voice information input by the user by using the voice collecting device.
Of course, in other embodiments of the present invention, the session information obtaining module 401 may also include other reasonable devices or be implemented by other reasonable devices according to actual needs, and the present invention is not limited thereto. For example, in an embodiment of the present invention, the dialog information obtaining module 401 may further include an optical character recognition device, and the optical character recognition device may perform optical character recognition on the obtained image to obtain corresponding text information.
The dialogue information acquisition module 401 is connected to the standard dialogue sentence generation module 402 and transmits the dialogue information it acquires to the standard dialogue sentence generation module 402, so that the standard dialogue sentence generation module 402 can generate the standard dialogue sentence corresponding to the dialogue information.
It should be noted that, in this embodiment, the standard dialogue statement generation module 402 generates the standard dialogue statement corresponding to the dialogue statement information, which may adopt the technical solutions disclosed in step S202 to step S203 in the first embodiment, or adopt the technical solutions disclosed in step S302 to step S303 in the second embodiment, and detailed descriptions of the specific principle and process of the standard dialogue statement generation module 402 for implementing the function thereof are omitted here.
The feedback information generating module 403 is connected to the standard dialogue sentence generating module 402, and is capable of performing semantic understanding on the standard dialogue sentences transmitted by the standard dialogue sentence generating module 402, and generating and outputting corresponding voice feedback information according to a semantic understanding result by using a preset knowledge graph. The specific principle and process of the feedback information generating module 403 for implementing the function are the same as those disclosed in step S204 in the first embodiment, and therefore, the related content of the feedback information generating module 403 is not repeated here again.
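As a structural sketch only, the three modules of fig. 4 could be wired together as follows; the class and method names are assumptions made here for illustration.

```python
# Sketch of the pipeline formed by the three modules of fig. 4.
class DialogueSystem:
    def __init__(self, acquisition_module, standard_sentence_module, feedback_module):
        self.acquisition = acquisition_module          # dialogue information acquisition module 401
        self.standardizer = standard_sentence_module   # standard dialogue sentence generation module 402
        self.feedback = feedback_module                # feedback information generation module 403

    def run_once(self) -> str:
        dialogue_info = self.acquisition.get_input()                      # e.g. speech from a microphone
        standard_sentence = self.standardizer.to_standard(dialogue_info)  # steps S202-S203 / S302-S303
        return self.feedback.generate(standard_sentence)                  # semantic understanding + knowledge graph
```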
It is to be understood that the disclosed embodiments of the invention are not limited to the particular structures or process steps disclosed herein, but extend to equivalents thereof as would be understood by those skilled in the relevant art. It is also to be understood that the terminology used herein is for the purpose of describing particular embodiments only, and is not intended to be limiting.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. Thus, the appearances of the phrase "one embodiment" or "an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
While the above examples are illustrative of the principles of the present invention in one or more applications, it will be apparent to those of ordinary skill in the art that various changes in form, usage and details of implementation can be made without departing from the principles and concepts of the invention. Accordingly, the invention is defined by the appended claims.

Claims (6)

1. A dialog method for an intelligent robot, the method comprising:
step one, obtaining dialogue information input by a user;
secondly, respectively calculating semantic similarity between the dialogue information and a plurality of standard dialogue sentences, and determining the standard dialogue sentences corresponding to the dialogue information according to the semantic similarity; the plurality of standard dialogue sentences are stored in a preset sentence import knowledge base, and in the second step, the standard dialogue sentences corresponding to the dialogue information are obtained by searching the preset sentence import knowledge base according to the semantic similarity; calculating the semantic similarity of the dialogue information and each standard dialogue sentence by adopting a multi-feature sentence semantic similarity calculation algorithm which comprehensively considers the weight factor of a word, the semantic factor of the word in the sentence and the sentence structure factor; the weight factors of the words comprise the frequency of appearance of the words in the sentence, the position weight and the part of speech weight of the words;
furthermore, similar dialogue sentences associated with the standard dialogue sentences are stored in the preset sentence importing knowledge base, in the second step, semantic similarity between the dialogue information and each dialogue sentence in the preset sentence importing knowledge base is calculated respectively, and if a specific similar dialogue sentence is selected according to the semantic similarity, the corresponding standard dialogue sentence is determined according to the specific similar dialogue sentence;
and thirdly, generating voice feedback information by using a preset knowledge graph according to a semantic understanding result of a standard dialogue statement corresponding to the dialogue information.
2. The dialogue method according to claim 1, wherein in the second step, a sentence with the largest semantic similarity value is selected from the plurality of standard dialogue sentences as a standard dialogue sentence corresponding to the dialogue information.
3. The dialog method of claim 1 wherein the predetermined statement import knowledge base is generated based on entities and relationships between entities of the predetermined knowledge graph.
4. A dialog system for an intelligent robot, the system comprising:
the conversation information acquisition module is used for acquiring conversation information input by a user;
the standard dialogue sentence generation module is connected with the dialogue information acquisition module and used for respectively calculating semantic similarity between the dialogue information and a plurality of standard dialogue sentences and determining the standard dialogue sentences corresponding to the dialogue information according to the semantic similarity; the standard dialogue sentences are stored in a preset sentence import knowledge base, and the standard dialogue sentence generation module is configured to retrieve the standard dialogue sentences corresponding to the dialogue information from the preset sentence import knowledge base according to the semantic similarity; the weight factors of the words comprise the frequency of the appearance of the words in the sentences, the position weight and the part-of-speech weight of the words;
the preset sentence import knowledge base is also stored with similar dialogue sentences related to the standard dialogue sentences, the standard dialogue sentence generation module is configured to respectively calculate semantic similarity between the dialogue information and each dialogue sentence in the preset sentence import knowledge base, and if a specific similar dialogue sentence is selected according to the semantic similarity, the corresponding standard dialogue sentence is determined according to the specific similar dialogue sentence;
the standard dialogue sentence generation module is configured to calculate the semantic similarity between dialogue information and each standard dialogue sentence by adopting a multi-feature sentence semantic similarity calculation algorithm which comprehensively considers the weight factor of a word, the semantic factor of the word in the sentence and the sentence structure factor;
and the feedback information generation module is connected with the standard dialogue statement generation module and used for generating voice feedback information by utilizing a preset knowledge graph according to the semantic understanding result of the standard dialogue statement.
5. The dialog system of claim 4 wherein the standard dialog sentence generation module is configured to select a sentence with the largest semantic similarity value from the plurality of standard dialog sentences as the standard dialog sentence corresponding to the dialog information.
6. The dialog system of claim 4 wherein the predetermined statement import knowledge base is generated based on entities and relationships between entities of the predetermined knowledge graph.
CN201810650049.XA 2018-06-22 2018-06-22 Dialogue method and system for intelligent robot Active CN109102809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810650049.XA CN109102809B (en) 2018-06-22 2018-06-22 Dialogue method and system for intelligent robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810650049.XA CN109102809B (en) 2018-06-22 2018-06-22 Dialogue method and system for intelligent robot

Publications (2)

Publication Number Publication Date
CN109102809A (en) 2018-12-28
CN109102809B (en) 2021-06-15

Family

ID=64844889

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810650049.XA Active CN109102809B (en) 2018-06-22 2018-06-22 Dialogue method and system for intelligent robot

Country Status (1)

Country Link
CN (1) CN109102809B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109887483A (en) * 2019-01-04 2019-06-14 平安科技(深圳)有限公司 Self-Service processing method, device, computer equipment and storage medium
CN109920414A (en) * 2019-01-17 2019-06-21 平安城市建设科技(深圳)有限公司 Nan-machine interrogation's method, apparatus, equipment and storage medium
CN110046238B (en) * 2019-03-29 2024-03-26 华为技术有限公司 Dialogue interaction method, graphic user interface, terminal equipment and network equipment
CN111858865A (en) * 2019-04-30 2020-10-30 北京嘀嘀无限科技发展有限公司 Semantic recognition method and device, electronic equipment and computer-readable storage medium
CN113051405B (en) * 2019-04-30 2024-06-11 五竹科技(北京)有限公司 Intelligent outbound knowledge graph construction method and device based on dialogue scene
CN110489740B (en) * 2019-07-12 2023-10-24 深圳追一科技有限公司 Semantic analysis method and related product
CN110473540B (en) 2019-08-29 2022-05-31 京东方科技集团股份有限公司 Voice interaction method and system, terminal device, computer device and medium
CN110750629A (en) * 2019-09-18 2020-02-04 平安科技(深圳)有限公司 Robot dialogue generation method and device, readable storage medium and robot
CN110781277A (en) * 2019-09-23 2020-02-11 厦门快商通科技股份有限公司 Text recognition model similarity training method, system, recognition method and terminal
CN110738982B (en) * 2019-10-22 2022-01-28 珠海格力电器股份有限公司 Request processing method and device and electronic equipment
CN111563029A (en) * 2020-03-13 2020-08-21 深圳市奥拓电子股份有限公司 Testing method, system, storage medium and computer equipment for conversation robot
CN113448829B (en) * 2020-03-27 2024-06-04 来也科技(北京)有限公司 Dialogue robot testing method, device, equipment and storage medium
CN111508488A (en) * 2020-04-13 2020-08-07 江苏止芯科技有限公司 Intelligent robot dialogue system
CN112612877A (en) * 2020-12-16 2021-04-06 平安普惠企业管理有限公司 Multi-type message intelligent reply method, device, computer equipment and storage medium

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101086843A (en) * 2006-06-07 2007-12-12 中国科学院自动化研究所 A sentence similarity recognition method for voice answer system
CN101520802A (en) * 2009-04-13 2009-09-02 腾讯科技(深圳)有限公司 Question-answer pair quality evaluation method and system
CN103810218B (en) * 2012-11-14 2018-06-08 北京百度网讯科技有限公司 A kind of automatic question-answering method and device based on problem cluster
US9135240B2 (en) * 2013-02-12 2015-09-15 International Business Machines Corporation Latent semantic analysis for application in a question answer system
CN105335447A (en) * 2014-08-14 2016-02-17 北京奇虎科技有限公司 Computer network-based expert question-answering system and construction method thereof
CN105373568B (en) * 2014-09-02 2019-01-15 联想(北京)有限公司 Problem answers Auto-learning Method and device
US10565508B2 (en) * 2014-12-12 2020-02-18 International Business Machines Corporation Inferred facts discovered through knowledge graph derived contextual overlays
CN104462553B (en) * 2014-12-25 2019-02-26 北京奇虎科技有限公司 Question and answer page relevant issues recommended method and device
US10586156B2 (en) * 2015-06-25 2020-03-10 International Business Machines Corporation Knowledge canvassing using a knowledge graph and a question and answer system
CN106407198A (en) * 2015-07-28 2017-02-15 百度在线网络技术(北京)有限公司 Question and answer information processing method and device
CN105068661B (en) * 2015-09-07 2018-09-07 百度在线网络技术(北京)有限公司 Man-machine interaction method based on artificial intelligence and system
CN106776532B (en) * 2015-11-25 2020-07-07 ***通信集团公司 Knowledge question-answering method and device
CN105550361B (en) * 2015-12-31 2018-11-09 上海智臻智能网络科技股份有限公司 Log processing method and device and question and answer information processing method and device
CN106202038A (en) * 2016-06-29 2016-12-07 北京智能管家科技有限公司 Synonym method for digging based on iteration and device
CN106777232B (en) * 2016-12-26 2019-07-12 上海智臻智能网络科技股份有限公司 Question and answer abstracting method, device and terminal
CN106847279A (en) * 2017-01-10 2017-06-13 西安电子科技大学 Man-machine interaction method based on robot operating system ROS
CN106919655B (en) * 2017-01-24 2020-05-19 网易(杭州)网络有限公司 Answer providing method and device
CN107123042A (en) * 2017-04-26 2017-09-01 山东浪潮商用***有限公司 A kind of intelligent sound does tax method, apparatus and system
CN107688608A (en) * 2017-07-28 2018-02-13 合肥美的智能科技有限公司 Intelligent sound answering method, device, computer equipment and readable storage medium storing program for executing
CN107918640A (en) * 2017-10-20 2018-04-17 阿里巴巴集团控股有限公司 Sample determines method and device
CN107980130A (en) * 2017-11-02 2018-05-01 深圳前海达闼云端智能科技有限公司 It is automatic to answer method, apparatus, storage medium and electronic equipment
CN108021555A (en) * 2017-11-21 2018-05-11 浪潮金融信息技术有限公司 A kind of Question sentence parsing measure based on depth convolutional neural networks
CN107992472A (en) * 2017-11-23 2018-05-04 浪潮金融信息技术有限公司 Sentence similarity computational methods and device, computer-readable storage medium and terminal
CN108053351A (en) * 2018-02-08 2018-05-18 南京邮电大学 Intelligent college entrance will commending system and recommendation method

Also Published As

Publication number Publication date
CN109102809A (en) 2018-12-28

Similar Documents

Publication Publication Date Title
CN109102809B (en) Dialogue method and system for intelligent robot
WO2021093449A1 (en) Wakeup word detection method and apparatus employing artificial intelligence, device, and medium
US20220292269A1 (en) Method and apparatus for acquiring pre-trained model
CN105843381B (en) Data processing method for realizing multi-modal interaction and multi-modal interaction system
WO2020182153A1 (en) Method for performing speech recognition based on self-adaptive language, and related apparatus
CN108962255B (en) Emotion recognition method, emotion recognition device, server and storage medium for voice conversation
US11769018B2 (en) System and method for temporal attention behavioral analysis of multi-modal conversations in a question and answer system
CN109509470B (en) Voice interaction method and device, computer readable storage medium and terminal equipment
CN113205817B (en) Speech semantic recognition method, system, device and medium
CN106486121B (en) Voice optimization method and device applied to intelligent robot
CN107515900B (en) Intelligent robot and event memo system and method thereof
CN111832308B (en) Speech recognition text consistency processing method and device
WO2020238045A1 (en) Intelligent speech recognition method and apparatus, and computer-readable storage medium
KR20190059084A (en) Natural language question-answering system and learning method
CN106548777B (en) Data processing method and device for intelligent robot
CN108595406B (en) User state reminding method and device, electronic equipment and storage medium
CN112735418A (en) Voice interaction processing method and device, terminal and storage medium
WO2023274187A1 (en) Information processing method and apparatus based on natural language inference, and electronic device
CN112669842A (en) Man-machine conversation control method, device, computer equipment and storage medium
WO2023226239A1 (en) Object emotion analysis method and apparatus and electronic device
CN114330371A (en) Session intention identification method and device based on prompt learning and electronic equipment
CN113705315A (en) Video processing method, device, equipment and storage medium
CN110827799A (en) Method, apparatus, device and medium for processing voice signal
JP2008276543A (en) Interactive processing apparatus, response sentence generation method, and response sentence generation processing program
CN117633198A (en) Training method of role dialogue model, dialogue generation method, device and equipment

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant