CN110597973B - Man-machine conversation method, device, terminal equipment and readable storage medium - Google Patents


Info

Publication number
CN110597973B
CN110597973B
Authority
CN
China
Prior art keywords
label
preset
user
determining
dialogue
Prior art date
Legal status
Active
Application number
CN201910880191.8A
Other languages
Chinese (zh)
Other versions
CN110597973A (en)
Inventor
戴世昌
张军
闫羽婷
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910880191.8A priority Critical patent/CN110597973B/en
Publication of CN110597973A publication Critical patent/CN110597973A/en
Application granted granted Critical
Publication of CN110597973B publication Critical patent/CN110597973B/en


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval of unstructured textual data
    • G06F 16/33: Querying
    • G06F 16/332: Query formulation
    • G06F 16/3329: Natural language query formulation or dialogue systems
    • G06F 16/50: Information retrieval of still image data
    • G06F 16/58: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/5866: Retrieval using information manually generated, e.g. tags, keywords, comments, manually generated location and time information

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Library & Information Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a man-machine conversation method, device, terminal equipment and readable storage medium, which are used for improving user experience in the man-machine conversation process. The method comprises the following steps: acquiring dialogue characteristics of a user in a man-machine dialogue process; determining a first label of the user according to the dialogue characteristics; determining, in a preset label set, a second label matched with the first label according to a preset label matching rule; determining the preset material corresponding to the second label according to the correspondence between second labels and preset materials; and displaying the preset material through a preset avatar.

Description

Man-machine conversation method, device, terminal equipment and readable storage medium
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to a method and apparatus for man-machine interaction, a terminal device, and a readable storage medium.
Background
With the development of artificial intelligence technology, human-machine conversations are being applied to more and more scenes. For example, avatars may be set up on a terminal; the user can hold a man-machine conversation with the avatar, which imitates human actions and expressions.
Currently, a main method of man-machine conversation is that the terminal detects the text of the user's voice input; when a preset text appears, the avatar reacts accordingly, for example by making a preset action or feeding back some voice.
However, the above method can only realize basic dialogue, and the user experience is poor.
Disclosure of Invention
The embodiment of the application provides a method, a device, terminal equipment and a readable storage medium for man-machine conversation, which are used for improving user experience in the man-machine conversation process.
In view of this, a first aspect of the present application provides a method for human-machine interaction, including:
Acquiring dialogue characteristics of a user in a man-machine dialogue process;
determining a first label of a user according to the dialogue characteristics;
Determining a second label matched with the first label in a preset label set according to a preset label matching rule;
determining preset materials corresponding to the second labels according to the corresponding relation between the second labels and the preset materials;
and displaying the preset material through a preset avatar.
In a first implementation manner of the first aspect of the embodiment of the present application, the determining, according to the dialog feature, a first label of a user includes:
Analyzing emotion of the user according to the dialogue characteristics;
and determining the first label according to the result of emotion analysis.
In a second implementation manner of the first aspect of the embodiments of the present application, the dialogue feature includes an input speed of a user, and determining, according to the dialogue feature, a first tag of the user includes:
if the input speed is smaller than the preset first speed, determining that the first label of the user is a preset third label;
if the input speed is greater than the preset second speed, determining that the first label of the user is a preset fourth label.
In a third implementation manner of the first aspect of the embodiments of the present application, the dialogue feature includes an interval time of a user response, and determining, according to the dialogue feature, a first tag of the user includes:
if the interval time is smaller than the preset first time, determining that the first label of the user is a preset fifth label;
If the interval time is greater than the preset second time, determining that the first label of the user is a preset sixth label.
In a fourth implementation manner of the first aspect of the embodiments of the present application, when the dialog feature includes text content input by a user, the determining, according to the dialog feature, a first tag of the user includes:
Extracting keywords in the text content input by the user;
And determining the first label corresponding to the keyword according to the corresponding relation between the keyword and the first label.
In a fifth implementation manner of the first aspect of the embodiment of the present application, the determining, according to a preset tag matching rule, a second tag matching the first tag in a preset tag set includes:
and determining a preset seventh label which is synonymous with the first label in a preset label set, and taking the preset seventh label as a second label.
In a sixth implementation manner of the first aspect of the embodiment of the present application, the preset material includes one or more of an action material, an expression material and an audio material.
In a seventh implementation manner of the first aspect of the embodiment of the present application, the method is applied to a terminal, where the terminal is a block node device in a block chain.
A second aspect of an embodiment of the present application provides a device for man-machine interaction, including:
the acquisition unit is used for acquiring dialogue characteristics of a user in a man-machine dialogue process;
A first determining unit, configured to determine a first tag of a user according to the dialogue feature;
The matching unit is used for determining a second label matched with the first label in a preset label set according to a preset label matching rule;
The second determining unit is used for determining preset materials corresponding to the second label according to the corresponding relation between the second label and the preset materials;
and the display unit is used for displaying the preset material through the preset avatar.
In a first implementation manner of the second aspect of the embodiments of the present application, the first determining unit is configured to:
Analyzing emotion of the user according to the dialogue characteristics;
and determining the first label according to the result of emotion analysis.
In a second implementation manner of the second aspect of the embodiments of the present application, the dialogue feature includes an input speed of a user, and the first determining unit is configured to:
if the input speed is smaller than the preset first speed, determining that the first label of the user is a preset third label;
if the input speed is greater than the preset second speed, determining that the first label of the user is a preset fourth label.
In a third implementation manner of the second aspect of the embodiments of the present application, the dialogue feature includes an interval time of user response, and the first determining unit is configured to:
if the interval time is smaller than the preset first time, determining that the first label of the user is a preset fifth label;
If the interval time is greater than the preset second time, determining that the first label of the user is a preset sixth label.
In a fourth implementation manner of the second aspect of the embodiments of the present application, when the dialog feature includes text content input by a user, the first determining unit is configured to:
Extracting keywords in the text content input by the user;
And determining the first label corresponding to the keyword according to the corresponding relation between the keyword and the first label.
In a fifth implementation manner of the second aspect of the embodiments of the present application, the matching unit is configured to:
and determining a preset seventh label which is synonymous with the first label in a preset label set, and taking the preset seventh label as a second label.
In a sixth implementation manner of the second aspect of the embodiment of the present application, the preset material includes one or more of an action material, an expression material and an audio material.
In a seventh implementation manner of the second aspect of the embodiment of the present application, the device is applied to a terminal, where the terminal is a block node device in a blockchain.
A third aspect of an embodiment of the present application provides a terminal device, including: memory, transceiver, processor, and bus system;
Wherein the memory is used for storing programs;
The processor is configured to execute a program in the memory to perform a method according to any one of the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application provides a computer readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of the first aspect of the embodiments of the present application.
A fifth aspect of the embodiments of the present application provides a computer program product comprising computer software instructions executable by a processor to perform the method according to any of the first aspect of the embodiments of the present application.
From the above technical solutions, the embodiment of the present application has the following advantages:
Firstly, the dialogue characteristics of the user in the man-machine dialogue process are acquired. The dialogue characteristics can cover multiple dimensions, such as the user's input speed, the interval time of the user's responses, the text content input by the user, and the intonation and mood of the user's voice input, so they can capture many aspects of the man-machine dialogue. Then, a first label of the user is determined according to the dialogue characteristics; because the dialogue characteristics cover so many aspects, the first label can represent the user's current state well. A second label matched with the first label is then determined in a preset label set according to a preset label matching rule, and the preset material corresponding to the second label is determined according to the correspondence between second labels and preset materials. Finally, the preset material is displayed through the preset avatar. Since the preset material comprises one or more of action material, expression material and audio material, the dialogue with the user can be carried out more vividly and naturally, improving the user experience.
Drawings
Fig. 1 is a schematic diagram of an application scenario of a method for man-machine interaction in an embodiment of the present application;
FIG. 2 is a schematic architecture diagram of a man-machine conversation system according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an embodiment of a method for human-machine interaction according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a first embodiment of a preset material according to an embodiment of the present application;
FIG. 5 is a diagram showing a second embodiment of preset materials according to an embodiment of the present application;
FIG. 6 is a schematic diagram of an embodiment of a data sharing system according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a blockchain according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating a new block generation process according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an embodiment of a device for man-machine interaction according to an embodiment of the present application;
fig. 10 is a schematic diagram of an embodiment of a terminal device according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a method, a device, terminal equipment and a readable storage medium for man-machine conversation, which are used for improving user experience in the man-machine conversation process.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented, for example, in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "includes" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
It should be appreciated that the present application is applicable to human-machine conversation scenarios, and in particular to scenarios in which a user converses with an avatar on a terminal. Referring to fig. 1, an application scenario of the man-machine conversation method in an embodiment of the application is shown. Fig. 1 includes an avatar with which the user can hold a man-machine conversation through the terminal; the avatar imitates a person's actions, expressions, speech and so on, and gives these as feedback.
In order to facilitate understanding, the present application provides a method of man-machine conversation, which is applied to the man-machine conversation system shown in fig. 2. Referring to fig. 2, fig. 2 is a schematic architecture diagram of the man-machine conversation system in an embodiment of the present application. As shown in the figure, the man-machine conversation system includes a plurality of terminals, including but not limited to the mobile phone, tablet computer, notebook computer and palm computer in fig. 2. In addition, the terminals may also include intelligent terminals installed in service halls, recreational areas and the like, as well as smart home devices. An avatar, which may consist of a virtual character image and virtual character actions, may be established in a terminal of the man-machine conversation system through an application.
The user can input voice or text on the terminal; after the terminal collects the voice or text, the avatar can give corresponding feedback according to its content. In order to make the feedback better fit the context of the man-machine conversation and the state of the user, the embodiment of the present application provides a man-machine conversation method, which is described in detail below.
Referring to fig. 3, an embodiment of a method for human-machine interaction according to an embodiment of the present application is shown. In this embodiment, the method comprises:
101, acquiring dialogue characteristics of a user in a man-machine dialogue process.
Firstly, it should be noted that the dialogue features are not limited in the embodiment of the present application, and any feature capable of representing the human-machine dialogue context and the user state may be included.
For example, the dialogue features may include instructions input by the user, text content input by the user, voice content input by the user, the intonation and mood of the user's voice input, the user's input speed, and the interval time of the user's responses. Intonation and mood can be represented by the volume of the user's voice; the input speed may be the speed of inputting text content or the speed of inputting voice content, which is not limited in the embodiment of the present application; and the interval time of the user's response refers to the time interval between the moment the avatar in the terminal starts giving feedback and the moment the terminal receives the content input by the user.
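For illustration only, these dialogue features could be gathered into a structure like the following Python sketch; every field name here is an assumption of the sketch rather than something specified by the embodiment:
```python
from dataclasses import dataclass

@dataclass
class DialogueFeatures:
    """Illustrative container for the dialogue features listed above.

    All field names are assumptions made for this sketch; the embodiment
    does not prescribe a concrete data structure.
    """
    text_content: str | None = None         # text typed by the user
    speech_content: str | None = None       # text recognized from the user's voice
    input_speed: float | None = None        # characters (or words) per second
    response_interval: float | None = None  # seconds from avatar feedback to user reply
    volume_level: float | None = None       # proxy for intonation and mood
```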
Since the acquisition method corresponds to the dialogue feature, the embodiment of the application does not specifically limit the acquisition method. Specifically, when the dialogue feature is text content, the method of acquiring the dialogue feature may be to extract corresponding text content from the input text; when the dialogue feature is voice content, the method of acquiring the dialogue feature may be to collect voice data of the user and then extract the voice content through a voice recognition technology.
102, Determining a first label of the user according to the dialogue characteristics.
It should be noted that the first tag may be of multiple types. It may be a behavior tag: for example, when the dialogue feature is the user's input speed, the first tag may be "slow input speed". It may also be an emotion tag: for example, when the dialogue feature is text content input by the user, the first tag may be "sad". Since the dialogue features are various and the first tags take various forms, there are many methods for determining the first tag according to the dialogue features, and the embodiment of the present application does not specifically limit the determination method.
103, Determining a second label matched with the first label in the preset label set according to a preset label matching rule.
It should be noted that the second tag is used to represent the preset material, while the first tag is used to represent the state of the user, so the first tag and the second tag may be the same or different; matching therefore needs to be performed through the tag matching rule.
Specifically, when the text content input by the user is "I am not happy today", that is, the dialogue feature is "I am not happy today", the first label may be "sad"; in order to improve the user experience, the second label may be "comfort", and the avatar can then display the preset material corresponding to "comfort" to the user. In this scenario, the first tag and the second tag are related but not identical, and therefore a tag matching rule is required to associate the first tag with the second tag.
104, Determining the preset materials corresponding to the second label according to the corresponding relation between the second label and the preset materials.
It should be noted that, in order to improve the user experience, the preset materials may be as plentiful as possible, and a corresponding second tag is set for each preset material. Specifically, preset materials of the same type may share the same second tag; that is, one second tag may correspond to multiple preset materials. The second label may be a keyword that represents a preset material, or another mark such as a number or a letter.
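A minimal sketch of such a correspondence follows; the second labels and material names are hypothetical and do not come from the embodiment. Note the one-to-many relationship between a second label and preset materials:
```python
# Hypothetical correspondence between second labels and preset materials;
# one second label may correspond to multiple preset materials.
MATERIALS_BY_SECOND_LABEL = {
    "comfort": [
        {"type": "action", "name": "hug"},
        {"type": "audio", "name": "soothing_clip_01"},
    ],
    "greeting": [
        {"type": "action", "name": "wave"},
        {"type": "expression", "name": "smile"},
    ],
}

def materials_for(second_label: str) -> list[dict]:
    """Step 104: look up every preset material bound to a second label."""
    return MATERIALS_BY_SECOND_LABEL.get(second_label, [])
```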
And 105, displaying the preset materials through the preset virtual images.
It should be noted that the avatar may consist of a virtual character image and virtual character actions, and the display form of the avatar is not specifically limited in the embodiment of the present application. The preset material may include one or more of action material, expression material and audio material. Different preset materials have different corresponding display modes. For example, when the preset material is action material, it can be displayed through a specific picture; when the preset material is audio material, it can be played through the audio output device.
Taking action material as an example of the preset material, referring to fig. 4, a first embodiment of the preset material in the embodiment of the present application is shown; as shown in fig. 4, the avatar displays an action material of "greeting". Referring to fig. 5, a second embodiment of the preset material is shown; as shown in fig. 5, the avatar displays an action material of "setting off".
For another example, if the second label is "comfort", the avatar may make a hugging action or play a piece of comforting audio.
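The type-dependent display described above could be sketched as a simple dispatch; the rendering functions below are placeholders standing in for the terminal's animation, avatar UI, and audio subsystems, not part of the embodiment:
```python
def play_avatar_animation(name: str) -> None:
    print(f"[avatar] action: {name}")      # stand-in for the animation engine

def set_avatar_expression(name: str) -> None:
    print(f"[avatar] expression: {name}")  # stand-in for the avatar UI

def play_audio_clip(name: str) -> None:
    print(f"[audio] clip: {name}")         # stand-in for the audio output device

def display_material(material: dict) -> None:
    """Step 105: render a preset material in the mode matching its type."""
    dispatch = {
        "action": play_avatar_animation,
        "expression": set_avatar_expression,
        "audio": play_audio_clip,
    }
    dispatch[material["type"]](material["name"])
```
For example, displaying the "comfort" materials from the earlier sketch would trigger both the hugging animation and the comforting audio clip.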
In the embodiment of the present application, since the dialogue characteristics can contain various features, they better reflect the user's state and the context of the man-machine dialogue; the determined preset material therefore better fits that state and context, and inappropriate preset material is avoided. Moreover, because the preset materials take various forms, they can be displayed to the user through multiple channels such as vision and hearing, improving the user experience in the man-machine dialogue process.
As can be seen from the foregoing description, the first tag may be an emotion tag or a behavior tag, and the process of determining the first tag will be described below by taking the first tag as an emotion tag as an example.
In another embodiment of the method for man-machine conversation provided by the embodiment of the present application, determining a first label of a user according to conversation characteristics includes:
Firstly, analyzing emotion of a user according to dialogue characteristics.
It can be appreciated that the emotion of the user is analyzed according to the dialogue characteristics. For example, when the dialogue features include the user's input speed and the intonation of the user's voice input: if the input speed is fast and the intonation is high, the corresponding user emotion may be impatient; if the input speed is slow and the intonation is low, the corresponding user emotion may be low spirits.
And determining the first label according to the result of emotion analysis.
It may be appreciated that the result of emotion analysis may include multiple aspects, and embodiments of the present application may combine all of them to determine the first tag. For example, if the user's emotion is determined to be unhappy based on the text content in the dialogue features, and depressed based on the intonation and mood, the first tag finally determined may be "unhappy and depressed"; if the emotion is determined to be unhappy based on the text content and impatient based on the intonation and mood, the first tag finally determined may be "unhappy and impatient".
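One simple way to combine the per-feature emotion results into a single first label, consistent with the examples above, is to join the deduplicated emotions; this is only one possibility, as the embodiment leaves the combination rule open:
```python
def first_label_from_emotions(emotions: list[str]) -> str:
    """Combine emotions inferred from different dialogue features into one
    first label, e.g. ["unhappy", "depressed"] -> "unhappy and depressed"."""
    return " and ".join(dict.fromkeys(emotions))  # dedupe while keeping order
```
Calling `first_label_from_emotions(["unhappy", "impatient"])` would yield "unhappy and impatient", matching the second example above.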
In the embodiment of the application, the emotion of the user is analyzed according to the dialogue characteristics, and then the first label is determined according to the emotion analysis result, so that the finally determined preset material meets the emotion requirement of the user, and the user experience is improved.
As can be seen from the foregoing description, the manner in which the first tag is determined varies with the dialogue characteristics, and the determination process of the first tag will be described below by taking a plurality of dialogue characteristics as an example.
In another embodiment of the method for man-machine conversation provided by the embodiment of the present application, the conversation feature includes an input speed of a user, and determining the first label of the user according to the conversation feature includes:
if the input speed is smaller than the preset first speed, determining that the first label of the user is a preset third label;
if the input speed is greater than the preset second speed, determining that the first label of the user is a preset fourth label.
The first speed and the second speed can be adjusted according to actual needs, and the third label and the fourth label can also take various forms; the embodiment of the present application is not limited in this respect. For example, the third label may be "low spirits" or "sobbing", and the fourth label may be "impatient" or "agitated".
It will be appreciated that, taking the user's voice input speed as an example, under normal circumstances the voice input speed falls within a speed range whose upper limit is the second speed and whose lower limit is the first speed. When the user's voice input speed is lower than the first speed, the input is relatively slow, and the first label can be determined to be the preset third label; when the voice input speed is higher than the second speed, the input is relatively fast and the emotion is agitated, and the first label can be determined to be the preset fourth label.
In another embodiment of the method for man-machine conversation provided by the embodiment of the present application, the conversation feature includes an interval time of user response, and determining the first label of the user according to the conversation feature includes:
if the interval time is smaller than the preset first time, determining that the first label of the user is a preset fifth label;
if the interval time is greater than the preset second time, determining that the first label of the user is a preset sixth label.
It will be appreciated that, under normal circumstances, the interval time of the user's responses likewise falls within a time range whose upper limit is the second time and whose lower limit is the first time. When the interval time is less than the first time, the user is responding quickly, and the preset fifth label may be "high spirits", "quick response" or "happy"; when the interval time is greater than the second time, the user is responding slowly, and the preset sixth label may be "low spirits", "slow response" or "unhappy".
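The two threshold embodiments above (input speed and response interval) can be sketched together as follows; the numeric thresholds and label strings are assumptions, since the embodiment states they may be adjusted to actual needs:
```python
# Assumed thresholds and label names, for illustration only.
FIRST_SPEED, SECOND_SPEED = 1.0, 5.0  # characters per second
FIRST_TIME, SECOND_TIME = 1.0, 10.0   # seconds

def label_from_input_speed(speed: float) -> str | None:
    if speed < FIRST_SPEED:
        return "low spirits"     # preset third label
    if speed > SECOND_SPEED:
        return "impatient"       # preset fourth label
    return None                  # within the normal range: no label

def label_from_response_interval(interval: float) -> str | None:
    if interval < FIRST_TIME:
        return "quick response"  # preset fifth label
    if interval > SECOND_TIME:
        return "slow response"   # preset sixth label
    return None
```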
In another embodiment of the method for man-machine conversation provided by the embodiment of the present application, when the conversation feature includes text content input by a user, determining a first label of the user according to the conversation feature includes:
Extracting keywords in text content input by a user;
and determining the first label corresponding to the keyword according to the corresponding relation between the keyword and the first label.
It will be appreciated that the text content entered by the user may be long, while only a portion of it expresses the user's state. Taking the text content "I am not happy today" as an example, neither "today" nor "I" reflects the user's emotional state, whereas the keyword "not happy" does. Therefore, in order to improve the efficiency and accuracy of determining the first tag, the keywords in the text content may be extracted first, and the first tag then determined according to the correspondence between keywords and first tags.
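A minimal sketch of the keyword route, assuming a hypothetical keyword-to-label table (a production system would use real keyword extraction rather than substring search):
```python
# Hypothetical keyword-to-first-label correspondence.
KEYWORD_TO_FIRST_LABEL = {
    "not happy": "sad",
    "goodbye": "goodbye",
    "tired": "low spirits",
}

def first_label_from_text(text: str) -> str | None:
    """Extract a known keyword from the user's text and map it to its
    first label according to the correspondence table."""
    for keyword, label in KEYWORD_TO_FIRST_LABEL.items():
        if keyword in text:
            return label
    return None  # no keyword found
```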
As can be seen from the foregoing description, the first tag and the second tag may be the same or different, and when the first tag and the second tag are different, matching according to a tag matching rule is required, and a matching process will be described below.
In another embodiment of the method for man-machine interaction provided by the embodiment of the present application, determining, according to a preset tag matching rule, a second tag matching with the first tag in a preset tag set includes:
And determining a preset seventh label which is synonymous with the first label in the preset label set, and taking the preset seventh label as a second label.
For example, when the first tag is "goodbye", the avatar should respond by displaying a waving preset material to the user; however, the second tag preset for the waving material may be "bye-bye". Therefore, the seventh tag "bye-bye", which is synonymous with the first tag "goodbye", can first be found in the tag set according to the synonym relationship and then taken as the second tag, so that the waving preset material corresponding to "bye-bye" can finally be determined.
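The synonym rule could be sketched as below; the synonym groups and preset tag set are invented for illustration and are not part of the embodiment:
```python
# Hypothetical synonym groups and preset tag set.
SYNONYM_GROUPS = [
    {"goodbye", "bye-bye", "see you"},
    {"sad", "unhappy", "down"},
]
PRESET_TAG_SET = {"bye-bye", "comfort", "greeting"}

def match_second_label(first_label: str) -> str | None:
    """Step 103 under the synonym rule: find a preset label (the seventh
    label) in the tag set that is synonymous with the first label."""
    for group in SYNONYM_GROUPS:
        if first_label in group:
            for candidate in group & PRESET_TAG_SET:
                return candidate  # use the synonymous preset label
    return None
```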
The method provided by the embodiment of the application can be applied to a terminal, wherein the terminal can be block node equipment in a block chain, namely, the terminal can be one node in the block chain. The nodes in the blockchain will be described in detail below.
Referring to the data sharing system shown in fig. 6, the data sharing system 100 refers to a system for performing data sharing between nodes, and the data sharing system may include a plurality of nodes 101, and the plurality of nodes 101 may be respective clients in the data sharing system. Each node 101 may receive input information while operating normally and maintain shared data within the data sharing system based on the received input information. In order to ensure the information intercommunication in the data sharing system, information connection can exist between each node in the data sharing system, and the nodes can transmit information through the information connection. For example, when any node in the data sharing system receives input information, other nodes in the data sharing system acquire the input information according to a consensus algorithm, and store the input information as data in the shared data, so that the data stored on all nodes in the data sharing system are consistent.
Each node in the data sharing system has a corresponding node identifier, and each node can store the node identifiers of the other nodes in the data sharing system, so that a generated block can be broadcast to the other nodes according to their node identifiers. Each node may maintain a node identification list as shown in Table 1 below, in which node names and node identifications are stored correspondingly. The node identifier may be an IP (Internet Protocol) address or any other information that can identify the node; Table 1 uses the IP address only as an example.
Table 1
Node name | Node identification
Node 1    | 117.114.151.174
Node 2    | 117.116.189.145
Node N    | 119.123.789.258
Each node in the data sharing system stores an identical blockchain. Referring to fig. 7, the blockchain is composed of a plurality of blocks. The starting block comprises a block header and a block body; the block header stores an input-information feature value, a version number, a timestamp and a difficulty value, and the block body stores the input information. The next block takes the starting block as its parent block and likewise comprises a block header and a block body; its block header stores the input-information feature value of the current block, the block-header feature value of the parent block, a version number, a timestamp and a difficulty value. In this way the block data stored in each block is associated with the block data stored in its parent block, which ensures the security of the input information in the blocks.
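The parent-child linkage described above can be illustrated with a toy block constructor; the serialization details below are assumptions of this sketch and do not reflect any particular blockchain implementation:
```python
import hashlib
import json
import time

def header_feature_value(header: dict) -> str:
    """Feature value (hash) of a block header over its serialized fields."""
    return hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()

def make_block(input_information: str, parent_header: dict | None) -> dict:
    """Build a block whose header stores the feature value of the input
    information plus the header feature value of its parent block, which
    is what chains each block to its parent."""
    header = {
        "version": 1,
        "prev_hash": header_feature_value(parent_header) if parent_header else "0" * 64,
        "merkle_root": hashlib.sha256(input_information.encode()).hexdigest(),
        "timestamp": int(time.time()),
        "difficulty": 0x1F00FFFF,
    }
    return {"header": header, "body": input_information}
```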
When each block in the blockchain is generated, referring to fig. 8, the node where the blockchain is located checks the input information upon receiving it; after the check is completed, the node stores the input information in a memory pool and updates the hash tree used to record the input information. It then sets the timestamp to the time at which the input information was received and tries different random numbers, computing the feature value repeatedly until the computed feature value satisfies the following formula:
SHA256(SHA256(version+prev_hash+merkle_root+ntime+nbits+x))<TARGET
Wherein SHA256 is the feature-value algorithm used to calculate feature values; version is the version information of the relevant block protocol in the blockchain; prev_hash is the block-header feature value of the current block's parent block; merkle_root is the feature value of the input information; ntime is the update time of the timestamp; nbits is the current difficulty, which is fixed for a period of time and redetermined after that fixed period; x is a random number; and TARGET is the feature-value threshold, which can be determined from nbits.
Thus, when a random number satisfying the above formula has been found, the information can be stored accordingly to generate the block header and block body, yielding the current block. The node where the blockchain is located then sends the newly generated block to the other nodes in its data sharing system according to their node identifiers; the other nodes verify the newly generated block and, after verification is completed, add it to the blockchain they store.
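A toy version of the search for a qualifying random number, following the formula above (a real implementation would hash a packed binary header rather than a concatenated decimal string):
```python
import hashlib

def find_nonce(version: int, prev_hash: str, merkle_root: str,
               ntime: int, nbits: int, target: int) -> int:
    """Try random numbers x until the double SHA-256 of the concatenated
    header fields falls below TARGET, as in the formula above."""
    x = 0
    while True:
        payload = f"{version}{prev_hash}{merkle_root}{ntime}{nbits}{x}".encode()
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return x  # qualifying random number; the block can now be generated
        x += 1
```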
Referring to fig. 9, an embodiment of a device for man-machine interaction according to the present application is shown. As shown in fig. 9, an embodiment of the present application provides an apparatus for man-machine interaction, including:
An obtaining unit 301, configured to obtain a dialogue feature of a user during a man-machine dialogue;
a first determining unit 302, configured to determine a first tag of the user according to the dialogue feature;
A matching unit 303, configured to determine a second tag matching the first tag in the preset tag set according to a preset tag matching rule;
A second determining unit 304, configured to determine a preset material corresponding to the second tag according to a correspondence between the second tag and the preset material;
and a display unit 305, configured to display the preset material through the preset avatar.
In another embodiment of the device for man-machine interaction provided by the embodiment of the present application, the first determining unit 302 is configured to:
Analyzing the emotion of the user according to the dialogue characteristics;
and determining the first label according to the result of emotion analysis.
In another embodiment of the apparatus for man-machine interaction provided by the embodiment of the present application, the interaction feature includes an input speed of a user, and the first determining unit 302 is configured to:
if the input speed is smaller than the preset first speed, determining that the first label of the user is a preset third label;
if the input speed is greater than the preset second speed, determining that the first label of the user is a preset fourth label.
In another embodiment of the apparatus for man-machine interaction provided in the embodiments of the present application, the interaction feature includes an interval time of user response, and the first determining unit 302 is configured to:
if the interval time is smaller than the preset first time, determining that the first label of the user is a preset fifth label;
if the interval time is greater than the preset second time, determining that the first label of the user is a preset sixth label.
In another embodiment of the apparatus for man-machine conversation provided in the embodiments of the present application, when the conversation feature includes text content input by a user, the first determining unit 302 is configured to:
Extracting keywords in text content input by a user;
and determining the first label corresponding to the keyword according to the corresponding relation between the keyword and the first label.
In another embodiment of the device for man-machine interaction provided by the embodiment of the present application, the matching unit 303 is configured to:
And determining a preset seventh label which is synonymous with the first label in the preset label set, and taking the preset seventh label as a second label.
Next, an embodiment of the present application further provides a terminal device. As shown in fig. 10, for convenience of explanation, only the portion related to the embodiment of the present application is shown; for specific technical details that are not disclosed, please refer to the method portion of the embodiments of the present application. The terminal device may be any terminal device, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point-of-sale (POS) terminal, a vehicle-mounted computer, and the like. The following takes a mobile phone as an example:
Fig. 10 is a block diagram showing a part of the structure of a mobile phone serving as the terminal device according to the embodiment of the present invention. Referring to fig. 10, the mobile phone includes: radio frequency (RF) circuit 810, memory 820, input unit 830, display unit 840, sensor 850, audio circuit 860, wireless fidelity (WiFi) module 870, processor 880, power supply 890, and the like. It will be appreciated by those skilled in the art that the handset structure shown in fig. 10 does not limit the handset, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
The following describes the components of the mobile phone in detail with reference to fig. 10:
The RF circuit 810 may be used for receiving and transmitting signals during a message exchange or a call; in particular, after receiving downlink information from a base station, it hands the information to the processor 880 for processing, and it sends uplink data to the base station. Typically, the RF circuit 810 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 810 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 820 may be used to store software programs and modules, and the processor 880 performs various functional applications and data processing of the cellular phone by executing the software programs and modules stored in the memory 820. The memory 820 may mainly include a storage program area that may store an operating system, application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and a storage data area; the storage data area may store data (such as audio data, phonebook, etc.) created according to the use of the handset, etc. In addition, memory 820 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The input unit 830 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the handset. In particular, the input unit 830 may include a touch panel 831 and other input devices 832. The touch panel 831, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed on or near the touch panel 831 with a finger, stylus or any other suitable object or accessory) and drive the corresponding connection device according to a preset program. Optionally, the touch panel 831 may include a touch detection device and a touch controller. The touch detection device detects the orientation of the user's touch, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch-point coordinates, and sends them to the processor 880; it can also receive commands from the processor 880 and execute them. In addition, the touch panel 831 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 831, the input unit 830 may include other input devices 832, which may include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The display unit 840 may be used to display information input by the user, information provided to the user, and the various menus of the mobile phone. The display unit 840 may include a display panel 841, which may optionally be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch panel 831 may cover the display panel 841; when the touch panel 831 detects a touch operation on or near it, it transmits the operation to the processor 880 to determine the type of touch event, and the processor 880 then provides a corresponding visual output on the display panel 841 according to the type of touch event. Although in fig. 10 the touch panel 831 and the display panel 841 are implemented as two separate components to provide the input and output functions of the mobile phone, in some embodiments they may be integrated to implement the input and output functions.
The handset may also include at least one sensor 850, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, wherein the ambient light sensor may adjust the brightness of the display panel 841 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 841 and/or the backlight when the mobile phone moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the acceleration in all directions (generally three axes), and can detect the gravity and direction when stationary, and can be used for applications of recognizing the gesture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer gesture calibration), vibration recognition related functions (such as pedometer and knocking), and the like; other sensors such as gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc. that may also be configured with the handset are not described in detail herein.
The audio circuit 860, speaker 861 and microphone 862 can provide an audio interface between the user and the handset. The audio circuit 860 can transmit the electrical signal converted from received audio data to the speaker 861, which converts it into a sound signal for output; conversely, the microphone 862 converts collected sound signals into electrical signals, which the audio circuit 860 receives and converts into audio data. The audio data is processed by the processor 880 and then sent via the RF circuit 810 to, for example, another mobile phone, or output to the memory 820 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 870, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media and so on, providing the user with wireless broadband Internet access. Although fig. 10 shows the WiFi module 870, it is understood that it is not an essential component of the handset and may be omitted as needed without changing the essence of the invention.
The processor 880 is a control center of the mobile phone, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the mobile phone and processes data by running or executing software programs and/or modules stored in the memory 820, and calling data stored in the memory 820. In the alternative, processor 880 may include one or more processing units; alternatively, the processor 880 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 880.
The handset further includes a power supply 890 (such as a battery) for powering the various components. Optionally, the power supply may be logically connected to the processor 880 through a power management system, so that functions such as charge management, discharge management and power-consumption management are performed through the power management system.
Although not shown, the mobile phone may further include a camera module, a bluetooth module, etc., which will not be described herein.
In the embodiment of the present invention, the processor 880 included in the terminal device further has the function of the device for man-machine interaction in the foregoing embodiment.
Embodiments of the present application also provide a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to implement the functions of the apparatus for human-machine interaction in the foregoing embodiments.
Embodiments of the present application also provide a computer program product comprising computer software instructions that enable a processor to perform the functions of the apparatus for human-machine interaction of the previous embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (9)

1. A method of human-machine conversation, comprising:
Acquiring dialogue characteristics of a user in a man-machine dialogue process; the dialogue characteristics comprise text content input by the user, input speed, interval time of the user's responses, voice content input by the user, and the intonation and mood of the user's voice input;
determining a first label of a user according to the dialogue characteristics; the first tag includes a plurality of types;
Determining a second label matched with the first label in a preset label set according to a preset label matching rule;
Determining preset materials corresponding to the second labels according to the corresponding relation between the second labels and the preset materials; the preset materials comprise one or more of action materials, expression materials and audio materials;
displaying the preset material through a preset avatar, wherein different preset materials correspond to different display modes;
when the dialogue feature includes an input speed of the user, the determining the first label of the user according to the dialogue feature includes:
if the input speed is smaller than the preset first speed, determining that the first label of the user is a preset third label;
if the input speed is greater than the preset second speed, determining that the first label of the user is a preset fourth label;
when the dialogue feature includes an interval time of user response, the determining the first label of the user according to the dialogue feature includes:
if the interval time is smaller than the preset first time, determining that the first label of the user is a preset fifth label;
If the interval time is greater than the preset second time, determining that the first label of the user is a preset sixth label;
When the dialog feature includes text content entered by a user, the determining a first tag of the user based on the dialog feature includes:
Extracting keywords in the text content input by the user;
And determining the first label corresponding to the keyword according to the corresponding relation between the keyword and the first label.
2. The method of claim 1, wherein determining the first tag of the user based on the dialog feature comprises:
Analyzing emotion of the user according to the dialogue characteristics;
and determining the first label according to the result of emotion analysis.
3. The method of claim 1, wherein determining a second tag of the preset set of tags that matches the first tag according to a preset tag matching rule comprises:
and determining a preset seventh label which is synonymous with the first label in a preset label set, and taking the preset seventh label as a second label.
4. The method of claim 1, wherein the method is applied to a terminal that is a block node device in a block chain.
5. A human-machine interactive apparatus, comprising:
The acquisition unit is used for acquiring dialogue characteristics of a user in a man-machine dialogue process; the dialogue characteristics comprise text content input by the user, input speed, interval time of the user's responses, voice content input by the user, and the intonation and mood of the user's voice input;
A first determining unit, configured to determine a first tag of a user according to the dialogue feature; the first tag includes a plurality of types;
The matching unit is used for determining a second label matched with the first label in a preset label set according to a preset label matching rule;
The second determining unit is used for determining preset materials corresponding to the second label according to the corresponding relation between the second label and the preset materials; the preset materials comprise one or more of action materials, expression materials and audio materials;
The display unit is used for displaying the preset materials through a preset virtual image; wherein, the preset materials are different, and the corresponding display modes are different;
when the dialog feature comprises an input speed of the user, the first determination unit is adapted to:
if the input speed is smaller than the preset first speed, determining that the first label of the user is a preset third label;
If the input speed is greater than the preset second speed, determining that the first label of the user is a preset fourth label;
when the dialog feature comprises an interval time of user response, the first determining unit is configured to:
if the interval time is smaller than the preset first time, determining that the first label of the user is a preset fifth label;
if the interval time is greater than the preset second time, determining that the first label of the user is a preset sixth label;
when the dialog feature comprises text content entered by a user, the first determination unit is adapted to:
Extracting keywords in text content input by a user;
and determining the first label corresponding to the keyword according to the corresponding relation between the keyword and the first label.
6. The apparatus of claim 5, wherein the first determining unit is configured to:
perform emotion analysis on the user according to the dialogue features;
and determine the first label according to the result of the emotion analysis.
7. The apparatus of claim 5, wherein the matching unit is configured to:
determine a preset seventh label in the preset label set that is synonymous with the first label, and take the preset seventh label as the second label.
8. A terminal device, comprising: a memory, a transceiver, a processor, and a bus system;
wherein the memory is configured to store a program;
and the processor is configured to execute the program in the memory to perform the method of any one of claims 1 to 4.
9. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 4.
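Illustrative sketches (non-normative)

The claims leave every preset value open: the speed and time thresholds, the label names, and the keyword table are configuration choices, not fixed by the patent. As a non-normative illustration, the following Python sketch shows one way the first-label determination of claims 1 and 5 could be realized; every concrete value and name in it is a hypothetical placeholder.

# Illustrative sketch of first-label determination (claims 1 and 5).
# All thresholds, label names, and keyword mappings are assumed placeholders.

SLOW_SPEED = 2.0   # chars/second; below this, input counts as slow (assumed)
FAST_SPEED = 6.0   # chars/second; above this, input counts as fast (assumed)
SHORT_GAP = 1.0    # seconds; response gaps below this count as quick (assumed)
LONG_GAP = 10.0    # seconds; response gaps above this count as hesitant (assumed)

# Hypothetical keyword -> first-label correspondence (text-content branch).
KEYWORD_LABELS = {
    "happy": "label_joy",
    "tired": "label_fatigue",
    "angry": "label_anger",
}

def determine_first_labels(features: dict) -> list:
    """Map raw dialogue features onto first labels."""
    labels = []
    speed = features.get("input_speed")
    if speed is not None:
        if speed < SLOW_SPEED:
            labels.append("label_slow_typist")   # the "preset third label"
        elif speed > FAST_SPEED:
            labels.append("label_fast_typist")   # the "preset fourth label"
    gap = features.get("response_interval")
    if gap is not None:
        if gap < SHORT_GAP:
            labels.append("label_engaged")       # the "preset fifth label"
        elif gap > LONG_GAP:
            labels.append("label_distracted")    # the "preset sixth label"
    text = features.get("text", "")
    for keyword, label in KEYWORD_LABELS.items():  # keyword-extraction stand-in
        if keyword in text:
            labels.append(label)
    return labels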
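Claims 2 and 6 instead derive the first label from an emotion analysis of the dialogue features. The patent does not prescribe a particular sentiment model, so the crude lexicon scorer below is only an assumed stand-in, with toy word lists.

# Illustrative sketch of emotion-based labeling (claims 2 and 6).
# The lexicons and label names are assumed placeholders.

POSITIVE = {"great", "thanks", "love"}     # toy positive lexicon (assumed)
NEGATIVE = {"bad", "hate", "annoying"}     # toy negative lexicon (assumed)

def emotion_label(text: str) -> str:
    """Return a first label from a crude lexicon-based emotion score."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "label_positive_mood"
    if score < 0:
        return "label_negative_mood"
    return "label_neutral_mood"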
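The remaining steps, matching the first label to a synonymous second label (claims 3 and 7), looking up the corresponding preset material, and displaying it through the preset virtual avatar (claims 1 and 5), could be sketched as follows. The synonym table, material catalogue, and Avatar class are all hypothetical placeholders.

# Illustrative sketch of label matching and material display (claims 1, 3, 7).
# SYNONYMS, MATERIALS, and Avatar are assumed placeholders.

SYNONYMS = {  # first label -> the synonymous "preset seventh label"
    "label_joy": "tag_cheerful",
    "label_fatigue": "tag_weary",
    "label_distracted": "tag_bored",
}

MATERIALS = {  # second label -> preset material (action / expression / audio)
    "tag_cheerful": {"action": "wave", "expression": "smile", "audio": "laugh.wav"},
    "tag_weary": {"expression": "yawn"},
    "tag_bored": {"action": "fidget", "audio": "sigh.wav"},
}

class Avatar:
    """Stand-in for the preset virtual avatar."""
    def display(self, material: dict) -> None:
        # Different material types imply different display modes:
        # actions and expressions drive the avatar rig, audio is played back.
        for kind, asset in material.items():
            print("render %s: %s" % (kind, asset))

def respond(first_labels: list, avatar: Avatar) -> None:
    """Match each first label to a second label and display its material."""
    for first in first_labels:
        second = SYNONYMS.get(first)        # synonym matching (claim 3)
        if second is None:
            continue                        # no matching second label
        material = MATERIALS.get(second)    # second label -> preset material
        if material:
            avatar.display(material)        # display through the avatar

For example, determine_first_labels({"input_speed": 7.5, "text": "I feel happy"}) would yield the fast-typist and joy labels; respond() would then match the joy label to tag_cheerful and render its wave, smile, and laugh materials, while silently skipping labels with no synonym entry.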
CN201910880191.8A 2019-09-12 2019-09-12 Man-machine conversation method, device, terminal equipment and readable storage medium Active CN110597973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910880191.8A CN110597973B (en) 2019-09-12 2019-09-12 Man-machine conversation method, device, terminal equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN110597973A (en) 2019-12-20
CN110597973B (en) 2024-06-07

Family

ID=68860557

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910880191.8A Active CN110597973B (en) 2019-09-12 2019-09-12 Man-machine conversation method, device, terminal equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN110597973B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111857344A * 2020-07-22 2020-10-30 Hangzhou NetEase Cloud Music Technology Co., Ltd. Information processing method, system, medium, and computing device
CN114721516A * 2022-03-29 2022-07-08 NetEase Youdao Information Technology (Beijing) Co., Ltd. Multi-object interaction method based on virtual space and related equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107423277A * 2016-02-16 2017-12-01 ZTE Corporation Expression input method, device and terminal
CN110023926A * 2016-08-30 2019-07-16 Google LLC Generating reply content to be presented in response to text input, using the text input and user state information
CN110085229A * 2019-04-29 2019-08-02 Zhuhai Jingxiu Optoelectronic Technology Co., Ltd. Intelligent virtual foreign-teacher information interaction method and device
CN110209897A * 2018-02-12 2019-09-06 Tencent Technology (Shenzhen) Co., Ltd. Intelligent dialogue method, apparatus, storage medium and equipment

Also Published As

Publication number Publication date
CN110597973A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
CN109379641B (en) Subtitle generating method and device
CN108021572B (en) Reply information recommendation method and device
CN111282268B (en) Plot showing method, plot showing device, plot showing terminal and storage medium in virtual environment
CN106506321B (en) Group message processing method and terminal device
CN108494665B (en) Group message display method and mobile terminal
CN105630846B (en) Head portrait updating method and device
EP3249857B1 (en) Chat history display method and apparatus
CN108549681B (en) Data processing method and device, electronic equipment and computer readable storage medium
CN108521365B (en) Method for adding friends and mobile terminal
CN110597973B (en) Man-machine conversation method, device, terminal equipment and readable storage medium
CN110851745B (en) Information processing method, information processing device, storage medium and electronic equipment
CN110399474B (en) Intelligent dialogue method, device, equipment and storage medium
CN110750198A (en) Expression sending method and mobile terminal
CN111666498B (en) Friend recommendation method based on interaction information, related device and storage medium
CN110390102B (en) Emotion analysis method and related device
CN110277097B (en) Data processing method and related equipment
CN109274814B (en) Message prompting method and device and terminal equipment
CN109510897B (en) Expression picture management method and mobile terminal
CN107957789B (en) Text input method and mobile terminal
CN111027406B (en) Picture identification method and device, storage medium and electronic equipment
CN108958505B (en) Method and terminal for displaying candidate information
CN112101215A (en) Face input method, terminal equipment and computer readable storage medium
CN108446345B (en) Data searching method and mobile terminal
CN111338598A (en) Message processing method and electronic equipment
CN110809234A (en) Figure category identification method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant