CN110674398A - Virtual character interaction method and device, terminal equipment and storage medium - Google Patents

Virtual character interaction method and device, terminal equipment and storage medium

Info

Publication number
CN110674398A
CN110674398A (Application No. CN201910838053.3A)
Authority
CN
China
Prior art keywords
information
user
interactive
interaction
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910838053.3A
Other languages
Chinese (zh)
Inventor
袁小薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Chase Technology Co Ltd
Shenzhen Zhuiyi Technology Co Ltd
Original Assignee
Shenzhen Chase Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Chase Technology Co Ltd filed Critical Shenzhen Chase Technology Co Ltd
Priority to CN201910838053.3A priority Critical patent/CN110674398A/en
Publication of CN110674398A publication Critical patent/CN110674398A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a virtual character interaction method and device, a terminal device and a storage medium. The method comprises the following steps: acquiring interaction demand information of a user, wherein the interaction demand information comprises at least one of social information and online behavior characteristics; acquiring interactive character information according to the interaction demand information; acquiring a character image corresponding to the interactive character information, wherein the character image comprises at least one of a character picture and a character video; generating a virtual character image according to the character image; and interacting with the user through the virtual character image. On the one hand, visual interaction with the user is realized; on the other hand, because the virtual character image is generated based on the user's interaction demand information, it is more likely to appeal to the user visually, which further improves the user experience.

Description

Virtual character interaction method and device, terminal equipment and storage medium
Technical Field
The application relates to the technical field of terminal equipment, in particular to a virtual character interaction method, a virtual character interaction device, terminal equipment and a storage medium.
Background
With the development of the mobile internet, mobile terminal devices such as mobile phones have become increasingly popular, and mobile phone applications are increasingly varied. To give customers a better experience, most applications provide a customer service function so that users can consult or seek help through customer service.
The customer service functions that enterprises provide to users generally include human customer service and robot customer service. When a user makes an inquiry, the robot customer service answers first, and the inquiry is passed to human customer service only if the answer is unclear, which greatly improves customer service efficiency and saves human resources.
However, existing robot customer service usually communicates with the user only through voice or text interaction; this single mode of communication cannot meet the needs of users.
Disclosure of Invention
In view of the above problems, the present application provides a virtual character interaction method, apparatus, terminal device and storage medium, which can realize visual interaction and improve the user experience.
In a first aspect, an embodiment of the present application provides a virtual character interaction method, including: acquiring interaction demand information of a user, wherein the interaction demand information includes social information and online behavior characteristics of the user; acquiring interactive character information according to the interaction demand information; acquiring a character image corresponding to the interactive character information, wherein the character image includes at least one of a character picture and a character video; generating a virtual character image according to the character image; and interacting with the user through the virtual character image.
Optionally, the social information includes an address book and a communication record of the user, and acquiring the interactive character information according to the interaction demand information includes: extracting a plurality of pieces of first person information from the address book, and determining the intimacy between each piece of first person information and the user according to the communication record; and determining the interactive character information among the plurality of pieces of first person information according to the intimacy.
Optionally, determining the interactive character information among the plurality of pieces of first person information according to the intimacy includes: respectively judging whether the intimacy between each piece of first person information and the user is greater than or equal to a preset intimacy; and determining any piece of first person information whose intimacy is greater than or equal to the preset intimacy as the interactive character information.
Optionally, determining the interactive character information among the plurality of pieces of first person information according to the intimacy includes: comparing the intimacy between each piece of first person information and the user; and determining the piece of first person information with the greatest intimacy with the user as the interactive character information.
Optionally, the online behavior characteristics include an attention record, a like record, a browsing record and a comment record of the user, and acquiring the interactive character information according to the interaction demand information includes: acquiring a plurality of pieces of second person information according to the attention record, and determining the user's attention degree to each piece of second person information according to the like record, the comment record and the browsing record; and determining the interactive character information among the plurality of pieces of second person information according to the attention degree.
Optionally, determining the interactive character information among the plurality of pieces of second person information according to the attention degree includes: respectively judging whether the user's attention degree to each piece of second person information is greater than or equal to a preset attention degree; and determining any piece of second person information whose attention degree is greater than or equal to the preset attention degree as the interactive character information.
Optionally, the interaction demand information further includes user-defined character information, and acquiring the interaction demand information of the user and acquiring the interactive character information according to the interaction demand information include: acquiring the user-defined character information uploaded by the user; and determining the user-defined character information as the interactive character information.
Optionally, the method further includes: acquiring voiceprint information corresponding to the interactive character information; and generating interactive audio according to the voiceprint information, wherein when interacting with the user through the virtual character image, the interaction with the user is carried out through the interactive audio.
Optionally, interacting with the user through the virtual character image includes: acquiring interaction information input by the user, wherein the interaction information includes audio information and text information; inputting the interaction information into a pre-trained first model to obtain facial feature points corresponding to the interaction information; inputting the facial feature points into a pre-trained second model to obtain a face image; and updating the virtual character image based on the face image.
Optionally, the method further includes: acquiring sample facial feature points, sample interaction information and a sample face image; inputting the sample facial feature points and the sample interaction information into a first machine learning model for training to obtain the first model; and inputting the sample face image and the sample facial feature points into a second machine learning model for training to obtain the second model.
In a second aspect, an embodiment of the present application provides a virtual character interaction apparatus, which includes: an interaction demand information acquisition module, an interactive character information acquisition module, a character image acquisition module, a virtual character image generation module and an interaction module. The interaction demand information acquisition module is used for acquiring interaction demand information of a user, wherein the interaction demand information includes social information or online behavior characteristics of the user; the interactive character information acquisition module is used for acquiring interactive character information according to the interaction demand information; the character image acquisition module is used for acquiring a character image corresponding to the interactive character information, wherein the character image includes at least one of a character picture and a character video; the virtual character image generation module is used for generating a virtual character image according to the character image; and the interaction module is used for interacting with the user through the virtual character image.
Optionally, the social information includes an address book and a communication record of the user, and the interactive character information acquisition module further includes an intimacy determining unit and a first interactive character determining unit. The intimacy determining unit is used for extracting a plurality of pieces of first person information from the address book and determining the intimacy between each piece of first person information and the user according to the communication record; the first interactive character determining unit is used for determining the interactive character information among the plurality of pieces of first person information according to the intimacy.
Optionally, the online behavior characteristics include an attention record, a like record, a browsing record and a comment record of the user, and the interactive character information acquisition module further includes an attention degree determining unit and a second interactive character determining unit. The attention degree determining unit is used for acquiring a plurality of pieces of second person information according to the attention record and determining the user's attention degree to each piece of second person information according to the like record, the comment record and the browsing record; the second interactive character determining unit is used for determining the interactive character information among the plurality of pieces of second person information according to the attention degree.
In a third aspect, an embodiment of the present application provides a terminal device, which includes: a memory; one or more processors coupled to the memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code, where the program code can be called by a processor to execute the method of the first aspect.
According to the virtual character interaction method and apparatus, terminal device and storage medium provided by the embodiments of the present application, interaction demand information of a user is acquired, where the interaction demand information includes social information and online behavior characteristics of the user, and interactive character information is acquired according to the interaction demand information, so that the interactive character information can be closely related to the user's daily life. A character image corresponding to the interactive character information is then acquired, and a virtual character image is generated according to the character image to interact with the user. On the one hand, this realizes visual interaction with the user; on the other hand, because the virtual character image is generated based on the user's interaction demand information, it is more likely to appeal to the user visually, which further improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 shows a schematic diagram of an application environment suitable for the embodiment of the present application.
Fig. 2 shows a schematic flow chart of a virtual character interaction method provided by an embodiment of the present application.
Fig. 3 shows a flow chart of a virtual character interaction method provided by another embodiment of the present application.
Fig. 4 shows a flow chart of a virtual character interaction method provided by another embodiment of the present application.
Fig. 5 shows a flow chart of a virtual character interaction method provided by an embodiment of the present application.
Fig. 6 shows a flow chart of a virtual character interaction method provided by another embodiment of the present application.
Fig. 7 shows a flow chart of a virtual character interaction method provided by another embodiment of the present application.
Fig. 8 shows an interaction diagram of a user interacting with a virtual character image through a terminal device according to an embodiment of the present application.
Fig. 9 shows a schematic flow chart of performing steps S601 to S603 in an embodiment of the present application.
Fig. 10 shows a block diagram of a virtual character interaction apparatus provided by an embodiment of the present application.
Fig. 11 shows a block diagram of a terminal device for executing the virtual character interaction method according to an embodiment of the present application.
Fig. 12 shows a storage unit for storing or carrying program code for implementing the virtual character interaction method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, mobile terminal devices such as mobile phones are increasingly popular, and smart phones have become essential personal belongings for people going out. With the rapid development of the mobile internet, various applications have appeared on mobile terminals, and many of them provide customer service functions so that users can obtain services such as product consultation through customer service.
With the development of science and technology, people's demand for a humanized experience when using various intelligent products is gradually increasing. When communicating with customer service, users hope not only to obtain replies in text or voice, but also to communicate in a more natural interaction mode similar to interpersonal communication in real life.
The inventor found in research that the approachability of customer service can be improved by enabling the customer service robot to imitate a real person speaking. For example, when the customer service robot has a conversation with the user, the reply to the user's consultation can be expressed by voice through the mouth of a virtual character image, so that the user visually sees a customer service robot with a virtual character image speaking on the user interface, and the user and the customer service robot communicate in a face-to-face manner.
However, in the actual research process, the inventor found that the customer service robot has only one virtual character image. As a result, no matter who the user is, only this virtual character image is displayed during interaction. Since some users may not like the current virtual character image but cannot change it, the interaction feels unnatural to them, which affects the user's interaction experience.
To improve on the above problems, the inventor studied the implementation of a customer service robot with a virtual character image and, comprehensively considering the usage requirements of actual interaction scenarios, proposed the virtual character interaction method, apparatus, terminal device and storage medium of the embodiments of the present application.
In order to better understand the virtual character interaction method, device, terminal device, and storage medium provided in the embodiments of the present application, an application environment suitable for the embodiments of the present application is described first below.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an application environment suitable for the embodiment of the present application. The virtual character interaction method provided by the embodiment of the application can be applied to the polymorphic interaction system 100 shown in fig. 1. The polymorphic interaction system 100 includes a terminal device 101 and a server 102, the server 102 being communicatively coupled to the terminal device 101. The server 102 may be a conventional server or a cloud server, and is not limited herein.
The terminal device 101 may be any of various terminal devices having a display screen and supporting data input, including but not limited to a smart phone, a tablet computer, a laptop computer, a desktop computer, a wearable terminal device, and the like. Specifically, data may be input through a voice module for inputting voice, a character input module for inputting text, an image input module for inputting images, and the like provided on the terminal device 101, or through a gesture recognition module installed on the terminal device 101, so that the user can use interaction manners such as gesture input.
The terminal device 101 may have a client application installed on it, and the user can communicate with the server 102 through the client application (e.g., an APP, a WeChat applet, etc.). Specifically, the server 102 has a corresponding server application installed; the user can register a user account with the server 102 through the client application and communicate with the server 102 based on that account. For example, the user logs in to the user account in the client application and, based on that account, can input text information, voice information or image information through the client application. After receiving the information input by the user, the client application can send it to the server 102, so that the server 102 can receive, process and store the information, and the server 102 can also return corresponding output information to the terminal device 101 according to the received information.
In some embodiments, the client application can be used to provide customer service to the user and to communicate with the user in a customer service capacity, and the client application can interact with the user based on a virtual robot. Specifically, the client application can receive information input by the user and respond to that information based on the virtual robot. The virtual robot is a software program based on visual graphics which, when executed, presents to the user a robot form that simulates biological behaviors or thoughts. The virtual robot may be a robot simulating a real person, such as a lifelike robot built according to the appearance of the user or of another person, or a robot with an animation effect, such as a robot in the form of an animal or a cartoon character.
In some embodiments, after acquiring reply information corresponding to the information input by the user, the terminal device 101 may display a virtual robot image corresponding to the reply information on its display screen or on another image output device connected to it. As one mode, while the virtual robot image is being played, the audio corresponding to the virtual robot image may be played through a speaker of the terminal device 101 or other audio output devices connected to it, and text or graphics corresponding to the reply information may be displayed on the display screen of the terminal device 101, thereby realizing polymorphic interaction with the user in terms of image, voice, text and other aspects.
In other embodiments, the means for processing the information input by the user may also be provided on the terminal device 101, so that the terminal device 101 can interact with the user without relying on communication with the server 102; in this case, the polymorphic interaction system 100 may include only the terminal device 101.
The above application environments are only examples for facilitating understanding, and it is to be understood that the embodiments of the present application are not limited to the above application environments.
The following describes in detail a virtual character interaction method, an apparatus, a terminal device, and a storage medium provided in the embodiments of the present application with specific embodiments.
Referring to fig. 2, fig. 2 is a schematic flow chart of a virtual character interaction method provided by an embodiment of the present application. The method may comprise the following steps:
step S110, obtaining interaction requirement information of the user, where the interaction requirement information may include at least one of social information and online behavior characteristics of the user.
In some embodiments, when the interaction requirement information of the user is obtained, the server may obtain the interaction requirement information from the terminal device of the user or a related application of the terminal device.
In some embodiments, the interaction demand information of the user may be information used to track, from the network, persons who have a close relationship with the user. The person information may include photos, videos and other material that can show the person's appearance. Optionally, the interaction demand information may include social information of the user obtained from a social platform or from the user's mobile terminal. The social information may include the user's mobile phone contact list, friend lists in social platforms installed on the user's mobile phone (e.g., QQ, WeChat), and the like; through the social information, information on persons who have daily contact with the user, such as relatives, friends and colleagues, can be tracked.
Optionally, the interaction demand information may further include online behavior characteristics of the user. The online behavior characteristics may be actions such as following, commenting and liking performed by the user on social platforms, live-streaming platforms, forum platforms and the like. Through the online behavior characteristics, information on persons the user pays more attention to can be found, for example a star the user follows on a social platform (such as a microblog), a blogger whose posts the user frequently browses, comments on and likes, or the singer of songs the user frequently listens to. The persons may include, but are not limited to, real persons, and may also be virtual characters, such as characters in animations or cartoons.
In some embodiments, the interaction demand information of the user may be user-defined person information uploaded by the user, which may not be available on the network; the user can upload the person information, such as photos and videos of the person, by himself or herself through the terminal device.
And step S120, acquiring the interactive character information according to the interactive demand information.
After the persons having a close relationship with the user are determined, there is often more than one such person, so the degree of closeness between each person and the user can be calculated according to the interaction demand information. In this way, the person most closely related to the user can be selected as the interactive character information. The interactive character information is the information of the person to whom the virtual character image corresponds when the user interacts with the virtual character.
In some embodiments, the final interactive character information may be determined from the communication records in the social information. Generally, persons who communicate with the user frequently have a close relationship with the user, so the information of the person who communicates with the user most frequently within a period of time can be acquired as the interactive character information.
In some embodiments, the persons the user pays comparatively more attention to can be determined through the user's online behavior characteristics, and the person the user pays most attention to is selected as the interactive character information. Generally, for a person with a high attention degree, the user often frequently browses or comments on information related to that person on the web, so the user's attention degree to a person can be determined according to the user's browsing frequency, comment frequency and the like, and the person with the highest attention degree can be selected as the interactive character information.
Step S130, a character image corresponding to the interactive character information is obtained, where the character image includes at least one of a character picture and a character video.
In some embodiments, the server may search for and acquire the character image corresponding to the interactive character information from a third-party platform according to the interactive character information, or may acquire, from the user's terminal device, a character image corresponding to the interactive character information that the user has stored on the terminal device.
Step S140, generating a virtual character image according to the character image.
In some embodiments, facial feature points corresponding to the interactive character information may first be obtained, and the facial feature points are input into a pre-trained neural network model to obtain a virtual character image corresponding to the facial feature points. The facial feature points corresponding to the character information are extracted from the character image. The pre-trained neural network model can be obtained by training with a plurality of facial feature point samples extracted from a plurality of character images used for training and with virtual character image samples. Optionally, the virtual character image may be generated at a server where the pre-trained neural network model is stored.
And S150, interacting with the user through the virtual character.
In some embodiments, when the user interacts, the terminal device may download the virtual character image from the server and interact with the user on the terminal device through the virtual character image. Specifically, when the user inputs interaction information on the terminal device, the virtual character image is displayed on the terminal device, corresponding reply information is generated according to the interaction information input by the user, and the virtual character image can display an animated expression corresponding to the reply information while the reply information is delivered, thereby realizing interaction with the user. Optionally, the interaction information may be audio information, text information, or a combination of audio and text. In this embodiment, interaction can be performed through various kinds of interaction information, so that the interaction manner can be more flexible, as sketched below.
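For illustration only, the terminal-side interaction loop described above could be organized as in the following sketch. Every helper name (`get_user_input`, `generate_reply`, `render_avatar_with_expression`, `play_audio`) is a hypothetical placeholder; the patent does not prescribe these APIs or any particular implementation.

```python
# Hypothetical sketch of step S150: the terminal-side interaction loop.
# All helper functions are placeholders introduced for illustration only.
def interaction_loop(avatar, get_user_input, generate_reply,
                     render_avatar_with_expression, play_audio):
    while True:
        message = get_user_input()          # audio, text, or both
        if message is None:                 # user ends the session
            break
        reply = generate_reply(message)     # reply to the user's consultation
        # Show the virtual character image animating an expression that matches
        # the reply, while the reply itself is played back as audio.
        render_avatar_with_expression(avatar, reply)
        play_audio(reply)
```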
In this embodiment, the virtual character interaction method tracks person information having a close relationship with the user according to the user's interaction demand information, determines the interactive character information from that person information, that is, the person most closely related to the user (for example, the interactive character information may be determined to be the user's wife), generates a corresponding virtual character image according to the interactive character information (for example, using the face of the user's wife as the simulated face of the virtual character image), and finally interacts with the user through that virtual character image. Therefore, when the user interacts with the virtual character image, the user feels goodwill toward or familiarity with the virtual character image, which improves the user's interaction experience.
Referring to fig. 3, fig. 3 is a flow chart illustrating a virtual character interaction method according to another embodiment of the present application. The method can comprise the following steps:
s210, acquiring interaction demand information of a user, wherein the interaction demand information comprises at least one of social information and online behavior characteristics; the social information comprises an address book and an address record.
The address book may be the mobile phone contact list on the user's mobile phone. When the address book is a mobile phone contact list, the communication record may include the number of phone calls made and the number of short messages sent and received between the user and the persons in the address book, and so on. In addition, the address book may also be the user's friend list in a communication application on the terminal device (e.g., WeChat, email); when the address book is a friend list, the communication record may include the number of voice or video calls between the user and the persons in the friend list, the call duration, the number of messages (e-mails) sent and received, and the like.
S220, extracting a plurality of pieces of first person information from the address book, and determining the intimacy between each piece of first person information and the user according to the communication record.
In some embodiments, the number of communications or the call duration between each person in the address book and the user can be compared to determine the intimacy between each person and the user. Generally, a person who communicates with the user a large number of times has a high intimacy with the user, and a person whose calls with the user are long each time also has a high intimacy with the user. Therefore, when calculating the intimacy, the number of communications may be counted directly; for example, if the number of communications between a certain person in the address book and the user within one month is 50, the intimacy between the user and that person may be determined to be 50; if the number of communications is 40 and the intimacy is calculated as 1 point per communication, the intimacy between the user and that person is 40. The intimacy may also be determined directly from the call duration; for example, if the total call duration between the user and a certain person within one month is 300 minutes and the intimacy is calculated as 1 point per 10 minutes, the intimacy is 30. In this embodiment, calculating the intimacy from the number of communications or the call duration makes it convenient and quick to determine the intimacy between the user and the persons in the address book.
In some embodiments, the intimacy may be calculated by combining the number of communications and the call duration, with the number of communications and the call duration each given a weight of 50%. For example, if the number of communications with a certain person in one month is 40 and the call duration is 500 minutes, the intimacy with that person is 40 × 50% + (500 ÷ 10) × 50% = 45. In this embodiment, calculating the intimacy by combining the number of communications and the call duration allows the intimacy to reflect the relationship between the person and the user more truthfully.
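The two scoring rules above can be written compactly as follows. This is a minimal sketch: the 1 point per call, 1 point per 10 minutes, and 50%/50% weights are the example values from this paragraph, not fixed parameters of the method.

```python
# Sketch of the intimacy scoring examples in this embodiment (values taken from the text above).
def intimacy_by_calls(num_calls: int) -> float:
    return num_calls * 1.0                      # 1 point per communication

def intimacy_by_duration(total_minutes: float) -> float:
    return total_minutes / 10.0                 # 1 point per 10 minutes of calls

def combined_intimacy(num_calls: int, total_minutes: float) -> float:
    # 50% weight on the number of communications, 50% weight on the call duration
    return 0.5 * intimacy_by_calls(num_calls) + 0.5 * intimacy_by_duration(total_minutes)

assert combined_intimacy(40, 500) == 45.0       # matches the worked example: 40 × 50% + 50 × 50% = 45
```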
And S230, determining the interactive character information in the plurality of first character information according to the intimacy.
In some embodiments, step S230 may include: respectively judging whether the intimacy between each piece of first person information and the user is greater than or equal to a preset intimacy; and determining any piece of first person information whose intimacy is greater than or equal to the preset intimacy as the interactive character information. For example, if the preset intimacy is 50 and there are several persons in the address book whose intimacy with the user is greater than or equal to 50, one person may be randomly selected from them and that person's information used as the interactive character information. To ensure that the preset intimacy does not exceed the maximum intimacy, the preset intimacy may be set to 80% of the maximum intimacy. In this embodiment, any piece of person information in the address book whose intimacy is greater than the preset intimacy is selected as the interactive character information, so that the selected person has a relatively high intimacy with the user while the interactive character information retains a certain randomness, which helps keep the interaction fresh for the user.
In other embodiments, step S230 may include: comparing the intimacy between each piece of first person information and the user; and determining the piece of first person information with the greatest intimacy with the user as the interactive character information. In this embodiment, the person information with the highest intimacy is used as the interactive character information, so that the user interacts with the most familiar virtual character image, which ensures a sense of closeness during interaction.
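Both selection strategies (a random pick above the preset intimacy, or the maximum pick) might be sketched as below; `intimacy` is an assumed mapping from each piece of first person information to its score, and the 80% rule comes from the preceding paragraph.

```python
import random

def select_by_threshold(intimacy: dict) -> str:
    """Randomly pick any person whose intimacy >= preset (80% of the maximum intimacy)."""
    preset = 0.8 * max(intimacy.values())       # preset kept below the maximum intimacy
    candidates = [person for person, score in intimacy.items() if score >= preset]
    return random.choice(candidates)

def select_by_maximum(intimacy: dict) -> str:
    """Pick the person with the greatest intimacy with the user."""
    return max(intimacy, key=intimacy.get)
```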
S240, acquiring a character image corresponding to the interactive character information, wherein the character image comprises at least one of a character picture and a character video.
And S250, generating a virtual character image according to the character image.
And S260, interacting with the user through the virtual character.
In this embodiment, the interactive character information is determined by calculating the intimacy between each person and the user, which ensures that the interactive character is someone the user is relatively familiar with, such as a relative, friend, colleague or classmate of the user.
Referring to fig. 4, fig. 4 is a flow chart of a virtual character interaction method provided by another embodiment of the present application. The method may comprise the following steps:
s310, acquiring interaction demand information of a user, wherein the interaction demand information comprises at least one of social information and online behavior characteristics; the online behavior characteristics comprise an attention record, a praise record, a browsing record and a comment record of the user.
The attention record may be a list of persons the user follows on social platforms, forum platforms and music platforms, in which a number of persons followed by the user are recorded. The like record, browsing record and comment record may be the number of likes, the number of views and the number of comments, respectively, recorded for the persons the user follows within a certain period of time. Optionally, if a followed person is a singer, the browsing record may include the number of times the user has played that singer's songs.
And S320, acquiring a plurality of second person information according to the attention records, and determining the attention degree of the user to each second person information according to the praise records, the comment records and the browsing records.
In some embodiments, the user's attention degree to a followed person can be calculated from one of the number of likes, the number of comments and the number of views for that person within a certain period of time. For example, if the number of comments made by the user on a certain followed person within one month is 50 and each comment counts as 1 point of attention, the user's attention degree to that person is 50.
In some embodiments, the user's attention degree to a certain followed person can be calculated by combining the number of likes, the number of comments and the number of views for that person. For example, suppose the number of likes, the number of comments and the number of views each account for 30% of the attention weight, and each like, comment or view counts as 1 point of attention; if, within one month, the user's number of likes for a certain followed person is 60, the number of comments is 30 and the number of views is 90, then the user's attention degree to that person is 60 × 30% + 30 × 30% + 90 × 30% = 54. In this embodiment, calculating the attention degree by combining the user's like record, comment record and browsing record helps ensure the accuracy and authenticity of the attention degree.
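The weighted computation in the worked example can be expressed as below; the 30% weights and the 1-point-per-action rule are the example figures from this paragraph, not fixed parameters of the method.

```python
# Sketch of the attention-degree example: likes, comments and views each weighted 30%.
def attention_degree(likes: int, comments: int, views: int) -> float:
    return 0.3 * likes + 0.3 * comments + 0.3 * views

score = attention_degree(likes=60, comments=30, views=90)
print(round(score))   # 54, matching the worked example above
```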
And S330, determining interactive character information in the plurality of second character information according to the attention degree.
In some embodiments, step S330 may include: respectively judging whether the user's attention degree to each piece of second person information is greater than or equal to a preset attention degree; and determining any piece of second person information whose attention degree is greater than or equal to the preset attention degree as the interactive character information. For example, if the preset attention degree is 50 and there are several persons the user follows whose attention degree reaches it, one person may be randomly selected from them and that person's information used as the interactive character information. To ensure that the preset attention degree does not exceed the maximum attention degree, the preset attention degree may be set to 80% of the maximum attention degree. In this embodiment, any piece of person information whose attention degree is greater than the preset attention degree is selected as the interactive character information, which ensures that the user pays a relatively high degree of attention to the interactive character while retaining a certain randomness, helping keep the interaction fresh for the user.
In other embodiments, step S330 may include: comparing the user's attention degree to each piece of second person information; and determining the piece of second person information with the greatest attention degree as the interactive character information.
In this embodiment, the person information the user pays most attention to is selected as the interactive character information. Since people tend to have a good impression of those they follow, the user interacts with a well-liked virtual character image, which can improve the user's experience.
S340, acquiring a figure image corresponding to the interactive figure information, wherein the figure image comprises at least one of a figure picture and a figure video.
And S350, generating a virtual character image according to the character image.
And S360, interacting with the user through the virtual character.
In this embodiment, the interactive character information is determined by calculating the user's attention degree to persons, which ensures that the interactive character is a person the user likes, such as the user's favorite actor, singer or idol. Therefore, during interaction, the user also has a good impression of the virtual character image generated according to the interactive character information, which improves the user's interaction experience.
Referring to fig. 5, fig. 5 is a flowchart illustrating a virtual character interaction method according to an embodiment of the present application. The method can comprise the following steps:
s410, acquiring interaction demand information of a user, wherein the interaction demand information comprises at least one of social information and online behavior characteristics; the interaction requirement information also comprises custom character information.
In some embodiments, the customized character information may be a video or a picture arbitrarily uploaded to the terminal device by the user, and the character included in the video or the picture may be a real character or a virtual character, such as a character in a cartoon.
S420, obtaining user-defined character information uploaded by a user; and determining the user-defined character information as the interactive character information.
S430, acquiring a character image corresponding to the interactive character information, wherein the character image comprises at least one of a character picture and a character video.
S440, generating a virtual character image according to the character image.
And S450, interacting with the user through the virtual character.
In this embodiment, the user can upload user-defined character information according to his or her own preferences, from which the virtual character image used during interaction is generated, ensuring flexibility and freedom for the user when interacting with the virtual character image and thereby further improving the user experience.
Referring to fig. 6, fig. 6 is a flow chart illustrating a virtual character interaction method according to another embodiment of the present application. The method can comprise the following steps:
s510, acquiring interaction requirement information of the user, wherein the interaction requirement information comprises at least one of social information and online behavior characteristics.
And S520, acquiring the interactive character information according to the interactive demand information.
S530, acquiring a figure image corresponding to the interactive figure information, wherein the figure image comprises at least one of a figure picture and a figure video.
And S540, generating a virtual character image according to the character image.
And S550, acquiring voiceprint information corresponding to the interactive character information.
In some embodiments, if the interactive character is in the user's address book, the voiceprint information of that person can be extracted from recordings of calls between the user and the person. If the interactive character is a public figure or a singer, the voiceprint information can be extracted from publicly available recordings of the public figure or from the singer's songs.
And S560, generating the interactive audio according to the voiceprint information.
In some embodiments, the interactive audio corresponding to the voiceprint information can be obtained by inputting the voiceprint information into a pre-trained model, where the pre-trained model is trained in advance with sample voiceprint information and sample interactive audio. The sample interactive audio may be extracted from videos or recordings.
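The patent only states that a pre-trained model maps voiceprint information and the reply content to interactive audio; a skeletal sketch of such an interface, with assumed class and method names, might look like this. The model internals (for example a speaker-conditioned speech synthesizer) are not specified by the patent.

```python
# Hypothetical interface for step S560: voiceprint-conditioned audio generation.
# Class, field and method names are assumptions introduced for illustration only.
from dataclasses import dataclass

@dataclass
class Voiceprint:
    embedding: list          # speaker characteristics extracted in step S550 (calls, recordings, songs)

class InteractiveAudioGenerator:
    def __init__(self, pretrained_model):
        self.model = pretrained_model        # trained on sample voiceprints and sample interactive audio

    def synthesize(self, voiceprint: Voiceprint, reply_text: str) -> bytes:
        """Return audio for `reply_text`, spoken in the voice described by `voiceprint`."""
        # `generate` is a placeholder for whatever inference call the pre-trained model exposes.
        return self.model.generate(speaker=voiceprint.embedding, text=reply_text)
```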
S570, when interacting with the user through the virtual character image, interacting with the user through the interactive audio.
In this embodiment, by acquiring the voiceprint information corresponding to the interactive character information and generating the interactive audio according to the voiceprint information, interaction between the user and the virtual character image is carried out through the interactive audio, so that the user not only interacts visually with a familiar or well-liked virtual character image but also interacts aurally with a familiar or well-liked voice, further improving the user's interaction experience.
Referring to fig. 7, fig. 7 is a flow chart of a virtual character interaction method provided by another embodiment of the present application. The method may comprise the following steps:
s610, obtaining interaction requirement information of the user, wherein the interaction requirement information comprises at least one of social information and online behavior characteristics.
And S620, acquiring the interactive character information according to the interactive demand information.
S630, acquiring a character image corresponding to the interactive character information, wherein the character image comprises at least one of a character picture and a character video.
And S640, generating a virtual character image according to the character image.
S650, acquiring interaction information input by a user, wherein the interaction information comprises audio information and text information.
In some implementations, the interaction information may include, but is not limited to, voice information, text information, image information, motion information, and the like. The voice information may include language-class audio information (e.g., Chinese or English audio) and non-language-class audio information (e.g., music audio); the text information may include character-class text information (e.g., Chinese, English) and non-character-class text information (e.g., special symbols, emoticons); the image information may include still image information (e.g., still pictures, photographs) as well as moving image information (e.g., animated pictures, video images); the motion information may include user motion information (e.g., user gestures, body motions, expressive motions) as well as terminal motion information (e.g., the position, attitude and motion state of the terminal device, such as shaking or rotation).
It can be understood that information collection can be performed through different types of information input modules on the terminal device corresponding to different types of interaction information. For example, voice information of a user may be collected through an audio input device such as a microphone, text information input by the user may be collected through a touch screen or a physical key, image information may be collected through a camera, and motion information may be collected through an optical sensor, a gravity sensor, or the like.
As one way, when the application program corresponding to the virtual character image runs in the system foreground of the terminal device, the hardware modules of the terminal device may be called, through the application program interface corresponding to the customer service robot, to obtain the interaction information input by the user.
And S660, inputting the interactive information into a pre-trained first model, and obtaining the facial feature points corresponding to the interactive information.
In this embodiment, the facial feature points may be a set of feature points used to describe all or part of the shape of a human face, in which the position information and depth information of each feature point of the face in space are recorded; a partial or complete image of the face can be reconstructed from the facial feature points. As one way, the facial feature points may be selected in advance; for example, to describe the shape of a person's lips, the contour line of the lips may be extracted, and a number of points distributed at intervals along that contour line may be selected as needed as the facial feature points describing the lip shape.
In some implementations, the facial feature points may include at least one of lip feature points, facial contour feature points, and face detail feature points. It is understood that the facial feature points may also be other feature points presented in any manner for describing the whole or partial shape of the human face, according to the user's needs and application environment.
And S670, inputting the facial feature points into a pre-trained second model to obtain a facial image.
And S680, updating the virtual character image based on the face image.
In some embodiments, when the virtual character image plays the reply information, the face image may be updated in correspondence with the reply information. Specifically, referring to fig. 8, fig. 8 shows an interaction diagram of a user interacting with the virtual character image through a terminal device. In the figure, the terminal device takes "display game fast" as the interactive character information, the virtual character image is generated according to the character image of "display game fast", the virtual character image is used as the character of the customer service robot, and the virtual character image is displayed when the user communicates with the customer service robot.
Referring to fig. 9, in some embodiments, before step S650 of the virtual character interaction method of the embodiment, the method further includes:
s601, obtaining sample facial features, sample interaction information and sample face images.
S602, inputting the sample facial feature points and the sample interaction information into a first machine learning model for training to obtain a first model.
In this embodiment, the sample interaction information includes interaction information and specific audio information corresponding to the interaction information. It can be understood that the specific audio information is the audio used to respond to the interaction information; for example, if the interaction information is a question posed by the user, the specific audio information may be the reply that answers the question in audio form. The first machine learning model can be obtained by training a neural network on a large number of real-person speaking videos (including real-person speaking images and the corresponding real-person speaking audio) together with training samples of the facial feature points when the real person speaks. It will be appreciated that the first machine learning model is a model for converting audio into corresponding facial feature points: by inputting the previously acquired specific audio information into the first machine learning model, the facial feature points corresponding to that audio can be output by the model.
It can be understood that when a person speaks, the face changes, and the position information and depth information of each of the corresponding facial feature points change accordingly. That is, each pronunciation made while speaking (corresponding to the speaking audio) corresponds to at least one face image, and each face image corresponds to a group of facial feature points; the correspondence between facial feature points and audio can be inferred by extracting the real-person face image corresponding to the audio from the real-person speaking video and extracting the facial feature points from that face image.
It is to be understood that, in the present embodiment, the acquired facial feature points correspond to the specific audio information in time. For example, 30 sets of facial feature points are required for one second (each set of facial feature points includes position information and depth information of each feature point in space), and if the audio duration corresponding to the specific audio information is 10 seconds, the total amount of the required facial feature points is 300 sets, and the 300 sets of facial feature points are temporally aligned with the 10 seconds of the specific audio information.
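As a small arithmetic sketch of this alignment (30 feature-point sets per second is the example rate given above, not a fixed requirement of the method):

```python
# Number of facial feature-point sets needed to cover an audio clip at the example rate.
def required_feature_sets(audio_seconds: float, sets_per_second: int = 30) -> int:
    return int(audio_seconds * sets_per_second)

print(required_feature_sets(10))   # 300 sets, aligned in time with the 10-second specific audio
```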
In some implementations, the first machine learning model can run on a server, and the server converts the input specific audio information into the corresponding facial feature points through the first machine learning model. As one mode, after the terminal device obtains the interaction information, it may send the interaction information to the server; the server recognizes the interaction information to generate the specific audio information and then converts the generated specific audio information into the facial feature points, that is, both data processing steps of generating the specific audio information and converting it into facial feature points can be completed by the server. As another mode, the terminal device may also obtain the specific audio information locally and send it to the server, and the server obtains the corresponding facial feature points from the specific audio information sent by the terminal device. Deploying the first machine learning model on the server reduces the occupation of storage capacity and computing resources on the terminal device, and since the server only needs to receive a small amount of data, the pressure of data transmission is greatly reduced and the efficiency of data transmission is improved.
In other embodiments, the first machine learning model may also be run locally at the terminal device, such that the customer service robot may provide service in an offline environment.
As one approach, the first machine learning model may adopt an RNN (Recurrent Neural Network) model, which can use its internal memory to process input sequences of arbitrary length, making it more efficient and accurate than other machine learning models for speech recognition processing.
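A minimal sketch of such a recurrent model is given below, assuming PyTorch and a simple per-frame acoustic feature input; neither the framework, the feature dimensions, nor the 68-point layout are specified by the patent, so all of them are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AudioToFacePoints(nn.Module):
    """Sketch of the 'first model': a recurrent network mapping an audio feature
    sequence to one set of facial feature points per frame (x, y, depth per point)."""
    def __init__(self, audio_dim: int = 80, hidden_dim: int = 256, num_points: int = 68):
        super().__init__()
        self.rnn = nn.LSTM(audio_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_points * 3)

    def forward(self, audio_features: torch.Tensor) -> torch.Tensor:
        # audio_features: (batch, frames, audio_dim), e.g. 30 frames per second
        out, _ = self.rnn(audio_features)
        points = self.head(out)                               # (batch, frames, num_points * 3)
        return points.view(points.size(0), points.size(1), -1, 3)

# Usage: 10 seconds of audio at 30 frames/s -> 300 feature-point sets
out_shape = AudioToFacePoints()(torch.randn(1, 300, 80)).shape   # torch.Size([1, 300, 68, 3])
```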
S603, inputting the sample face image and the sample facial feature points into a second machine learning model for training to obtain a second model.
In this embodiment, the second machine learning model may be obtained by training a neural network on a large number of training samples consisting of face images of real persons while speaking and the facial feature points extracted from those face images. It is to be understood that the second machine learning model is a model for constructing, from the facial feature points of a face, the face image corresponding to those feature points. By inputting the facial feature points output by the first machine learning model into the second machine learning model, the corresponding face image can be output by the second machine learning model.
It is to be understood that since the acquired facial feature points correspond to the specific audio information, the face image acquired based on the facial feature points also corresponds to the specific audio information.
In some embodiments, the second machine learning model, like the first machine learning model, may be run in a server or locally on the terminal device; each option has its own advantages in different application scenarios and may be selected according to actual needs.
In this embodiment, the second machine learning model may output a face image resembling the real person's face according to the input facial feature points; for example, after sufficient training, the second machine learning model may output a face image that is visually indistinguishable from the real person's face. It can be understood that the fidelity of the face image generated by the second machine learning model from the facial feature points gradually improves as training samples and training time accumulate.
As one mode, the second machine learning model may adopt a GAN (Generative Adversarial Network) model, which continuously optimizes its output through the mutual game learning of a Generator and a Discriminator. When the number of training samples is large enough, a face image that approaches a real person's face arbitrarily closely can be obtained through the GAN model, achieving an effect of passing the fake off as real. Further, the face image may be a two-dimensional face image, that is, the facial feature points are input into the GAN model, and a two-dimensional face image corresponding to the facial feature points is obtained.
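A conditional GAN in this spirit can be sketched as follows; the landmark count, image resolution and layer sizes are illustrative assumptions, and a production model would typically use convolutional generator and discriminator networks rather than the small fully connected ones shown here.

```python
# Minimal conditional-GAN sketch of the "second model": facial feature points -> 2-D face image.
import torch
import torch.nn as nn

POINT_DIM = 68 * 3            # assumed: 68 feature points with x, y, depth
IMG_PIXELS = 3 * 64 * 64      # assumed 64x64 RGB output

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POINT_DIM, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, IMG_PIXELS), nn.Tanh(),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        return self.net(points).view(-1, 3, 64, 64)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_PIXELS + POINT_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1), nn.Sigmoid(),
        )

    def forward(self, image: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        flat = image.view(image.size(0), -1)
        return self.net(torch.cat([flat, points], dim=1))

if __name__ == "__main__":
    g, d = Generator(), Discriminator()
    pts = torch.randn(4, POINT_DIM)
    fake = g(pts)               # candidate face images conditioned on the feature points
    score = d(fake, pts)        # discriminator judges (image, feature points) pairs
    print(fake.shape, score.shape)
```

During training, generator and discriminator are optimized adversarially, with the face images extracted from the real-person speaking video serving as the real samples.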
Referring to fig. 10, fig. 10 is a block diagram illustrating a virtual character interaction apparatus according to an embodiment of the present application. The apparatus is applied to a terminal device having a display screen or another image output device; the terminal device may be, for example, a smart phone, a tablet computer or a wearable intelligent terminal. As explained below with reference to the block diagram shown in fig. 10, the apparatus 500 includes:
an interaction requirement information obtaining module 510, an interactive figure information obtaining module 520, a figure image obtaining module 530, a virtual character generation module 540 and an interaction module 550. The interaction requirement information obtaining module 510 is configured to obtain interaction requirement information of a user, where the interaction requirement information includes social information or online behavior characteristics of the user; the interactive figure information obtaining module 520 is configured to obtain interactive figure information according to the interaction requirement information; the figure image obtaining module 530 is configured to obtain a figure image corresponding to the interactive figure information, where the figure image includes at least one of a figure picture and a figure video; the virtual character generation module 540 is configured to generate a virtual character according to the figure image; and the interaction module 550 is configured to interact with the user through the virtual character.
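Purely as a structural sketch, the five modules can be pictured as methods of one apparatus object as below; every method body is a placeholder (picking the first contact, using a static image path) standing in for the processing described in the corresponding embodiments.

```python
# Structural sketch of apparatus 500; the placeholder logic is illustrative only.
class VirtualCharacterInteractionApparatus:
    def get_interaction_requirement_info(self, user: dict) -> dict:        # module 510
        return {"social_info": user.get("contacts", []),
                "online_behaviour": user.get("follows", [])}

    def get_interactive_person_info(self, requirement_info: dict) -> str:  # module 520
        candidates = requirement_info["social_info"] or requirement_info["online_behaviour"]
        return candidates[0] if candidates else "default persona"

    def get_person_image(self, person: str) -> str:                        # module 530
        return f"images/{person}.png"                                      # picture or video

    def generate_virtual_character(self, person_image: str) -> dict:       # module 540
        return {"avatar_source": person_image}

    def interact(self, avatar: dict, message: str) -> str:                 # module 550
        return f"[{avatar['avatar_source']}] responds to: {message}"

apparatus = VirtualCharacterInteractionApparatus()
req = apparatus.get_interaction_requirement_info({"contacts": ["alice"]})
avatar = apparatus.generate_virtual_character(
    apparatus.get_person_image(apparatus.get_interactive_person_info(req)))
print(apparatus.interact(avatar, "hello"))
```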
Further, the social information includes an address book and a communication record of the user, and the interactive figure information obtaining module 520 further includes an intimacy determining unit and a first interactive person determining unit. The intimacy determining unit is used for extracting a plurality of pieces of first person information from the address book and determining the intimacy between each piece of first person information and the user according to the communication record; the first interactive person determining unit is configured to determine the interactive person information among the plurality of pieces of first person information according to the intimacy.
Further, the first interactive person determining unit may be specifically configured to respectively judge whether the intimacy between each piece of first person information and the user is greater than or equal to a preset intimacy, and to determine any first person information whose intimacy is greater than or equal to the preset intimacy as the interactive person information.
Alternatively, the first interactive person determining unit may be specifically configured to compare the intimacy between each piece of first person information and the user, and to determine the first person information with the greatest intimacy with the user as the interactive person information.
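The two selection strategies of the first interactive person determining unit can be sketched together as follows; the intimacy measure used here (one point per call plus one point per minute of talk time) and the threshold value are purely illustrative assumptions.

```python
# Sketch of selecting the interactive person from the address book and communication record.
from collections import defaultdict

def compute_intimacy(call_records):
    """Illustrative intimacy score: 1 point per call plus 1 point per minute talked."""
    intimacy = defaultdict(float)
    for record in call_records:
        intimacy[record["contact"]] += 1.0 + record["duration_sec"] / 60.0
    return intimacy

def select_interactive_person(contacts, call_records, preset_intimacy=None):
    scores = compute_intimacy(call_records)
    if preset_intimacy is not None:
        # First strategy: any contact whose intimacy reaches the preset intimacy.
        return [c for c in contacts if scores.get(c, 0.0) >= preset_intimacy]
    # Second strategy: the single contact with the greatest intimacy.
    return [max(contacts, key=lambda c: scores.get(c, 0.0))] if contacts else []

records = [{"contact": "mom", "duration_sec": 600}, {"contact": "colleague", "duration_sec": 30}]
print(select_interactive_person(["mom", "colleague"], records, preset_intimacy=5.0))  # ['mom']
print(select_interactive_person(["mom", "colleague"], records))                        # ['mom']
```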
Further, the online behavior characteristics include an attention record, a like record, a browsing record and a comment record of the user, and the interactive figure information obtaining module 520 further includes an attention degree determining unit and a second interactive person determining unit. The attention degree determining unit is used for acquiring a plurality of pieces of second person information according to the attention record and determining the user's degree of attention to each piece of second person information according to the like record, the comment record and the browsing record; the second interactive person determining unit is configured to determine the interactive person information among the plurality of pieces of second person information according to the degree of attention.
Further, the second interactive person determining unit is specifically configured to respectively judge whether the user's degree of attention to each piece of second person information is greater than or equal to a preset degree of attention, and to determine any second person information whose degree of attention is greater than or equal to the preset degree of attention as the interactive person information.
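A companion sketch for the online-behaviour path: the attention degree is assumed here to be a weighted count of likes, comments and views (the weights 3, 2 and 1 are illustrative), after which the same threshold-style selection applies.

```python
# Sketch of the attention-degree based selection among followed persons.
def attention_degree(person, likes, comments, views):
    return 3 * likes.count(person) + 2 * comments.count(person) + views.count(person)

def select_by_attention(followed, likes, comments, views, preset):
    return [p for p in followed
            if attention_degree(p, likes, comments, views) >= preset]

followed = ["actor_a", "singer_b"]
print(select_by_attention(followed,
                          likes=["actor_a", "actor_a"], comments=["actor_a"],
                          views=["singer_b"], preset=5))
# ['actor_a']  (attention degree 3*2 + 2*1 + 0 = 8 >= 5)
```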
Further, the interaction demand information further includes custom character information, and the interactive figure information obtaining module 520 is further configured to obtain the custom character information uploaded by the user and to determine the custom character information as the interactive person information.
Further, the apparatus 500 further includes an audio simulation module, which is configured to acquire voiceprint information corresponding to the interactive person information and to generate interactive audio according to the voiceprint information; when the virtual character interacts with the user, the interaction is carried out through the interactive audio.
Further, the interaction module 550 is specifically configured to acquire interaction information input by the user, where the interaction information includes audio information and text information; input the interaction information into a pre-trained first model to obtain facial feature points corresponding to the interaction information; input the facial feature points into a pre-trained second model to obtain a face image; and update the virtual character based on the face image.
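Chaining the two pre-trained models, the interaction module's update step can be sketched as below; the step that turns the user's interaction information into specific audio features (speech recognition, response generation and speech synthesis) is elided and replaced by a random placeholder, and the model objects are assumed to have the shapes used in the earlier sketches.

```python
# Sketch of the avatar update: interaction info -> feature points -> face images.
import torch

def update_virtual_character(interaction_info: str, first_model, second_model):
    # Placeholder for features of the specific audio derived from the interaction info.
    audio_feats = torch.randn(1, 300, 80)
    feature_points = first_model(audio_feats)                     # (1, time, point_dim)
    flat_points = feature_points.reshape(-1, feature_points.size(-1))
    face_images = second_model(flat_points)                       # one face image per time step
    # The virtual character would then be updated frame by frame with these images.
    return face_images

# Example wiring with the earlier sketches:
#   frames = update_virtual_character("hello", AudioToFeaturePoints(), Generator())
```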
Further, the apparatus 500 further includes a first model building module and a second model building module. The first model building module is configured to acquire sample facial feature points, sample interaction information and a sample face image, and to input the sample facial feature points and the sample interaction information into a first machine learning model for training to obtain the first model. The second model building module is configured to input the sample face image and the sample facial feature points into a second machine learning model for training to obtain the second model.
The virtual character interaction device 500 provided in this embodiment of the application is used to implement the corresponding virtual character interaction method in the foregoing method embodiments, and has the beneficial effects of the corresponding method embodiments, which are not described herein again.
It can be clearly understood by those skilled in the art that the virtual character image interaction device 500 provided in the embodiment of the present application can implement each process in the foregoing method embodiments, and for convenience and brevity of description, the specific working processes of the device 500 and the modules described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present application, the coupling, direct coupling or communication connection between the modules shown or discussed may be implemented through certain interfaces, and the indirect coupling or communication connection between the apparatus 500 and other devices, or between the modules, may be electrical, mechanical or in another form.
In addition, each functional module in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
Referring to fig. 11, a block diagram of a terminal device 600 according to an embodiment of the present application is shown. The terminal device 600 may be a terminal device capable of running applications, such as a smart phone or a tablet computer. The terminal device 600 in the present application may comprise one or more of the following components: a processor 610, a memory 620, and one or more application programs, where the one or more application programs may be stored in the memory 620 and configured to be executed by the one or more processors 610, and the one or more application programs are configured to perform the methods described in the foregoing method embodiments.
The processor 610 may include one or more processing cores. The processor 610 connects various parts within the entire terminal device 600 using various interfaces and lines, and performs the various functions of the terminal device 600 and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 620 and by calling data stored in the memory 620. Optionally, the processor 610 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA). The processor 610 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs and the like; the GPU is responsible for rendering and drawing display content; and the modem is used to handle wireless communication. It is understood that the modem may also not be integrated into the processor 610 but implemented by a separate communication chip.
The memory 620 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 620 may be used to store instructions, programs, code, code sets or instruction sets. The memory 620 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may store data created by the terminal device 600 during use (such as a phonebook, audio and video data, and chat log data), and the like.
Referring to fig. 12, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 700 stores program code that can be invoked by a processor to perform the methods described in the foregoing method embodiments.
The computer-readable storage medium 700 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 700 includes a non-volatile computer-readable storage medium. The computer-readable storage medium 700 has storage space for program code 710 for performing any of the method steps described above. The program code can be read from or written into one or more computer program products. The program code 710 may, for example, be compressed in a suitable form.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or replacements do not cause the corresponding technical solutions to depart in essence from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (13)

1. A virtual character image interaction method is characterized in that the method comprises the following steps:
acquiring interaction demand information of a user, wherein the interaction demand information comprises at least one of social information and online behavior characteristics;
acquiring interactive figure information according to the interactive demand information;
acquiring a figure image corresponding to the interactive figure information, wherein the figure image comprises at least one of a figure picture and a figure video;
generating a virtual character image according to the character image;
and interacting with the user through the virtual character.
2. The method of claim 1, wherein the social information includes an address book and a communication record, and the obtaining of the interactive person information according to the interaction requirement information includes:
extracting a plurality of first person information from the address book, and determining the intimacy between each first person information and the user according to the communication record;
and determining the interactive character information in the plurality of first character information according to the intimacy.
3. The method of claim 2, wherein determining the interactive personal information among the plurality of first personal information according to the affinity comprises:
respectively judging whether the intimacy between each piece of first person information and the user is greater than or equal to a preset intimacy;
and determining any first person information with the intimacy greater than or equal to a preset intimacy as the interactive person information.
4. The method of claim 2, wherein determining the interactive personal information among the plurality of first personal information according to the affinity comprises:
comparing the degree of intimacy between each of the first person information and the user;
and determining the first person information with the greatest intimacy with the user as the interactive person information.
5. The method of claim 1, wherein the online behavior characteristics include an attention record, a like record, a browsing record and a comment record of the user, and the obtaining of the interactive person information according to the interactive demand information includes:
acquiring a plurality of second person information according to the attention record, and determining the attention degree of the user to each second person information according to the like record, the comment record and the browsing record;
and determining the interactive personal information in the plurality of second personal information according to the attention degree.
6. The method of claim 5, wherein the determining the interactive personal information among the plurality of second personal information according to the attention comprises:
respectively judging whether the attention of the user to each piece of second person information is greater than or equal to a preset attention;
and determining any second person information with the attention degree larger than or equal to the preset attention degree as the interactive person information.
7. The method according to any one of claims 1 to 6, wherein the interaction demand information further comprises custom character information, and the acquiring of the interaction demand information of the user and the acquiring of the interactive figure information according to the interaction demand information comprise:
acquiring the custom character information uploaded by the user; and determining the custom character information as the interactive figure information.
8. The method according to any one of claims 1 to 6, further comprising:
acquiring voiceprint information corresponding to the interactive figure information;
generating an interactive audio according to the voiceprint information;
and when the virtual character image interacts with the user, interacting with the user through the interactive audio.
9. The method of any of claims 1 to 6, wherein said interacting with said user through said avatar comprises:
acquiring interactive information input by a user, wherein the interactive information comprises audio information and character information;
inputting the interaction information into a pre-trained first model to obtain facial feature points corresponding to the interaction information;
inputting the facial feature points into a pre-trained second model to obtain a face image;
and updating the virtual character image based on the face image.
10. The method of claim 9, further comprising:
acquiring sample facial feature points, sample interaction information and a sample face image;
inputting the sample facial feature points and the sample interaction information into a first machine learning model for training to obtain the first model;
and inputting the sample face image and the sample facial feature points into a second machine learning model for training to obtain the second model.
11. A virtual character image interaction apparatus, characterized in that the apparatus comprises:
the interaction demand information acquisition module is used for acquiring interaction demand information of a user, and the interaction demand information comprises social information or online behavior characteristics of the user;
the interactive figure information acquisition module is used for acquiring interactive figure information according to the interactive demand information;
the figure image acquisition module is used for acquiring a figure image corresponding to the interactive figure information, and the figure image comprises at least one of a figure picture and a figure video;
the virtual character image generation module is used for generating a virtual character image according to the character image;
and the interaction module is used for interacting with the user through the virtual character image.
12. A terminal device, comprising:
a memory;
one or more processors coupled with the memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the method of any one of claims 1 to 10.
13. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 10.
CN201910838053.3A 2019-09-05 2019-09-05 Virtual character interaction method and device, terminal equipment and storage medium Pending CN110674398A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910838053.3A CN110674398A (en) 2019-09-05 2019-09-05 Virtual character interaction method and device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110674398A true CN110674398A (en) 2020-01-10

Family

ID=69076058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910838053.3A Pending CN110674398A (en) 2019-09-05 2019-09-05 Virtual character interaction method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110674398A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104461525A (en) * 2014-11-27 2015-03-25 韩慧健 Intelligent user-defined consulting platform generating system
CN107146275A (en) * 2017-03-31 2017-09-08 北京奇艺世纪科技有限公司 A kind of method and device of setting virtual image
CN107340859A (en) * 2017-06-14 2017-11-10 北京光年无限科技有限公司 The multi-modal exchange method and system of multi-modal virtual robot
CN107894833A (en) * 2017-10-26 2018-04-10 北京光年无限科技有限公司 Multi-modal interaction processing method and system based on visual human
CN108259638A (en) * 2017-12-11 2018-07-06 海南智媒云图科技股份有限公司 Personal group list intelligent sorting method, intelligent terminal and storage medium
CN108491147A (en) * 2018-04-16 2018-09-04 青岛海信移动通信技术股份有限公司 A kind of man-machine interaction method and mobile terminal based on virtual portrait
CN108804698A (en) * 2018-03-30 2018-11-13 深圳狗尾草智能科技有限公司 Man-machine interaction method, system, medium based on personage IP and equipment

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111696180A (en) * 2020-05-06 2020-09-22 广东康云科技有限公司 Method, system, device and storage medium for generating virtual dummy
CN111652985A (en) * 2020-06-10 2020-09-11 上海商汤智能科技有限公司 Virtual object control method and device, electronic equipment and storage medium
CN111652985B (en) * 2020-06-10 2024-04-16 上海商汤智能科技有限公司 Virtual object control method and device, electronic equipment and storage medium
CN111833418A (en) * 2020-07-14 2020-10-27 北京百度网讯科技有限公司 Animation interaction method, device, equipment and storage medium
CN111833418B (en) * 2020-07-14 2024-03-29 北京百度网讯科技有限公司 Animation interaction method, device, equipment and storage medium
CN113568667A (en) * 2020-12-05 2021-10-29 宁波绿能科创文化艺术发展有限公司 Remote control method based on multimedia information, remote blessing device and system
CN112990043A (en) * 2021-03-25 2021-06-18 北京市商汤科技开发有限公司 Service interaction method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110688911B (en) Video processing method, device, system, terminal equipment and storage medium
CN110674398A (en) Virtual character interaction method and device, terminal equipment and storage medium
CN110807388B (en) Interaction method, interaction device, terminal equipment and storage medium
TWI778477B (en) Interaction methods, apparatuses thereof, electronic devices and computer readable storage media
US9665563B2 (en) Animation system and methods for generating animation based on text-based data and user information
JP7391913B2 (en) Parsing electronic conversations for presentation in alternative interfaces
CN110085244B (en) Live broadcast interaction method and device, electronic equipment and readable storage medium
CN110400251A (en) Method for processing video frequency, device, terminal device and storage medium
CN110826441B (en) Interaction method, interaction device, terminal equipment and storage medium
CN110599359B (en) Social contact method, device, system, terminal equipment and storage medium
CN110609620A (en) Human-computer interaction method and device based on virtual image and electronic equipment
JP2021170313A (en) Method and device for generating videos
KR101628050B1 (en) Animation system for reproducing text base data by animation
US9087131B1 (en) Auto-summarization for a multiuser communication session
WO2022170848A1 (en) Human-computer interaction method, apparatus and system, electronic device and computer medium
CN110992222A (en) Teaching interaction method and device, terminal equipment and storage medium
CN110278140B (en) Communication method and device
CN110674706B (en) Social contact method and device, electronic equipment and storage medium
CN111538456A (en) Human-computer interaction method, device, terminal and storage medium based on virtual image
CN113067953A (en) Customer service method, system, device, server and storage medium
CN110794964A (en) Interaction method and device for virtual robot, electronic equipment and storage medium
WO2015012819A1 (en) System and method for adaptive selection of context-based communication responses
CN112669846A (en) Interactive system, method, device, electronic equipment and storage medium
CN113850898A (en) Scene rendering method and device, storage medium and electronic equipment
US20090210476A1 (en) System and method for providing tangible feedback according to a context and personality state

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200110