CN110812843A - Interaction method and device based on virtual image and computer storage medium - Google Patents

Interaction method and device based on virtual image and computer storage medium

Info

Publication number
CN110812843A
Authority
CN
China
Prior art keywords
user
calling
material library
current time
historical behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911046918.9A
Other languages
Chinese (zh)
Other versions
CN110812843B (en)
Inventor
闫羽婷
戴世昌
张军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911046918.9A (granted as CN110812843B)
Publication of CN110812843A
Application granted
Publication of CN110812843B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/55 Controlling game characters or game objects based on the game progress
    • A63F13/56 Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
    • A63F13/85 Providing additional services to players
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/30 Features of games using an electronically generated display having two or more dimensions, characterized by output arrangements for receiving control signals generated by the game device
    • A63F2300/301 Features of games using an electronically generated display having two or more dimensions, characterized by output arrangements using an additional display connected to the game console, e.g. on the controller

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides an avatar-based interaction method, which comprises the following steps: acquiring the current time of a system and historical behavior data of a user; retrieving a corresponding interface scene according to the current time of the system, and retrieving a corresponding personalized message and avatar according to the historical behavior data of the user; and displaying the interface scene, the personalized message, and the avatar together on a display interface. The interface scene is thus switched automatically according to the current time of the system, and the avatar according to the user's historical behavior data, without any user operation. Moreover, personalized messages can be pushed to the user automatically based on the historical behavior data, enabling active interaction with the user and a more intelligent mode of human-computer interaction. The application also provides an avatar-based interaction device and a computer storage medium corresponding to the method.

Description

Interaction method and device based on virtual image and computer storage medium
Technical Field
The invention relates to the technical field of human-computer interaction, and in particular to an avatar-based interaction method and device and a computer storage medium.
Background
Human-computer interaction is one of the important functions of intelligent devices. To improve user experience, some existing intelligent devices or applications are equipped with avatars, so that the device can interact with the user to a certain extent through the avatar.
In the prior art, a number of scenes and avatars are preset; when the user triggers an interaction instruction through a virtual key or by voice, the system switches the interface scene and the avatar according to that instruction, thereby achieving interaction between the system and the user. For example, as shown in fig. 1, when the user presses "good night", the interface scene switches from a daytime scene to a night scene, and the avatar switches to one with its eyes closed.
However, this interaction mode, in which the user must switch the interface scene and the avatar manually, is inconvenient and cannot interact with the user actively. Because the types and numbers of interface scenes and avatars are limited by the number of operation instructions, the degree of intelligence of this mode is too low to improve user experience well.
Disclosure of Invention
In view of the defects of the prior art, the invention provides an avatar-based interaction method and device and a computer storage medium, to solve the problem that the avatar-based interaction modes of the prior art are insufficiently intelligent and cannot improve user experience well.
To achieve this purpose, the invention provides the following technical solutions:
In a first aspect, the invention provides an avatar-based interaction method, which comprises the following steps:
acquiring the current time of a system and historical behavior data of a user;
retrieving a corresponding interface scene according to the current time of the system, and retrieving a corresponding personalized message and avatar according to the historical behavior data of the user;
and displaying the interface scene, the personalized message, and the avatar together on a display interface.
Optionally, in the foregoing method, retrieving a corresponding interface scene according to the current time of the system includes:
generating a calling tag corresponding to the current time of the system;
matching, from a plurality of preset material library tags, the material library tag corresponding to the calling tag;
and retrieving, based on the matched material library tag, the interface scene corresponding to that tag from a material library, wherein a plurality of interface scenes are preset in the material library.
Optionally, in the foregoing method, generating a calling tag corresponding to the current time of the system includes:
determining the current season and the current time period according to the current time of the system, wherein the time periods include morning, noon, afternoon, and evening;
and setting the current season and the current time period as the calling tags.
Optionally, in the above method, retrieving the corresponding personalized message and avatar according to the historical behavior data of the user includes:
selecting, from a material library, a personalized message consistent with the user's historical behavior by analyzing the time-sensitive data in the user's historical behavior data;
and retrieving a corresponding dynamic avatar from the material library according to the keywords in the personalized message;
wherein a plurality of personalized messages and a plurality of dynamic avatars are preset in the material library, and each dynamic avatar corresponds to at least one keyword.
Optionally, in the above method, selecting a personalized message from a material library by analyzing the time-sensitive data in the user's historical behavior data includes:
obtaining the game type, game time, and game result of a game played by the user by analyzing the game data in the user's historical behavior data;
and selecting from the material library a personalized message that matches the game type, game time, and game result.
In a second aspect, the invention provides an avatar-based interaction device, comprising:
an acquisition unit, configured to acquire the current time of the system and the historical behavior data of the user;
a first retrieving unit, configured to retrieve a corresponding interface scene according to the current time of the system;
a second retrieving unit, configured to retrieve a corresponding personalized message and avatar according to the historical behavior data of the user;
and a display unit, configured to display the interface scene, the personalized message, and the avatar together on a display interface.
Optionally, in the above device, the first retrieving unit includes:
a generating unit, configured to generate a calling tag corresponding to the current time of the system;
a matching unit, configured to match, from a plurality of preset material library tags, the material library tag corresponding to the calling tag;
and a first retrieving subunit, configured to retrieve, based on the matched material library tag, the interface scene corresponding to that tag from the material library, wherein a plurality of interface scenes are preset in the material library.
Optionally, in the above device, the generating unit includes:
a determining unit, configured to determine the current season and the current time period according to the current time of the system, wherein the time periods include morning, noon, afternoon, and evening;
and a generating subunit, configured to set the current season and the current time period as the calling tags.
Optionally, in the above device, the second retrieving unit includes:
a selecting unit, configured to select, from a material library, a personalized message consistent with the user's historical behavior by analyzing the time-sensitive data in the user's historical behavior data;
and a second retrieving subunit, configured to retrieve a corresponding dynamic avatar from the material library according to the keywords in the personalized message;
wherein a plurality of personalized messages and a plurality of dynamic avatars are preset in the material library, and each dynamic avatar corresponds to at least one keyword.
Optionally, in the above device, the selecting unit includes:
an analysis unit, configured to obtain the game type, game time, and game result of a game played by the user by analyzing the game data in the user's historical behavior data;
and a selecting subunit, configured to select from the material library a personalized message that matches the game type, game time, and game result.
A third aspect of the present invention provides a computer storage medium storing a program which, when executed, implements the avatar-based interaction method described in any one of the above.
With the avatar-based interaction method and device and the computer storage medium provided by the invention, the current time of the system and the historical behavior data of the user are acquired; a corresponding interface scene is then retrieved according to the current time of the system, and a corresponding personalized message and avatar are retrieved according to the historical behavior data of the user; finally, the interface scene, the personalized message, and the avatar are displayed together on a display interface. Switching of the interface scene and the avatar thus requires no user operation: it happens automatically based on the current time of the system and the historical behavior data of the user, so the types and numbers of interface scenes and avatars are not limited by the number of operation instructions. Moreover, personalized messages can be pushed to the user automatically based on the historical behavior data, enabling active interaction with the user, realizing a highly intelligent mode of human-computer interaction, and effectively improving user experience.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description show only embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an interactive interface of a human-computer interaction method based on an avatar in the prior art;
fig. 2 is a schematic flowchart of an interaction method based on an avatar according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating an avatar-based interaction method according to another embodiment of the present invention;
FIG. 4 is a flowchart illustrating an avatar-based interaction method according to another embodiment of the present invention;
FIG. 5 is a flowchart illustrating an avatar-based interaction method according to another embodiment of the present invention;
FIG. 6 is a flowchart illustrating an avatar-based interaction method according to another embodiment of the present invention;
FIG. 7 is a diagram illustrating a display interface of an avatar-based interaction method according to another embodiment of the present invention;
fig. 8 is a schematic structural diagram of an avatar-based interactive apparatus according to another embodiment of the present invention;
fig. 9 is a schematic structural diagram of a first retrieving unit according to another embodiment of the present invention;
fig. 10 is a schematic structural diagram of a second retrieving unit according to another embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from these embodiments without creative effort fall within the protection scope of the present invention.
In this application, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
The embodiment of the invention provides an interaction method based on an avatar, as shown in fig. 2, comprising the following steps:
s201, obtaining the current time of the system and historical behavior data of the user.
The current time of the system refers to the current actual time, that is, the time used by the current user in the area, such as beijing time. It should be noted that the current time of the system may include specific year, month and day, and is not limited to only hours, minutes and seconds of the day. Alternatively, since almost all the system applications of the device system are self-provided with time applications, the current time of the system can be directly acquired from the self-provided time applications of the system.
The historical behavior data of the user refers to data generated by the behavior of the user before the acquired current time of the system. Specifically, the data may be game data generated by playing a game by the user, motion data generated by the motion of the user, geographic position change information of the user going out, voice information generated by speaking of the user, and the like, which are generated by the behavior of the user and can be acquired.
Specifically, a third-party application, such as a game, sports software, map software, or a positioning system provided by the system, may be docked via a corresponding interface protocol, so as to directly obtain game data, sports data, geographic location information, and the like of the user, which are acquired by the third-party application, from the third-party application, or obtain behavior data of the user from the third-party application by requesting the third-party application. Or the geographic position information of the user, the voice data of the user and the like can be directly obtained through the self-contained system software or hardware of the system, such as a positioning system, a microphone and the like. Of course, the current time of the system and the historical behavior data of the user may be obtained in other manners, and the present invention also belongs to the protection scope of the present invention.
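As a minimal sketch of step S201, the following Python fragment shows one way a client might gather the two inputs; the `BehaviorDataSource` protocol, the `fetch_records` method, and the 24-hour look-back window are illustrative assumptions, not part of the disclosure.

```python
import datetime
from typing import Any, Dict, List, Protocol

class BehaviorDataSource(Protocol):
    """Hypothetical interface for any app or system service (game, fitness
    software, positioning system, microphone pipeline) that can report
    user behavior records."""
    def fetch_records(self, since: datetime.datetime) -> List[Dict[str, Any]]: ...

def acquire_inputs(sources: List[BehaviorDataSource],
                   lookback_hours: int = 24):
    """Return the current system time plus recent behavior records pulled
    from every registered source."""
    now = datetime.datetime.now()  # read from the system's own clock
    since = now - datetime.timedelta(hours=lookback_hours)
    history = [rec for src in sources for rec in src.fetch_records(since)]
    return now, history
```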
S202, retrieving a corresponding interface scene according to the current time of the system, and retrieving a corresponding personalized message and avatar according to the historical behavior data of the user.
It should be noted that, to implement the invention, a plurality of interface scenes, personalized messages, and avatars need to be designed in advance. An interface scene can be understood as an interface background. Specifically, interface scenes are designed for different time periods. The time periods can include several periods within a day as well as several periods within a year, and several interface scenes can be designed for one time period, so that the selectable scenes are richer and the user does not tire of them. The designed interface scenes are then stored in a material library. For example, interface scenes sharing winter characteristics are designed for winter, with two different variants for daytime and night respectively.
Similarly, a plurality of personalized messages and a plurality of avatars can be configured for different kinds of user behavior data. For example, for the user's running data, encouraging messages such as "Keep going, you beat yesterday's run" or "Your pace improved today, keep it up" can be configured, together with the avatar of a running athlete. An avatar here refers to a figure such as a virtual character, animal, or robot; it may be dynamic or static, and planar or three-dimensional. A personalized message may be text, or text plus the corresponding speech. The configured personalized messages and avatars are stored in the material library. It should be noted that the interface scenes, personalized messages, and avatars in the material library may be updated continuously.
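The material library described above can be pictured as a small set of tagged asset records. The sketch below shows one possible in-memory layout; all class and field names are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InterfaceScene:
    asset_path: str
    tags: List[str]          # material library tags, e.g. ["winter", "evening"]
    call_count: int = 0      # how many times this scene has been retrieved

@dataclass
class PersonalizedMessage:
    text: str
    keywords: List[str]      # e.g. ["League of Legends", "evening", "defeat"]

@dataclass
class DynamicAvatar:
    asset_path: str
    keywords: List[str]      # each dynamic avatar corresponds to >= 1 keyword

@dataclass
class MaterialLibrary:
    scenes: List[InterfaceScene] = field(default_factory=list)
    messages: List[PersonalizedMessage] = field(default_factory=list)
    avatars: List[DynamicAvatar] = field(default_factory=list)
```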
Thus, once the current time of the system and the historical behavior data of the user have been acquired, the interface scene corresponding to the current time can be retrieved from the material library, and the corresponding personalized message and avatar can be retrieved according to the historical behavior data.
Optionally, in another embodiment of the present invention, as shown in fig. 3, an implementation of retrieving a corresponding interface scene according to the current time of the system in step S202 includes:
and S301, generating a calling label corresponding to the current time of the system.
It should be noted that, in the embodiment of the present invention, after a plurality of interface scenes are designed in advance for different times, a corresponding material library tag is marked on each interface scene according to the corresponding time, so as to form a mapping relationship between the interface scenes and the material library tags, and the interface scenes and the material library tags corresponding to the interface scenes are stored in the material library. The material library label may be any single character or a combination of multiple characters which can be distinguished from each other, and characters or numbers are generally used as the material library label. And different material library labels are adopted at different times. Moreover, one interface scene may correspond to a single or multiple material library tags, for example, one interface scene may correspond to one material library tag in the morning, or may correspond to three material library tags in winter, first winter, and morning. Similarly, a plurality of interface scenes can be corresponded under the same material library label.
Optionally, the calling tag and the material library tag are generated based on the same principle, so that the corresponding material library tag can be matched more quickly through the calling tag subsequently. Certainly, generating the calling tag and the material library tag through two different principles is also one of the ways, but the corresponding relationship between the tags generated through the two different principles needs to be set and maintained, so that extra workload is increased, and the implementation process of the method is too complicated.
Optionally, in another embodiment of the present invention, as shown in fig. 4, a specific implementation of step S301 includes:
S401, determining the current season and the current time period according to the current time of the system, wherein the time periods include morning, noon, afternoon, and evening.
That is, in the embodiment of the present invention, the year is divided into four seasons and the day into four periods: morning, noon, afternoon, and evening. When the current time of the system is acquired, the season and the period of the day to which it belongs are determined. Of course, this is only one option, and other time divisions may be used. For example, each season may be further split into an early and a late part, such as winter into early winter and late winter, and the day may further include periods such as early morning and dusk. Alternatively, time may be divided only within the day, without dividing the year. All of these fall within the protection scope of the present invention.
S402, setting the current season and the current time period as the calling tags.
That is, the current season and the current time period determined in step S401 are directly set as the calling tags. Because the calling tags and the material library tags are generated on the same principle, the material library tags likewise consist of seasons and time periods.
Specifically, after the current time of the system is acquired, a calling tag corresponding to it is generated on the same principle used to generate the material library tags. For example, if the acquired current time of the system is 20:00 on a day in November, the generated calling tags can be "winter" and "evening".
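A minimal sketch of S401 and S402 follows; the month-to-season boundaries and hour ranges are assumptions chosen only to agree with the November-evening example above, since the text does not fix them.

```python
import datetime
from typing import Set

def make_calling_tags(now: datetime.datetime) -> Set[str]:
    """Derive {season, period} calling tags from the current system time,
    on the same principle used for the material library tags."""
    season_by_month = {11: "winter", 12: "winter", 1: "winter",
                       2: "spring", 3: "spring", 4: "spring",
                       5: "summer", 6: "summer", 7: "summer",
                       8: "autumn", 9: "autumn", 10: "autumn"}
    season = season_by_month[now.month]
    h = now.hour
    if 5 <= h < 11:
        period = "morning"
    elif 11 <= h < 13:
        period = "noon"
    elif 13 <= h < 18:
        period = "afternoon"
    else:
        period = "evening"
    return {season, period}

# 20:00 on a November day -> {"winter", "evening"}, matching the example.
```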
S302, matching, from a plurality of preset material library tags, the material library tag corresponding to the calling tag.
In the embodiment of the invention, the interface scene corresponding to the current time of the system is retrieved by tag matching, so that the interface scene can mirror the real world; keeping the interface scene consistent with the real scene improves user experience.
Because the calling tags and the material library tags are generated on the same principle, the generated calling tags can be compared one by one with the material library tags in the material library to find the matching material library tag. The material library tag corresponding to a calling tag can be understood as the material library tag identical to that calling tag.
S303, retrieving, based on the matched material library tag, the interface scene corresponding to that tag from the material library.
Specifically, based on the matched material library tag, the interface scenes in the material library corresponding to it are determined; at least one interface scene corresponds to the matched tag.
Optionally, when the matched material library tag corresponds to several interface scenes, the scene with the smallest total number of previous retrievals may be chosen according to each scene's retrieval count. This prevents the same interface scene from being presented to the user repeatedly, which would cause aesthetic fatigue, and lets scenes newly added to the material library be shown quickly. Of course, one interface scene may also be chosen from the candidates at random or in another way.
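Steps S302 and S303 might look like the sketch below, reusing the `MaterialLibrary` layout from the earlier sketch; treating a scene as matching when it carries every calling tag, and the least-retrieved-first rule, are assumptions consistent with the text.

```python
from typing import Set

def retrieve_scene(library: MaterialLibrary,
                   calling_tags: Set[str]) -> InterfaceScene:
    """Match calling tags against material library tags, then return the
    matching scene retrieved least often, to avoid aesthetic fatigue.
    Assumes a non-empty library."""
    candidates = [s for s in library.scenes if calling_tags <= set(s.tags)]
    if not candidates:
        # fall back to scenes sharing at least one tag with the calling tags
        candidates = [s for s in library.scenes if calling_tags & set(s.tags)]
    scene = min(candidates, key=lambda s: s.call_count)
    scene.call_count += 1  # remember this retrieval for next time
    return scene
```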
Optionally, in another embodiment of the present invention, an implementation of retrieving the corresponding personalized message and avatar according to the historical behavior data of the user in step S202 is shown in fig. 5, and includes:
s501, selecting a personalized message which is consistent with the historical behavior of the user from a material library by analyzing the time-sensitive data in the historical behavior data of the user.
The data with timeliness refers to data closely related to time, i.e., data with strong timeliness. Generally, the data with poor timeliness is data generated by performing behaviors which are relatively common and high in frequency and do not have much meaning for the user, and personalized messages do not need to be fed back for the historical behavior data. Therefore, the embodiment of the invention only analyzes the time-sensitive data in the historical behavior data of the user and selects the personalized message, thereby avoiding uninterruptedly pushing meaningless messages to the user.
Alternatively, the type of the user's behavior may be determined by analyzing historical behavior data of the user, and specific data when performing the behavior, such as pace, heartbeat, distance, etc. of running and running. And then, according to the behavior type of the user and the specific data when the behavior is specifically carried out, directly matching personalized messages which accord with the historical behavior data of the user from the material library, or comparing the data which carry out the same type of behavior at the last time, and according to the comparison result and the current behavior data, matching personalized messages which accord with the comparison result and the historical behavior data of the user from the material library.
Optionally, in another embodiment of the present invention, when the user's behavior data is game data, a specific implementation of step S501 is provided, as shown in fig. 6, including:
s601, obtaining the game type, the game time and the game result of the game played by the user by analyzing the game data in the historical behavior data of the user.
That is, when the acquired historical behavior data of the user includes game data recorded when the user played the game previously, the type of the game played by the user at this time, the game time and the game result are determined by analyzing the historical behavior data of the user. The game result can be a completed task or a stage, or a battle performance. Of course, besides determining the type and the game result of the game played by the user, the total duration of the game may also be determined.
S602, selecting a personalized message which is in accordance with the game type, the game time and the game result from the material library.
Specifically, a personalized message meeting the game type and the game result can be selected from the material library according to the corresponding relation between the keyword of the personalized message and the game type and the game result. For example, if a user has just played a hero league at 20 o' clock and lost a game, then depending on the game type: hero alliance, time of play: evening, and game outcome: if the user fails, selecting a personalized message as follows: summoning the teacher, not taking care of the mind, early-point information, and continuing to refuel in the sky! Need to help I stay at all times!
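A sketch of S601 and S602, again reusing the `MaterialLibrary` layout from earlier; the record field names and the pick-the-most-overlapping-message rule are assumptions for illustration.

```python
from typing import Dict

def select_game_message(library: MaterialLibrary,
                        game_record: Dict[str, str]) -> PersonalizedMessage:
    """Extract game type, play time, and result from one game record and
    return the stored message whose keywords cover the most of them."""
    facts = {game_record["game_type"],   # e.g. "League of Legends"
             game_record["period"],      # e.g. "evening"
             game_record["result"]}      # e.g. "defeat"
    return max(library.messages,
               key=lambda m: len(facts & set(m.keywords)))
```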
S502, retrieving a corresponding dynamic avatar from the material library according to the keywords in the personalized message.
A plurality of personalized messages and a plurality of dynamic avatars are preset in the material library, and each dynamic avatar corresponds to at least one keyword.
That is to say, in the embodiment of the present invention, the avatar is retrieved based on a preset correspondence between the keywords in personalized messages and the avatars. Because the personalized message and the avatar are finally presented to the user with the avatar speaking the selected message, the embodiment of the invention uses a dynamic avatar and retrieves it by the keywords in the personalized message, so that the retrieved avatar's appearance, such as its actions and expressions, better matches the semantics of the retrieved message, making the effect presented to the user more realistic.
Therefore, in the embodiment of the present invention, when the personalized messages and dynamic avatars are preset, the keywords of the personalized messages and of the dynamic avatars must also be set, with each dynamic avatar corresponding to at least one keyword.
One personalized message may contain several keywords, and different personalized messages may share a keyword, so the relationship can be understood simply as: each dynamic avatar corresponds to at least one personalized message, and each personalized message corresponds to at least one dynamic avatar. When a personalized message has several keywords, the dynamic avatar to retrieve can be determined by its total relevance to all of the keywords.
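The "total relevance" rule might be realized as in the sketch below, where relevance is simply the number of shared keywords; a real system could weight keywords instead. The types come from the earlier `MaterialLibrary` sketch.

```python
def retrieve_avatar(library: MaterialLibrary,
                    message: PersonalizedMessage) -> DynamicAvatar:
    """Score every dynamic avatar by how many of the message's keywords
    it covers, and return the highest-scoring one."""
    return max(library.avatars,
               key=lambda a: len(set(a.keywords) & set(message.keywords)))
```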
S203, displaying the interface scene, the personalized message, and the avatar together on the display interface.
Specifically, the retrieved interface scene, personalized message, and dynamic avatar are combined and displayed on the display interface, with the interface scene as the background of the display interface and the avatar presented as speaking the personalized message.
Optionally, as shown in fig. 7, the personalized message is displayed in the form of a speech bubble: the message appears in a speech bubble positioned near the avatar's mouth. It should be noted that, if the personalized message includes both text and speech, the speech is played while the text is displayed in the bubble.
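Putting the pieces together, a display step could look like the sketch below; the `display` handle and its methods are hypothetical stand-ins for whatever UI toolkit the device uses.

```python
def render(display, scene: InterfaceScene,
           message: PersonalizedMessage, avatar: DynamicAvatar) -> None:
    """Compose the final frame: scene as background, dynamic avatar in
    front, and the personalized message in a speech bubble near the
    avatar's mouth."""
    display.set_background(scene.asset_path)
    display.show_avatar(avatar.asset_path)
    display.show_speech_bubble(message.text, anchor="avatar_mouth")
    audio = getattr(message, "audio", None)  # optional speech clip
    if audio is not None:
        display.play_audio(audio)
```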
With the avatar-based interaction method provided by the embodiment of the invention, the current time of the system and the historical behavior data of the user are acquired; a corresponding interface scene is then retrieved according to the current time of the system, and a corresponding personalized message and avatar are retrieved according to the historical behavior data; finally, the interface scene, the personalized message, and the avatar are displayed together on a display interface. Switching of the interface scene and the avatar requires no user operation: it happens automatically based on the current time of the system and the historical behavior data of the user, so the types and numbers of interface scenes and avatars are not limited by the number of operation instructions. Moreover, personalized messages can be pushed to the user automatically based on the historical behavior data, enabling active interaction with the user, realizing a highly intelligent mode of human-computer interaction, and effectively improving user experience.
Another embodiment of the present invention provides an avatar-based interactive apparatus, as shown in fig. 8, including:
an obtaining unit 801, configured to obtain a current time of the system and historical behavior data of the user.
It should be noted that, the specific working process of the obtaining unit 801 may refer to step S201 in the foregoing method embodiment accordingly, and details are not described here again.
A first retrieving unit 802, configured to retrieve a corresponding interface scene according to the current time of the system.
It should be noted that, the specific working process of the first retrieving unit 802 may refer to step S202 in the foregoing method embodiment accordingly, and details are not described here again.
And a second retrieving unit 803, configured to retrieve the corresponding personalized message and the corresponding avatar according to the historical behavior data of the user.
It should be noted that, the specific working process of the second retrieving unit 803 may also refer to step S202 in the foregoing method embodiment, which is not described herein again.
A display unit 804, configured to display the interface scene, the personalized message, and the avatar on a display interface together.
It should be noted that, the specific working process of the display unit 804 may refer to step S203 in the foregoing method embodiment accordingly, which is not described herein again.
Optionally, in another embodiment of the present invention, the first retrieving unit, as shown in fig. 9, includes:
a generating unit 901, configured to generate an invoking tag corresponding to the current time of the system.
It should be noted that, the specific working process of the generating unit 901 may refer to step S301 in the foregoing method embodiment accordingly, which is not described herein again.
The matching unit 902 is configured to match a material library tag corresponding to the called tag from a plurality of preset material library tags.
It should be noted that, the specific working process of the matching unit 902 may refer to step S302 in the foregoing method embodiment accordingly, and is not described herein again.
And the first calling subunit 903 is used for calling the interface scene corresponding to the material library label from the material library based on the matched material library label.
And a plurality of interface scenes are preset in the material library.
It should be noted that, the specific working process of the first retrieving subunit 903 may refer to step S303 in the foregoing method embodiment accordingly, which is not described herein again.
Optionally, in another embodiment of the present invention, the generating unit includes:
the determining unit is used for determining the current season and the current time period according to the current time of the system; wherein the time periods include morning, noon, afternoon, and evening.
It should be noted that, the specific working process of the determining unit may refer to step S401 in the foregoing method embodiment accordingly, and details are not described here again.
And the generation subunit is used for setting the current season and the current time period as calling labels.
It should be noted that, the step S402 in the above method embodiment may be referred to in the specific working process of generating the sub-unit accordingly, and details are not described here again.
Optionally, in another embodiment of the present invention, the second retrieving unit, as shown in fig. 10, includes:
a selecting unit 1001, configured to select a personalized message that matches the historical behavior of the user from a material library by analyzing data with timeliness in the historical behavior data of the user.
It should be noted that, the specific working process of the selecting unit 1001 may refer to step S501 in the foregoing method embodiment accordingly, and details are not repeated here.
And a second retrieving subunit 1002, configured to retrieve, according to the keyword in the personalized message, a corresponding dynamic avatar from the material library.
Wherein, a plurality of personalized messages and a plurality of dynamic virtual images are preset in the material library; each dynamic virtual character at least corresponds to one keyword.
It should be noted that, the specific working process of the second retrieving subunit 1002 may refer to step S502 in the foregoing method embodiment accordingly, and details are not described here again.
Optionally, in another embodiment of the present invention, the selecting unit includes:
and the analysis unit is used for obtaining the game type, the game time and the game result of the game played by the user by analyzing the game data in the historical behavior data of the user.
It should be noted that, the specific working process of the analysis unit may refer to step S601 in the foregoing method embodiment accordingly, which is not described herein again.
And the selecting subunit is used for selecting a personalized message which is in accordance with the game type, the game time and the game result from the material library.
It should be noted that, the specific working process of selecting the sub-unit may refer to step S602 in the foregoing method embodiment accordingly, which is not described herein again.
According to the interaction device based on the virtual image, the current time of a system and historical behavior data of a user are obtained through an obtaining unit, then a first calling unit calls a corresponding interface scene according to the current time of the system, a second calling unit calls a corresponding personalized message and the virtual image according to the historical behavior data of the user, and finally a display unit displays the interface scene, the personalized message and the virtual image on a display interface together. The switching between the interface scene and the virtual image is realized without the operation of the user, but the automatic switching between the interface scene and the virtual image is realized through the current time of the system and the historical behavior data of the user, so the types and the number of the interface scene and the virtual image are not limited by the number of the operation instructions. And moreover, personalized messages can be automatically pushed to the user based on historical behavior data of the user, so that the user can be actively interacted with the personalized messages, a highly intelligent man-machine interaction mode is realized, and the user experience is effectively improved.
Another embodiment of the present invention provides a computer storage medium storing a program which, when executed, implements the avatar-based interaction method described in any one of the above method embodiments.
Those skilled in the art will further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To illustrate this interchangeability of hardware and software clearly, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An avatar-based interaction method, comprising:
acquiring the current time of a system and historical behavior data of a user;
retrieving a corresponding interface scene according to the current time of the system, and retrieving a corresponding personalized message and avatar according to the historical behavior data of the user;
and displaying the interface scene, the personalized message, and the avatar together on a display interface.
2. The method of claim 1, wherein retrieving the corresponding interface scene according to the current time of the system comprises:
generating a calling tag corresponding to the current time of the system;
matching, from a plurality of preset material library tags, the material library tag corresponding to the calling tag;
and retrieving, based on the matched material library tag, the interface scene corresponding to that tag from a material library, wherein a plurality of interface scenes are preset in the material library.
3. The method of claim 2, wherein generating the calling tag corresponding to the current time of the system comprises:
determining the current season and the current time period according to the current time of the system, wherein the time periods include morning, noon, afternoon, and evening;
and setting the current season and the current time period as the calling tags.
4. The method of claim 1, wherein retrieving the corresponding personalized message and avatar according to the historical behavior data of the user comprises:
selecting, from a material library, a personalized message consistent with the user's historical behavior by analyzing the time-sensitive data in the user's historical behavior data;
and retrieving a corresponding dynamic avatar from the material library according to the keywords in the personalized message;
wherein a plurality of personalized messages and a plurality of dynamic avatars are preset in the material library, and each dynamic avatar corresponds to at least one keyword.
5. The method of claim 4, wherein selecting a personalized message from a material library by analyzing the time-sensitive data in the user's historical behavior data comprises:
obtaining the game type, game time, and game result of a game played by the user by analyzing the game data in the user's historical behavior data;
and selecting from the material library a personalized message that matches the game type, game time, and game result.
6. An avatar-based interaction device, comprising:
an acquisition unit, configured to acquire the current time of the system and the historical behavior data of the user;
a first retrieving unit, configured to retrieve a corresponding interface scene according to the current time of the system;
a second retrieving unit, configured to retrieve a corresponding personalized message and avatar according to the historical behavior data of the user;
and a display unit, configured to display the interface scene, the personalized message, and the avatar together on a display interface.
7. The device of claim 6, wherein the first retrieving unit comprises:
a generating unit, configured to generate a calling tag corresponding to the current time of the system;
a matching unit, configured to match, from a plurality of preset material library tags, the material library tag corresponding to the calling tag;
and a first retrieving subunit, configured to retrieve, based on the matched material library tag, the interface scene corresponding to that tag from the material library, wherein a plurality of interface scenes are preset in the material library.
8. The device of claim 7, wherein the generating unit comprises:
a determining unit, configured to determine the current season and the current time period according to the current time of the system, wherein the time periods include morning, noon, afternoon, and evening;
and a generating subunit, configured to set the current season and the current time period as the calling tags.
9. The device of claim 6, wherein the second retrieving unit comprises:
a selecting unit, configured to select, from a material library, a personalized message consistent with the user's historical behavior by analyzing the time-sensitive data in the user's historical behavior data;
and a second retrieving subunit, configured to retrieve a corresponding dynamic avatar from the material library according to the keywords in the personalized message;
wherein a plurality of personalized messages and a plurality of dynamic avatars are preset in the material library, and each dynamic avatar corresponds to at least one keyword.
10. A computer storage medium storing a program which, when executed, implements the avatar-based interaction method of any one of claims 1 to 5.
CN201911046918.9A 2019-10-30 2019-10-30 Interactive method and device based on virtual image and computer storage medium Active CN110812843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911046918.9A CN110812843B (en) 2019-10-30 2019-10-30 Interactive method and device based on virtual image and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911046918.9A CN110812843B (en) 2019-10-30 2019-10-30 Interactive method and device based on virtual image and computer storage medium

Publications (2)

Publication Number Publication Date
CN110812843A true CN110812843A (en) 2020-02-21
CN110812843B CN110812843B (en) 2023-09-15

Family

ID=69551553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911046918.9A Active CN110812843B (en) 2019-10-30 2019-10-30 Interactive method and device based on virtual image and computer storage medium

Country Status (1)

Country Link
CN (1) CN110812843B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040128389A1 (en) * 2002-12-31 2004-07-01 Kurt Kopchik Method and apparatus for wirelessly establishing user preference settings on a computer
CN104639725A (en) * 2013-11-08 2015-05-20 腾讯科技(深圳)有限公司 Interface switching method and device
CN106502705A (en) * 2016-11-04 2017-03-15 乐视控股(北京)有限公司 Method and its device of application program theme are set
CN108510437A (en) * 2018-04-04 2018-09-07 科大讯飞股份有限公司 A kind of virtual image generation method, device, equipment and readable storage medium storing program for executing

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112034989A (en) * 2020-09-04 2020-12-04 华人运通(上海)云计算科技有限公司 Intelligent interaction system
CN112115231A (en) * 2020-09-17 2020-12-22 中国传媒大学 Data processing method and device
CN112115231B (en) * 2020-09-17 2024-06-25 中国传媒大学 Data processing method and device
CN112988022A (en) * 2021-04-22 2021-06-18 北京航天驭星科技有限公司 Virtual calendar display method and device, electronic equipment and computer readable medium
CN114363302A (en) * 2021-12-14 2022-04-15 北京云端智度科技有限公司 Method for improving streaming media transmission quality by using layering technology
CN114816625A (en) * 2022-04-08 2022-07-29 郑州铁路职业技术学院 Method and device for designing interface of automatic interactive system
CN114816625B (en) * 2022-04-08 2023-06-16 郑州铁路职业技术学院 Automatic interaction system interface design method and device
CN115841354A (en) * 2022-12-27 2023-03-24 华北电力大学 Electric vehicle charging pile maintenance evaluation method and system based on block chain
CN115841354B (en) * 2022-12-27 2023-09-12 华北电力大学 Electric vehicle charging pile maintenance evaluation method and system based on block chain
CN116627261A (en) * 2023-07-25 2023-08-22 安徽淘云科技股份有限公司 Interaction method, device, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN110812843B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN110812843B (en) Interactive method and device based on virtual image and computer storage medium
US10515655B2 (en) Emotion type classification for interactive dialog system
CN105345818B (en) Band is in a bad mood and the 3D video interactives robot of expression module
CN1312554C (en) Proactive user interface
US10664741B2 (en) Selecting a behavior of a virtual agent
US20190057298A1 (en) Mapping actions and objects to tasks
US20170277993A1 (en) Virtual assistant escalation
CN111309886B (en) Information interaction method and device and computer readable storage medium
CN104461525B (en) A kind of intelligent consulting platform generation system that can customize
KR20210110620A (en) Interaction methods, devices, electronic devices and storage media
US10445115B2 (en) Virtual assistant focused user interfaces
CN107704169B (en) Virtual human state management method and system
US20230206912A1 (en) Digital assistant control of applications
US11003860B2 (en) System and method for learning preferences in dialogue personalization
CN107480766B (en) Method and system for content generation for multi-modal virtual robots
WO2023030010A1 (en) Interaction method, and electronic device and storage medium
CN109063662A (en) Data processing method, device, equipment and storage medium
CA3045132C (en) Communication with augmented reality virtual agents
CN115396738A (en) Video playing method, device, equipment and storage medium
CN112306321A (en) Information display method, device and equipment and computer readable storage medium
DE102023102142A1 (en) CONVERSATIONAL AI PLATFORM WITH EXTRAACTIVE QUESTION ANSWER
JP2013175066A (en) Method, system, server device, terminal device, and program for distributing data constituting three-dimensional figure
CN116301329A (en) Intelligent device active interaction method, device, equipment and storage medium
CN113297414A (en) Management method and device of music gift, medium and computing equipment
CN109726267A (en) Story recommended method and device for Story machine

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40022445

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant