CN114237396B - Action adjustment method, action adjustment device, electronic equipment and readable storage medium - Google Patents

Action adjustment method, action adjustment device, electronic equipment and readable storage medium

Info

Publication number
CN114237396B
CN114237396B (application CN202111534748.6A)
Authority
CN
China
Prior art keywords
control information
action
target
virtual
adjustment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111534748.6A
Other languages
Chinese (zh)
Other versions
CN114237396A (en)
Inventor
王骁玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202111534748.6A
Publication of CN114237396A
Application granted
Publication of CN114237396B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/014 Hand-worn input/output arrangements, e.g. data gloves
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/218 Source of audio or video content, e.g. local disk arrays
    • H04N 21/2187 Live feed
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure provides an action adjustment method, an action adjustment device, an electronic device, and a storage medium. The action adjustment method includes: acquiring original control information of each virtual character, where the original control information of each virtual character indicates that a plurality of virtual characters are to make the same target action; when the maximum difference between the target actions of the plurality of virtual characters is greater than a first preset threshold, adjusting the original control information of at least one virtual character to obtain adjustment control information; and driving the corresponding virtual character to make an adjustment action based on the adjustment control information, and/or driving the corresponding virtual character to make the target action based on the original control information, where the difference between any two adjustment actions, between any two target actions, or between any target action and any adjustment action is smaller than a second preset threshold, and the second preset threshold is smaller than the first preset threshold. In this way, the consistency of the actions of the plurality of virtual characters can be improved.

Description

Action adjustment method, action adjustment device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and apparatus for adjusting actions, an electronic device, and a storage medium.
Background
In recent years, live streaming with virtual characters has taken an increasing share of video live-streaming services. In virtual character live streaming, a specific virtual character replaces the real image of the host in the live video; specifically, control signals carrying the motion and expression data of an actor (the person behind the character) are acquired through external hardware devices and are used to drive the virtual character to act.
However, when multiple virtual characters need to make the same action, differences often arise among their actions for various reasons (such as abnormal hardware devices or discrepancies between control signals), which degrades the users' viewing experience; in group performances in particular, such action differences seriously affect the visual effect.
Disclosure of Invention
Embodiments of the present disclosure provide at least an action adjustment method, an action adjustment device, an electronic device, and a storage medium.
An embodiment of the present disclosure provides an action adjustment method, which includes the following steps:
acquiring original control information of each virtual character, where the original control information of each virtual character indicates that a plurality of virtual characters are to make the same target action;
when the maximum difference between the target actions of the plurality of virtual characters is greater than a first preset threshold, adjusting the original control information of at least one virtual character to obtain adjustment control information;
driving a corresponding virtual character to make an adjustment action based on the adjustment control information, and/or driving the corresponding virtual character to make the target action based on the original control information; where the difference between any two adjustment actions, between any two target actions, or between any target action and any adjustment action is smaller than a second preset threshold, and the second preset threshold is smaller than the first preset threshold.
In a possible implementation, the adjusting the original control information of at least one virtual character to obtain adjustment control information includes:
determining a target position of each virtual character in the 3D scene according to the current position of each virtual character in the 3D scene and the original control information;
determining a target virtual character to be adjusted from the plurality of virtual characters based on the target position of each virtual character in the 3D scene;
and adjusting the original control information of the target virtual character to obtain the adjustment control information.
In a possible implementation, the adjusting the original control information of the target virtual character to obtain the adjustment control information includes:
adjusting the control information of the target virtual character that indicates foot movement, to obtain the adjustment control information for the feet of the target virtual character.
In a possible implementation, the target action includes a gesture action and/or a posture action, and the adjusting the original control information of at least one virtual character to obtain adjustment control information includes:
determining an adjustment action matched with the target action from a preset action library based on the target action;
and adjusting the control information of each virtual character based on the control information of the adjustment action to obtain the adjustment control information.
In a possible implementation, the target action includes a gesture action and/or a posture action, and the adjusting the original control information of at least one virtual character to obtain adjustment control information includes:
determining a reference gesture action and/or a reference posture from the plurality of virtual characters;
and adjusting the original control information based on the control information of the reference gesture action and/or the reference posture to obtain the adjustment control information.
In a possible implementation, the determining a reference gesture action and/or a reference posture from the plurality of virtual characters includes:
determining a reference gesture action and/or a reference posture from the plurality of virtual characters based on the standard degree of the target action of each virtual character.
In one possible implementation, the standard degree of the target action of each virtual character is determined using an artificial intelligence method.
An embodiment of the present disclosure provides an action adjustment device, comprising:
an acquisition module, used for acquiring original control information of each virtual character, where the original control information of each virtual character indicates that a plurality of virtual characters are to make the same target action;
an adjustment module, used for adjusting the original control information of at least one virtual character to obtain adjustment control information when the maximum difference between the target actions of the plurality of virtual characters is greater than a first preset threshold;
a driving module, used for driving the corresponding virtual character to make an adjustment action based on the adjustment control information, and/or driving the corresponding virtual character to make the target action based on the original control information; where the difference between any two adjustment actions, between any two target actions, or between any target action and any adjustment action is smaller than a second preset threshold, and the second preset threshold is smaller than the first preset threshold.
In one possible embodiment, the adjustment module is specifically configured to:
determining a target position of each virtual character in the 3D scene according to the current position of each virtual character in the 3D scene and the original control information;
determining a target virtual character to be adjusted from the plurality of virtual characters based on the target position of each virtual character in the 3D scene;
and adjusting the original control information of the target virtual character to obtain the adjustment control information.
In one possible embodiment, the adjustment module is specifically configured to:
adjusting the control information of the target virtual character that indicates foot movement, to obtain the adjustment control information for the feet of the target virtual character.
In a possible implementation, the target action includes a gesture action and/or a posture action, and the adjustment module is specifically configured to:
determining an adjustment action matched with the target action from a preset action library based on the target action;
and adjusting the control information of each virtual character based on the control information of the adjustment action to obtain the adjustment control information.
In a possible implementation, the target action includes a gesture action and/or a posture action, and the adjustment module is specifically configured to:
determining a reference gesture action and/or a reference posture from the plurality of virtual characters;
and adjusting the original control information based on the control information of the reference gesture action and/or the reference posture to obtain the adjustment control information.
In one possible embodiment, the adjustment module is specifically configured to:
determining a reference gesture action and/or a reference posture from the plurality of virtual characters based on the standard degree of the target action of each virtual character.
In one possible embodiment, the adjustment module is specifically configured to:
determining the standard degree of the target action of each virtual character using an artificial intelligence method.
The embodiment of the disclosure provides an electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication over the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the action adjustment method as described above.
The disclosed embodiments provide a computer readable storage medium having a computer program stored thereon, which when executed by a processor performs the action adjustment method as described above.
In the action adjustment method, device, electronic device, and storage medium provided in the embodiments, when it is determined that a plurality of virtual characters need to make the same action, if the maximum difference between the target actions of the plurality of virtual characters is greater than the first preset threshold, that is, if the action of at least one virtual character differs greatly from those of the other virtual characters, the original control information of that at least one virtual character is adjusted so that the difference between the actions of the plurality of virtual characters is smaller than the second preset threshold. The action differences among the plurality of virtual characters thus become small, which achieves the effect of unifying the actions of the plurality of virtual characters and improves the users' viewing experience.
The foregoing objects, features and advantages of the disclosure will be more readily apparent from the following detailed description of the preferred embodiments taken in conjunction with the accompanying drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the embodiments are briefly described below. The drawings are incorporated in and constitute a part of the specification; they show embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings illustrate only certain embodiments of the present disclosure and therefore should not be regarded as limiting its scope; a person of ordinary skill in the art may derive other related drawings from these drawings without inventive effort.
FIG. 1 illustrates a flow chart of a first method of action adjustment provided by an embodiment of the present disclosure;
FIG. 2 illustrates a schematic diagram of a relationship between a virtual character and an actor provided by an embodiment of the present disclosure;
FIG. 3 illustrates a schematic diagram of a current location of each virtual character in a 3D scene provided by embodiments of the present disclosure;
FIG. 4 illustrates a schematic diagram of a target location of each virtual character in a 3D scene provided by an embodiment of the present disclosure;
FIG. 5 illustrates a schematic diagram of the expected location of various virtual characters in a 3D scene provided by embodiments of the present disclosure;
FIG. 6 illustrates a flow chart of a method for adjusting at least one virtual character provided by an embodiment of the present disclosure;
FIG. 7 illustrates a schematic diagram of a target gesture action and an adjustment gesture action in an action library provided by an embodiment of the present disclosure;
FIG. 8 illustrates another method flow diagram for adjusting at least one virtual character provided by embodiments of the present disclosure;
FIG. 9 illustrates a schematic structural diagram of an action adjustment device according to an embodiment of the present disclosure;
FIG. 10 illustrates a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. The components of the embodiments of the present disclosure, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the disclosure, as claimed, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be made by those skilled in the art based on the embodiments of this disclosure without making any inventive effort, are intended to be within the scope of this disclosure.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, both A and B exist, or B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, including at least one of A, B and C may mean including any one or more elements selected from the set consisting of A, B and C.
Virtual character live streaming replaces the real image of the host with a specific virtual character for video live streaming. Specifically, control signals carrying the motion and expression data of an actor (the person behind the character) can be acquired through external hardware devices and used to drive the virtual character to act.
Research has shown that when multiple virtual characters need to make the same action, action differences among the virtual characters often arise for various reasons (such as abnormal hardware devices or discrepancies between control information), which degrades the users' viewing experience; in group performances in particular, such action differences seriously affect the visual effect. Therefore, how to improve the consistency of the actions of multiple virtual characters in specific scenarios (such as dancing) is a problem to be solved.
Based on the above study, the present disclosure provides an action adjustment method, including: acquiring original control information of each virtual character, where the original control information of each virtual character indicates that a plurality of virtual characters are to make the same target action; when the maximum difference between the target actions of the plurality of virtual characters is greater than a first preset threshold, adjusting the original control information of at least one virtual character to obtain adjustment control information; and driving a corresponding virtual character to make an adjustment action based on the adjustment control information, and/or driving the corresponding virtual character to make the target action based on the original control information; where the difference between any two adjustment actions, between any two target actions, or between any target action and any adjustment action is smaller than a second preset threshold, and the second preset threshold is smaller than the first preset threshold.
In the embodiments of the present disclosure, when it is determined that a plurality of virtual characters need to make the same action, if the maximum difference between the target actions of the plurality of virtual characters is greater than the first preset threshold, that is, if at least one virtual character's action differs greatly from those of the other virtual characters, the original control information of that at least one virtual character is adjusted so that the action difference among the plurality of virtual characters is smaller than the second preset threshold. The action differences among the plurality of virtual characters thus become small, which achieves the effect of unifying the actions of the plurality of virtual characters and improves the users' viewing experience.
In some implementations, the action adjustment method provided by the embodiments of the present disclosure may be applied to an electronic device. The electronic device is configured to run a 3D rendering environment, which includes 3D scene information; the 3D scene information is used to generate a 3D scene after rendering and includes at least one piece of virtual character information and at least one virtual lens, where the virtual character information is used to generate a virtual character after rendering, and the virtual character is driven by control information captured by a motion capture device.
The 3D rendering environment may be a 3D engine running in the electronic device, capable of generating image information from the data to be rendered based on one or more perspectives. The virtual character information refers to a character model existing in the 3D engine, from which a corresponding virtual character can be generated after rendering. The virtual characters may include virtual human characters, virtual animal characters, virtual cartoon characters, and the like, which are not limited herein.
The 3D scene information may run on a computer CPU (Central Processing Unit), GPU (Graphics Processing Unit), and memory, and contains gridded model information and map texture information. Accordingly, the virtual character data includes, by way of example but without limitation, gridded model data, voxel data, and map texture data, or a combination thereof. The mesh includes, but is not limited to, a triangular mesh, a quadrilateral mesh, other polygonal meshes, or combinations thereof.
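Purely as an illustrative sketch, the 3D scene information described above could be organized along the following lines, with gridded model data, map textures, a virtual lens, and per-character controlled points; the class and field names are assumptions and are not prescribed by the present disclosure.
```python
from dataclasses import dataclass, field

@dataclass
class MeshModel:
    # Gridded (e.g. triangular-mesh) model data for one virtual character.
    vertices: list          # list of (x, y, z) tuples
    faces: list             # list of vertex-index triples
    texture_map: str        # identifier of the map texture

@dataclass
class VirtualCharacter:
    name: str
    model: MeshModel
    # Controlled feature points (skeleton key points) driven by captured control info.
    controlled_points: dict = field(default_factory=dict)  # joint name -> (x, y, z)
    position: tuple = (0.0, 0.0, 0.0)                       # current position in the scene

@dataclass
class Scene3D:
    characters: list        # list of VirtualCharacter
    virtual_lens: dict      # camera parameters, e.g. {"fov": 60}
```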
The action adjustment method can be applied to an electronic device, meaning that the execution body of the action adjustment method is generally an electronic device with a certain computing capability. The electronic device may be a server, where the server may be an independent physical server, a server cluster or distributed system formed by multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial intelligence platforms. In other embodiments, the electronic device may also be a terminal device (e.g., a mobile phone, a notebook computer, an in-vehicle device, etc.) or another processing device. Further, the action adjustment method may be implemented by a processor invoking computer-readable instructions stored in a memory.
It should be noted that, in some embodiments, video data may also be generated based on the 3D scene information, where the video data comprises a plurality of video frames. It can be understood that the video data generated based on the 3D scene information may be displayed locally; for example, if the electronic device is provided with a display screen or is externally connected to a display device, the generated video data may be played locally, may be recorded to form recorded-broadcast video, or may form a live video stream for live broadcasting.
The following describes in detail the action adjustment method provided by the embodiment of the present disclosure.
Referring to fig. 1, a flowchart of a first action adjustment method according to an embodiment of the present disclosure is shown, where the action adjustment method includes the following steps S101 to S103:
s101, acquiring original control information of each virtual character, wherein the original control information of each virtual character indicates a plurality of virtual characters to make the same target action.
For example, referring to fig. 2, a plurality of virtual characters may exist in the 3D scene. In this embodiment, a virtual character A and a virtual character B are taken as an example, where each virtual character corresponds to an actor. As shown in fig. 2, virtual character A corresponds to actor 10 and virtual character B corresponds to actor 20, so the motion data of actor 10 can be captured by the motion capture device worn by actor 10, and virtual character A can be driven to make the corresponding action according to the obtained motion data; similarly, the motion data of actor 20 can be captured by the motion capture device worn by actor 20 to drive virtual character B to make the corresponding action. In other embodiments, the number of virtual characters may be greater, for example, 3, 5, or 10, which is not limited herein.
The motion capture device may include a garment worn on the actor's body, a glove worn on the actor's hand, and the like, where the garment is used to capture the actor's limb movements and the glove is used to capture the actor's hand movements. Specifically, the motion capture device includes a plurality of feature points to be identified, which may correspond to key points of the actor's skeleton. For example, feature points may be set at the positions of the motion capture device corresponding to the joints of the actor's skeleton (such as the knee joints, elbow joints, and finger joints); these feature points may be made of a specific material (such as a nanomaterial), and the position information of the feature points can then be acquired through a camera to obtain the control information.
Accordingly, in order to drive the virtual character, the virtual character includes controlled feature points matched with the plurality of feature points to be identified; for example, the feature point to be identified at the actor's elbow joint is matched with the controlled point at the virtual character's elbow joint. That is, there is a one-to-one correspondence between the actor's skeleton key points and the virtual character's skeleton key points, so after the control information of the feature point at the actor's elbow joint is obtained, the corresponding change of the virtual character's elbow joint can be driven, and the action change of the virtual character is then formed by the changes of the plurality of controlled points.
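The one-to-one mapping between the actor's skeleton key points and the virtual character's controlled points can be pictured with the following minimal sketch; the joint names and the function are hypothetical and only illustrate the correspondence described above.
```python
# Hypothetical one-to-one mapping: actor feature point -> character controlled point.
JOINT_MAPPING = {
    "actor_elbow_left": "char_elbow_left",
    "actor_elbow_right": "char_elbow_right",
    "actor_knee_left": "char_knee_left",
    "actor_knee_right": "char_knee_right",
}

def drive_character(character_points: dict, captured_points: dict) -> dict:
    """Apply captured actor joint positions to the matching controlled points."""
    for actor_joint, char_joint in JOINT_MAPPING.items():
        if actor_joint in captured_points:
            character_points[char_joint] = captured_points[actor_joint]
    return character_points

# Example: the camera reports new joint positions for one frame.
frame = {"actor_elbow_left": (0.31, 1.22, 0.05), "actor_knee_left": (0.28, 0.48, 0.02)}
points = drive_character({}, frame)
```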
In other embodiments, the motion capture device may also be a camera, i.e., by capturing relevant video of the actor's limb motion and/or facial expression motion, control information for the virtual character may be determined.
Illustratively, the original control information includes limb motion data and/or facial expression data of the actor acquired by the motion capture device, where the limbs include, but are not limited to, various parts of the actor's body, such as the head, arms, palms, legs, or feet.
It can be understood that after the original control information of a virtual character is obtained, the target action indicated by the original control information can be determined from it. For example, the action of virtual character A may be determined to be a clapping action according to the original control information of virtual character A, and the action of virtual character B may be determined to be a walking action according to the original control information of virtual character B, which indicates that the target actions made by virtual character A and virtual character B are different.
However, in some specific scenarios (such as a group dance scenario), multiple virtual characters may be required to make the same target action. Specifically, the multiple virtual characters may be all of the virtual characters existing in the 3D scene, or only some of them; for example, if there are five virtual characters in the 3D scene, the multiple virtual characters may be all five of them, or three or two of the five.
Here, "the same target action" means that the target actions made by the multiple virtual characters belong to the same class; they are not necessarily identical in a strict sense. For example, referring to fig. 2 again, if it is determined from the original control information of each virtual character that virtual character A and virtual character B need to make the posture shown in fig. 2 (including the hand action and the leg action) at the same time, it can be determined that the multiple virtual characters need to make the same target action.
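A hedged sketch of this "same class" check, assuming each character's original control information can be reduced to an action label such as "clap" or "walk"; the label field and function names are assumptions, and the classifier itself is out of scope here.
```python
def classify_action(original_control_info: dict) -> str:
    # Placeholder: in practice this would be an action-recognition step over the
    # limb/expression data; here we simply read a precomputed label.
    return original_control_info.get("action_label", "unknown")

def need_same_target_action(control_infos: list) -> bool:
    """True when every character's original control information indicates the same
    class of target action (e.g. all 'clap'), even if the poses are not strictly
    identical."""
    labels = {classify_action(info) for info in control_infos}
    return len(labels) == 1
```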
S102, when the maximum difference between the target actions of the plurality of virtual characters is greater than a first preset threshold, adjusting the original control information of at least one virtual character to obtain adjustment control information.
When the plurality of virtual characters need to make the same target action, in order to improve the uniformity of the shared action, the difference between the target actions of the plurality of virtual characters needs to be evaluated. If the maximum difference between the target actions of the plurality of virtual characters is smaller than or equal to the first preset threshold, the target actions of the plurality of virtual characters are already neat and uniform; in this case the original control information does not need to be adjusted, and the corresponding virtual characters are driven directly to make the corresponding actions based on the original control information. If the maximum difference between the target actions of the plurality of virtual characters is greater than the first preset threshold, the target actions of the plurality of virtual characters are relatively uneven, which will affect the overall effect of the performance; in this case the original control information of at least one virtual character needs to be adjusted to obtain adjustment control information.
Adjusting the original control information of at least one virtual character means that some or all of the plurality of virtual characters may be adjusted; the specific number to adjust can be determined according to the actual situation, as long as the target actions of the adjusted virtual characters become uniform.
In addition, the difference between target actions includes not only the difference between different actions but also the proportional difference between instances of the same action. For example, because the heights of the virtual characters differ, even if the original control information indicates that different virtual characters make the same target action, the actions they actually make differ in proportion; this is particularly obvious when the virtual characters are walking.
For example, suppose the original control information indicates that each virtual character walks forward by one step. Because virtual character A is tall, the step actually taken by virtual character A is not the same length as the step taken by virtual character B, so the position reached by virtual character A after one step forward is offset from the position reached by virtual character B after one step forward.
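This proportional difference can be pictured with the following sketch, where the step a character actually takes is scaled by a per-character proportion factor; the scaling rule is an assumption used only to illustrate the resulting positional offset.
```python
def realized_step(nominal_step: float, scale_factor: float) -> float:
    # scale_factor is an assumed per-character proportion factor capturing how
    # body proportions change the step a character actually takes.
    return nominal_step * scale_factor

step_a = realized_step(0.6, scale_factor=0.9)   # virtual character A's proportions
step_b = realized_step(0.6, scale_factor=1.0)   # virtual character B's proportions
offset = abs(step_a - step_b)                   # positional drift the adjustment must correct
```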
S103, driving the corresponding virtual roles to make adjustment actions based on the adjustment control information, and/or driving the corresponding virtual roles to make the target actions based on the original control information; wherein the difference between any two adjustment actions, any two target actions, or any one target action and any one adjustment action is smaller than a second preset threshold; wherein the second preset threshold is less than the first preset threshold.
It can be appreciated that, since in some cases only part of the virtual characters have their original control information adjusted, after the adjustment control information is obtained, the corresponding virtual characters can be driven to make adjustment actions based on the adjustment control information, and/or the corresponding virtual characters can be driven to make the target action based on the original control information.
In this embodiment, after the original control information of the virtual characters is adjusted, the difference between any two adjustment actions, between any two target actions, or between any target action and any adjustment action is smaller than the second preset threshold, and the second preset threshold is smaller than the first preset threshold. The action differences among the plurality of virtual characters are therefore small, which achieves the effect of unifying the actions of the plurality of virtual characters and improves the users' viewing experience.
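Putting S101 to S103 together, the flow could look roughly like the following sketch. The difference metric, the threshold values, and the adjustment rule (replacing each character's values with the per-joint mean) are all assumptions, since the disclosure does not prescribe a particular metric or rule.
```python
from itertools import combinations

FIRST_THRESHOLD = 0.20    # example values only
SECOND_THRESHOLD = 0.05   # must be smaller than the first preset threshold

def action_difference(a: dict, b: dict) -> float:
    """Example metric: mean absolute difference over the joints both actions share."""
    joints = a.keys() & b.keys()
    return sum(abs(a[j] - b[j]) for j in joints) / max(len(joints), 1)

def adjust_actions(actions: dict) -> dict:
    """actions: character name -> {joint name: value} derived from original control info."""
    if len(actions) < 2:
        return actions
    max_diff = max(action_difference(a, b) for a, b in combinations(actions.values(), 2))
    if max_diff <= FIRST_THRESHOLD:
        return actions  # already uniform enough; drive characters with the original info
    # Example adjustment rule: set every character to the per-joint mean, which brings
    # every pairwise difference below the second preset threshold.
    joints = set().union(*(a.keys() for a in actions.values()))
    mean = {j: sum(a.get(j, 0.0) for a in actions.values()) / len(actions) for j in joints}
    return {name: dict(mean) for name in actions}
```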
In some embodiments, when the original control information of at least one virtual character is adjusted, the following (a) to (c) may be included:
(a) Determining the target position of each virtual character in the 3D scene according to the current position of each virtual character in the 3D scene and the original control information.
In this embodiment, the original control information includes the original control information of the virtual character's feet. As an example, referring to fig. 3, which shows the current position of each virtual character in the 3D scene, it can be seen that the plurality of virtual characters form a straight line at their current positions in area M and are relatively orderly. However, referring to fig. 4, the target position of each virtual character in the 3D scene, determined from the current position of each virtual character and the original control information, is as shown in fig. 4; that is, if the corresponding virtual characters were driven to perform the corresponding actions directly on the basis of the original control information, the position of each virtual character would change to a target position in area N, and the formation of the plurality of virtual characters would become irregular. Therefore, it is necessary to adjust some or all of the original control information of virtual character A, virtual character B, virtual character C, virtual character D, and virtual character E so that the formation at the target positions remains uniform.
It should be noted that other parts of the virtual character (such as the hands) may perform other actions while the virtual character moves from the current position to the target position, which is not limited herein.
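A sketch of how the target position in step (a) might be predicted from the current position and the foot-related part of the original control information; the displacement encoding is an assumption.
```python
def predict_target_position(current_pos: tuple, original_control_info: dict) -> tuple:
    # Assume the foot-control part of the original control information can be
    # reduced to a planar displacement (dx, dz) for the upcoming movement.
    dx, dz = original_control_info.get("foot_displacement", (0.0, 0.0))
    x, y, z = current_pos
    return (x + dx, y, z + dz)

current_positions = {"A": (0.0, 0.0, 0.0), "B": (1.0, 0.0, 0.0), "C": (2.0, 0.0, 0.0)}
controls = {"A": {"foot_displacement": (0.1, 1.0)},
            "B": {"foot_displacement": (0.0, 1.4)},
            "C": {"foot_displacement": (-0.2, 0.9)}}
targets = {name: predict_target_position(pos, controls[name])
           for name, pos in current_positions.items()}
```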
(b) Determining a target virtual character to be adjusted from the plurality of virtual characters based on the target position of each virtual character in the 3D scene.
For example, the target virtual character to be adjusted may be determined based on the distribution of the target positions of the virtual characters within a preset range (area N) of the 3D scene. As shown in fig. 4, according to the distribution of the target positions, virtual character A, virtual character B, and virtual character C are relatively concentrated, while virtual character D and virtual character E deviate farther; the target virtual characters to be adjusted can then be determined according to this distribution.
In some embodiments, the adjustment amplitude of the original control information may be determined according to an expected effect of the plurality of virtual characters being distributed within the preset range of the 3D scene. For example, referring to fig. 5, the expected distribution effect of the plurality of virtual characters is as shown in fig. 5; in this case it may be determined that virtual character B, virtual character C, and virtual character D are the target virtual characters, that is, according to the expected distribution effect, virtual character A and virtual character E do not need to be adjusted.
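Correspondingly, step (b) could be sketched as selecting the characters whose predicted target positions stray furthest from an expected formation; the tolerance value and the expected positions are illustrative assumptions.
```python
def characters_to_adjust(targets: dict, expected: dict, tolerance: float = 0.3) -> list:
    """targets/expected: character name -> (x, y, z). Returns the names whose predicted
    target position deviates from the expected formation by more than the tolerance,
    i.e. the target virtual characters whose control information will be adjusted."""
    out = []
    for name, (tx, ty, tz) in targets.items():
        ex, ey, ez = expected[name]
        if ((tx - ex) ** 2 + (ty - ey) ** 2 + (tz - ez) ** 2) ** 0.5 > tolerance:
            out.append(name)
    return out
```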
(c) Adjusting the original control information of the target virtual character to obtain the adjustment control information.
Illustratively, the original control information of the target virtual characters may be adjusted according to the adjustment amplitude, so that the distribution of the target positions of the plurality of virtual characters achieves the expected effect shown in fig. 5 (such as a uniform and tidy distribution).
In some embodiments, if the adjusted target position of the target virtual character is determined according to the expected distribution effect, the adjustment amplitude for the target virtual character may be determined according to the relative distance between the current position of the target virtual character in the 3D scene and the adjusted target position. The adjustment amplitude may include an adjustment amplitude of the target virtual character's movement step length and/or movement step frequency.
For example, when the distance between the current position and the adjusted target position is greater than a preset distance (i.e., relatively far), the step length and/or step frequency indicated in the original control information for the virtual character's movement can be increased, so that the virtual character reaches the adjusted target position during its movement, the expected effect is achieved, and the users' viewing experience is improved. In addition, because the movement of the target virtual character is adjusted by changing the step frequency or step length, the adjusted action of the target virtual character looks natural and abrupt changes (jumps) are unlikely to occur, which further improves the users' visual experience.
Thus, in some embodiments, adjusting the original control information of the target virtual character to obtain the adjustment control information may include: adjusting the control information of the target virtual character that indicates foot movement, to obtain the adjustment control information for the feet of the target virtual character.
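The adjustment amplitude described above could be derived, for example, as in the following sketch, which lengthens the stride and/or quickens the step frequency when the adjusted target position is farther than a preset distance; the split between step length and step frequency, and the cap on the scaling ratio, are assumptions.
```python
def foot_adjustment(current_pos, adjusted_target, base_step=0.5, base_freq=2.0,
                    preset_distance=1.0):
    """Return adjusted (step_length, step_frequency) for the target character's feet."""
    distance = sum((c - t) ** 2 for c, t in zip(current_pos, adjusted_target)) ** 0.5
    if distance <= preset_distance:
        return base_step, base_freq        # close enough: keep the original gait
    # Farther than the preset distance: lengthen the stride and/or quicken the step
    # frequency so the character reaches the adjusted target while it moves.
    ratio = distance / preset_distance
    return base_step * min(ratio, 1.5), base_freq * min(ratio, 1.5)
```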
In some embodiments, the target action includes a gesture action and/or a posture action. As shown in fig. 6, adjusting the original control information of at least one virtual character to obtain adjustment control information may include the following steps S1021-S1022:
S1021, based on the target action, determining an adjustment action matched with the target action from a preset action library.
For example, in the case where the target action indicated by the original control information is a specific gesture or posture, an adjustment action matching the target action may be determined from a preset action library. It can be appreciated that, in order to improve the adjustment efficiency, an action library may be pre-established, and various actions in the action library may be actions with high use frequency obtained based on a large number of scenes, or actions with high action difficulty coefficients, which is not limited herein.
Referring to fig. 7, when the target motion indicated by the original control information is a specific gesture motion J, an adjustment motion K matching the target motion may be determined from the motion library.
S1022, adjusting the original control information of each virtual character based on the control information of the adjustment action to obtain the adjustment control information.
After the adjustment action K is determined, the control information of each virtual character may be adjusted based on the control information of the adjustment action K to obtain the adjustment control information. As can be seen from fig. 7, the adjustment action K is more aesthetically pleasing than the target gesture action J, so both the aesthetics and the uniformity of the virtual characters' gestures can be improved.
The control information in the present embodiment includes information on the curvature of the finger, information on the relative position between the fingers, and the like.
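Steps S1021 and S1022 could be sketched as follows, matching the target gesture against a preset action library by finger-curvature similarity and replacing each character's control information with that of the matched adjustment action; the library entries and the similarity measure are assumptions.
```python
# Hypothetical preset action library: gesture name -> control information
# (e.g. finger curvature per finger, in normalized units).
ACTION_LIBRARY = {
    "K": {"thumb": 0.1, "index": 0.0, "middle": 0.0, "ring": 0.6, "little": 0.6},
    "fist": {"thumb": 0.9, "index": 0.9, "middle": 0.9, "ring": 0.9, "little": 0.9},
}

def similarity(a: dict, b: dict) -> float:
    return -sum(abs(a[k] - b.get(k, 0.0)) for k in a)   # higher means more similar

def match_adjustment_action(target_control: dict) -> dict:
    """S1021: pick the library action closest to the target gesture."""
    return max(ACTION_LIBRARY.values(), key=lambda entry: similarity(target_control, entry))

def adjust_all(characters_control: dict) -> dict:
    """S1022: adjust every character's control info toward the matched adjustment action."""
    return {name: dict(match_adjustment_action(control))
            for name, control in characters_control.items()}
```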
In other embodiments, referring to fig. 8, adjusting the original control information of at least one virtual character to obtain the adjustment control information may further include the following steps S102a to S102b:
S102a, determining a reference gesture action and/or a reference posture from the plurality of virtual characters;
For example, unlike the scheme of determining the adjustment action from the action library, the reference gesture action and/or the reference posture may also be determined from the plurality of virtual characters themselves; that is, one target virtual character may be selected from the plurality of virtual characters as the basis, and the other virtual characters may be adjusted according to the control information of that target virtual character.
In some implementations, the reference gesture action and/or the reference posture may be determined from the plurality of virtual characters based on the standard degree of the target action of each virtual character. In other embodiments, the determination may also be made based on other metrics, such as the novelty of the target action.
Novelty here refers to how rarely the target action occurs and/or how difficult it is. For example, suppose the target actions are all lower-waist (backbend) actions, but each virtual character performs it with a different degree of novelty: the lower-waist made by one virtual character may be novel and uncommon (for example, the waistline arc is graceful and the face is close to the ground). In this case, that virtual character's lower-waist action is taken as the reference posture, so that users can enjoy an action they rarely see, which improves their viewing experience.
In some embodiments, the standard degree can be obtained using an artificial intelligence method, so that the determined reference gesture action and/or reference posture better meets the users' aesthetic requirements and further improves their viewing experience. In other embodiments, a corresponding standard may be preset for each action.
Artificial Intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce new intelligent machines that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, enabling machines to perceive, reason, and make decisions.
S102b, adjusting the original control information based on the control information of the reference gesture action and/or the reference posture to obtain the adjustment control information.
It can be appreciated that after the reference gesture action and/or the reference posture is determined, the original control information may be adjusted based on the control information of the reference gesture action and/or the reference posture to obtain the adjustment control information, so that the adjustment control information of each virtual character is the same.
Note that the original control information of the virtual character from which the reference gesture action and/or the reference posture is determined does not necessarily need to be adjusted.
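Steps S102a and S102b could be sketched as follows: each character's target action is scored for its standard degree (a stand-in scorer here; as noted above, this could be an AI model or a preset standard), the highest-scoring character is taken as the reference, and the other characters are aligned to its control information while the reference itself keeps its original control information.
```python
def standard_degree(control_info: dict) -> float:
    # Stand-in for the standard-degree score; in practice this could come from an
    # AI model or a preset per-action standard, as described above.
    return control_info.get("standard_score", 0.0)

def select_reference(characters_control: dict) -> str:
    """S102a: the character whose target action scores highest becomes the reference."""
    return max(characters_control, key=lambda name: standard_degree(characters_control[name]))

def align_to_reference(characters_control: dict) -> dict:
    """S102b: adjust the other characters' control info to the reference's control info;
    the reference character itself keeps its original control information."""
    ref = select_reference(characters_control)
    ref_control = characters_control[ref]
    return {name: (control if name == ref else dict(ref_control))
            for name, control in characters_control.items()}
```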
It will be appreciated by those skilled in the art that in the above-described method of the specific embodiments, the written order of steps is not meant to imply a strict order of execution but rather should be construed according to the function and possibly inherent logic of the steps.
Based on the same technical concept, the embodiment of the disclosure further provides an action adjusting device corresponding to the action adjusting method, and since the principle of solving the problem by the device in the embodiment of the disclosure is similar to that of the action adjusting method in the embodiment of the disclosure, the implementation of the device can refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 9, a schematic diagram of an action adjusting device 500 according to an embodiment of the disclosure is provided, where the device includes:
An obtaining module 501, configured to obtain original control information of each virtual character, where the original control information of each virtual character indicates that multiple virtual characters make the same target action;
an adjustment module 502, configured to adjust original control information of at least one virtual character to obtain adjustment control information when it is determined that a maximum difference between target actions of the multiple virtual characters is greater than a first preset threshold;
a driving module 503, configured to drive a corresponding virtual character to make an adjustment action based on the adjustment control information, and/or drive a corresponding virtual character to make the target action based on the original control information; wherein the difference between any two adjustment actions, any two target actions, or any one target action and any one adjustment action is smaller than a second preset threshold; wherein the second preset threshold is less than the first preset threshold.
In one possible implementation, the adjustment module 502 is specifically configured to:
determining a target position of each virtual character in the 3D scene according to the current position of each virtual character in the 3D scene and the original control information;
determining a target virtual character to be adjusted from the plurality of virtual characters based on the target position of each virtual character in the 3D scene;
and adjusting the original control information of the target virtual character to obtain the adjustment control information.
In one possible implementation, the adjustment module 502 is specifically configured to:
and adjusting the control information of the target virtual character for indicating the foot movement to obtain the adjustment control information of the foot of the target virtual character.
In one possible implementation, the target action includes a gesture action and/or a posture action, and the adjustment module 502 is specifically configured to:
determining an adjustment action matched with the target action from a preset action library based on the target action;
and adjusting the control information of each virtual role based on the control information of the adjustment action to obtain the adjustment control information.
In one possible implementation, the target action includes a gesture action and/or a posture action, and the adjustment module 502 is specifically configured to:
determining a reference gesture action and/or a reference posture from the plurality of virtual characters;
and adjusting the original control information based on the control information of the reference gesture action and/or the reference posture to obtain the adjustment control information.
In one possible implementation, the adjustment module 502 is specifically configured to:
determining a reference gesture action and/or a reference posture from the plurality of virtual characters based on the standard degree of the target action of each virtual character.
In one possible implementation, the adjustment module 502 is specifically configured to:
and determining the standard degree of the target action of each virtual character by adopting an artificial intelligence method.
The process flow of each module in the apparatus and the interaction flow between the modules may be described with reference to the related descriptions in the above method embodiments, which are not described in detail herein.
Based on the same technical concept, the embodiment of the disclosure also provides electronic equipment. Referring to fig. 10, a schematic structural diagram of an electronic device 700 according to an embodiment of the disclosure includes a processor 701, a memory 702, and a bus 703. The memory 702 is configured to store execution instructions, including a memory 7021 and an external memory 7022; the memory 7021 is also referred to as an internal memory, and is used for temporarily storing operation data in the processor 701 and data exchanged with an external memory 7022 such as a hard disk, and the processor 701 exchanges data with the external memory 7022 via the memory 7021.
In the embodiment of the present application, the memory 702 is specifically configured to store application program codes for executing the scheme of the present application, and the execution is controlled by the processor 701. That is, when the electronic device 700 is in operation, communication between the processor 701 and the memory 702 via the bus 703 causes the processor 701 to execute the application code stored in the memory 702, thereby performing the methods described in any of the previous embodiments.
The memory 702 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), etc.
The processor 701 may be an integrated circuit chip having signal processing capabilities. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc.; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. The methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It should be understood that the illustrated structure of the embodiment of the present application does not constitute a specific limitation on the electronic device 700. In other embodiments of the application, electronic device 700 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The disclosed embodiments also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the action adjustment method in the method embodiments described above. Wherein the storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiments of the present disclosure further provide a computer program product, where the computer program product carries a program code, where instructions included in the program code may be used to perform steps of the action adjustment method in the above method embodiments, and specifically reference may be made to the above method embodiments, which are not described herein.
Wherein the above-mentioned computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system and apparatus may refer to corresponding procedures in the foregoing method embodiments, which are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. The above-described apparatus embodiments are merely illustrative, for example, the division of the units is merely a logical function division, and there may be other manners of division in actual implementation, and for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be through some communication interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Finally, it should be noted that the foregoing examples are merely specific embodiments of the present disclosure, intended to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may, within the technical scope disclosed herein, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions for some of the technical features; such modifications, changes, or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. An action adjustment method, comprising:
acquiring original control information of each virtual character, wherein the original control information of each virtual character indicates that a plurality of virtual characters are to make the same target action;
when the maximum difference between the target actions of the plurality of virtual characters is larger than a first preset threshold, adjusting the original control information of at least one virtual character to obtain adjustment control information;
driving a corresponding virtual character to make an adjustment action based on the adjustment control information, and/or driving the corresponding virtual character to make the target action based on the original control information; wherein the difference between any two adjustment actions, any two target actions, or any one target action and any one adjustment action is smaller than a second preset threshold; wherein the second preset threshold is less than the first preset threshold.
2. The method of claim 1, wherein adjusting the original control information of the at least one virtual character to obtain the adjustment control information comprises:
determining a target position of each virtual character in the 3D scene according to the current position of each virtual character in the 3D scene and the original control information;
determining a target virtual character to be adjusted from the plurality of virtual characters based on the target position of each virtual character in the 3D scene;
and adjusting the original control information of the target virtual character to obtain the adjustment control information.
3. The method of claim 2, wherein adjusting the original control information of the target virtual character to obtain the adjustment control information comprises:
adjusting the control information of the target virtual character that indicates foot movement, to obtain the adjustment control information for the foot of the target virtual character.
4. The method of claim 1, wherein the target action comprises a gesture action and/or a posture action, and wherein adjusting the original control information of the at least one virtual character to obtain the adjustment control information comprises:
determining an adjustment action matched with the target action from a preset action library based on the target action;
and adjusting the control information of each virtual character based on the control information of the adjustment action to obtain the adjustment control information.
5. The method of claim 1, wherein the target action comprises a gesture action and/or a posture action, and wherein adjusting the original control information of the at least one virtual character to obtain the adjustment control information comprises:
determining a reference gesture action and/or a reference posture from the plurality of virtual characters;
and adjusting the original control information based on the control information of the reference gesture action and/or the reference posture to obtain the adjustment control information.
6. The method of claim 5, wherein determining the reference gesture action and/or the reference posture from the plurality of virtual characters comprises:
determining the reference gesture action and/or the reference posture from the plurality of virtual characters based on the standard degree of the target action of each virtual character.
7. The method of claim 6, wherein the standard degree of the target action of each virtual character is determined using an artificial intelligence method.
8. An action adjustment device, comprising:
an acquisition module, configured to acquire original control information of each virtual character, wherein the original control information of each virtual character indicates that a plurality of virtual characters are to make the same target action;
an adjustment module, configured to adjust the original control information of at least one virtual character to obtain adjustment control information when the maximum difference between the target actions of the plurality of virtual characters is larger than a first preset threshold;
a driving module, configured to drive the corresponding virtual character to make an adjustment action based on the adjustment control information, and/or drive the corresponding virtual character to make the target action based on the original control information; wherein the difference between any two adjustment actions, any two target actions, or any one target action and any one adjustment action is smaller than a second preset threshold, and the second preset threshold is less than the first preset threshold.
9. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory in communication via the bus when the electronic device is running, the machine-readable instructions when executed by the processor performing the action adjustment method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, performs the action adjustment method according to any of claims 1-7.
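
To make the threshold logic recited in claim 1 easier to follow, the sketch below illustrates one possible reading of it in Python. It is an illustration only, not the patented implementation: the names CharacterControl, pose_distance, blend_towards, and adjust_controls, the use of per-joint angles as "control information", and the specific blending strategy are assumptions introduced for readability, and the reference character merely stands in for the reference gesture action and/or reference posture of claims 5 to 7.

from dataclasses import dataclass
from typing import Dict, List


@dataclass
class CharacterControl:
    # Hypothetical representation of one character's control information:
    # a named set of joint angles (in degrees) describing the target action.
    character_id: str
    joint_angles: Dict[str, float]


def pose_distance(a: CharacterControl, b: CharacterControl) -> float:
    # Largest per-joint difference between two characters' control information
    # (assumes both characters share the same skeleton, i.e. the same joint names).
    return max(abs(a.joint_angles[j] - b.joint_angles[j]) for j in a.joint_angles)


def blend_towards(src: CharacterControl, ref: CharacterControl, weight: float) -> CharacterControl:
    # Move src's control information towards ref by the given weight
    # (0 = unchanged, 1 = identical to ref).
    blended = {
        j: src.joint_angles[j] + weight * (ref.joint_angles[j] - src.joint_angles[j])
        for j in src.joint_angles
    }
    return CharacterControl(src.character_id, blended)


def adjust_controls(originals: List[CharacterControl],
                    first_threshold: float,
                    second_threshold: float) -> List[CharacterControl]:
    # If the maximum pairwise difference exceeds the first preset threshold,
    # adjust at least one character's control information so that every
    # remaining difference is below the second (smaller) preset threshold.
    assert second_threshold < first_threshold
    if len(originals) < 2:
        return originals

    max_diff = max(pose_distance(a, b)
                   for i, a in enumerate(originals)
                   for b in originals[i + 1:])
    if max_diff <= first_threshold:
        # Differences are tolerable: drive every character with its original control information.
        return originals

    # Pick a reference character (claims 5-7 contemplate choosing the one whose
    # action is most "standard"; here the first character simply stands in for it).
    reference = originals[0]
    adjusted = [reference]
    for ctrl in originals[1:]:
        d = pose_distance(ctrl, reference)
        if d >= second_threshold / 2:
            # Blend far enough that each adjusted action stays within half the
            # second threshold of the reference, so any two driven actions
            # differ by less than the second threshold.
            weight = 1.0 - (0.45 * second_threshold) / d
            ctrl = blend_towards(ctrl, reference, weight)
        adjusted.append(ctrl)
    return adjusted

For example, if two characters' control information for the same wave action differs by 30 degrees at the elbow, calling adjust_controls with first_threshold=20 and second_threshold=10 triggers an adjustment that leaves the driven actions within 10 degrees of one another; claims 2 and 3 additionally narrow which character and which body part (the feet) the adjustment targets.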
CN202111534748.6A 2021-12-15 2021-12-15 Action adjustment method, action adjustment device, electronic equipment and readable storage medium Active CN114237396B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111534748.6A CN114237396B (en) 2021-12-15 2021-12-15 Action adjustment method, action adjustment device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN114237396A CN114237396A (en) 2022-03-25
CN114237396B true CN114237396B (en) 2023-08-15

Family

ID=80756367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111534748.6A Active CN114237396B (en) 2021-12-15 2021-12-15 Action adjustment method, action adjustment device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114237396B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115356953B (en) * 2022-10-21 2023-02-03 北京红棉小冰科技有限公司 Virtual robot decision method, system and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110000052A (en) * 2009-06-26 2011-01-03 주식회사한얼엠에스티 Online game method for b-boy dance battle based on rhythm-action
CN107392783A (en) * 2017-07-05 2017-11-24 龚少卓 Social contact method and device based on virtual reality
CN108109440A (en) * 2017-12-21 2018-06-01 沈阳体育学院 A kind of more people's Dancing Teaching interaction method and devices
CN110152308A (en) * 2019-06-27 2019-08-23 北京乐动派软件有限公司 A kind of more personages' group photo methods of game virtual image
CN110928411A (en) * 2019-11-18 2020-03-27 珠海格力电器股份有限公司 AR-based interaction method and device, storage medium and electronic equipment
WO2021003994A1 (en) * 2019-07-05 2021-01-14 深圳市工匠社科技有限公司 Control method for virtual character, and related product
CN112774203A (en) * 2021-01-22 2021-05-11 北京字跳网络技术有限公司 Pose control method and device of virtual object and computer storage medium
CN113327311A (en) * 2021-05-27 2021-08-31 百度在线网络技术(北京)有限公司 Virtual character based display method, device, equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9811937B2 (en) * 2015-09-29 2017-11-07 Disney Enterprises, Inc. Coordinated gesture and locomotion for virtual pedestrians
US10245507B2 (en) * 2016-06-13 2019-04-02 Sony Interactive Entertainment Inc. Spectator management at view locations in virtual reality environments
EP3675488B1 (en) * 2017-08-24 2024-02-28 Tencent Technology (Shenzhen) Company Limited Method for recording video on the basis of a virtual reality application, terminal device, and storage medium

Also Published As

Publication number Publication date
CN114237396A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN111598818B (en) Training method and device for face fusion model and electronic equipment
CN111968207B (en) Animation generation method, device, system and storage medium
CN109035373B (en) Method and device for generating three-dimensional special effect program file package and method and device for generating three-dimensional special effect
TW201911082A (en) Image processing method, device and storage medium
CN114612643B (en) Image adjustment method and device for virtual object, electronic equipment and storage medium
KR20080069601A (en) Stereo video for gaming
KR102012405B1 (en) Method and apparatus for generating animation
CN113852838B (en) Video data generation method, device, electronic equipment and readable storage medium
CN108983974B (en) AR scene processing method, device, equipment and computer-readable storage medium
US11816772B2 (en) System for customizing in-game character animations by players
CN112927332B (en) Bone animation updating method, device, equipment and storage medium
CN114237396B (en) Action adjustment method, action adjustment device, electronic equipment and readable storage medium
CN111714880A (en) Method and device for displaying picture, storage medium and electronic device
CN113784160A (en) Video data generation method and device, electronic equipment and readable storage medium
CN114095744A (en) Video live broadcast method and device, electronic equipment and readable storage medium
CN114615513A (en) Video data generation method and device, electronic equipment and storage medium
CN114820915A (en) Method and device for rendering shading light, storage medium and electronic device
CN111599002A (en) Method and apparatus for generating image
US11880945B2 (en) System and method for populating a virtual crowd in real time using augmented and virtual reality
JP7401199B2 (en) Information processing device, information processing method, and program
CN114155324B (en) Virtual character driving method and device, electronic equipment and readable storage medium
CN114742970A (en) Processing method of virtual three-dimensional model, nonvolatile storage medium and electronic device
CN113706675A (en) Mirror image processing method, mirror image processing device, storage medium and electronic device
CN114470768A (en) Virtual item control method and device, electronic equipment and readable storage medium
CN114618163A (en) Driving method and device of virtual prop, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant