CN114618163A - Driving method and device of virtual prop, electronic equipment and readable storage medium - Google Patents

Driving method and device of virtual prop, electronic equipment and readable storage medium

Info

Publication number
CN114618163A
CN114618163A
Authority
CN
China
Prior art keywords
driving information
information
virtual prop
driving
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210279097.9A
Other languages
Chinese (zh)
Inventor
顾佳祺
陈都
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210279097.9A priority Critical patent/CN114618163A/en
Publication of CN114618163A publication Critical patent/CN114618163A/en
Priority to PCT/CN2023/077873 priority patent/WO2023179292A1/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/6009 Methods for processing data by generating or executing the game program for importing or creating game content, e.g. authoring tools during game development, adapting content to different platforms, use of a scripting language to create content
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 Methods for processing data by generating or executing the game program
    • A63F2300/66 Methods for processing data by generating or executing the game program for rendering three dimensional images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present disclosure provides a driving method and apparatus for a virtual prop, an electronic device, and a storage medium. The driving method for the virtual prop includes: acquiring multiple pieces of driving information of the virtual prop, where each piece of driving information is used for driving at least part of the virtual prop and the virtual prop is a deformable virtual prop; determining complete driving information of the virtual prop based on the multiple pieces of driving information; and driving the virtual prop to move in a 3D scene based on the complete driving information. The 3D scene is generated by rendering 3D scene information in a 3D rendering environment, the 3D rendering environment runs in the electronic device, the 3D scene information includes virtual prop information, and the virtual prop information is rendered to generate the virtual prop. The embodiments of the present application can improve the driving effect of a flexible virtual prop.

Description

Driving method and device of virtual prop, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to the field of virtual item driving technologies, and in particular, to a method and an apparatus for driving a virtual item, an electronic device, and a storage medium.
Background
In the related art, virtual live broadcasting is mainly performed as follows: control signals carrying the motion and expression data of an actor (a real person) are acquired through a motion capture device and are used to drive the motion of an avatar.
The live broadcast of the avatar may involve interaction with a virtual prop. The virtual prop is driven in a way similar to the avatar: motion data of the real prop is obtained through the motion capture device, and the virtual prop is driven to move accordingly.
However, current virtual props are rigid objects that do not deform (such as glasses and microphones). For a flexible virtual prop that can deform (such as an exercise ring), if the motion capture processing used for rigid virtual props is still adopted, some motion capture data will be lost and the displayed form of the flexible virtual prop will be distorted.
Disclosure of Invention
The embodiment of the disclosure at least provides a driving method and device of a virtual prop, an electronic device and a storage medium, which can improve the driving effect of a flexible virtual prop.
The embodiment of the present disclosure provides a driving method for a virtual prop, including:
acquiring multi-section driving information of the virtual prop; each section of driving information is used for driving at least part of the virtual prop, and the virtual prop is a deformable virtual prop;
determining complete driving information of the virtual prop based on the plurality of pieces of driving information;
driving the virtual prop to move in the 3D scene based on the complete driving information; the 3D scene is generated after being rendered by 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual prop information, and the virtual prop information is used for generating the virtual prop after being rendered.
In the embodiments of the present disclosure, because the complete driving information for driving the virtual prop is composed of multiple pieces of driving information, each part of the flexible virtual prop is treated as a target to be driven. Therefore, even when the flexible virtual prop deforms, each part can still achieve its corresponding driving effect, so that the virtual prop displays a complete driving effect; the displayed form is not distorted because a certain part deforms, and the display effect of the flexible virtual prop in the deformed state is improved.
In a possible implementation manner, each piece of driving information is obtained through a motion capture device, each piece of driving information corresponds to different target portions of the virtual prop, the motion capture device is disposed at a corresponding position of the real prop, and the corresponding position of the real prop corresponds to the target portion of the virtual prop.
In a possible embodiment, the determining the complete driving information of the virtual prop based on the plurality of pieces of driving information includes:
and fusing the multiple sections of driving information based on the attribute information of the virtual prop to obtain the complete driving information.
In the embodiment of the disclosure, based on the attribute information of the virtual prop, the obtained complete driving information can be more consistent with the attribute of the virtual prop, and then the driving effect of the virtual prop can be improved.
In a possible implementation manner, the attribute information of the virtual item includes at least one of a type of the virtual item, a deformation direction of the virtual item, a deformation range of the virtual item, and a structural form of the virtual item.
In a possible implementation manner, the fusing the multiple pieces of driving information based on the attribute information of the virtual item to obtain the complete driving information includes:
determining whether data in the multiple pieces of driving information meet preset conditions or not based on the attribute information of the virtual prop;
eliminating the data which do not accord with the preset condition to obtain a plurality of sections of target driving information;
and fusing the multiple sections of target driving information to obtain the complete driving information.
In the embodiment of the disclosure, before the multi-segment driving information is fused, whether data which are not in accordance with the preset condition exist in the multi-segment driving information is determined, if the data which are not in accordance with the preset condition exist, the data which are not in accordance with the preset condition are removed, and then the data are fused, so that the accuracy of the complete driving information can be improved, and the driving effect of the virtual prop is further improved.
In a possible embodiment, the determining the complete driving information of the virtual prop based on the plurality of pieces of driving information includes:
determining whether the multiple pieces of driving information correspond to each target part of the virtual prop one by one;
determining first driving information of at least one first target region based on at least part of the plurality of pieces of driving information in the case that there is at least one target region lacking corresponding driving information, the first target region being a target region lacking corresponding driving information;
and obtaining the complete driving information based on the plurality of pieces of driving information and at least one piece of the first driving information.
In the embodiments of the present disclosure, because the missing driving information is determined from the driving information that has been obtained, the driving effect of the target part lacking driving information can be coordinated with the other target parts, which improves the display effect of the virtual prop when driving information is missing.
In a possible implementation manner, the fusing the multiple pieces of driving information based on the attribute information of the virtual item to obtain the complete driving information includes:
determining target data in each piece of driving information based on the attribute information of the virtual prop;
and fitting the target data in each section of driving information to obtain the complete driving information.
In the embodiment of the disclosure, through data fitting, the connection between each target part of the virtual prop can be in smooth transition, so that the display effect of the virtual prop is more fit with the effect of the real prop, and the watching experience of a user is improved.
The embodiment of the present disclosure provides a driving device of a virtual prop, including:
the driving information acquisition module is used for acquiring multiple sections of driving information of the virtual prop; each section of driving information is used for driving at least part of the virtual prop, and the virtual prop is a deformable virtual prop;
the driving information fusion module is used for determining complete driving information of the virtual prop based on the plurality of sections of driving information;
the virtual prop driving module is used for driving the virtual prop to move in a 3D scene based on the complete driving information; the 3D scene is generated after being rendered by 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual prop information, and the virtual prop information is used for generating the virtual prop after being rendered.
In a possible implementation manner, each piece of driving information is obtained through a motion capture device, each piece of driving information corresponds to different target portions of the virtual prop, the motion capture device is disposed at a corresponding position of the real prop, and the corresponding position of the real prop corresponds to the target portion of the virtual prop.
In a possible implementation manner, the driving information fusion module is specifically configured to:
and fusing the plurality of sections of driving information based on the attribute information of the virtual prop to obtain the complete driving information.
In a possible implementation manner, the attribute information of the virtual item includes at least one of a type of the virtual item, a deformation direction of the virtual item, a deformation range of the virtual item, and a structural form of the virtual item.
In a possible implementation manner, the driving information fusion module is specifically configured to:
determining whether data in the plurality of pieces of driving information meet a preset condition or not based on the attribute information of the virtual prop;
eliminating the data which do not accord with the preset condition to obtain a plurality of sections of target driving information;
and fusing the multiple sections of target driving information to obtain the complete driving information.
In a possible implementation manner, the driving information fusion module is specifically configured to:
determining whether the multiple pieces of driving information correspond to each target part of the virtual prop one by one;
determining first driving information of at least one first target region based on at least part of the plurality of pieces of driving information in the case that there is at least one target region lacking corresponding driving information, the first target region being a target region lacking corresponding driving information;
and obtaining the complete driving information based on the plurality of pieces of driving information and at least one piece of the first driving information.
In a possible implementation manner, the driving information fusion module is specifically configured to:
determining target data in each piece of driving information based on the attribute information of the virtual prop;
and fitting the target data in each section of driving information to obtain the complete driving information.
An embodiment of the present disclosure provides an electronic device, including: a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory communicate with each other via the bus when the electronic device runs, and the machine-readable instructions are executed by the processor to execute the driving method of the virtual prop.
An embodiment of the present disclosure provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program executes the driving method for the virtual prop.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It is to be understood that the following drawings depict only certain embodiments of the present disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art may derive additional related drawings from them without creative effort.
Fig. 1 shows a flowchart of a driving method of a first virtual prop provided in an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating correspondence between different parts of a virtual item and each piece of driving information according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram illustrating a driving of a virtual prop based on original multiple pieces of driving information according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram illustrating driving a virtual prop based on multiple pieces of fitted driving information according to an embodiment of the present disclosure;
FIG. 5 is a flowchart illustrating a method for fusing multiple pieces of driving information according to an embodiment of the disclosure;
FIG. 6 is a flow chart illustrating another method for fusing multiple pieces of driving information according to an embodiment of the disclosure;
fig. 7 is a schematic structural diagram illustrating a driving device of a virtual prop according to an embodiment of the present disclosure;
fig. 8 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
The term "and/or" herein merely describes an association relationship and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the term "at least one" herein means any one of a plurality of items or any combination of at least two of them; for example, including at least one of A, B and C may mean including any one or more elements selected from the group consisting of A, B and C.
In the live broadcast scene of a virtual character, motion driving of the virtual character and the virtual prop is usually involved. Existing virtual props are rigid objects that cannot deform (such as glasses and microphones). For a flexible virtual prop that can deform (such as an exercise ring), if the motion capture processing used for rigid virtual props is still adopted, the displayed form of the flexible virtual prop will be distorted. Therefore, how to improve the driving effect of the flexible virtual prop is a goal pursued by the industry.
The present disclosure provides a driving method of a virtual prop, including:
acquiring multi-section driving information of the virtual prop; each section of driving information is used for driving at least part of the virtual prop, and the virtual prop is a deformable virtual prop;
determining complete driving information of the virtual prop based on the plurality of pieces of driving information;
driving the virtual prop to move in the 3D scene based on the complete driving information; the 3D scene is generated after being rendered by 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual prop information, and the virtual prop information is used for generating the virtual props after being rendered.
In the embodiments of the present disclosure, because the complete driving information for driving the virtual prop is composed of multiple pieces of driving information, each part of the flexible virtual prop is treated as a target to be driven. Therefore, even when the flexible virtual prop deforms, each part can still achieve its corresponding driving effect, so that the virtual prop displays a complete driving effect; the situation in which the displayed form is distorted due to the deformation of a certain part does not occur, and the display effect of the flexible virtual prop in the deformed state is improved.
The execution subject of the driving method for the virtual item provided in the embodiment of the present disclosure is generally an electronic device with certain computing capability, and the electronic device includes, for example: a terminal device, which may be a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a vehicle-mounted device, a wearable device, or a server or other processing device. The server can be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, and can also be a cloud server for providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud storage, big data, an artificial intelligence platform and the like. In addition, the driving method of the virtual prop can be realized by calling a computer readable instruction stored in a memory by a processor.
Referring to fig. 1, which is a flowchart of a driving method of a virtual item provided in an embodiment of the present disclosure, the driving method of the virtual item includes the following steps S101 to S103:
S101, obtaining multi-section driving information of the virtual prop; each section of driving information is used for driving at least part of the virtual prop, and the virtual prop is a deformable virtual prop.
Illustratively, a virtual item refers to an object that exists in a 3D scene and is capable of interacting with a virtual object. Specifically, the type of the virtual item may be determined according to a specific scenario involved in a specific performance content, for example, the virtual item may be a fitness ring, a handbag, or the like. In addition, the virtual prop in the embodiment of the present disclosure refers to a flexible virtual prop that can deform, that is, the shape or shape of the virtual prop may change.
The 3D scene is generated after being rendered by 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual object information and virtual prop information, the virtual object information is used for generating a virtual object after being rendered, the virtual prop information is used for generating the virtual prop after being rendered, and the virtual object can comprise a virtual anchor or a digital person. Wherein, the image of the virtual anchor can be an animation image or a cartoon image.
Illustratively, the 3D scene information may run in a computer CPU (Central Processing Unit), a GPU (Graphics Processing Unit) and a memory, and contains gridded model information and texture map information. Accordingly, as an example, the virtual prop information or virtual object information includes, but is not limited to, gridded model data, voxel data and texture map data, or a combination thereof. The mesh includes, but is not limited to, a triangular mesh, a quadrilateral mesh, other polygonal meshes, or a combination thereof. In the embodiments of the present disclosure, the mesh is a triangular mesh.
The 3D scene information is rendered in a 3D rendering environment, which may generate a 3D scene. The 3D rendering environment may be a 3D engine running in the electronic device capable of generating imagery information based on one or more perspectives based on the data to be rendered. The virtual object information is a character model existing in the 3D engine, and can generate a corresponding virtual object after rendering. The virtual prop information is a prop model existing in the 3D engine, and can generate a corresponding virtual prop after rendering.
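Purely for illustration, the virtual prop information and 3D scene information described above can be pictured with the minimal sketch below; the class and field names are assumptions made for explanation, not a data layout defined by this disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class VirtualPropInfo:
    """Prop model held by the 3D engine: triangular mesh plus texture map."""
    vertices: List[Vec3]                    # gridded (mesh) model data
    triangles: List[Tuple[int, int, int]]   # triangular mesh faces
    uv_coords: List[Tuple[float, float]]    # texture coordinates
    texture_path: str                       # reference to the texture map data
    deformable: bool = True                 # flexible prop whose shape may change

@dataclass
class SceneInfo3D:
    """3D scene information rendered in the 3D rendering environment."""
    virtual_objects: List[object] = field(default_factory=list)       # e.g. virtual anchor models
    virtual_props: List[VirtualPropInfo] = field(default_factory=list)
```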
In some embodiments, one way of presenting the virtual object is to capture the motion of the actor (a real person), acquire the control signal, and drive the motion of the virtual object in the 3D engine; at the same time, the sound of the actor is acquired and fused with the picture of the virtual object to generate video data.
Similarly, one way of driving the virtual prop is to capture the motion of the real prop to obtain the motion data of the real prop, and then obtain the driving information based on the real motion data. The real prop refers to a device used by the real object (the actor) during the performance. The real prop can interact and cooperate with the real object, thereby achieving the expected performance effect. In addition, different real props are used for different performance scenes and performance contents.
In some embodiments, a plurality of optical marker points (for example, made of a reflective material) may be set on the real prop in advance, and the position information of each optical marker point of the real prop may then be captured by an optical capture device to obtain the real motion data of the real prop. The optical capture device includes at least one of an infrared camera, an RGB camera and a depth camera, and the embodiments of the present disclosure do not limit the type of the optical capture device.
For example, taking an exercise ring as both the real prop and the virtual prop, as shown in fig. 2, the real prop may be divided into a plurality of segments; for example, the exercise ring of the real prop is divided into four segments, i.e., segment 1, segment 2, segment 3 and segment 4, and correspondingly the exercise ring of the virtual prop is divided into four parts, i.e., part A, part B, part C and part D. Each segment of the real prop corresponds one-to-one to a part of the virtual prop; that is, a motion capture device can be arranged at segment 1, and the driving information of part A is obtained through the motion capture device arranged at segment 1. Similarly, part D is driven by the driving information obtained through the motion capture device arranged at segment 4.
Therefore, in this embodiment, each piece of driving information is obtained through a motion capture device, each piece of driving information corresponds to a different target part (e.g., part A or part B) of the virtual prop, the motion capture device is disposed at a corresponding position (e.g., segment 1 or segment 2) of the real prop, and the corresponding position of the real prop corresponds to the target part of the virtual prop.
It should be noted that the size of the virtual prop may be the same as or different from the size of the real prop, and it is only necessary to make the virtual prop proportional to the real prop.
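As an illustration of this one-to-one correspondence, the sketch below collects one piece of driving information per target part; read_motion_capture is a hypothetical helper standing in for the motion capture device interface and is not defined by this disclosure.

```python
from typing import Callable, Dict, List, Tuple

Vec3 = Tuple[float, float, float]

# Segments 1-4 of the real exercise ring map one-to-one to parts A-D of the virtual prop.
SEGMENT_TO_PART: Dict[int, str] = {1: "A", 2: "B", 3: "C", 4: "D"}

def acquire_driving_info(
    read_motion_capture: Callable[[int], List[Vec3]]
) -> Dict[str, List[Vec3]]:
    """Collect one piece of driving information per target part (step S101)."""
    driving_info: Dict[str, List[Vec3]] = {}
    for segment_id, part in SEGMENT_TO_PART.items():
        # Each motion capture device sits on one segment of the real prop; the data it
        # captures drives the corresponding target part of the virtual prop.
        driving_info[part] = read_motion_capture(segment_id)
    return driving_info
```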
S102, determining complete driving information of the virtual prop based on the plurality of pieces of driving information.
For example, after the multiple pieces of driving information are obtained, the complete driving information of the virtual prop can be determined based on them. It can be understood that, in order to achieve a better driving effect of the virtual prop, the fusion of the multiple pieces of driving information needs to be constrained to prevent abnormal situations. For example, in a specific fusion process, the position data in a certain piece of driving information can be selected according to the attribute information of the virtual prop while the rotation data is discarded; which data in each piece of driving information is selected or rejected needs to be determined according to the actual situation.
Therefore, in some embodiments, the multiple pieces of driving information may be fused based on the attribute information of the virtual item, so as to obtain the complete driving information. The attribute information of the virtual prop comprises at least one of the type of the virtual prop, the deformation direction of the virtual prop, the deformation range of the virtual prop and the structural form of the virtual prop. The structural form refers to structural features and morphological features.
For example, referring again to fig. 2 and taking the exercise ring as an example, it is determined from the type of the virtual prop that the exercise ring can only be recessed inwards along direction F under an external force and cannot deform in the direction opposite to direction F. Therefore, when the multiple pieces of driving information are fused, the data that do not meet the requirement (data opposite to direction F) in the driving information corresponding to part A are removed, and the remaining data are then fused to obtain the complete driving information.
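A minimal sketch of this constraint is given below, assuming each sample of driving information carries a displacement vector; dropping samples that oppose the allowed deformation direction F is one possible preset condition, and the function and parameter names are illustrative.

```python
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

def dot(a: Vec3, b: Vec3) -> float:
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def fuse_with_attributes(driving_info: Dict[str, List[Vec3]],
                         allowed_direction_f: Vec3) -> Dict[str, List[Vec3]]:
    """Drop displacement samples that oppose the allowed deformation direction F,
    keeping the remaining samples per part for fusion into complete driving information."""
    fused: Dict[str, List[Vec3]] = {}
    for part, samples in driving_info.items():
        # The exercise ring can only be recessed inwards along F, so samples whose
        # displacement points against F are treated as data that do not meet the requirement.
        fused[part] = [s for s in samples if dot(s, allowed_direction_f) >= 0.0]
    return fused
```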
S103, driving the virtual prop to move in the 3D scene based on the complete driving information; the 3D scene is generated after being rendered by 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual prop information, and the virtual prop information is used for generating the virtual props after being rendered.
Exemplarily, after obtaining the complete driving information of the virtual prop, the virtual prop can be driven to move in a 3D scene based on the complete driving information, so as to drive the flexible virtual prop.
In the embodiments of the present disclosure, because the complete driving information for driving the virtual prop is composed of multiple pieces of driving information, each part of the flexible virtual prop is treated as a target to be driven. Therefore, even when the flexible virtual prop deforms, each part can still achieve its corresponding driving effect, so that the virtual prop displays a complete driving effect; the displayed form is not distorted because a certain part deforms, and the display effect of the flexible virtual prop in the deformed state is improved.
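The overall flow of steps S101 to S103 can be summarized in the sketch below; the three callables are placeholders for the acquisition, fusion and rendering-engine interfaces, none of which are prescribed by this disclosure.

```python
def drive_virtual_prop_frame(acquire, fuse, apply_to_engine):
    """One frame of the pipeline described above.

    acquire()                 -> multi-segment driving information (step S101)
    fuse(segments)            -> complete driving information      (step S102)
    apply_to_engine(complete) -> drive the deformable prop in the 3D scene (step S103)
    """
    segments = acquire()
    complete = fuse(segments)
    apply_to_engine(complete)
```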
In some embodiments, for step S102, when determining the complete driving information of the virtual item based on the plurality of pieces of driving information, the following (1) to (2) may be included:
(1) determining target data in each piece of driving information based on the attribute information of the virtual prop;
(2) and fitting the target data in each section of driving information to obtain the complete driving information.
For example, taking a handbag as the virtual prop, as shown in fig. 3, after the multiple pieces of driving information S1-S6 are obtained, if they are directly fused into complete driving information and the virtual prop is then driven and displayed, a visible joint appears where two adjacent pieces of driving information meet (for example, between S1 and S2, or between S2 and S3), so that the strap of the handbag presents the driving effect shown in fig. 3, which affects the viewing experience of the user.
Therefore, in order to improve the display effect of the strap and approximate the behavior of a real handbag strap, the target data in each piece of driving information can be determined based on the attribute information of the virtual prop (the handbag strap), and the target data in each piece of driving information is then fitted to obtain the complete driving information. By fitting multiple pieces of data (as characteristic values), the target data in two adjacent pieces of driving information transition smoothly into each other, which produces the soft appearance of the handbag strap shown in fig. 4.
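One possible realization of such fitting, assuming the characteristic values are 3D points ordered along the strap, is the low-order per-coordinate polynomial fit sketched below; the concrete fitting method is an illustrative choice rather than one mandated by the disclosure.

```python
import numpy as np

def fit_complete_driving_info(segment_points, samples_per_segment=20):
    """segment_points: list of (N_i, 3) arrays, one per piece of driving information,
    ordered along the strap. Returns a single smoothed (M, 3) point sequence."""
    pts = np.concatenate([np.asarray(p, dtype=float) for p in segment_points])
    t = np.linspace(0.0, 1.0, len(pts))          # curve parameter along the strap
    t_dense = np.linspace(0.0, 1.0, samples_per_segment * len(segment_points))
    fitted = []
    for axis in range(pts.shape[1]):
        # A low-order fit across all segments smooths the joints between adjacent pieces.
        coeffs = np.polyfit(t, pts[:, axis], deg=3)
        fitted.append(np.polyval(coeffs, t_dense))
    return np.stack(fitted, axis=1)              # complete driving information
```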
In other embodiments, as shown in fig. 5, with respect to step S102, when determining the complete drive information of the virtual item based on the multiple pieces of drive information, the method may include the following steps S1021 to S1024:
S1021, determining, based on the attribute information of the virtual prop, whether the multiple pieces of driving information contain data that do not meet the preset condition; if so, executing step S1022; if not, executing step S1024.
S1022, eliminating the data that do not meet the preset condition to obtain multiple pieces of target driving information.
S1023, fusing the multiple pieces of target driving information to obtain the complete driving information.
S1024, fusing the multiple pieces of driving information to obtain the complete driving information.
Exemplarily, before fusing multiple pieces of driving information, it is required to determine whether data in each piece of driving information all meets a preset condition, and if data in each piece of driving information all meets the preset condition, step S1024 may be executed to directly fuse multiple pieces of driving information to obtain complete driving information; if there is data that does not meet the preset condition in the multiple pieces of driving information, step S1022 needs to be executed to remove the data that does not meet the preset condition, so as to obtain multiple pieces of target driving information.
It should be noted that at least one of the plurality of pieces of driving information may include data that does not satisfy the preset condition, or each of the plurality of pieces of driving information may include data that does not satisfy the preset condition.
The preset condition may be specifically determined according to the attribute information of the virtual item, and is not limited herein. For example, noise data in each piece of driving information may be regarded as data that does not meet a preset condition, where the noise data refers to data that is not related to the virtual prop. If the virtual prop is a ping-pong ball, the motion area range of the ping-pong ball can be determined according to the previous frame image, and at this time, if the data in the driving information comprises data (such as data on the ground) outside the motion area range, the data can be determined as noise point data and needs to be eliminated.
For another example, the data that does not conform to the preset direction may be determined as the data that does not conform to the preset condition, and the rotation data may also be determined as the data that does not conform to the preset condition, which is not limited herein.
In the embodiment of the disclosure, before the multi-segment driving information is fused, whether data which are not in accordance with the preset condition exist in the multi-segment driving information is determined, if the data which are not in accordance with the preset condition exist, the data which are not in accordance with the preset condition are removed, and then the data are fused, so that the accuracy of the complete driving information can be improved, and the driving effect of the virtual prop is further improved.
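A minimal sketch of steps S1021 to S1024 is given below, where the preset condition is modelled, purely as an assumption, as "the sample lies inside the motion area estimated from the previous frame"; real preset conditions depend on the attribute information of the virtual prop.

```python
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]
BBox = Tuple[Vec3, Vec3]  # (min corner, max corner) of the estimated motion area

def inside(sample: Vec3, box: BBox) -> bool:
    lo, hi = box
    return all(lo[i] <= sample[i] <= hi[i] for i in range(3))

def filter_then_fuse(driving_info: Dict[str, List[Vec3]], motion_area: BBox) -> List[Vec3]:
    """S1021/S1022: drop noise data outside the motion area; S1023/S1024: fuse."""
    target_info: Dict[str, List[Vec3]] = {
        part: [s for s in samples if inside(s, motion_area)]  # remove non-conforming data
        for part, samples in driving_info.items()
    }
    # If nothing was removed, this is equivalent to fusing the original pieces (S1024);
    # here "fusing" is modelled as ordered concatenation of the per-part samples.
    return [s for part in sorted(target_info) for s in target_info[part]]
```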
In general, the driving information corresponding to each part of the virtual item may be acquired, however, in some cases, due to an abnormality of the motion capture device or an abnormality occurring in the information transmission process, a situation that some parts of the virtual item lack corresponding driving information may occur, and therefore, in order to ensure integrity of the complete driving information of the virtual item, in some embodiments, with respect to step S102 described above, as shown in fig. 6, when determining the complete driving information of the virtual item based on the multiple pieces of driving information, the following steps S102a to S102c may be included:
S102a, determining whether the multiple pieces of driving information correspond one-to-one to the target parts of the virtual prop.
For example, referring again to fig. 2, after the multiple pieces of driving information are obtained, it needs to be determined whether the multiple pieces of driving information correspond one-to-one to the target parts of the virtual prop, for example, whether target part A has driving information 1 corresponding to it, whether target part B has driving information 2 corresponding to it, and so on, until every target part of the virtual prop has been checked.
S102b, in a case where there is at least one target portion lacking corresponding driving information, determining first driving information of at least one first target portion based on at least part of the plurality of pieces of driving information, the first target portion being a target portion lacking corresponding driving information.
For example, if there is at least one target part lacking corresponding driving information, for example, if target part B lacks its corresponding driving information 2, the first driving information of target part B (the first target part) needs to be determined based on the existing driving information 1 and driving information 3. That is, the missing driving information (the first driving information) can be determined from the pieces of driving information that have already been obtained.
Thus, in determining the first driving information of the first target portion, a second target portion having a predetermined relationship (e.g., adjacent) with the first target portion may be determined, and then the first driving information may be determined based on the driving information corresponding to the second target portion.
S102c, obtaining the complete driving information based on the plurality of pieces of driving information and at least one piece of the first driving information.
It is understood that after the first driving information is determined, each target portion has corresponding driving information, and the complete driving information can be obtained according to the plurality of pieces of driving information and at least one piece of the first driving information.
In the embodiments of the present disclosure, because the missing driving information is determined from the driving information that has been obtained, the driving effect of the target part lacking driving information can be coordinated with the other target parts, which improves the display effect of the virtual prop when driving information is missing.
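As an illustration of steps S102a to S102c, the sketch below fills a missing part's driving information by averaging the driving information of its adjacent parts; the disclosure only requires that the first driving information be derived from target parts having a preset relationship (for example, adjacency) with the first target part, so the averaging is an assumption.

```python
from typing import Dict, List, Optional, Tuple

Vec3 = Tuple[float, float, float]
PART_ORDER = ["A", "B", "C", "D"]  # target parts of the ring, in order around the ring

def average(a: List[Vec3], b: List[Vec3]) -> List[Vec3]:
    n = min(len(a), len(b))
    return [tuple((a[i][k] + b[i][k]) / 2.0 for k in range(3)) for i in range(n)]

def fill_missing_parts(driving_info: Dict[str, Optional[List[Vec3]]]) -> Dict[str, List[Vec3]]:
    # S102a: keep the parts that already have corresponding driving information.
    complete: Dict[str, List[Vec3]] = {p: list(v) for p, v in driving_info.items() if v}
    for idx, part in enumerate(PART_ORDER):
        if part in complete:
            continue
        prev_p = PART_ORDER[(idx - 1) % len(PART_ORDER)]
        next_p = PART_ORDER[(idx + 1) % len(PART_ORDER)]
        neighbours = [complete[p] for p in (prev_p, next_p) if p in complete]
        if len(neighbours) == 2:           # S102b: derive the first driving information
            complete[part] = average(neighbours[0], neighbours[1])
        elif neighbours:
            complete[part] = list(neighbours[0])
    return complete                         # S102c: complete driving information
```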
It will be understood by those skilled in the art that, in the methods of the above embodiments, the order in which the steps are written does not imply a strict order of execution or impose any limitation on the implementation; the specific order in which the steps are executed should be determined by their functions and possible internal logic.
Based on the same technical concept, the embodiment of the present disclosure further provides a driving device of a virtual item corresponding to the driving method of the virtual item, and because the principle of solving the problem of the device in the embodiment of the present disclosure is similar to the driving method of the virtual item in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 7, a schematic diagram of a driving apparatus 500 for a virtual prop according to an embodiment of the present disclosure is shown, where the apparatus includes:
the driving information acquisition module is used for acquiring multiple sections of driving information of the virtual prop; each section of driving information is used for driving at least part of the virtual prop, and the virtual prop is a deformable virtual prop;
the driving information fusion module is used for determining complete driving information of the virtual prop based on the plurality of sections of driving information;
the virtual prop driving module is used for driving the virtual prop to move in a 3D scene based on the complete driving information; the 3D scene is generated after being rendered by 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual prop information, and the virtual prop information is used for generating the virtual props after being rendered.
In a possible implementation manner, each piece of driving information is obtained through a motion capture device, each piece of driving information corresponds to different target portions of the virtual prop, the motion capture device is disposed at a corresponding position of the real prop, and the corresponding position of the real prop corresponds to the target portion of the virtual prop.
In a possible implementation manner, the driving information fusion module 502 is specifically configured to:
and fusing the plurality of sections of driving information based on the attribute information of the virtual prop to obtain the complete driving information.
In a possible implementation manner, the attribute information of the virtual item includes at least one of a type of the virtual item, a deformation direction of the virtual item, a deformation range of the virtual item, and a structural form of the virtual item.
In a possible implementation manner, the driving information fusion module 502 is specifically configured to:
determining whether data in the plurality of pieces of driving information meet a preset condition or not based on the attribute information of the virtual prop;
eliminating the data which do not accord with the preset condition to obtain a plurality of sections of target driving information;
and fusing the multiple sections of target driving information to obtain the complete driving information.
In a possible implementation manner, the driving information fusion module 502 is specifically configured to:
determining whether the multiple pieces of driving information correspond to each target part of the virtual prop one by one;
determining first driving information of at least one first target region based on at least part of the plurality of pieces of driving information in the case where there is at least one target region lacking corresponding driving information, the first target region being a target region lacking corresponding driving information;
and obtaining the complete driving information based on the plurality of pieces of driving information and at least one piece of the first driving information.
In a possible implementation manner, the driving information fusion module 502 is specifically configured to:
determining target data in each piece of driving information based on the attribute information of the virtual prop;
and fitting the target data in each section of driving information to obtain the complete driving information.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, the embodiment of the disclosure also provides electronic equipment. Referring to fig. 8, a schematic structural diagram of an electronic device 700 provided in the embodiment of the present disclosure includes a processor 701, a memory 702, and a bus 703. The memory 702 is used for storing execution instructions and includes a memory 7021 and an external memory 7022; the memory 7021 is also referred to as an internal memory and temporarily stores operation data in the processor 701 and data exchanged with an external memory 7022 such as a hard disk, and the processor 701 exchanges data with the external memory 7022 via the memory 7021.
In this embodiment, the memory 702 is specifically configured to store application program codes for executing the scheme of the present application, and is controlled by the processor 701 to execute. That is, when the electronic device 700 is running, the processor 701 communicates with the memory 702 via the bus 703, so that the processor 701 executes the application program code stored in the memory 702, thereby executing the method described in any of the previous embodiments.
The Memory 702 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 701 may be an integrated circuit chip having signal processing capabilities. The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present disclosure may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 700. In other embodiments of the present application, the electronic device 700 may include more or fewer components than shown, or combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The embodiment of the present disclosure further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method for driving the virtual prop in the foregoing method embodiment are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The embodiment of the present disclosure further provides a computer program product, where the computer program product carries a program code, and instructions included in the program code may be used to execute steps of the method for driving the virtual prop in the foregoing method embodiment, which may be referred to specifically in the foregoing method embodiment, and are not described herein again.
The computer program product may be implemented by hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that: the above-mentioned embodiments are merely specific embodiments of the present disclosure, which are used to illustrate the technical solutions of the present disclosure, but not to limit the technical solutions, and the scope of the present disclosure is not limited thereto, and although the present disclosure is described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that: any person skilled in the art can modify or easily conceive of the technical solutions described in the foregoing embodiments or equivalent technical features thereof within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure, and should be construed as being included therein. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method for driving a virtual prop, comprising:
acquiring multi-section driving information of the virtual prop; each section of driving information is used for driving at least part of the virtual prop, and the virtual prop is a deformable virtual prop;
determining complete driving information of the virtual prop based on the plurality of pieces of driving information;
driving the virtual prop to move in the 3D scene based on the complete driving information; the 3D scene is generated after being rendered by 3D scene information in a 3D rendering environment, the 3D rendering environment runs in electronic equipment, the 3D scene information comprises virtual prop information, and the virtual prop information is used for generating the virtual prop after being rendered.
2. The method of claim 1, wherein each piece of driving information is obtained through a motion capture device, each piece of driving information corresponds to a different target portion of the virtual prop, the motion capture device is disposed at a corresponding position of a real prop, and the corresponding position of the real prop corresponds to the target portion of the virtual prop.
3. The method of claim 1, wherein determining complete actuation information for the virtual prop based on the plurality of pieces of actuation information comprises:
and fusing the plurality of sections of driving information based on the attribute information of the virtual prop to obtain the complete driving information.
4. The method according to claim 3, wherein the attribute information of the virtual item includes at least one of a type of the virtual item, a deformation direction of the virtual item, a deformation range of the virtual item, and a structural form of the virtual item.
5. The method according to claim 3, wherein the fusing the plurality of pieces of driving information based on the attribute information of the virtual item to obtain the complete driving information comprises:
determining whether data in the multiple pieces of driving information meet preset conditions or not based on the attribute information of the virtual prop;
eliminating the data which do not accord with the preset condition to obtain a plurality of sections of target driving information;
and fusing the multiple sections of target driving information to obtain the complete driving information.
6. The method of claim 2, wherein determining complete actuation information for the virtual prop based on the plurality of pieces of actuation information comprises:
determining whether the multiple pieces of driving information correspond to each target part of the virtual prop one by one;
determining first driving information of at least one first target region based on at least part of the plurality of pieces of driving information in the case that there is at least one target region lacking corresponding driving information, the first target region being a target region lacking corresponding driving information;
and obtaining the complete driving information based on the plurality of pieces of driving information and at least one piece of the first driving information.
7. The method according to claim 3, wherein the fusing the plurality of pieces of driving information based on the attribute information of the virtual item to obtain the complete driving information comprises:
determining target data in each piece of driving information based on the attribute information of the virtual prop;
and fitting the target data in each section of driving information to obtain the complete driving information.
8. An apparatus for driving a virtual prop, comprising:
a driving information acquisition module, configured to acquire a plurality of pieces of driving information of the virtual prop, wherein each piece of driving information is used for driving at least part of the virtual prop, and the virtual prop is a deformable virtual prop;
a driving information fusion module, configured to determine complete driving information of the virtual prop based on the plurality of pieces of driving information;
a virtual prop driving module, configured to drive the virtual prop to move in the 3D scene based on the complete driving information, wherein the 3D scene is generated by rendering 3D scene information in a 3D rendering environment, the 3D rendering environment runs on an electronic device, the 3D scene information comprises virtual prop information, and the virtual prop is generated by rendering the virtual prop information.
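Finally, the module split of claim 8 could be mirrored, purely for illustration, by a thin wrapper that chains three pluggable callables; the class and the trivial stand-in lambdas below are invented and are not part of the claimed apparatus.

class VirtualPropDriver:
    # Illustrative wiring of acquisition, fusion, and driving stages.
    def __init__(self, acquire, fuse, drive):
        self.acquire = acquire   # driving information acquisition module
        self.fuse = fuse         # driving information fusion module
        self.drive = drive       # virtual prop driving module

    def run(self, device_streams, prop_pose):
        pieces = self.acquire(device_streams)
        complete = self.fuse(pieces)
        return self.drive(prop_pose, complete)

# Trivial stand-ins so the sketch runs end to end.
driver = VirtualPropDriver(
    acquire=lambda streams: {"head": list(streams.get("imu_01", []))},
    fuse=lambda pieces: [{"head": sample} for sample in pieces["head"]],
    drive=lambda pose, frames: [dict(pose, **frame) for frame in frames],
)
result = driver.run({"imu_01": [(0.1, 0.0, 0.0)]},
                    {"head": (0.0, 0.0, 0.0), "tail": (0.0, 0.0, 0.0)})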
9. An electronic device, comprising: a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor, the processor and the memory communicate via the bus when the electronic device is running, and the machine-readable instructions, when executed by the processor, perform the method for driving a virtual prop according to any one of claims 1 to 7.
10. A computer-readable storage medium, having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the method for driving a virtual prop according to any one of claims 1 to 7.
CN202210279097.9A 2022-03-21 2022-03-21 Driving method and device of virtual prop, electronic equipment and readable storage medium Pending CN114618163A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210279097.9A CN114618163A (en) 2022-03-21 2022-03-21 Driving method and device of virtual prop, electronic equipment and readable storage medium
PCT/CN2023/077873 WO2023179292A1 (en) 2022-03-21 2023-02-23 Virtual prop driving method and apparatus, electronic device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210279097.9A CN114618163A (en) 2022-03-21 2022-03-21 Driving method and device of virtual prop, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN114618163A (en) 2022-06-14

Family

ID=81904209

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210279097.9A Pending CN114618163A (en) 2022-03-21 2022-03-21 Driving method and device of virtual prop, electronic equipment and readable storage medium

Country Status (2)

Country Link
CN (1) CN114618163A (en)
WO (1) WO2023179292A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023179292A1 (en) * 2022-03-21 2023-09-28 北京字跳网络技术有限公司 Virtual prop driving method and apparatus, electronic device and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978975A (en) * 2019-03-12 2019-07-05 深圳市商汤科技有限公司 A kind of moving method and device, computer equipment of movement
CN110557625A (en) * 2019-09-17 2019-12-10 北京达佳互联信息技术有限公司 live virtual image broadcasting method, terminal, computer equipment and storage medium
WO2021258978A1 (en) * 2020-06-24 2021-12-30 北京字节跳动网络技术有限公司 Operation control method and apparatus
WO2022022029A1 (en) * 2020-07-31 2022-02-03 北京市商汤科技开发有限公司 Virtual display method, apparatus and device, and computer readable storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07129793A (en) * 1993-11-02 1995-05-19 Canon Inc Device and method for image processing
WO2019109343A1 (en) * 2017-12-08 2019-06-13 深圳先进技术研究院 Target contour extraction method, apparatus and device, and storage medium
CN110705094A (en) * 2019-09-29 2020-01-17 深圳市商汤科技有限公司 Flexible body simulation method and device, electronic equipment and computer readable storage medium
CN111760284A (en) * 2020-08-12 2020-10-13 腾讯科技(深圳)有限公司 Virtual item control method, device, equipment and storage medium
CN112233253B (en) * 2020-12-14 2021-03-16 成都完美时空网络技术有限公司 Virtual sphere deformation control method and device, electronic equipment and storage medium
CN116672712A (en) * 2020-12-29 2023-09-01 苏州幻塔网络科技有限公司 Prop control method and device, electronic equipment and storage medium
CN113713383B (en) * 2021-09-10 2023-06-27 腾讯科技(深圳)有限公司 Throwing prop control method, throwing prop control device, computer equipment and storage medium
CN114618163A (en) * 2022-03-21 2022-06-14 北京字跳网络技术有限公司 Driving method and device of virtual prop, electronic equipment and readable storage medium


Also Published As

Publication number Publication date
WO2023179292A1 (en) 2023-09-28

Similar Documents

Publication Publication Date Title
CN108619720B (en) Animation playing method and device, storage medium and electronic device
CN109242961B (en) Face modeling method and device, electronic equipment and computer readable medium
US11488346B2 (en) Picture rendering method and apparatus, storage medium, and electronic apparatus
CN108154548B (en) Image rendering method and device
US20180197321A1 (en) Image stitching
CN107851321B (en) Image processing method and dual-camera system
CN113905251A (en) Virtual object control method and device, electronic equipment and readable storage medium
US20240193846A1 (en) Scene rendering method, electronic device, and non-transitory readable storage medium
CN111161398B (en) Image generation method, device, equipment and storage medium
CN108322722B (en) Image processing method and device based on augmented reality and electronic equipment
CN109979013B (en) Three-dimensional face mapping method and terminal equipment
US20190392632A1 (en) Method and apparatus for reconstructing three-dimensional model of object
CN114095744A (en) Video live broadcast method and device, electronic equipment and readable storage medium
WO2022021217A1 (en) Multi-camera person association via pair-wise matching in continuous frames for immersive video
CN114618163A (en) Driving method and device of virtual prop, electronic equipment and readable storage medium
CN112802081A (en) Depth detection method and device, electronic equipment and storage medium
US10650488B2 (en) Apparatus, method, and computer program code for producing composite image
CN114697703A (en) Video data generation method and device, electronic equipment and storage medium
CN112714302A (en) Naked eye 3D image manufacturing method and device
CN114429513A (en) Method and device for determining visible element, storage medium and electronic equipment
CN112714263B (en) Video generation method, device, equipment and storage medium
CN113962864A (en) Image splicing method and device, storage medium and electronic device
CN115131507B (en) Image processing method, image processing device and meta space three-dimensional reconstruction method
CN114758041A (en) Virtual object display method and device, electronic equipment and storage medium
CN114470768A (en) Virtual item control method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination