CN110675474A - Virtual character model learning method, electronic device and readable storage medium - Google Patents


Publication number
CN110675474A
Authority
CN
China
Prior art keywords
virtual character
character model
bone
video image
image frame
Prior art date
Legal status
Granted
Application number
CN201910758741.9A
Other languages
Chinese (zh)
Other versions
CN110675474B (en)
Inventor
王乐
王琦
洪毅强
胡良军
王科
杜欧杰
陈国仕
李鹏
Current Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
MIGU Animation Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Culture Technology Co Ltd
MIGU Animation Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Culture Technology Co Ltd and MIGU Animation Co Ltd
Priority to CN201910758741.9A
Publication of CN110675474A
Application granted
Publication of CN110675474B
Legal status: Active
Anticipated expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/23 - Recognition of whole body movements, e.g. for sport training
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention relate to the field of computers and disclose a virtual character model learning method, an electronic device, and a readable storage medium. The virtual character model learning method includes: acquiring first bone posture information corresponding to the action of a target person in the current video image frame; obtaining bone posture adjustment information of a virtual character model corresponding to the current video image frame according to the first bone posture information and second bone posture information, where the second bone posture information is the bone posture information of the virtual character model corresponding to the previous video image frame; and driving the virtual character model according to the bone posture adjustment information so that the virtual character model learns the action of the target person in the current video image frame. In this way a learning process between the virtual character model and a person can be simulated, forming interactive experiences such as training, teaching, and cultivation between the person and the virtual character.

Description

Virtual character model learning method, electronic device and readable storage medium
Technical Field
Embodiments of the invention relate to the field of computers, and in particular to a virtual character model learning method, an electronic device, and a readable storage medium.
Background
Human pose recognition is an important research direction in computer vision. Its ultimate goal is to output 3D structural parameters of the whole human body or of individual limbs, such as the body contour, the position and orientation of the head, and the positions or part categories of the body joints. Given known 3D data of a human pose or action, that action can be simulated effectively.
However, the inventors found that the related art has at least the following problem: existing 3D skeletal motion fitting methods mostly make a virtual character imitate a known human motion completely, and their goal is usually accurate motion reproduction rather than a gradual learning process.
Disclosure of Invention
An object of embodiments of the present invention is to provide a virtual character model learning method, an electronic device, and a readable storage medium, so that a learning process between a virtual character model and a person can be simulated, forming interactive experiences such as training, teaching, and cultivation between the person and the virtual character.
To solve the above technical problem, an embodiment of the present invention provides a virtual character model learning method including the following steps: acquiring first bone posture information corresponding to the action of a target person in the current video image frame; obtaining bone posture adjustment information of a virtual character model corresponding to the current video image frame according to the first bone posture information and second bone posture information, where the second bone posture information is the bone posture information of the virtual character model corresponding to the previous video image frame; and driving the virtual character model according to the bone posture adjustment information so that the virtual character model learns the action of the target person in the current video image frame.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of virtual character model learning as described above.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program, which when executed by a processor implements the virtual character model learning method as described above.
Compared with the prior art, embodiments of the invention acquire the first bone posture information corresponding to the action of the target person in the current video image frame; obtain bone posture adjustment information of the virtual character model corresponding to the current video image frame according to the first bone posture information and second bone posture information, the second being the bone posture information of the virtual character model corresponding to the previous video image frame; and drive the virtual character model according to the bone posture adjustment information so that it learns the action of the target person in the current video image frame. Because the virtual character model is always driven by adjustment information derived both from the model's own posture in the previous video image frame and from the target person's posture in the current video image frame, the bone posture of the current frame of the virtual character model is always adjusted starting from the bone posture of the previous frame. This embodies a learning process in which the virtual character model gradually acquires the target person's action, and favors interactive experiences such as training, teaching, and cultivation between the target person and the virtual character model.
In addition, before calculating, according to the first-class and second-class spatial orientation vectors, the spatial orientation adjustment vectors of each pair of adjacent bone key points of the virtual character model corresponding to the current video image frame, the method further includes: obtaining the desired pose fitting similarity between the virtual character model and the target person. The calculation then uses the obtained pose fitting similarity together with the first-class and second-class spatial orientation vectors. Introducing a desired pose fitting similarity helps make the similarity between the action learned by the virtual character model and the action of the target person meet expectations.
In addition, the spatial orientation adjustment vector of each pair of adjacent bone key points of the virtual character model corresponding to the current video image frame is calculated from the obtained pose fitting similarity, the first-class spatial orientation vector, and the second-class spatial orientation vector, specifically by the following formula:

$$\vec{A}_{ij} = \vec{E}_{ij} + prob \cdot (\vec{D}_{ij} - \vec{E}_{ij})$$

where $\vec{A}_{ij}$ is the spatial orientation adjustment vector of two adjacent bone key points of the virtual character model corresponding to the current video image frame, $\vec{E}_{ij}$ is the second-class spatial orientation vector, $prob$ is the pose fitting similarity, $\vec{D}_{ij}$ is the first-class spatial orientation vector, and $i$ and $j$ are the sequence numbers of the two adjacent bone key points. This formula makes it convenient to compute the spatial orientation adjustment vector of two adjacent bone key points of the virtual character model corresponding to the current video image frame accurately.
In addition, the bone posture adjustment information includes the spatial coordinates of each bone key point of the virtual character model corresponding to the current video image frame. Obtaining this information from the distance between each pair of adjacent bone key points and their spatial orientation adjustment vector includes: sequentially calculating the spatial coordinate of each bone key point of the virtual character model corresponding to the current video image frame from the preset spatial coordinate of a reference bone key point, the distances between adjacent bone key points, and their spatial orientation adjustment vectors, where the reference bone key point is one of the bone key points of the virtual character model. The preset spatial coordinate of the reference bone key point provides a reasonable basis for adjusting the spatial coordinates of the other bone key points, which helps calculate all the coordinates accurately so that the virtual character model learns the target person's action accurately.
In addition, the spatial coordinate of a first bone key point of the virtual character model corresponding to the current video image frame is calculated from the spatial coordinate of the reference bone key point, the distance between the reference bone key point and the first bone key point, and the spatial orientation adjustment vector of the two, specifically by the following formula:

$$newQ_m = newQ_{root} + L_{root,m} \cdot \vec{A}_{root,m}$$

where $newQ_m$ is the spatial coordinate of the first bone key point of the virtual character model corresponding to the current video image frame, $newQ_{root}$ is the spatial coordinate of the reference bone key point, $L_{root,m}$ is the distance between the reference bone key point and the first bone key point, $\vec{A}_{root,m}$ is their spatial orientation adjustment vector, $root$ is the sequence number of the reference bone key point, and $m$ is the sequence number of the first bone key point. This specific formula makes it convenient to obtain the spatial coordinate of the first bone key point of the virtual character model corresponding to the current video image frame accurately.
Drawings
One or more embodiments are illustrated by the corresponding figures in the drawings, which are not meant to be limiting.
Fig. 1 is a flowchart of a virtual character model learning method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of key points of each bone of a human body according to a first embodiment of the present invention;
FIG. 3 is a flowchart of an implementation of step 102 according to a first embodiment of the present invention;
fig. 4 is a schematic structural diagram of an electronic device according to a third embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in order to provide a thorough understanding of the present application; however, the claimed technical solution can also be implemented without these details, or with various changes and modifications based on the following embodiments. The embodiments are divided for convenience of description only, should not limit the specific implementation of the invention, and may be combined with and refer to one another where there is no contradiction.
The first embodiment of the invention relates to a virtual character model learning method applied to an electronic device such as a mobile phone or a computer. The virtual character model may be a 3D digital model, stored in the electronic device in advance or generated in real time as needed; this embodiment does not limit this specifically. This embodiment mainly describes the process by which the virtual character model learns the action of a target person: the similarity between the initially learned action and the target person's action is low, and it becomes higher and higher through learning. The implementation details below are provided only for ease of understanding and are not necessary for implementing this embodiment.
As shown in fig. 1, the virtual character model learning method in the present embodiment specifically includes:
step 101: and acquiring first skeleton posture information corresponding to the action of the target person in the current video image frame.
The first bone posture information may include the spatial coordinates of each bone key point of the target person in the current video image frame. Fig. 2 shows a schematic diagram of the bone key points of a human body, where each key point has its own number; in fig. 2 the key points are numbered 0 to 15, and their numbers and names are listed in Table 1:
TABLE 1

Number  Name          Number  Name                 Number  Name
0       Right ankle   6       Pelvis               12      Right shoulder
1       Right knee    7       Chest                13      Left shoulder
2       Right hip     8       Cervical vertebrae   14      Left elbow
3       Left hip      9       Head top             15      Left wrist
4       Left knee     10      Right wrist
5       Left ankle    11      Right elbow
Each motion of the target person corresponds to the spatial coordinates of each skeletal key point, and it can be understood that when the motion of the target person changes, the spatial coordinates of each skeletal key point also change. It should be noted that the schematic diagram of each skeletal key point of the human body is only an example in fig. 2, and the specific implementation is not limited thereto.
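For concreteness, the numbering in Table 1 can be written down as a small Python sketch; this is purely illustrative, since the patent does not prescribe any code, and the names and data layout here are assumptions:

```python
# Hypothetical encoding of the 16-keypoint skeleton of Table 1.
KEYPOINT_NAMES = {
    0: "right ankle", 1: "right knee", 2: "right hip",
    3: "left hip", 4: "left knee", 5: "left ankle",
    6: "pelvis", 7: "chest", 8: "cervical vertebrae",
    9: "head top", 10: "right wrist", 11: "right elbow",
    12: "right shoulder", 13: "left shoulder",
    14: "left elbow", 15: "left wrist",
}

# A pose for one video frame is a mapping: keypoint number -> (x, y, z).
Pose = dict[int, tuple[float, float, float]]

def make_pose(coords: list[tuple[float, float, float]]) -> Pose:
    """Pack a list of 16 (x, y, z) spatial coordinates into a pose dict."""
    assert len(coords) == len(KEYPOINT_NAMES)
    return dict(enumerate(coords))
```

When the target person's action changes, the 16 coordinate tuples in such a pose change with it, which is exactly the per-frame input that the following steps consume.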
In one example, the target person in the current video image frame may be a natural person who trains and teaches the virtual character model. While the natural person trains the model, an electronic device with a camera function, such as a mobile phone, can capture video of the person's motion. For example, if the target person is person A and person A trains the virtual character model to learn to dance, the mobile phone may capture video of person A dancing and then process the captured video with artificial-intelligence deep-learning techniques to obtain the first bone posture information corresponding to person A's action in the current video image frame, that is, the spatial coordinates of each bone key point corresponding to that action. In a specific implementation, the mobile phone can instead send the captured video to a server; the server processes the video, calculates the spatial coordinates of each bone key point corresponding to the target person's action in the current video image frame, and returns them to the mobile phone.
In another example, the target person in the current video image frame may be a person in a video file played by video-playing software on the mobile phone. The video file can be an offline file stored in the phone's memory or an online file obtained from a server. In the online case, when the user requests a specified video file through the video-playing software, the phone transmits the playing request to the server over the network, and the server returns the file's playing address so that the file can be played on the phone. If the currently played file is a fitness guidance video with one instructor, that instructor is the target person; if the video contains several instructors, one of them can be selected as the target person. The mobile phone can then obtain, from the played video, the spatial coordinates of each bone key point corresponding to the selected target person's action.
Step 102: obtain the bone posture adjustment information of the virtual character model corresponding to the current video image frame according to the first bone posture information and the second bone posture information.
The second bone posture information is the bone posture information of the virtual character model corresponding to the previous video image frame, and may include the spatial coordinates of each bone key point of the virtual character model for that frame. Referring to fig. 2, these coordinates can be understood as the bone key point positions the virtual character model presented when learning the target person's action in the previous video image frame.
In one example, the process of obtaining the bone posture adjustment information of the virtual character model corresponding to the current video image frame is shown in fig. 3 and includes the following steps:
step 1021: and calculating a first class of spatial orientation vectors of each two adjacent skeletal key points of the target person in the current video image frame according to the spatial coordinates of each skeletal key point in the first skeletal posture information.
Specifically, first the distance between each pair of adjacent bone key points of the target person in the current video image frame can be calculated. Referring to fig. 2, take the two bone key points numbered 0 and 1 as an example: the spatial coordinate of bone key point 0 is denoted $P_0 = (x_0, y_0, z_0)$ and that of bone key point 1 is denoted $P_1 = (x_1, y_1, z_1)$. The distance between $P_1$ and $P_0$ is

$$|P_1P_0| = \sqrt{(x_0 - x_1)^2 + (y_0 - y_1)^2 + (z_0 - z_1)^2}$$

Second, the first-class spatial orientation vector can be calculated from the distance between each pair of adjacent bone key points of the target person in the current video image frame and their spatial coordinates. For example, the first-class spatial orientation vector in which $P_1$ points at $P_0$ is the unit vector

$$\vec{D}_{01} = \frac{P_0 - P_1}{|P_1P_0|} = \left(\frac{x_0 - x_1}{|P_1P_0|},\ \frac{y_0 - y_1}{|P_1P_0|},\ \frac{z_0 - z_1}{|P_1P_0|}\right)$$

Referring to this calculation for the adjacent bone key points numbered 1 and 0, and to the distribution of bone key points shown in fig. 2, the first-class spatial orientation vectors of all pairs of adjacent bone key points of the target person in the current video image frame can be calculated in turn.
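As a sketch of step 1021 under the same assumptions as before, the distance between two adjacent bone key points and the first-class spatial orientation (unit) vector can be computed as follows; the tuple-based layout and function names are illustrative, not taken from the patent:

```python
import math

def distance(p, q):
    """Euclidean distance |PQ| between two 3D bone key points."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(p, q)))

def direction(p_from, p_to):
    """Unit spatial orientation vector in which p_from points at p_to."""
    d = distance(p_from, p_to)
    return tuple((b - a) / d for a, b in zip(p_from, p_to))
```

Applying `direction` to every adjacent pair in fig. 2 yields the full set of first-class spatial orientation vectors for the current frame.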
Step 1022: calculate the second-class spatial orientation vector of each pair of adjacent bone key points of the virtual character model corresponding to the previous video image frame from the spatial coordinates of the bone key points in the second bone posture information.
Specifically, the calculation of the second-class spatial orientation vector is similar to that of the first-class spatial orientation vector, and the formula given in step 1021 for the adjacent bone key points numbered 1 and 0 applies equally; the only difference is that the spatial coordinates used are those of the bone key points of the virtual character model corresponding to the previous video image frame. The description is therefore not repeated.
Step 1023: calculate the spatial orientation adjustment vector of each pair of adjacent bone key points of the virtual character model corresponding to the current video image frame from the first-class and second-class spatial orientation vectors.
Specifically, the desired pose fitting similarity between the virtual character model and the target person can be obtained; it represents how similar the action learned by the virtual character model should be to the actual action of the target person. The desired pose fitting similarity can be entered into the electronic device manually according to actual needs. In this embodiment, the spatial orientation adjustment vector of each pair of adjacent bone key points of the virtual character model corresponding to the current video image frame is calculated from the first-class spatial orientation vector, the second-class spatial orientation vector, and the pose fitting similarity.
In one example, the spatial orientation adjustment vector of each pair of adjacent bone key points of the virtual character model corresponding to the current video image frame can be calculated by the following formula:

$$\vec{A}_{ij} = \vec{E}_{ij} + prob \cdot (\vec{D}_{ij} - \vec{E}_{ij})$$

where $\vec{A}_{ij}$ is the calculated spatial orientation adjustment vector of two adjacent bone key points of the virtual character model corresponding to the current video image frame; $\vec{E}_{ij}$ is the second-class spatial orientation vector, i.e. the spatial orientation vector of the adjacent bone key points of the virtual character model in the previous video image frame; $prob$ is the pose fitting similarity; $\vec{D}_{ij}$ is the first-class spatial orientation vector, i.e. the spatial orientation vector of the adjacent bone key points of the target person in the current video image frame; and $i$ and $j$ are the sequence numbers of the two adjacent bone key points. For ease of understanding, take the two bone key points numbered 0 and 1, that is, $i = 0$ and $j = 1$:

$$\vec{A}_{01} = \vec{E}_{01} + prob \cdot (\vec{D}_{01} - \vec{E}_{01})$$

Referring to this calculation for the two bone key points numbered 0 and 1, the spatial orientation adjustment vectors of the other pairs of adjacent bone key points can be calculated in turn.
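A minimal sketch of step 1023, assuming the adjustment is the linear blend just described: with prob = 0 the model keeps its previous-frame direction, and with prob = 1 it copies the target person's current direction. The function name and tuple layout are illustrative assumptions:

```python
# Hypothetical step-1023 blend of the two classes of orientation vectors.
def adjust_direction(model_prev_dir, target_cur_dir, prob):
    """Move the model's previous-frame bone direction toward the target
    person's current-frame direction by the fitting similarity prob."""
    return tuple(
        e + prob * (d - e)
        for e, d in zip(model_prev_dir, target_cur_dir)
    )
```

A larger prob therefore makes the learned action track the target person more closely in a single frame, which is how the desired pose fitting similarity shapes the learning speed.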
Step 1024: obtain the bone posture adjustment information of the virtual character model corresponding to the current video image frame according to the spatial orientation adjustment vectors of the adjacent bone key points.
Specifically, the bone posture adjustment information of the virtual character model corresponding to the current video image frame can be obtained from the distance between each pair of adjacent bone key points and their spatial orientation adjustment vectors.
In one example, one of the bone key points of the virtual character model may be selected as the reference bone key point. The spatial coordinates of the bone key points of the virtual character model corresponding to the current video image frame are then calculated in sequence from the spatial coordinate of the reference bone key point, the distances between adjacent bone key points, and their spatial orientation adjustment vectors. For example, the spatial coordinate of each first bone key point adjacent to the reference bone key point may be calculated first, from the reference key point's spatial coordinate, the distance between the two points, and their spatial orientation adjustment vector. Each first bone key point is then taken as a new reference bone key point, and the spatial coordinates of all remaining bone key points are calculated in sequence following the adjacency between bone key points.
In one example, the spatial coordinates of the first bone key point of the virtual character model corresponding to the current video image frame are calculated according to the spatial coordinates of the reference bone key point, the distance between the reference bone key point and the first bone key point, and the spatial orientation adjustment vector of the reference bone key point and the first bone key point, by the following formula:

newQm = newQroot + dis(root,m) × adjustV(root,m)

wherein newQm is the spatial coordinate of the first bone key point of the virtual character model corresponding to the current video image frame, newQroot is the spatial coordinate of the reference bone key point, dis(root,m) is the distance between the reference bone key point and the first bone key point, adjustV(root,m) is the spatial orientation adjustment vector of the reference bone key point and the first bone key point, root is the serial number of the reference bone key point, and m is the serial number of the first bone key point. For example, referring to fig. 2, assuming that the selected reference bone key point is bone key point 8, the first bone key points may include bone key points 7, 9, 12, and 13 adjacent to bone key point 8, and the spatial coordinates of bone key points 7, 9, 12, and 13 of the virtual character model corresponding to the current video image frame may be calculated in turn by the following formulas:

newQ7 = newQ8 + dis(8,7) × adjustV(8,7)
newQ9 = newQ8 + dis(8,9) × adjustV(8,9)
newQ12 = newQ8 + dis(8,12) × adjustV(8,12)
newQ13 = newQ8 + dis(8,13) × adjustV(8,13)

Further, bone key point 12 can be used as a new reference bone key point, and the spatial coordinates of the adjacent bone key point 11 calculated by the following formula:

newQ11 = newQ12 + dis(12,11) × adjustV(12,11)

Then bone key point 11 can be used as a new reference bone key point, and the spatial coordinates of the adjacent bone key point 10 calculated by the following formula:

newQ10 = newQ11 + dis(11,10) × adjustV(11,10)

Similarly, with bone key point 7 as the reference bone key point, the spatial coordinates of the adjacent bone key point 6 can be calculated; with bone key point 6 as the reference bone key point, the spatial coordinates of the adjacent bone key points 2 and 3 can be calculated. In this way, the spatial coordinates of all bone key points of the virtual character model corresponding to the current video image frame can be obtained.
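The outward propagation described above — placing each bone key point at its neighbor's coordinate plus the bone length times the adjustment direction, then treating the newly placed point as the next reference — can be sketched as a breadth-first walk over the skeleton. The function name, the toy skeleton, and the data layout below are illustrative assumptions, not the patent's implementation:

```python
from collections import deque

import numpy as np

def propagate_keypoints(ref_id, ref_coord, neighbors, dis, adjust_v):
    """Walk outward from the reference bone key point, placing each adjacent
    key point at coords[child] = coords[parent] + distance * adjustment vector."""
    coords = {ref_id: np.asarray(ref_coord, dtype=float)}
    queue = deque([ref_id])
    while queue:
        parent = queue.popleft()
        for child in neighbors[parent]:
            if child in coords:  # this key point was already placed
                continue
            coords[child] = coords[parent] + dis[(parent, child)] * np.asarray(
                adjust_v[(parent, child)], dtype=float)
            queue.append(child)
    return coords

# Toy skeleton: reference key point 8 with neighbors 7 and 12; 11 hangs off 12.
neighbors = {8: [7, 12], 7: [8], 12: [8, 11], 11: [12]}
dis = {(8, 7): 2.0, (8, 12): 1.0, (12, 11): 1.5}
adjust_v = {(8, 7): [0, 1, 0], (8, 12): [1, 0, 0], (12, 11): [1, 0, 0]}
coords = propagate_keypoints(8, [0.0, 0.0, 0.0], neighbors, dis, adjust_v)
```

Because each child is placed exactly once relative to an already-placed parent, every key point's coordinate follows from the single preset reference coordinate, matching the chained calculation in the example above.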
Step 103: and driving the virtual character model according to the bone posture adjustment information so that the virtual character model can learn the action of the target character in the current video image frame.
Specifically, the bone pose adjustment information includes spatial coordinates of each bone key point of the virtual character model corresponding to the current video image frame, that is, the spatial coordinates of each bone key point calculated in step 102.
In one example, driving the virtual character model according to the bone pose adjustment information so that the virtual character model learns the action of the target person in the current video image frame can be understood as follows: the calculated spatial coordinates of each bone key point are used as input data for the virtual character model, which is then rendered and displayed. In the output, the spatial coordinates of the virtual character model's bone key points are adjusted to the input spatial coordinates, so that the pose presented by the bone key points resembles the action of the target person in the current video image frame.
The above examples in the present embodiment are only for convenience of understanding, and do not limit the technical aspects of the present invention.
Compared with the prior art, in this embodiment the virtual character model is always driven by bone pose adjustment information, which is obtained from the bone pose information of the virtual character model corresponding to the previous video image frame and the bone pose information corresponding to the target person's action in the current video image frame. In other words, the bone pose of the virtual character model in the current frame is always adjusted on the basis of its bone pose in the previous frame, which embodies the gradual process of the virtual character model learning the target person's action and enables interactive experiences such as training, teaching, and nurturing between the target person and the virtual character model.
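The per-frame update moves each bone's orientation from the model's previous-frame direction toward the target person's current-frame direction. The exact blending formula is not legible in this extract, so the linear blend below (and the function name) is an assumption chosen to match the described behavior: a higher pose-fitting similarity pulls the model closer to the target:

```python
import numpy as np

def blend_orientation(old_v, target_v, prob):
    """Blend the model's previous-frame bone direction toward the target
    person's current-frame direction. prob = 0 keeps the old pose; prob = 1
    copies the target pose. Renormalize so the result stays a unit direction."""
    old_v = np.asarray(old_v, dtype=float)
    target_v = np.asarray(target_v, dtype=float)
    adjusted = old_v + prob * (target_v - old_v)
    norm = np.linalg.norm(adjusted)
    return adjusted / norm if norm > 0.0 else adjusted
```

Applied to every pair of adjacent bone key points each frame, this yields the adjustment vectors that drive the model one small step toward the target person's pose.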
A second embodiment of the present invention relates to a virtual character model learning method. The second embodiment is substantially the same as the first embodiment and mainly differs in that: in the first embodiment, the pose fitting similarity can be manually input into the electronic device according to actual needs, whereas in the second embodiment, the pose fitting similarity may be obtained from a preset correspondence between learning duration and pose fitting similarity. Implementation details of the virtual character model learning method according to this embodiment are described below; these details are provided only to aid understanding and are not required to implement this embodiment.
Specifically, the desired pose fitting similarity between the virtual character model and the target person may be obtained as follows: first obtain the desired learning duration of the virtual character model, and then obtain the pose fitting similarity corresponding to that learning duration according to a preset correspondence between learning duration and pose fitting similarity. In the preset correspondence, the longer the learning duration, the higher the corresponding pose fitting similarity can be; that is, the longer the virtual character model learns, the more closely its learned action resembles that of the target person. The learning duration can be selected and input according to actual needs, for example, determined by the difficulty of the action to be learned. Understandably, the more difficult the action, the longer the determined learning duration can be, so as to ensure the learning effect of the virtual character model.
In one example, the preset correspondence between learning duration and pose fitting similarity may further be such that the pose fitting similarity increases continuously with the learning duration. In other words, during the learning process of the virtual character model, the pose fitting similarity grows dynamically as learning time accumulates, which more closely simulates an actual learning process: from a poor imitation at the start to an increasingly faithful one.
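A continuously increasing correspondence of this kind can be sketched as a monotone schedule from elapsed learning time to similarity. The saturating-exponential shape and the constant 3.0 below are illustrative assumptions; the patent only requires that similarity grow with learning duration:

```python
import math

def pose_fit_similarity(elapsed, total):
    """Map accumulated learning time to a pose-fitting similarity in [0, 1)
    that rises monotonically: the longer the model has learned, the more
    closely its pose is required to fit the target person's."""
    if total <= 0:
        raise ValueError("total learning duration must be positive")
    # Saturating exponential: reaches ~95% of full similarity at elapsed == total.
    return 1.0 - math.exp(-3.0 * elapsed / total)
```

Querying this schedule each frame with the accumulated learning time yields the dynamically increasing similarity described above.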
In one example, the target person may teach the virtual character model to dance; during teaching, the target person can train the virtual character model by repeating a complete dance multiple times until the determined learning duration is reached. In another example, the target person may decompose the dance into individual movements during teaching and train the virtual character model by repeating each decomposed movement for a period of time until the determined learning duration is reached.
The above examples in the present embodiment are only for convenience of understanding, and do not limit the technical aspects of the present invention.
The steps of the above methods are divided only for clarity of description; in implementation, steps may be combined into a single step, or a step may be split into multiple steps, so long as the same logical relationship is preserved, all of which fall within the protection scope of this patent. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes without altering the core design of the algorithm or process, likewise falls within the protection scope of this patent.
The third embodiment of the present invention relates to a server, as shown in fig. 4, including at least one processor 201; and a memory 202 communicatively coupled to the at least one processor 201; the memory 202 stores instructions executable by the at least one processor 201, and the instructions are executed by the at least one processor 201 to enable the at least one processor 201 to execute the virtual character model learning method according to the first or second embodiment.
The memory 202 and the processor 201 are connected by a bus, which may comprise any number of interconnected buses and bridges linking together various circuits of the processor 201 and the memory 202. The bus may also connect various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore not described further herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other apparatus over a transmission medium. Data processed by the processor 201 is transmitted over a wireless medium through an antenna, which also receives data and passes it to the processor 201.
The processor 201 is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory 202 may be used to store data used by the processor 201 in performing operations.
A fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements the methods of the above embodiments.
That is, as those skilled in the art can understand, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware. The program is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (11)

1. A method for learning a virtual character model, comprising:
acquiring first skeleton posture information corresponding to the action of a target person in a current video image frame;
obtaining bone posture adjustment information of a virtual character model corresponding to the current video image frame according to the first bone posture information and the second bone posture information; the second skeleton posture information is the skeleton posture information of the virtual character model corresponding to the previous video image frame;
and driving the virtual character model according to the bone posture adjustment information so that the virtual character model can learn the action of the target character in the current video image frame.
2. The method of learning a virtual character model according to claim 1, wherein the first skeletal pose information includes spatial coordinates of each skeletal key point of the target person in the current video image frame, and the second skeletal pose information includes spatial coordinates of each skeletal key point of the virtual character model corresponding to the previous video image frame;
the obtaining of the bone posture adjustment information of the virtual character model corresponding to the current video image frame according to the first bone posture information and the second bone posture information includes:
calculating a first class of space orientation vectors of each two adjacent skeleton key points of a target person in the current video image frame according to the space coordinates of each skeleton key point in the first skeleton posture information;
calculating a second-class spatial orientation vector of each two adjacent skeleton key points of the virtual character model corresponding to the previous video image frame according to the spatial coordinates of each skeleton key point in the second skeleton posture information;
calculating a spatial orientation adjustment vector of each two adjacent bone key points of the virtual character model corresponding to the current video image frame according to the first class of spatial orientation vector and the second class of spatial orientation vector;
and obtaining bone posture adjustment information of the virtual character model corresponding to the current video image frame according to the space pointing adjustment vector of each two adjacent bone key points.
3. The method for learning virtual character model according to claim 2, wherein before the calculating the spatial orientation adjustment vector of each two adjacent skeletal key points of the virtual character model corresponding to the current video image frame according to the first class spatial orientation vector and the second class spatial orientation vector, the method further comprises:
obtaining the expected pose fitting similarity of the virtual character model and the target character;
the calculating, according to the first class of spatial direction vectors and the second class of spatial direction vectors, spatial direction adjustment vectors of two adjacent skeletal key points of the virtual character model corresponding to the current video image frame, includes:
and calculating the space direction adjustment vectors of every two adjacent bone key points of the virtual character model corresponding to the current video image frame according to the acquired attitude fitting similarity, the first type of space direction vectors and the second type of space direction vectors.
4. The method of learning a virtual character model according to claim 3, wherein the obtaining of the desired similarity of the pose fit of the virtual character model to the target character comprises:
acquiring the expected learning duration of the virtual character model;
and acquiring the posture fitting similarity corresponding to the learning duration of the virtual character model according to the corresponding relation between the preset learning duration and the posture fitting similarity.
5. The method for learning a virtual character model according to claim 3 or 4, wherein the spatial orientation adjustment vector of each two adjacent skeletal key points of the virtual character model corresponding to the current video image frame is calculated according to the obtained pose fitting similarity, the first class spatial orientation vector and the second class spatial orientation vector, specifically by the following formula:

adjustV(i,j) = oldV(i,j) + prob × (V(i,j) − oldV(i,j))

wherein adjustV(i,j) is the spatial orientation adjustment vector of two adjacent skeletal key points of the virtual character model corresponding to the current video image frame, oldV(i,j) is the second class spatial orientation vector, prob is the pose fitting similarity, V(i,j) is the first class spatial orientation vector, and i and j are serial numbers representing two adjacent skeletal key points.
6. The method of claim 2, wherein before the obtaining the bone pose adjustment information of the virtual character model corresponding to the current video image frame according to the spatial orientation adjustment vector of each of the two adjacent bone key points, the method further comprises:
obtaining the distance between each two adjacent skeleton key points;
the obtaining of the bone posture adjustment information of the virtual character model corresponding to the current video image frame according to the spatial orientation adjustment vector of each of the two adjacent bone key points specifically includes:
and obtaining the bone posture adjustment information of the virtual character model corresponding to the current video image frame according to the distance between each two adjacent bone key points and the space pointing adjustment vector of each two adjacent bone key points.
7. The virtual character model learning method of claim 6, wherein the bone pose adjustment information comprises: the spatial coordinates of all skeleton key points of the virtual character model corresponding to the current video image frame;
the obtaining of the bone posture adjustment information of the virtual character model corresponding to the current video image frame according to the distance between each two adjacent bone key points and the spatial orientation adjustment vector of each two adjacent bone key points includes:
sequentially calculating the space coordinates of each bone key point of the virtual character model corresponding to the current video image frame according to the preset space coordinates of the reference bone key points, the distance between each two adjacent bone key points and the space pointing adjustment vector of each two adjacent bone key points; wherein the reference skeletal key point is one of the skeletal key points of the virtual character model.
8. The method for learning a virtual character model according to claim 7, wherein the sequentially calculating the spatial coordinates of the bone key points of the virtual character model corresponding to the current video image frame according to the preset spatial coordinates of the reference bone key points, the distance between each two adjacent bone key points, and the spatial orientation adjustment vector of each two adjacent bone key points comprises:
calculating the space coordinate of the first skeleton key point of the virtual character model corresponding to the current video image frame according to the space coordinate of the reference skeleton key point, the distance between the reference skeleton key point and the first skeleton key point and the space pointing adjustment vector of the reference skeleton key point and the first skeleton key point; wherein the first bone key point and the reference bone key point are two adjacent bone key points;
and taking the first skeleton key point as the reference skeleton key point, and sequentially calculating the space coordinates of all the rest skeleton key points of the virtual character model corresponding to the current video image frame according to the adjacent relation among all the skeleton key points.
9. The method for learning a virtual character model according to claim 8, wherein the spatial coordinates of the first skeletal key point of the virtual character model corresponding to the current video image frame are calculated according to the spatial coordinates of the reference skeletal key point, the distance between the reference skeletal key point and the first skeletal key point, and the spatial orientation adjustment vector of the reference skeletal key point and the first skeletal key point, specifically by the following formula:

newQm = newQroot + dis(root,m) × adjustV(root,m)

wherein newQm is the spatial coordinate of the first skeletal key point of the virtual character model corresponding to the current video image frame, newQroot is the spatial coordinate of the reference skeletal key point, dis(root,m) is the distance between the reference skeletal key point and the first skeletal key point, adjustV(root,m) is the spatial orientation adjustment vector of the reference skeletal key point and the first skeletal key point, root is the serial number of the reference skeletal key point, and m is the serial number of the first skeletal key point.
10. An electronic device, comprising:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a method of learning a virtual character model according to any of claims 1 to 9.
11. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the virtual character model learning method of any one of claims 1 to 9.
CN201910758741.9A 2019-08-16 2019-08-16 Learning method for virtual character model, electronic device, and readable storage medium Active CN110675474B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910758741.9A CN110675474B (en) 2019-08-16 2019-08-16 Learning method for virtual character model, electronic device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910758741.9A CN110675474B (en) 2019-08-16 2019-08-16 Learning method for virtual character model, electronic device, and readable storage medium

Publications (2)

Publication Number Publication Date
CN110675474A true CN110675474A (en) 2020-01-10
CN110675474B CN110675474B (en) 2023-05-02

Family

ID=69075361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910758741.9A Active CN110675474B (en) 2019-08-16 2019-08-16 Learning method for virtual character model, electronic device, and readable storage medium

Country Status (1)

Country Link
CN (1) CN110675474B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260764A (en) * 2020-02-04 2020-06-09 腾讯科技(深圳)有限公司 Method, device and storage medium for making animation
CN111383309A (en) * 2020-03-06 2020-07-07 腾讯科技(深圳)有限公司 Skeleton animation driving method, device and storage medium
CN111639612A (en) * 2020-06-04 2020-09-08 浙江商汤科技开发有限公司 Posture correction method and device, electronic equipment and storage medium
CN111652983A (en) * 2020-06-10 2020-09-11 上海商汤智能科技有限公司 Augmented reality AR special effect generation method, device and equipment
CN111885419A (en) * 2020-07-24 2020-11-03 青岛海尔科技有限公司 Posture processing method and device, storage medium and electronic device
CN112348931A (en) * 2020-11-06 2021-02-09 网易(杭州)网络有限公司 Foot reverse motion control method, device, equipment and storage medium
CN113791687A (en) * 2021-09-15 2021-12-14 咪咕视讯科技有限公司 Interaction method and device in VR scene, computing equipment and storage medium
CN114862992A (en) * 2022-05-19 2022-08-05 北京百度网讯科技有限公司 Virtual digital human processing method, model training method and device thereof
CN114900738A (en) * 2022-06-02 2022-08-12 咪咕文化科技有限公司 Film viewing interaction method and device and computer readable storage medium
WO2023185703A1 (en) * 2022-03-28 2023-10-05 百果园技术(新加坡)有限公司 Motion control method, apparatus and device for virtual character, and storage medium
US11809616B1 (en) 2022-06-23 2023-11-07 Qing Zhang Twin pose detection method and system based on interactive indirect inference

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101154289A (en) * 2007-07-26 2008-04-02 上海交通大学 Method for tracing three-dimensional human body movement based on multi-camera
US20120106819A1 (en) * 2009-04-25 2012-05-03 Siemens Aktiengesellschaft method and a system for assessing the relative pose of an implant and a bone of a creature
CN108876815A (en) * 2018-04-28 2018-11-23 深圳市瑞立视多媒体科技有限公司 Bone computation method for attitude, personage's dummy model driving method and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101154289A (en) * 2007-07-26 2008-04-02 上海交通大学 Method for tracing three-dimensional human body movement based on multi-camera
US20120106819A1 (en) * 2009-04-25 2012-05-03 Siemens Aktiengesellschaft method and a system for assessing the relative pose of an implant and a bone of a creature
CN108876815A (en) * 2018-04-28 2018-11-23 深圳市瑞立视多媒体科技有限公司 Bone computation method for attitude, personage's dummy model driving method and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI Hongbo et al., "Virtual character control method based on skeleton information", Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition) *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260764A (en) * 2020-02-04 2020-06-09 腾讯科技(深圳)有限公司 Method, device and storage medium for making animation
CN111383309B (en) * 2020-03-06 2023-03-17 腾讯科技(深圳)有限公司 Skeleton animation driving method, device and storage medium
CN111383309A (en) * 2020-03-06 2020-07-07 腾讯科技(深圳)有限公司 Skeleton animation driving method, device and storage medium
CN111639612A (en) * 2020-06-04 2020-09-08 浙江商汤科技开发有限公司 Posture correction method and device, electronic equipment and storage medium
CN111652983A (en) * 2020-06-10 2020-09-11 上海商汤智能科技有限公司 Augmented reality AR special effect generation method, device and equipment
CN111885419A (en) * 2020-07-24 2020-11-03 青岛海尔科技有限公司 Posture processing method and device, storage medium and electronic device
CN112348931A (en) * 2020-11-06 2021-02-09 网易(杭州)网络有限公司 Foot reverse motion control method, device, equipment and storage medium
CN112348931B (en) * 2020-11-06 2024-01-30 网易(杭州)网络有限公司 Foot reverse motion control method, device, equipment and storage medium
CN113791687A (en) * 2021-09-15 2021-12-14 咪咕视讯科技有限公司 Interaction method and device in VR scene, computing equipment and storage medium
CN113791687B (en) * 2021-09-15 2023-11-14 咪咕视讯科技有限公司 Interaction method, device, computing equipment and storage medium in VR scene
WO2023185703A1 (en) * 2022-03-28 2023-10-05 百果园技术(新加坡)有限公司 Motion control method, apparatus and device for virtual character, and storage medium
CN114862992A (en) * 2022-05-19 2022-08-05 北京百度网讯科技有限公司 Virtual digital human processing method, model training method and device thereof
CN114900738A (en) * 2022-06-02 2022-08-12 咪咕文化科技有限公司 Film viewing interaction method and device and computer readable storage medium
WO2023232103A1 (en) * 2022-06-02 2023-12-07 咪咕文化科技有限公司 Film-watching interaction method and apparatus, and computer-readable storage medium
US11809616B1 (en) 2022-06-23 2023-11-07 Qing Zhang Twin pose detection method and system based on interactive indirect inference

Also Published As

Publication number Publication date
CN110675474B (en) 2023-05-02

Similar Documents

Publication Publication Date Title
CN110675474B (en) Learning method for virtual character model, electronic device, and readable storage medium
JP7001841B2 (en) Image processing methods and equipment, image devices and storage media
US20240153187A1 (en) Virtual character posture adjustment
Seo et al. Anatomy builder VR: Applying a constructive learning method in the virtual reality canine skeletal system
CN110827383B (en) Attitude simulation method and device of three-dimensional model, storage medium and electronic equipment
US9330502B2 (en) Mixed reality simulation methods and systems
US9520072B2 (en) Systems and methods for projecting images onto an object
CN112437950A (en) Skeletal system for animating virtual head portraits
CN108389249A (en) A kind of spaces the VR/AR classroom of multiple compatibility and its construction method
CN111223170A (en) Animation generation method and device, electronic equipment and storage medium
Papagiannakis et al. Transforming medical education and training with vr using mages
WO2022051460A1 (en) 3d asset generation from 2d images
WO2023185703A1 (en) Motion control method, apparatus and device for virtual character, and storage medium
CN110782482A (en) Motion evaluation method and device, computer equipment and storage medium
CN111967407A (en) Action evaluation method, electronic device, and computer-readable storage medium
CN112070865A (en) Classroom interaction method and device, storage medium and electronic equipment
CN116958336A (en) Virtual character movement redirection method and device, storage medium and electronic equipment
CN114310870A (en) Intelligent agent control method and device, electronic equipment and storage medium
CN116977506A (en) Model action redirection method, device, electronic equipment and storage medium
US20220262075A1 (en) Deep Learning of Biomimetic Sensorimotor Control for Biomechanical Model Animation
CN113902845A (en) Motion video generation method and device, electronic equipment and readable storage medium
Meyer et al. Juggling 4.0: Learning complex motor skills with augmented reality through the example of juggling
CN114356100B (en) Body-building action guiding method, body-building action guiding device, electronic equipment and storage medium
Seo et al. Anatomy builder VR: comparative anatomy lab promoting spatial visualization through constructionist learning
WO2022209220A1 (en) Image processing device, image processing method, and recording medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant