US20230394735A1 - Enhanced animation generation based on video with local phase - Google Patents

Enhanced animation generation based on video with local phase

Info

Publication number
US20230394735A1
US20230394735A1 (application US 18/329,339)
Authority
US
United States
Prior art keywords
frames
motion
machine learning
motion capture
learning model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/329,339
Inventor
Mingyi Shi
Yiwei Zhao
Wolfram Sebastian Starke
Mohsen Sardari
Navid Aghdaie
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electronic Arts Inc
Original Assignee
Electronic Arts Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electronic Arts Inc filed Critical Electronic Arts Inc
Priority to US18/329,339 priority Critical patent/US20230394735A1/en
Publication of US20230394735A1 publication Critical patent/US20230394735A1/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • G06T 7/70: Determining position or orientation of objects or cameras
    • A: HUMAN NECESSITIES
    • A63: SPORTS; GAMES; AMUSEMENTS
    • A63F: CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00: Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/20: Input arrangements for video game devices
    • A63F 13/21: Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F 13/213: Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A63F 2300/00: Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F 2300/60: Methods for processing data by generating or executing the game program
    • A63F 2300/66: Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F 2300/6607: Methods for processing data by generating or executing the game program for rendering three dimensional images for animating game characters, e.g. skeleton kinematics
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30196: Human being; Person
    • G06T 2207/30221: Sports video; Sports image

Definitions

  • the present disclosure relates to systems and techniques for animation generation. More specifically, this disclosure relates to machine learning techniques for dynamically generating animation of characters from motion capture video.
  • Electronic games are increasingly becoming more realistic due to an increase in available processing resources. This increase in realism may allow for more realistic gameplay experiences. For example, elements that form an in-game world, such as characters, may be more realistically presented. In this example, the elements may be increasingly rendered at higher resolutions, with more detailed textures, with more detailed underlying meshes, and so on. While this added realism may be beneficial to an end-user of an electronic game, it may place a substantial burden on electronic game developers. As an example, electronic game developers may be required to create very rich, and detailed, models of characters. As another example, electronic game designers may be required to create fluid, lifelike, movements of the characters.
  • characters may be designed to realistically adjust their arms, legs, and so on, while traversing an in-game world. In this way, the characters may walk, run, jump, and so on, in a lifelike manner.
  • substantial time may be spent ensuring that the characters appear to mimic real-world sports players.
  • electronic game designers may spend substantial time fine-tuning movements of an underlying character model.
  • Movement of a character model may be, at least in part, implemented based on movement of an underlying skeleton.
  • a skeleton may include a multitude of objects (e.g., bones or joints) which may represent a portion of the character model.
  • a first object may be a finger while a second object may correspond to a wrist.
  • the skeleton may therefore represent an underlying form on which the character model is built. In this way, movement of the skeleton may cause a corresponding adjustment of the character model.
  • an electronic game designer may therefore adjust positions of the above-described objects included in the skeleton. For example, the electronic game designer may create realistic running via adjustment of specific objects which form a character model's legs. This hand-tuned technique to enable movement of a character results in substantial complexity and usage of time.
  • realistic motion may be rapidly generated for real life character models.
  • the realistic motion can be configured for use in electronic games.
  • a machine learning model may be trained based on motion capture information to generate local motion phase.
  • Pose information can be determined based on video input.
  • a window of frames can be used to predict the next frame and a predicted local motion phase for the next frame.
  • the window of frames can be updated to include the next frame and the predicted local motion phase to be used to predict the following frame.
  • the character animations can move far more smoothly than in traditional systems, and the dynamic animation generation system can improve the quality of the animations when the initial pose prediction is missing pose information in one or more frames of the real life video.
  • One embodiment discloses a computer-implemented method for dynamically generating animation of characters from real life motion capture video, the method comprising: accessing motion capture video, the motion capture video including a motion capture actor in motion; inputting the motion capture video to a first neural network; receiving pose information of the motion capture actor for a plurality of frames in the motion capture video from the first neural network; identifying a first window of frames of the motion capture video, wherein the first window of frames comprises a current frame, one or more past frames to the current frame, and one or more future frames to the current frame; inputting the first window of frames of the motion capture video to a second neural network, wherein the second neural network predicts the next frame from the current frame; receiving as output of the second neural network a first predicted frame and a first local motion phase corresponding to the first predicted frame, wherein the first predicted frame comprises the predicted frame following the current frame; identifying a second window of frames, wherein the second window of frames comprises the generated first predicted frame; inputting the second window of frames and the first local motion phase to the second neural network; and receiving as output of the second neural network a second predicted frame and a second local motion phase corresponding to the second predicted frame, wherein the second predicted frame comprises the predicted frame following the first predicted frame.
  • the computer-implemented method further comprises overlaying the pose information on the motion capture video to generate a modified motion capture video, wherein identifying a first window of frames comprises a first window of frames of the modified motion capture video.
  • the motion capture video comprises a video of a real-life person in motion.
  • the pose information comprises velocity information corresponding to joints of the motion capture actor.
  • the second window of frames comprises one or more past frames to the first predicted frame, and one or more future frames to the first predicted frame.
  • the second window of frames drops the oldest frame from the first window of frames.
  • the first window of frames comprises sampled frames of the motion capture video at a predefined time threshold.
  • the first window of frames comprises a current frame, and the same number of past frames and future frames to the current frame, wherein the second window of frames drops the oldest frame from the first window of frames.
  • the motion capture video is captured from a camera on a user's mobile device.
  • the motion capture video comprises a video of a real-life sporting event.
  • the first local motion phase includes phase information for each joint of the motion capture actor.
  • the first local motion phase includes phase information for each bone of the motion capture actor.
  • Some embodiments include a system comprising one or more processors and non-transitory computer storage media storing instructions that when executed by the one or more processors, cause the one or more processors to perform operations comprising: accessing motion capture video, the motion capture video including a motion capture actor in motion; inputting the motion capture video to a first neural network; receiving pose information of the motion capture actor for a plurality of frames in the motion capture video from the first neural network; identifying a first window of frames of the motion capture video, wherein the first window of frames comprises a current frame, one or more past frames to the current frame, and one or more future frames to the current frame; inputting the first window of frames of the motion capture video to a second neural network, wherein the second neural network predicts the next frame from the current frame; receiving as output of the second neural network a first predicted frame and a first local motion phase corresponding to the first predicted frame, wherein the first predicted frame comprises the predicted frame following the current frame; identifying a second window of frames, wherein the second window of frames comprises the generated first predicted frame; inputting the second window of frames and the first local motion phase to the second neural network; and receiving as output of the second neural network a second predicted frame and a second local motion phase corresponding to the second predicted frame, wherein the second predicted frame comprises the predicted frame following the first predicted frame.
  • the operations further comprise overlaying the pose information on the motion capture video to generate a modified motion capture video, wherein identifying a first window of frames comprises a first window of frames of the modified motion capture video.
  • the motion capture video comprises a video of a real-life person in motion.
  • the pose information comprises velocity information corresponding to joints of the motion capture actor.
  • the second neural network comprises a convolutional neural network.
  • the second neural network comprises an LSTM neural network.
  • Some embodiments include a non-transitory computer storage media storing instructions that when executed by a system of one or more processors, cause the one or more processors to perform operations comprising: accessing motion capture video, the motion capture video including a motion capture actor in motion; inputting the motion capture video to a first neural network; receiving pose information of the motion capture actor for a plurality of frames in the motion capture video from the first neural network; identifying a first window of frames of the motion capture video, wherein the first window of frames comprises a current frame, one or more past frames to the current frame, and one or more future frames to the current frame; inputting the first window of frames of the motion capture video to a second neural network, wherein the second neural network predicts the next frame from the current frame; receiving as output of the second neural network a first predicted frame and a first local motion phase corresponding to the first predicted frame, wherein the first predicted frame comprises the predicted frame following the current frame; identifying a second window of frames, wherein the second window of frames comprises the generated first predicted frame; inputting the second window of frames and the first local motion phase to the second neural network; and receiving as output of the second neural network a second predicted frame and a second local motion phase corresponding to the second predicted frame, wherein the second predicted frame comprises the predicted frame following the first predicted frame.
  • the operations further comprise overlaying the pose information on the motion capture video to generate a modified motion capture video, wherein identifying a first window of frames comprises a first window of frames of the modified motion capture video.
  • FIGS. 1A-1H illustrate example animation generation by traditional systems according to some embodiments.
  • FIGS. 2A-2H illustrate example animation generation by the dynamic animation generation system according to some embodiments.
  • FIG. 3A illustrates an embodiment of a sliding window of past pose data used to predict first pose data in the current frame, and a new sliding window that is adjusted by one frame to predict pose data for the next frame according to some embodiments.
  • FIG. 3B illustrates a flow diagram of the dynamic animation generation system applying the first and second neural network to generate predicted next frame data according to some embodiments.
  • FIGS. 4A, 4B, 4C, and 4D illustrate embodiments of differences with the local motion phase not in use, and with local motion phase in use, according to some embodiments.
  • FIGS. 5A, 5B, 5C, and 5D illustrate additional embodiments of differences with the local motion phase not in use, and with local motion phase in use, according to some embodiments.
  • FIG. 6 illustrates an embodiment of a computing device that may implement aspects of the present disclosure according to some embodiments.
  • a system described herein may implement a machine learning model to generate local phase information based on analyses of motion capture information.
  • the dynamic animation generation system can then pass a sliding window of frames through a second machine learning model to generate the next predicted frame and the local motion phase associated with that frame.
  • the dynamic animation generation system can slide the window forward in time to include the next predicted frame and its local motion phase while dropping the oldest frame, and apply the new window to the neural network to generate the frame that follows, along with its local motion phase.
  • the system may perform substantially automated analyses of the motion capture information such that complex machine learning labeling processes may be avoided.
  • the dynamic animation generation system can combine local motion phase techniques with human motion reconstruction from captured real life video. While electronic games are described, it may be appreciated that the techniques described herein may be applied generally to movement of character models. For example, animated content (e.g., TV shows, movies) may employ the techniques described herein.
  • a first example technique in these traditional systems to, at least in part, automate generation of character motion may include using software to automatically adjust a skeleton.
  • templates of running may be predefined.
  • a designer may select a running template which may cause adjustment of the joints on a skeleton.
  • this first example technique may lack the realism of real-world movement.
  • the quality of the animation is limited to the quality of these templates.
  • the prediction of frames in the human motion is also limited to the type of movement in these templates.
  • traditional systems that use templates may generate animations of running that are very similar to these templates.
  • Motion may be defined, at least in part, based on distinct poses of an in-game character.
  • each pose may represent a discrete sample of the motion to be performed by the in-game character.
  • the pose may identify positions of bones or joints of the in-game character.
  • each pose may represent a snapshot of the running.
  • a first frame generated by an electronic game may include the in-game character with both feet on a surface within the game world.
  • a second frame may include the in-game character beginning to move one of the feet upwards. It may be appreciated that subsequent frames may include the in-game character moving forward in a running motion.
  • a motion capture studio may be used to learn the realistic gait of an actor as he/she moves about the motion capture studio.
  • Specific portions of the actor such as joints or bones, may be monitored during this movement. Subsequently, movement of these portions may be extracted from image or video data of the actor. This movement may then be translated onto a skeleton or rig for use as an underlying framework of one or more in-game characters.
  • the skeleton or rig may include bones, which may be adjusted based on the motion capture images or video. In this way, the skeleton or rig may be animated to reproduce motion performed by the actor.
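  • As a concrete illustration of how tracked joint rotations could drive such a skeleton or rig, the short forward-kinematics sketch below (Python/NumPy) turns per-joint local rotations and bone offsets into world-space joint positions. The skeleton layout, joint count, and values are illustrative assumptions and not part of the patent.

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z-axis; a planar rotation is enough for this sketch."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def forward_kinematics(parents, offsets, local_angles):
    """Compute world positions for each joint of a simple skeleton.

    parents[i]      -- index of joint i's parent (-1 for the root)
    offsets[i]      -- bone offset of joint i expressed in its parent's frame
    local_angles[i] -- local rotation of joint i in radians (about z)
    """
    n = len(parents)
    world_rot = [np.eye(3) for _ in range(n)]
    world_pos = np.zeros((n, 3))
    for i in range(n):
        local = rot_z(local_angles[i])
        if parents[i] == -1:
            world_rot[i] = local
            world_pos[i] = offsets[i]
        else:
            p = parents[i]
            world_rot[i] = world_rot[p] @ local
            world_pos[i] = world_pos[p] + world_rot[p] @ offsets[i]
    return world_pos

# Tiny three-joint "leg" (hip -> knee -> ankle) with purely illustrative values.
parents = [-1, 0, 1]
offsets = np.array([[0.0, 1.0, 0.0], [0.0, -0.5, 0.0], [0.0, -0.5, 0.0]])
angles = np.radians([10.0, 25.0, -15.0])
print(forward_kinematics(parents, offsets, angles))
```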
  • while motion capture studios allow for realistic motion, they are limited in the types of motion able to be reproduced.
  • the above-described skeleton may be animated to reproduce motions which were specifically performed by the actor.
  • Other motions may thus need to be manually created by an electronic game designer.
  • a real-life basketball player may be used as an actor to perform common basketball motions. While this actor may perform a wide breadth of motions typically performed during a basketball game, as may be appreciated there are other motions which will not be recorded.
  • these other motions may be produced naturally by the actor during a real-world basketball game depending on locations of opponents, the actor's current stamina level, a location of the actor with respect to the basketball court, and so on.
  • the techniques described herein allow for the rapid generation of character animation based on automated analyses of motion capture information.
  • an actor may be placed in a motion capture studio or the dynamic animation generation system may receive data on a real life soccer game. The actor may then perform different movements, and movement of different portions of the actor (e.g., joints) may be stored by a system. Additionally, contact with an external environment may be recorded. Thus, the specific foot fall pattern used by an upper echelon boxer or basketball player may be recorded. Additionally, the specific contact made by an actor's hands with respect to a basketball, football, and so on, may be recorded. This recorded information may be used to increase a realism associated with animation generation.
  • motion can be generated for biped and/or human characters. In some embodiments, motion can be generated for quadruped characters.
  • the dynamic animation generation system can improve on the quality of human reconstruction by combining human video input with local motion phase information.
  • the dynamic animation generation system can first predict rough motion in real life video by applying real life capture data in a first model, such as a neural network, to receive the rough motion.
  • the rough motion can include pose data, such as local motion phase.
  • the machine learning model may be trained using local phase information extracted based on how individual body parts of a motion capture actor contact external objects. This phase information may therefore represent local bone and/or joint phases corresponding to bones (e.g., arms, legs, hands) and/or joints (e.g., elbows, knees, knuckles) of the actor, and may be used to enhance the machine learning model's temporal alignment, and segmentation, of realistic motion.
  • the model may allow for enhanced nuance associated with the animation.
  • a real-life wrestler may be used as an actor.
  • video of the wrestler may be recorded which depicts the wrestler moving about a ring, interacting with an opponent, performing different moves, and so on.
  • the machine learning model may then be trained based on this video, such that the model can reproduce the highly stylized, and personal, movement of the wrestler.
  • the dynamic animation generation system can process the output of the first model in a second model, such as an autoregression model, which can conditionally update the poses for the entire sequence going forward.
  • the second step can include a model that applies a sliding window.
  • the initial sliding window can include the rough motion data outputted by the first neural network.
  • the model can apply the initial sliding window to predict a predicted pose and local phase information for the next window. Then the window can slide one frame forward which includes the predicted pose and local phase information, and the model can predict the next pose and local phase information.
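  • A minimal sketch of that two-step flow is shown below in Python/NumPy. The function names, array shapes, and stand-in models are assumptions made for illustration rather than the patent's actual networks; the sketch only shows how rough per-frame poses from a first model could be threaded through an autoregressive sliding-window predictor that also carries a local phase forward.

```python
import numpy as np

def estimate_rough_poses(frames, first_model):
    """Stage 1 (stand-in): per-frame rough pose estimation by the first model."""
    return np.stack([first_model(frame) for frame in frames])

def refine_sequence(rough_poses, second_model, past=2, future=2):
    """Stage 2 (stand-in): autoregressive refinement with a sliding window and local phase."""
    poses = rough_poses.copy()
    phase = np.zeros(poses.shape[1])                     # assumed per-joint phase vector
    for t in range(past, len(poses) - future - 1):
        window = poses[t - past : t + future + 1]        # past frames, current frame, future frames
        next_pose, phase = second_model(window, phase)   # predicted frame t+1 and its local phase
        poses[t + 1] = next_pose                         # the next window is built around this prediction
    return poses

# Purely illustrative stand-ins so the sketch runs end to end.
NUM_JOINTS = 24
frames = [np.random.rand(64, 64, 3) for _ in range(30)]            # dummy video frames
first_model = lambda frame: np.random.rand(NUM_JOINTS)             # pretend per-frame pose vector
second_model = lambda window, phase: (window[len(window) // 2], phase)
refined = refine_sequence(estimate_rough_poses(frames, first_model), second_model)
print(refined.shape)
```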
  • FIGS. 1A-1H illustrate example animation generation by a first neural network to generate rough motions according to some embodiments.
  • the rough motions can include pose data for each frame based on the video.
  • the rough motion data can include a multidimensional signal that includes joint information, such as rotation information for each joint of the human.
  • the rough motions can include calculations for each joint and for each frame in the video.
  • the pose data is overlaid on top of the original video input to generate a modified video input.
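  • One plausible way to build such a modified video input, assuming the first model also provides 2D joint coordinates per frame, is to draw the pose onto each frame with OpenCV. The bone list, colors, and file name below are illustrative assumptions, not the patent's implementation.

```python
import cv2
import numpy as np

# Hypothetical bone list as (parent_joint, child_joint) index pairs.
BONES = [(0, 1), (1, 2), (2, 3)]

def overlay_pose(frame, joints_2d, bones=BONES):
    """Draw 2D joints and bones on top of a video frame (BGR image)."""
    out = frame.copy()
    for a, b in bones:
        pa = tuple(int(v) for v in joints_2d[a])
        pb = tuple(int(v) for v in joints_2d[b])
        cv2.line(out, pa, pb, (0, 255, 0), 2)                      # bone segment
    for x, y in joints_2d:
        cv2.circle(out, (int(x), int(y)), 3, (0, 0, 255), -1)      # joint marker
    return out

# Illustrative usage with a dummy frame and four random joint positions.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
joints = np.random.rand(4, 2) * [320, 240]
cv2.imwrite("modified_frame.png", overlay_pose(frame, joints))
```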
  • FIGS. 2A-2H illustrate example animation generation using the modified video input as input to a second neural network that applies a sliding window according to some embodiments.
  • FIGS. 1A-1H and FIGS. 2A-2H include a real life video capturing a human in motion (in dashed line) and a computer generated animation of the human (in solid line).
  • the real life video can be captured by a studio, camera, or phone, or can be a stream of a real life event, such as a sporting event.
  • the real life video can be a single view of a person performing a certain action or motion.
  • the character animation 122 is an unusual and unrealistic pose of a human running, and character animation 132 is non-existent.
  • There are certain frames where there is no character animation because the system cannot determine pose data when there is a sharp motion, such as a character changing directions sharply.
  • the person in the real life video is moving very quickly and the image can sometimes get blurred, causing the system to fail to predict pose data for that frame.
  • because the first neural network can miss the connection between different frames, there are a few frames where pose data is completely missing. This is unlike FIGS. 1A-1B, where the human is moving slowly and the system can recover pose data for every frame. Thus, there is missing joint data for these frames.
  • averaging or interpolation between frames can be used.
  • such approaches cannot add new details to the recovered signals, and the resulting animation loses certain details or results in blurred or unsmooth motion.
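  • For reference, the naive interpolation baseline mentioned above can be sketched as follows (Python/NumPy). It simply fills a missing frame's values linearly between the nearest recovered frames, which is exactly why it cannot add new detail to the motion. The array layout is an assumption.

```python
import numpy as np

def interpolate_missing(poses):
    """Linearly fill frames whose pose is missing (marked as NaN).

    poses: array of shape (num_frames, num_values); any frame containing NaN
    is treated as missing and replaced by interpolation over time.
    """
    poses = poses.copy()
    t = np.arange(len(poses))
    missing = np.isnan(poses).any(axis=1)
    for j in range(poses.shape[1]):
        poses[missing, j] = np.interp(t[missing], t[~missing], poses[~missing, j])
    return poses

# Illustrative usage: frame 2 of a five-frame, three-value sequence is missing.
seq = np.array([[0.0, 0, 0], [1, 1, 1], [np.nan] * 3, [3, 3, 3], [4, 4, 4]])
print(interpolate_missing(seq))
```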
  • the character animations 222 and 232 have been generated by the second neural network that uses the sliding window technique.
  • the neural network can process the sliding window by taking as input, pose data for the past frames and predict the current frame with local phase information. Then the system can move the sliding window by one frame, remove the oldest frame, and add the current frame to generate a new sliding window. The new sliding window can be passed to the neural network to generate the next frame with local phase information.
  • the character animations 142, 162, and 170 are unusual and unrealistic poses of a human running, and the character animation 152 is missing completely.
  • the character animations 242, 252, 262, 272 have been generated by the second neural network using the sliding window technique, which is able to generate a smooth pose of a human running.
  • the character animations 242, 252, 262, 272 follow the real life person running very closely.
  • the dynamic animation generation system can generate character animations from real life capture video with smoother resulting motion, and can improve the quality of the generation when the initial prediction of pose data is missing.
  • FIG. 3A illustrates an embodiment 300 of a sliding window of past pose data used to predict first pose data in the current frame, and a new sliding window that is adjusted by one frame to predict pose data for the next frame according to some embodiments.
  • FIG. 3B illustrates a flow diagram 350 of the dynamic animation generation system applying the first and second neural network to generate predicted next frame data.
  • the flow diagram 350 will be described as being performed by a system of one or more computers (e.g., the dynamic animation generation system).
  • the dynamic animation generation system can receive video input comprising a character in motion, such as a basketball player dribbling toward the hoop.
  • the system accesses motion capture information.
  • the information may be stored according to different motion capture formats, such as BVH and so on.
  • the motion capture information may represent image or video data taken at a particular frame rate.
  • there may be 24, 30, or 60 frames per second, which depict an actor moving about a motion capture studio.
  • the actor may have markers usable to track portions of the actor's body.
  • computer vision techniques may be used to identify specific features of the actor's body (e.g., hands, arms, and so on).
  • an external object (e.g., a basketball) may have a marker on, or a sensor within, the object.
  • computer vision techniques may be used to analyze positions of the external object in the image or video data.
  • video may be obtained of real-world events. For example, video from a real-world sports game may be obtained and analyzed. In this example, a particular player may be analyzed to identify specific portions of the player's body. Example portions may include the player's hands, feet, head, and so on.
  • the dynamic animation generation system can input the video input to a first model, such as a first neural network.
  • the first neural network can output pose information for each frame in the video input.
  • the system may generate realistic motion using one or more deep-learning models.
  • An example deep-learning model described herein includes a generative control model usable to inform generation of highly variable, and realistic, animations for characters.
  • the first model may be trained based on local bone and/or joint phases learned from motion capture information of real-world actors.
  • Machine learning models may be used to enhance generation of motion based on motion capture information. For example, a machine learning model may analyze motion capture information. In this example, the machine learning model may then be used to generate animation for an in-game character which is based on the motion capture information, creating rough motion data. Since these machine learning models may directly output motion data for use in a second neural network (using a sliding window, as described further below) that can generate motion data for animating an in-game character automatically, they may substantially reduce development time of the electronic game. Additionally, since they are trained using motion capture information the output poses may appear lifelike.
  • the local motion phase includes phase information for each joint and/or bone of the character.
  • rather than determining phase on a global level (e.g., one phase for the entire character), the dynamic animation generation system can apply local motion phase, which is determined on the local level by segmenting movements to joints, bones, limbs, ligaments, etc.
  • the dynamic animation generation system inputs velocity data specific to joints, bones, limbs, and ligaments into a gating function to predict the next pose data at a granular level.
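  • The patent does not spell out this architecture, but a gating function of this kind is often realized as a small network that converts per-joint phase and velocity features into blending weights over a set of expert pose predictors (a mixture-of-experts). The PyTorch sketch below is an assumed illustration of that idea, not the patent's model; the layer sizes and feature shapes are hypothetical.

```python
import torch
import torch.nn as nn

class GatedPosePredictor(nn.Module):
    """Blend several expert predictors using per-joint phase/velocity features."""

    def __init__(self, feature_dim, pose_dim, num_experts=4, hidden=64):
        super().__init__()
        # Gating network: per-joint phase + velocity features -> expert weights.
        self.gate = nn.Sequential(
            nn.Linear(feature_dim, hidden), nn.ELU(),
            nn.Linear(hidden, num_experts),
        )
        # Each expert maps the flattened pose input to a next-pose prediction.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(pose_dim, hidden), nn.ELU(), nn.Linear(hidden, pose_dim))
            for _ in range(num_experts)
        )

    def forward(self, pose_input, phase_velocity_features):
        weights = torch.softmax(self.gate(phase_velocity_features), dim=-1)        # (batch, experts)
        outputs = torch.stack([e(pose_input) for e in self.experts], dim=1)        # (batch, experts, pose)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)                        # blended next pose

# Illustrative shapes: 24 joints, 5 phase/velocity values each, 72-value flattened pose.
model = GatedPosePredictor(feature_dim=24 * 5, pose_dim=72)
next_pose = model(torch.randn(8, 72), torch.randn(8, 24 * 5))
print(next_pose.shape)  # torch.Size([8, 72])
```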
  • the dynamic animation generation system can identify a sliding window to apply to a second model, such as a second neural network.
  • the dynamic animation generation system can identify an initial window of frames that include a current frame, past frames, and future frames from the modified video input that includes the pose information outputted from the first model overlaid on top of the original real life video clip.
  • the dynamic animation generation system can take the real life video capture 302 of FIG. 3A and generate a sliding window 304 that includes two past frames, a current frame, and two future frames.
  • the dynamic animation generation system can apply the sliding window to the second model to generate a predicted next pose.
  • the sliding window 304 of FIG. 3A can be applied to the second model, and a predicted next pose 306 can be received from the second model.
  • the output of the first neural network can include frames that include very noisy, potentially blurred images of the character, and/or missing pose data in certain frames.
  • the second neural network is trained to generate next pose data based on the frames within the sliding window.
  • the neural network can be trained to generate pose data for a predicted next frame even with incomplete or noisy input frames within the sliding window.
  • the dynamic animation generation system can receive a first predicted next pose 306, and at block 364, the dynamic animation generation system can identify a second window of frames that includes the first predicted next pose. For example, the dynamic animation generation system can receive the predicted next pose 306 of FIG. 3A and a first local motion phase that is outputted by the first neural network. The dynamic animation generation system can then generate a new sliding window 308 that includes pose 310 (which is the same pose as the predicted next pose 306), two past frames, and two future frames.
  • the phase information may be determined independently for each of the bones and/or joints. As will be described, phase information for a bone and/or joint may be determined based on contacts by the bone and/or joint with an external environment.
  • an actor's left hand contacting a ball, an opponent, a rim, a net, other portions of the actor, and so on may be identified using video or images of the actor in the motion capture studio. Contacts with the left hand may be aggregated over a period of time and may be represented as a signal. The signal may, for example, indicate times at which contact occurred.
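  • As a hedged illustration only (the patent does not prescribe this method), one way such a per-bone contact signal could be turned into a local phase is to smooth the binary contact sequence and take the angle of its analytic signal; SciPy's Hilbert transform makes this a few lines. The smoothing width and contact pattern below are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import hilbert

def contact_to_local_phase(contacts, smooth_sigma=3.0):
    """Turn a binary per-frame contact signal for one bone into a phase in [0, 2*pi).

    contacts: 1D array of 0/1 values, one per frame (1 = the bone touches something).
    """
    # Smooth and center the contact signal so it resembles a periodic waveform.
    smoothed = gaussian_filter1d(contacts.astype(float), sigma=smooth_sigma)
    centered = smoothed - smoothed.mean()
    # The analytic signal's angle gives an instantaneous phase per frame.
    return np.mod(np.angle(hilbert(centered)), 2.0 * np.pi)

# Illustrative usage: a hand that touches the ball for a few frames every 20 frames.
frames = np.arange(200)
contacts = ((frames % 20) < 4).astype(int)
print(contact_to_local_phase(contacts)[:10])
```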
  • the dynamic animation generation system can apply the second window of frames and the first local motion phase to the same second neural network as in block 360 .
  • the dynamic animation generation system can receive a second predicted next frame and a second local motion phase from the second neural network. For example, the dynamic animation generation system applies the second window of frames 308 of FIG. 3A to the second neural network to receive a second predicted next frame 312.
  • the dynamic animation generation system continues to repeat the sliding window to generate next predicted frames and local motion phases.
  • the dynamic animation generation system can repeat the sliding window through the entire real life video clip, where for each frame, the local motion phase is predicted and outputted by the second neural network.
  • the second neural network is trained by applying training data that includes video clips with characters in motion.
  • the output of the model can then be used to adjust the model, such as based on a comparison between the actual output of the model and the expected output of the model.
  • the training data can include the precise local motion phase data. The model can then be trained using the outputted motion and phase information.
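  • A minimal sketch of one such supervised training step, in PyTorch, appears below. It assumes the training clips come with ground-truth next poses and per-joint phases and that the second network is any module mapping a frame window plus current phase to a next pose and next phase; the module, loss weights, and shapes are assumptions rather than the patent's training setup.

```python
import torch
import torch.nn as nn

class SecondNetwork(nn.Module):
    """Stand-in second model: (flattened frame window, phase) -> (next pose, next phase)."""

    def __init__(self, window_dim, pose_dim, phase_dim, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(window_dim + phase_dim, hidden), nn.ELU())
        self.pose_head = nn.Linear(hidden, pose_dim)
        self.phase_head = nn.Linear(hidden, phase_dim)

    def forward(self, window, phase):
        h = self.body(torch.cat([window, phase], dim=-1))
        return self.pose_head(h), self.phase_head(h)

def training_step(model, optimizer, window, phase, target_pose, target_phase, phase_weight=0.5):
    """One update comparing the predicted next pose/phase against ground truth."""
    optimizer.zero_grad()
    pred_pose, pred_phase = model(window, phase)
    loss = nn.functional.mse_loss(pred_pose, target_pose) \
        + phase_weight * nn.functional.mse_loss(pred_phase, target_phase)
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative shapes: five-frame window of 72-value poses, 24-value phase vector.
model = SecondNetwork(window_dim=5 * 72, pose_dim=72, phase_dim=24)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
print(training_step(model, optimizer, torch.randn(8, 5 * 72), torch.randn(8, 24),
                    torch.randn(8, 72), torch.randn(8, 24)))
```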
  • the dynamic animation generation system can sample the real life video input to apply to the neural networks.
  • the dynamic animation generation system can skip frames to generate the sliding window. For example, if the video includes a frame every 1/10th of a second, the dynamic animation generation system can take one frame from each second of video to apply to the first and second neural networks, and drop the other nine frames within that second.
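  • A tiny illustration of that sampling (the frame rate and stride are assumed, not specified by the patent):

```python
# Assume 10 frames per second of source video; keep one frame per second.
all_frames = list(range(100))               # stand-in for decoded video frames
stride = 10
sampled_frames = all_frames[::stride]       # drops the other nine frames in each second
print(sampled_frames)                       # [0, 10, 20, ..., 90]
```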
  • because the dynamic animation generation system may directly generate character poses from real life video data, it may allow for substantial storage savings with respect to character animations.
  • prior techniques to generate character animations have relied upon utilization of key-frames or animation clips.
  • an electronic game may select a multitude of key-frames and interpolate them to generate animation for output to an end-user. These key-frames and animation clips may therefore have to be stored as information for use by the electronic game. This may increase a size associated with the electronic game, such as a download size, an installation size, and so on.
  • the techniques described herein may allow for generation of animation based on use of one or more machine learning models.
  • these machine learning models may be represented as weights, biases, and so on, which may be of a substantially smaller size.
  • an electronic game may have a reduced size, reduced download time, reduced installation time, and so on, as compared to other electronic games.
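  • Purely illustrative arithmetic of that trade-off (every number below is an assumption for illustration, not a measurement from the patent):

```python
# Hypothetical sizes, for illustration only.
clips = 500                                  # number of stored animation clips
frames_per_clip = 300                        # roughly 5 seconds at 60 fps
floats_per_frame = 24 * 7                    # 24 joints x (quaternion + position)
clip_bytes = clips * frames_per_clip * floats_per_frame * 4      # float32 values
model_bytes = 5_000_000 * 4                  # a model with five million float32 parameters
print(round(clip_bytes / 1e6, 1), "MB of clips vs", round(model_bytes / 1e6, 1), "MB of weights")
```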
  • FIGS. 4A, 4B, 4C, and 4D illustrate embodiments 400, 420, 440, 460 of differences with the local motion phase not in use 402, 422, 442, 462, and with local motion phase in use 404, 424, 444, 464, according to some embodiments.
  • FIGS. 5A, 5B, 5C, and 5D illustrate additional embodiments 500, 520, 540, 560 of differences with the local motion phase not in use 502, 522, 542, 562, and with local motion phase in use 504, 524, 544, 564, according to some embodiments.
  • the character animation of 402, 422, 442, and 462 in FIGS. 4A, 4B, 4C, and 4D can be blurred and result in unrealistic motion.
  • the motion from 402 to 422 is unnatural, and the pose in 422 is unrealistic.
  • the character animation from 402 , 422 , 442 , and 462 may reach a similar final gesture but cannot find the movement timing accurately.
  • the character animations 404, 424, 444, 464 are far more synchronized and the poses are appropriate for the timing of the frames.
  • the use of local motion phase results in a better prediction of forward timing for specific poses.
  • FIG. 6 illustrates an embodiment of computing device 10 according to some embodiments.
  • the computing device 10 may include a game device, a smart phone, a tablet, a personal computer, a laptop, a smart television, a car console display, a server, and the like.
  • the computing device 10 includes a processing unit 20 that interacts with other components of the computing device 10 and also external components to computing device 10 .
  • a media reader 22 is included that communicates with media 12 .
  • the media reader 22 may be an optical disc reader capable of reading optical discs, such as CD-ROM or DVDs, or any other type of reader that can receive and read data from game media 12 .
  • One or more of the computing devices may be used to implement one or more of the systems disclosed herein.
  • Computing device 10 may include a separate graphics processor 24 .
  • the graphics processor 24 may be built into the processing unit 20 .
  • the graphics processor 24 may share Random Access Memory (RAM) with the processing unit 20 .
  • the computing device 10 may include a discrete graphics processor 24 that is separate from the processing unit 20 .
  • the graphics processor 24 may have separate RAM from the processing unit 20 .
  • Computing device 10 might be a handheld video game device, a dedicated game console computing system, a general-purpose laptop or desktop computer, a smart phone, a tablet, a car console, or other suitable system.
  • Computing device 10 also includes various components for enabling input/output, such as an I/O 32 , a user I/O 34 , a display I/O 36 , and a network I/O 38 .
  • I/O 32 interacts with storage element 40 and, through a device 42 , removable storage media 44 in order to provide storage for computing device 10 .
  • Processing unit 20 can communicate through I/O 32 to store data, such as game state data and any shared data files.
  • computing device 10 is also shown including ROM (Read-Only Memory) 46 and RAM 48 .
  • RAM 48 may be used for data that is accessed frequently, such as when a game is being played.
  • User I/O 34 is used to send and receive commands between processing unit 20 and user devices, such as game controllers.
  • the user I/O can include touchscreen inputs.
  • the touchscreen can be a capacitive touchscreen, a resistive touchscreen, or another type of touchscreen technology that is configured to receive user input through tactile inputs from the user.
  • Display I/O 36 provides input/output functions that are used to display images from the game being played.
  • Network I/O 38 is used for input/output functions for a network. Network I/O 38 may be used during execution of a game, such as when a game is being played online or being accessed online.
  • Display output signals produced by display I/O 36 comprise signals for displaying visual content produced by computing device 10 on a display device, such as graphics, user interfaces, video, and/or other visual content.
  • Computing device 10 may comprise one or more integrated displays configured to receive display output signals produced by display I/O 36 .
  • display output signals produced by display I/O 36 may also be output to one or more display devices external to computing device 10, such as a display 16.
  • the computing device 10 can also include other features that may be used with a game, such as a clock 50 , flash memory 52 , and other components.
  • An audio/video player 56 might also be used to play a video sequence, such as a movie. It should be understood that other components may be provided in computing device 10 and that a person skilled in the art will appreciate other variations of computing device 10 .
  • the computing device 10 can include one or more components for the interactive computing system 160, and/or a player computing system 152A, 152B.
  • the interactive computing system 160, and/or a player computing system 152A, 152B can include one or more components of the computing device 10.
  • Program code can be stored in ROM 46 , RAM 48 or storage 40 (which might comprise hard disk, other magnetic storage, optical storage, other non-volatile storage or a combination or variation of these).
  • the ROM (Read-Only Memory) can be a programmable ROM (PROM), an erasable programmable ROM (EPROM), or an electrically erasable programmable read-only memory (EEPROM).
  • program code can be found embodied in a tangible non-transitory signal-bearing medium.
  • Random access memory (RAM) 48 (and possibly other storage) is usable to store variables and other game and processor data as needed. RAM is used and holds data that is generated during the execution of an application and portions thereof might also be reserved for frame buffers, application state information, and/or other data needed or usable for interpreting user input and generating display outputs. Generally, RAM 48 is volatile storage and data stored within RAM 48 may be lost when the computing device 10 is turned off or loses power.
  • as needed, data from storage 40, ROM 46, servers accessed via a network (not shown), or removable storage media 44 may be read and loaded into a memory device such as RAM 48.
  • although data is described as being found in RAM 48, it will be understood that data does not have to be stored in RAM 48 and may be stored in other memory accessible to processing unit 20 or distributed among several media, such as media 12 and storage 40.
  • All of the processes described herein may be embodied in, and fully automated via, software code modules executed by a computing system that includes one or more computers or processors.
  • the code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all the methods may be embodied in specialized computer hardware.
  • a processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like.
  • a processor can include electrical circuitry configured to process computer-executable instructions.
  • a processor includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions.
  • a processor can also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • a processor may also include primarily analog components.
  • some or all of the signal processing algorithms described herein may be implemented in analog circuitry or mixed analog and digital circuitry.
  • a computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
  • Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (for example, X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
  • terms such as "a device configured to" are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations.
  • a processor configured to carry out recitations A, B and C can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the systems and methods described herein provide a dynamic animation generation system that can apply a real-life video clip with a character in motion to a first neural network to receive rough motion data, such as pose information, for each of the frames of the video clip, and overlay the pose information on top of the video clip to generate a modified video clip. The system can identify a sliding window that includes a current frame, past frames, and future frames of the modified video clip, and apply the modified video clip to a second neural network to predict a next frame. The dynamic animation generation system can then move the sliding window to the next frame while including the predicted next frame, and apply the new sliding window to the second neural network to predict the following frame to the next frame.

Description

    INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS
  • Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are incorporated by reference under 37 CFR 1.57 and made a part of this specification.
  • TECHNICAL FIELD
  • The present disclosure relates to systems and techniques for animation generation. More specifically, this disclosure relates to machine learning techniques for dynamically generating animation of characters from motion capture video.
  • BACKGROUND
  • Electronic games are increasingly becoming more realistic due to an increase in available processing resources. This increase in realism may allow for more realistic gameplay experiences. For example, elements that form an in-game world, such as characters, may be more realistically presented. In this example, the elements may be increasingly rendered at higher resolutions, with more detailed textures, with more detailed underlying meshes, and so on. While this added realism may be beneficial to an end-user of an electronic game, it may place a substantial burden on electronic game developers. As an example, electronic game developers may be required to create very rich, and detailed, models of characters. As another example, electronic game designers may be required to create fluid, lifelike, movements of the characters.
  • With respect to the example of movement, characters may be designed to realistically adjust their arms, legs, and so on, while traversing an in-game world. In this way, the characters may walk, run, jump, and so on, in a lifelike manner. With respect to a sports electronic game, substantial time may be spent ensuring that the characters appear to mimic real-world sports players. For example, electronic game designers may spend substantial time fine-tuning movements of an underlying character model. Movement of a character model may be, at least in part, implemented based on movement of an underlying skeleton. For example, a skeleton may include a multitude of objects (e.g., bones or joints) which may represent a portion of the character model. As an example, a first object may be a finger while a second object may correspond to a wrist. The skeleton may therefore represent an underlying form on which the character model is built. In this way, movement of the skeleton may cause a corresponding adjustment of the character model.
  • To create realistic movement, an electronic game designer may therefore adjust positions of the above-described objects included in the skeleton. For example, the electronic game designer may create realistic running via adjustment of specific objects which form a character model's legs. This hand-tuned technique to enable movement of a character results in substantial complexity and usage of time.
  • SUMMARY OF EMBODIMENTS
  • The systems, methods, and devices of this disclosure each have several innovative aspects, no single one of which is solely responsible for all of the desirable attributes disclosed herein.
  • Particular embodiments of the subject matter described in this specification can be implemented so as to realize one or more of the following advantages. Utilizing the techniques described herein, realistic motion may be rapidly generated for real life character models. For example, the realistic motion can be configured for use in electronic games. As will be described, a machine learning model may be trained based on motion capture information to generate local motion phase. Pose information can be determined based on video input. Subsequently, a window of frames can be used to predict the next frame and a predicted local motion phase for the next frame. The window of frames can be updated to include the next frame and the predicted local motion phase to be used to predict the following frame. Advantageously, the character animations can move far more smoothly than in traditional systems, and the dynamic animation generation system can improve the quality of the animations when the initial pose prediction is missing pose information in one or more frames of the real life video.
  • One embodiment discloses a computer-implemented method for dynamically generating animation of characters from real life motion capture video, the method comprising: accessing motion capture video, the motion capture video including a motion capture actor in motion; inputting the motion capture video to a first neural network; receiving pose information of the motion capture actor for a plurality of frames in the motion capture video from the first neural network; identifying a first window of frames of the motion capture video, wherein the first window of frames comprises a current frame, one or more past frames to the current frame, and one or more future frames to the current frame; inputting the first window of frames of the motion capture video to a second neural network, wherein the second neural network predicts the next frame from the current frame; receiving as output of the second neural network a first predicted frame and a first local motion phase corresponding to the first predicted frame, wherein the first predicted frame comprises the predicted frame following the current frame; identifying a second window of frames, wherein the second window of frames comprises the generated first predicted frame; inputting the second window of frames and the first local motion phase to the second neural network; and receiving as output of the second neural network a second predicted frame and a second local motion phase corresponding to the second predicted frame, wherein the second predicted frame comprises the predicted frame following the first predicted frame.
  • In some embodiments, the computer-implemented method further comprises overlaying the pose information on the motion capture video to generate a modified motion capture video, wherein identifying a first window of frames comprises a first window of frames of the modified motion capture video.
  • In some embodiments, the motion capture video comprises a video of a real-life person in motion.
  • In some embodiments, the pose information comprises velocity information corresponding to joints of the motion capture actor.
  • In some embodiments, the second window of frames comprises one or more past frames to the first predicted frame, and one or more future frames to the first predicted frame.
  • In some embodiments, the second window of frames drops the oldest frame from the first window of frames.
  • In some embodiments, the first window of frames comprises sampled frames of the motion capture video at a predefined time threshold.
  • In some embodiments, the first window of frames comprises a current frame, and the same number of past frames and future frames to the current frame, wherein the second window of frames drops the oldest frame from the first window of frames.
  • In some embodiments, the motion capture video is captured from a camera on a user's mobile device.
  • In some embodiments, the motion capture video comprises a video of a real-life sporting event.
  • In some embodiments, the first local motion phase includes phase information for each joint of the motion capture actor.
  • In some embodiments, the first local motion phase includes phase information for each bone of the motion capture actor.
  • Some embodiments include a system comprising one or more processors and non-transitory computer storage media storing instructions that when executed by the one or more processors, cause the one or more processors to perform operations comprising: accessing motion capture video, the motion capture video including a motion capture actor in motion; inputting the motion capture video to a first neural network; receiving pose information of the motion capture actor for a plurality of frames in the motion capture video from the first neural network; identifying a first window of frames of the motion capture video, wherein the first window of frames comprises a current frame, one or more past frames to the current frame, and one or more future frames to the current frame; inputting the first window of frames of the motion capture video to a second neural network, wherein the second neural network predicts the next frame from the current frame; receiving as output of the second neural network a first predicted frame and a first local motion phase corresponding to the first predicted frame, wherein the first predicted frame comprises the predicted frame following the current frame; identifying a second window of frames, wherein the second window of frames comprises the generated first predicted frame; inputting the second window of frames and the first local motion phase to the second neural network; and receiving as output of the second neural network a second predicted frame and a second local motion phase corresponding to the second predicted frame, wherein the second predicted frame comprises the predicted frame following the first predicted frame.
  • In some embodiments, the operations further comprise overlaying the pose information on the motion capture video to generate a modified motion capture video, wherein identifying a first window of frames comprises a first window of frames of the modified motion capture video.
  • In some embodiments, the motion capture video comprises a video of a real-life person in motion.
  • In some embodiments, the pose information comprises velocity information corresponding to joints of the motion capture actor.
  • In some embodiments, the second neural network comprises a convolutional neural network.
  • In some embodiments, the second neural network comprises an LSTM neural network.
  • Some embodiments include a non-transitory computer storage media storing instructions that when executed by a system of one or more processors, cause the one or more processors to perform operations comprising: accessing motion capture video, the motion capture video including a motion capture actor in motion; inputting the motion capture video to a first neural network; receiving pose information of the motion capture actor for a plurality of frames in the motion capture video from the first neural network; identifying a first window of frames of the motion capture video, wherein the first window of frames comprises a current frame, one or more past frames to the current frame, and one or more future frames to the current frame; inputting the first window of frames of the motion capture video to a second neural network, wherein the second neural network predicts the next frame from the current frame; receiving as output of the second neural network a first predicted frame and a first local motion phase corresponding to the first predicted frame, wherein the first predicted frame comprises the predicted frame following the current frame; identifying a second window of frames, wherein the second window of frames comprises the generated first predicted frame; inputting the second window of frames and the first local motion phase to the second neural network; and receiving as output of the second neural network a second predicted frame and a second local motion phase corresponding to the second predicted frame, wherein the second predicted frame comprises the predicted frame following the first predicted frame.
  • In some embodiments, the operations further comprise overlaying the pose information on the motion capture video to generate a modified motion capture video, wherein identifying the first window of frames comprises identifying a first window of frames of the modified motion capture video.
  • Although certain embodiments and examples are disclosed herein, inventive subject matter extends beyond the examples in the specifically disclosed embodiments to other alternative embodiments and/or uses, and to modifications and equivalents thereof.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Throughout the drawings, reference numbers are re-used to indicate correspondence between referenced elements. The drawings are provided to illustrate embodiments of the subject matter described herein and not to limit the scope thereof.
  • FIGS. 1A-1H illustrate example animation generation by traditional systems according to some embodiments.
  • FIGS. 2A-2H illustrate example animation generation by the dynamic animation generation system according to some embodiments.
  • FIG. 3A illustrates an embodiment of a sliding window of past pose data used to predict first pose data in the current frame, and a new sliding window that is adjusted by one frame to predict pose data for the next frame according to some embodiments.
  • FIG. 3B illustrates a flow diagram of the dynamic animation generation system applying the first and second neural network to generate predicted next frame data according to some embodiments.
  • FIGS. 4A, 4B, 4C, and 4D illustrate embodiments of differences with the local motion phase not in use, and with local motion phase in use according to some embodiments.
  • FIGS. 5A, 5B, 5C, and 5D illustrate additional embodiments of differences with the local motion phase not in use, and with local motion phase in use according to some embodiments.
  • FIG. 6 illustrates an embodiment of a computing device that may implement aspects of the present disclosure according to some embodiments.
  • DETAILED DESCRIPTION Overview
  • This specification describes, among other things, technical improvements with respect to generation of motion for characters configured for use in electronic games. As will be described, a system described herein (e.g., the dynamic animation generation system) may implement a machine learning model to generate local phase information based on analyses of motion capture information. The dynamic animation generation system can then provide a sliding window of frames to a second machine learning model to generate the next predicted frame and the local motion phase associated with that frame. The dynamic animation generation system can then slide the window forward in time to include the next predicted frame and its associated local motion phase while dropping the oldest frame, and apply the new window to the neural network to generate the following frame and its local motion phase. Advantageously, the system may perform substantially automated analyses of the motion capture information such that complex machine learning labeling processes may be avoided. The dynamic animation generation system can combine local motion phase techniques with human motion reconstruction from captured real-life video. While electronic games are described, it may be appreciated that the techniques described herein may be applied generally to movement of character models. For example, animated content (e.g., TV shows, movies) may employ the techniques described herein.
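  • As a rough illustration of the two-stage approach described above, the sketch below chains a pose-estimation model with a sliding-window prediction model in Python; the object interfaces, method names (estimate_pose, predict_next), and window size are assumptions for illustration rather than details taken from this disclosure.

    # Minimal sketch of the two-stage pipeline, assuming hypothetical model objects.
    def animate_from_video(video_frames, pose_model, prediction_model, window_size=5):
        # Stage 1: the first model recovers rough per-frame pose data from the video.
        rough_poses = [pose_model.estimate_pose(frame) for frame in video_frames]

        # Stage 2: start from an initial window of rough poses, then repeatedly
        # predict the next pose and its local motion phase, sliding the window
        # forward by one frame and feeding the prediction (and phase) back in.
        window = rough_poses[:window_size]
        local_phase = None
        animation = list(window)
        for _ in range(len(rough_poses) - window_size):
            next_pose, local_phase = prediction_model.predict_next(window, local_phase)
            animation.append(next_pose)
            window = window[1:] + [next_pose]   # drop the oldest frame, append the prediction
        return animation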
  • Reconstruction of human motion from real-life video is a promising technology for many businesses to generate animation content. When traditional systems reconstruct human motion from real-life captured video, they receive a real-life video clip, extract the human motion by determining what the person is doing and/or what kind of motion the person is performing, and determine the locations of the person's joints. While these traditional systems have advantages, such as a simple set-up, low cost, and applicability to different video resources, they also face technical challenges: blurred or obstructed human video input causes discontinuities and unsmoothed predictions of next frames, and the resulting video can be very unrealistic and contain substantial noise or disconnects in joints or even in the animated character.
  • Moreover, traditional techniques that generate realistic motion for character models heavily rely upon designers adjusting character models to define different types of motion. For example, to define running, a designer may string together certain adjustments of joints on a skeleton of a character model. In this example, the designer may adjust the knees, cause a movement of the arms, and so on. While this may allow for motion to be generated, it may also involve a substantial burden on the designer.
  • A first example technique in these traditional systems to, at least in part, automate generation of character motion, may include using software to automatically adjust a skeleton. For example, templates of running may be predefined. In this example, a designer may select a running template which may cause adjustment of the joints on a skeleton. Thus, the designer may more rapidly generate motion for characters in an in-game world. However, this first example technique may lack the realism of real-world movement. For example, since different templates are being selected, the lifelike differences in movement between real-world persons is lost. Moreover, the quality of the animation is limited to the quality of these templates. Furthermore, the prediction of frames in the human motion is also limited to the type of movement in these templates. For example, traditional systems that use templates may generate animations of running that are very similar to these templates.
  • Motion may be defined, at least in part, based on distinct poses of an in-game character. As an example, each pose may represent a discrete sample of the motion to be performed by the in-game character. For this example, the pose may identify positions of bones or joints of the in-game character. Thus, if motion is to depict running, each pose may represent a snapshot of the running. For example, a first frame generated by an electronic game may include the in-game character with both feet on a surface within the game world. As another example, a second frame may include the in-game character beginning to move one of the feet upwards. It may be appreciated that subsequent frames may include the in-game character moving forward in a running motion.
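  • A discrete pose sample of this kind can be represented as a simple per-frame record of joint data. The following Python sketch uses illustrative field names that are assumptions for explanation, not the format used by the system described herein.

    from dataclasses import dataclass
    from typing import Dict, Tuple

    Vec3 = Tuple[float, float, float]
    Quat = Tuple[float, float, float, float]

    @dataclass
    class Pose:
        # One discrete sample of a motion: per-joint position, rotation, and velocity.
        frame_index: int
        joint_positions: Dict[str, Vec3]     # e.g. "left_foot" -> (x, y, z)
        joint_rotations: Dict[str, Quat]     # per-joint rotation as a quaternion
        joint_velocities: Dict[str, Vec3]    # usable later as gating features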
  • To generate motions for in-game characters, electronic game designers are increasingly leveraging motion capture studios. For example, a motion capture studio may be used to learn the realistic gait of an actor as he/she moves about the motion capture studio. Specific portions of the actor, such as joints or bones, may be monitored during this movement. Subsequently, movement of these portions may be extracted from image or video data of the actor. This movement may then be translated onto a skeleton or rig for use as an underlying framework of one or more in-game characters. The skeleton or rig may include bones, which may be adjusted based on the motion capture images or video. In this way, the skeleton or rig may be animated to reproduce motion performed by the actor.
  • While motion capture studios allow for realistic motion, they are limited in the types of motion able to be reproduced. For example, the above-described skeleton may be animated to reproduce motions which were specifically performed by the actor. Other motions may thus need to be manually created by an electronic game designer. For example, and with respect to a sports electronic game, a real-life basketball player may be used as an actor to perform common basketball motions. While this actor may perform a wide breadth of motions typically performed during a basketball game, as may be appreciated there are other motions which will not be recorded. For example, these other motions may be produced naturally by the actor during a real-world basketball game depending on locations of opponents, the actor's current stamina level, a location of the actor with respect to the basketball court, and so on.
  • In contrast, the techniques described herein allow for the rapid generation of character animation based on automated analyses of motion capture information. For example, an actor may be placed in a motion capture studio or the dynamic animation generation system may receive data on a real life soccer game. The actor may then perform different movements, and movement of different portions of the actor (e.g., joints) may be stored by a system. Additionally, contact with an external environment may be recorded. Thus, the specific foot fall pattern used by an upper echelon boxer or basketball player may be recorded. Additionally, the specific contact made by an actor's hands with respect to a basketball, football, and so on, may be recorded. This recorded information may be used to increase a realism associated with animation generation. In some embodiments, motion can be generated for biped and/or human characters. In some embodiments, motion can be generated for quadruped characters.
  • In some embodiments, the dynamic animation generation system can improve the quality of human reconstruction by combining human video input with local motion phase information. The dynamic animation generation system can first predict rough motion in real-life video by applying the real-life capture data to a first model, such as a neural network, to obtain the rough motion. The rough motion can include pose data, such as local motion phase. The machine learning model may be trained using local phase information extracted based on how individual body parts of a motion capture actor contact external objects. This phase information may therefore represent local bone and/or joint phases corresponding to bones (e.g., arms, legs, hands) and/or joints (e.g., elbows, knees, knuckles) of the actor, and may be used to enhance the machine learning model's temporal alignment, and segmentation, of realistic motion.
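  • As a simplified illustration of how a per-bone contact record could be turned into a cyclic local phase, the sketch below advances the phase linearly between consecutive contact events; this is an assumed stand-in for fitting a periodic function to the contact signal, not the specific procedure used by the disclosed system.

    import numpy as np

    def local_phase_from_contacts(contact_signal):
        # contact_signal: binary per-frame array, 1 where the bone/joint touches an
        # external object (ball, ground, opponent) in that frame. Returns a phase in
        # [0, 1) that completes one cycle between consecutive contacts.
        contact_frames = np.flatnonzero(contact_signal)
        phase = np.zeros(len(contact_signal))
        for start, end in zip(contact_frames[:-1], contact_frames[1:]):
            frames = np.arange(start, end)
            phase[frames] = (frames - start) / (end - start)
        return phase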
  • In some embodiments, by training a machine learning model to generate animation based on motion capture information, the model may allow for enhanced nuance associated with the animation. As an example, a real-life wrestler may be used as an actor. In this example, video of the wrestler may be recorded which depicts the wrestler moving about a ring, interacting with an opponent, performing different moves, and so on. The machine learning model may then be trained based on this video, such that the model can reproduce the highly stylized, and personal, movement of the wrestler.
  • In some embodiments, in a second step, the dynamic animation generation system can process the output of the first model with a second model, such as an autoregression model, which can conditionally update the poses for the entire sequence going forward. The second step can include a model that applies a sliding window. The initial sliding window can include the rough motion data outputted by the first neural network. The model can apply the initial sliding window to predict a next pose and its local phase information. The window can then slide one frame forward to include the predicted pose and local phase information, and the model can predict the following pose and local phase information.
  • Example Traditional System Output with Output from the Dynamic Animation Generation System
  • FIGS. 1A-1H illustrate example animation generation by a first neural network to generate rough motions according to some embodiments. The rough motions can include pose data for each frame based on the video. The rough motion data can include a multidimensional signal that includes joint information, such as rotation information for each joint of the human. The rough motions can include calculations for each joint and for each frame in the video. The pose data is overlaid on top of the original video input to generate a modified video input.
  • FIGS. 2A-2H illustrate example animation generation using the modified video input as input to a second neural network that applies a sliding window according to some embodiments. FIGS. 1A-1H and FIGS. 2A-2H include a real life video capturing a human in motion (in dashed line) and a computer generated animation of the human (in solid line). The real life video can be captured by a studio, camera, or phone, or can be a stream of a real life event, such as a sporting event. The real life video can be a single view of a person performing a certain action or motion.
  • As shown in 100 and 110 of FIGS. 1A-1B and 200 and 210 of FIGS. 2A-2B, there is not much difference in the character animation 102, 112, 202, 212 when the real-life person is running in a straight line. One reason for this is that there is not much of an explosive change between the frames. Moreover, traditional systems can simply apply existing “running” templates and generate the character animation as shown in FIGS. 1A-1B.
  • As shown in 120 and 130 of FIGS. 1C-1D, the character animation 122 is an unusual and unrealistic pose of a human running, and character animation 132 is non-existent. There are certain frames with no character animation because the system cannot determine pose data when there is a sharp motion, such as a character changing directions sharply. Moreover, the person in the real-life video is moving very quickly and the image can sometimes become blurred, causing the system to fail to predict pose data for that frame. Because the first neural network can miss the connection between different frames, there are a few frames where pose data is completely missing. This is unlike FIGS. 1A-1B, where the human is moving slowly and the system can recover pose data for every frame. Thus, there is missing joint data for these frames. Averaging or interpolation between frames can be used; however, such approaches cannot add new details to the recovered signals, and the resulting animation loses certain details or results in blurred or unsmoothed motion.
  • In 220 and 230 of FIGS. 2C-2D, the character animations 222 and 232 have been generated by the second neural network using the sliding window technique. The neural network can process the sliding window by taking as input pose data for the past frames and predicting the current frame with local phase information. The system can then move the sliding window by one frame, removing the oldest frame and adding the current frame to generate a new sliding window. The new sliding window can be passed to the neural network to generate the next frame with local phase information.
  • Similar to FIGS. 1C and 1D, as shown in 140, 150, 160, and 170 of FIGS. 1E-1H, the character animations 142, 162, and 172 are unusual and unrealistic poses of a human running, and the character animation 152 is missing completely. However, in 240, 250, 260, and 270 of FIGS. 2E-2H, the character animations 242, 252, 262, 272 have been generated by the second neural network using the sliding window technique, which is able to generate a smooth pose of a human running. Moreover, the character animations 242, 252, 262, 272, as well as the other character animations in FIGS. 2A-2D, follow the real-life person very closely. Advantageously, the dynamic animation generation system can generate character animations from real-life captured video, producing a smoother resulting motion and improving the quality of the generation when the initial prediction of pose data is missing.
  • Sliding Window of Pose Data to Predict Next Pose Data
  • FIG. 3A illustrates an embodiment 300 of a sliding window of past pose data used to predict first pose data in the current frame, and a new sliding window that is adjusted by one frame to predict pose data for the next frame according to some embodiments. FIG. 3B illustrates a flow diagram 350 of the dynamic animation generation system applying the first and second neural network to generate predicted next frame data. For convenience, the flow diagram 350 will be described as being performed by a system of one or more computers (e.g., the dynamic animation generation system).
  • At block 352, the dynamic animation generation system can receive video input comprising a character in motion, such as a basketball player dribbling toward the hoop. The system accesses motion capture information. The information may be stored according to different motion capture formats, such as BVH and so on. Optionally, the motion capture information may represent image or video data taken at a particular frame rate. Thus, there may be 24, 30, or 60 frames per second, which depict an actor moving about a motion capture studio. Optionally, the actor may have markers usable to track portions of the actor's body. Optionally, computer vision techniques may be used to identify specific features of the actor's body (e.g., hands, arms, and so on). In some embodiments, an external object (e.g., a basketball) may have a marker on, or a sensor within, the object. Optionally, computer vision techniques may be used to analyze positions of the external object in the image or video data. While the description above described use of an actor, in some embodiments video may be obtained of real-world events. For example, video from a real-world sports game may be obtained and analyzed. In this example, a particular player may be analyzed to identify specific portions of the player's body. Example portions may include the player's hands, feet, head, and so on.
  • At block 354, the dynamic animation generation system can input the video input to a first model, such as a first neural network. The first neural network can output pose information for each frame in the video input. The system may generate realistic motion using one or more deep-learning models. An example deep-learning model described herein includes a generative control model usable to inform generation of highly variable, and realistic, animations for characters. For example, the first model may be trained based on local bone and/or joint phases learned from motion capture information of real-world actors.
  • Machine learning models may be used to enhance generation of motion based on motion capture information. For example, a machine learning model may analyze motion capture information. In this example, the machine learning model may then be used to generate animation for an in-game character that is based on the motion capture information, creating rough motion data. Since these machine learning models may directly output motion data for use in a second neural network (using a sliding window, as described further below) that can generate motion data for animating an in-game character automatically, they may substantially reduce development time of the electronic game. Additionally, since they are trained using motion capture information, the output poses may appear lifelike.
  • In some embodiments, the local motion phase includes phase information for each joint and/or bone of the character. In contrast, phase on a global level (e.g., one phase for the entire character) may not scale well when the character is moving asynchronously or when movements are combined for the animation. This is why the dynamic animation generation system can apply local motion phase, which is determined on the local level by segmenting movements into joints, bones, limbs, ligaments, etc. Thus, the dynamic animation generation system can input velocity data specific to joints, bones, limbs, and ligaments into a gating function to predict next pose data at a very granular level.
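  • One common way to use such per-joint gating features is a small gating network that blends a set of expert weights based on local phases and joint velocities. The PyTorch-style sketch below is a generic example under assumed dimensions, not the specific architecture of the disclosed system.

    import torch
    import torch.nn as nn

    class GatingNetwork(nn.Module):
        # Maps per-joint gating features (e.g. joint velocities and local motion
        # phases) to blending weights over a set of expert networks.
        def __init__(self, gating_dim: int, num_experts: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(gating_dim, hidden),
                nn.ELU(),
                nn.Linear(hidden, num_experts),
            )

        def forward(self, gating_features: torch.Tensor) -> torch.Tensor:
            # Softmax over experts: the blend varies with the local phase, so
            # asynchronous limb movements select different expert mixtures.
            return torch.softmax(self.net(gating_features), dim=-1)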
  • At block 358, the dynamic animation generation system can identify a sliding window to apply to a second model, such as a second neural network. The dynamic animation generation system can identify an initial window of frames that include a current frame, past frames, and future frames from the modified video input that includes the pose information outputted from the first model overlaid on top of the original real life video clip. For example, the dynamic animation generation system can take the real life video capture 302 of FIG. 3A and generate a sliding window 304 that includes two past frames, a current frame, and two future frames.
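  • A window of this shape (two past frames, the current frame, and two future frames) could be gathered with a short helper such as the one below; the helper name and defaults are illustrative assumptions.

    def build_window(frames, current_index, past=2, future=2):
        # Collect past, current, and future frames around the current index,
        # clamping at the boundaries of the clip.
        start = max(0, current_index - past)
        end = min(len(frames), current_index + future + 1)
        return frames[start:end]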
  • At block 360, the dynamic animation generation system can apply the sliding window to the second model to generate a predicted next pose. For example, the sliding window 304 of FIG. 3A can be applied to the second model, and a predicted next pose 306 can be received from the second model. The output of the first neural network can include frames with very noisy, potentially blurred images of the character and/or missing pose data in certain frames. Advantageously, however, the second neural network is trained to generate next pose data based on the frames within the sliding window, and can generate pose data for a predicted next frame even with incomplete or noisy frames within the sliding window.
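  • Because some frames in the window may have no recovered pose data, the input to the second model can carry an explicit validity mask, letting the network infer the missing poses. This masking scheme is an assumption for illustration only.

    import numpy as np

    def window_to_model_input(window_poses, pose_dim):
        # Stack per-frame pose vectors; a missing frame (e.g. due to blur) becomes
        # a zero vector with a 0 entry in the mask, so the prediction model can
        # still produce a complete next pose.
        features, mask = [], []
        for pose in window_poses:
            if pose is None:
                features.append(np.zeros(pose_dim))
                mask.append(0.0)
            else:
                features.append(np.asarray(pose, dtype=float))
                mask.append(1.0)
        return np.stack(features), np.array(mask)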
  • At block 362, the dynamic animation generation system can receive a first predicted next pose 306, and at block 364, the dynamic animation generation system can identify a second window of frames that includes the first predicted next pose. For example, the dynamic animation generation system can receive the predicted next pose 306 of FIG. 3A and a first local motion phase that is outputted by the second neural network. The dynamic animation generation system can then generate a new sliding window 308 that includes pose 310 (which is the same pose as the predicted next pose 306), two past frames, and two future frames. The phase information may be determined independently for each of the bones and/or joints. As will be described, phase information for a bone and/or joint may be determined based on contacts by the bone and/or joint with an external environment. For example, an actor's left hand contacting a ball, an opponent, a rim, a net, other portions of the actor, and so on, may be identified using video or images of the actor in the motion capture studio. Contacts with the left hand may be aggregated over a period of time and may be represented as a signal. The signal may, for example, indicate times at which contact occurred.
  • At block 366, the dynamic animation generation system can apply the second window of frames and the first local motion phase to the same second neural network as in block 360. At block 368, the dynamic animation generation system can receive a second predicted next frame and a second local motion phase from the second neural network. For example, the dynamic animation generation system applies the second window of frames 308 of FIG. 3A to the second neural network to receive a second predicted next frame 312. The dynamic animation generation system continues to repeat the sliding window to generate next predicted frames and local motion phases. The dynamic animation generation system can repeat the sliding window through the entire real life video clip, where for each frame, the local motion phase is predicted and outputted by the second neural network.
  • In some embodiments, the second neural network is trained by applying training data that includes video clips with characters in motion. The output of the model can then be used to adjust the model, such as based on a comparison between the actual output of the model and the expected output of the model. For example, the training data can include precise local motion phase data, and the model can be trained using the outputted motion and phase information.
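  • A conventional supervised training step for the second model could compare the predicted pose and local motion phase against ground-truth values from the training clips; the loss terms and optimizer usage below are illustrative assumptions rather than the training procedure of the disclosed system.

    import torch
    import torch.nn.functional as F

    def training_step(model, optimizer, window_batch, target_pose, target_phase):
        # Forward pass: predict the next pose and its local motion phase from the
        # batched sliding windows, then update the model from the combined error.
        pred_pose, pred_phase = model(window_batch)
        loss = F.mse_loss(pred_pose, target_pose) + F.mse_loss(pred_phase, target_phase)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()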
  • In some embodiments, the dynamic animation generation system can sample the real-life video input before applying it to the neural networks. The dynamic animation generation system can skip frames to generate the sliding window. For example, if the video includes a frame every 1/10th of a second, the dynamic animation generation system can take one frame per second of video, apply it to the first and second neural networks, and drop the other nine frames within each second.
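  • Such sampling can be implemented as a simple stride over the input frames; the frame rates below match the example above and are otherwise arbitrary.

    def sample_frames(video_frames, source_fps=10, target_fps=1):
        # Keep one frame per second from a 10 fps clip, dropping the other nine.
        stride = max(1, source_fps // target_fps)
        return video_frames[::stride]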
  • Advantageously since the dynamic animation generation system may directly generate character poses from real life video data, the dynamic animation generation system may allow for substantial storage savings with respect to character animations. For example, prior techniques to generate character animations have relied upon utilization of key-frames or animation clips. In this example, an electronic game may select a multitude of key-frames and interpolate them to generate animation for output to an end-user. These key-frames and animation clips may therefore have to be stored as information for use by the electronic game. This may increase a size associated with the electronic game, such as a download size, an installation size, and so on.
  • In contrast, the techniques described herein may allow for generation of animation based on use of one or more machine learning models. As may be appreciated, these machine learning models may be represented as weights, biases, and so on, which may be of a substantially smaller size. In this way, an electronic game may have a reduced size, reduced download time, reduced installation time, and so on, as compared to other electronic games.
  • Differences With and Without Local Motion Phase in Use
  • FIGS. 4A, 4B, 4C, and 4D illustrate embodiments 400, 420, 440, 460 of differences with the local motion phase not in use 402, 422, 442, 462, and with local motion phase in use 404, 424, 444, 464 according to some embodiments. FIGS. 5A, 5B, 5C, and 5D illustrate additional embodiments 500, 520, 540, 560 of differences with the local motion phase not in use 502, 522, 542, 562, and with local motion phase in use 504, 524, 544, 564 according to some embodiments. Similar to FIGS. 5A, 5B, 5C, and 5D, the character animation of 402, 422, 442, and 462 in FIGS. 4A, 4B, 4C, and 4D, without using local motion phase, can be blurred and result in unrealistic motion. The motion from 402 to 422 is unnatural, and the pose in 422 is unrealistic. Moreover, the character animation from 402, 422, 442, and 462 may reach a similar final gesture but cannot find the movement timing accurately. In contrast, the character animations 404, 424, 444, 464 are far more synchronized and the poses are appropriate for the timing of the frames. Thus, the use of local motion phase provides a better prediction of forward timing for specific poses.
  • Overview of Computing Device
  • FIG. 6 illustrates an embodiment of computing device 10 according to some embodiments. Other variations of the computing device 10 may be substituted for the examples explicitly presented herein, such as removing or adding components to the computing device 10. The computing device 10 may include a game device, a smart phone, a tablet, a personal computer, a laptop, a smart television, a car console display, a server, and the like. As shown, the computing device 10 includes a processing unit 20 that interacts with other components of the computing device 10 and also external components to computing device 10. A media reader 22 is included that communicates with media 12. The media reader 22 may be an optical disc reader capable of reading optical discs, such as CD-ROM or DVDs, or any other type of reader that can receive and read data from game media 12. One or more of the computing devices may be used to implement one or more of the systems disclosed herein.
  • Computing device 10 may include a separate graphics processor 24. In some cases, the graphics processor 24 may be built into the processing unit 20. In some such cases, the graphics processor 24 may share Random Access Memory (RAM) with the processing unit 20. Alternatively, or in addition, the computing device 10 may include a discrete graphics processor 24 that is separate from the processing unit 20. In some such cases, the graphics processor 24 may have separate RAM from the processing unit 20. Computing device 10 might be a handheld video game device, a dedicated game console computing system, a general-purpose laptop or desktop computer, a smart phone, a tablet, a car console, or other suitable system.
  • Computing device 10 also includes various components for enabling input/output, such as an I/O 32, a user I/O 34, a display I/O 36, and a network I/O 38. I/O 32 interacts with storage element 40 and, through a device 42, removable storage media 44 in order to provide storage for computing device 10. Processing unit 20 can communicate through I/O 32 to store data, such as game state data and any shared data files. In addition to storage 40 and removable storage media 44, computing device 10 is also shown including ROM (Read-Only Memory) 46 and RAM 48. RAM 48 may be used for data that is accessed frequently, such as when a game is being played.
  • User I/O 34 is used to send and receive commands between processing unit 20 and user devices, such as game controllers. In some embodiments, the user I/O 34 can include touchscreen inputs. The touchscreen can be a capacitive touchscreen, a resistive touchscreen, or another type of touchscreen technology that is configured to receive user input through tactile inputs from the user. Display I/O 36 provides input/output functions that are used to display images from the game being played. Network I/O 38 is used for input/output functions for a network. Network I/O 38 may be used during execution of a game, such as when a game is being played online or being accessed online.
  • Display output signals produced by display I/O 36 comprise signals for displaying visual content produced by computing device 10 on a display device, such as graphics, user interfaces, video, and/or other visual content. Computing device 10 may comprise one or more integrated displays configured to receive display output signals produced by display I/O 36. According to some embodiments, display output signals produced by display I/O 36 may also be output to one or more display devices external to computing device 10, such as a display 16.
  • The computing device 10 can also include other features that may be used with a game, such as a clock 50, flash memory 52, and other components. An audio/video player 56 might also be used to play a video sequence, such as a movie. It should be understood that other components may be provided in computing device 10 and that a person skilled in the art will appreciate other variations of computing device 10. The computing device 10 can include one or more components for the interactive computing system 160, and/or a player computing system 152A, 152B. In some embodiments, the interactive computing system 160, and/or a player computing system 152A, 152B can include one or more components of the computing device 10.
  • Program code can be stored in ROM 46, RAM 48 or storage 40 (which might comprise hard disk, other magnetic storage, optical storage, other non-volatile storage or a combination or variation of these). Part of the program code can be stored in ROM that is programmable (ROM, PROM, EPROM, EEPROM, and so forth), part of the program code can be stored in storage 40, and/or on removable media such as game media 12 (which can be a CD-ROM, cartridge, memory chip or the like, or obtained over a network or other electronic channel as needed). In general, program code can be found embodied in a tangible non-transitory signal-bearing medium.
  • Random access memory (RAM) 48 (and possibly other storage) is usable to store variables and other game and processor data as needed. RAM is used and holds data that is generated during the execution of an application and portions thereof might also be reserved for frame buffers, application state information, and/or other data needed or usable for interpreting user input and generating display outputs. Generally, RAM 48 is volatile storage and data stored within RAM 48 may be lost when the computing device 10 is turned off or loses power.
  • As computing device 10 reads media 12 and provides an application, information may be read from game media 12 and stored in a memory device, such as RAM 48. Additionally, data from storage 40, ROM 46, servers accessed via a network (not shown), or removable storage media 44 may be read and loaded into RAM 48. Although data is described as being found in RAM 48, it will be understood that data does not have to be stored in RAM 48 and may be stored in other memory accessible to processing unit 20 or distributed among several media, such as media 12 and storage 40.
  • It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein.
  • All of the processes described herein may be embodied in, and fully automated via, software code modules executed by a computing system that includes one or more computers or processors. The code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all the methods may be embodied in specialized computer hardware.
  • Many other variations than those described herein will be apparent from this disclosure. For example, depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (for example, not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, for example, through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together.
  • The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a processing unit or processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, some or all of the signal processing algorithms described herein may be implemented in analog circuitry or mixed analog and digital circuitry. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
  • Conditional language such as, among others, “can,” “could,” “might” or “may,” unless specifically stated otherwise, are otherwise understood within the context as used in general to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
  • Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (for example, X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
  • Any process descriptions, elements or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown, or discussed, including substantially concurrently or in reverse order, depending on the functionality involved as would be understood by those skilled in the art.
  • Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.
  • It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure.

Claims (21)

1-20. (canceled)
21. A computer-implemented method comprising:
accessing motion capture data associated with an entity in motion;
providing the motion capture data as input to a first machine learning model;
generating pose information associated with the entity using the first machine learning model;
identifying a first plurality of frames of the motion capture data, wherein the first plurality of frames comprises a current frame;
providing the first plurality of frames of the motion capture data as input to a second machine learning model; and
generating a first predicted frame and a first local motion phase corresponding to the first predicted frame using the second machine learning model.
22. The computer-implemented method of claim 21, wherein the computer-implemented method further comprises overlaying the pose information on the motion capture data to generate modified motion capture data.
23. The computer-implemented method of claim 21, wherein the pose information comprises velocity information corresponding to joints of the entity.
24. The computer-implemented method of claim 21, wherein the first plurality of frames comprises sampled frames of the motion capture data at a predefined time threshold.
25. The computer-implemented method of claim 21, wherein the first plurality of frames comprises an equal number of past frames and future frames relative to the current frame.
26. The computer-implemented method of claim 21, wherein the motion capture data is captured from a camera on a mobile computing device.
27. The computer-implemented method of claim 21, wherein at least one of the first machine learning model or the second machine learning model is a neural network.
28. The computer-implemented method of claim 21, wherein the first local motion phase includes phase information for each joint of the entity.
29. The computer-implemented method of claim 21, wherein the first local motion phase includes phase information for each bone of the entity.
30. The computer-implemented method of claim 21, wherein the entity is a real-life person in motion.
31. A non-transitory computer storage medium storing instructions that when executed by a system of one or more processors, cause the one or more processors to perform operations comprising:
accessing motion capture data associated with an entity in motion;
providing the motion capture data as input to a first machine learning model;
generating pose information associated with the entity using the first machine learning model;
identifying a first plurality of frames of the motion capture data, wherein the first plurality of frames comprises a current frame;
providing the first plurality of frames of the motion capture data as input to a second machine learning model; and
generating a first predicted frame and a first local motion phase corresponding to the first predicted frame using the second machine learning model.
32. The non-transitory computer storage medium of claim 31, wherein the pose information comprises velocity information corresponding to joints of the entity.
33. The non-transitory computer storage medium of claim 31, wherein the first plurality of frames comprises sampled frames of the motion capture data at a predefined time threshold.
34. The non-transitory computer storage medium of claim 31, wherein at least one of the first machine learning model or the second machine learning model is a neural network.
35. The non-transitory computer storage medium of claim 31, wherein the first local motion phase includes phase information for each joint of the entity.
36. The non-transitory computer storage medium of claim 31, wherein the first local motion phase includes phase information for each bone of the entity.
37. A system comprising one or more processors and non-transitory computer storage media storing instructions that when executed by the one or more processors, cause the one or more processors to perform operations comprising:
accessing motion capture data associated with an entity in motion;
providing the motion capture data as input to a first machine learning model;
generating pose information associated with the entity using the first machine learning model;
identifying a first plurality of frames of the motion capture data, wherein the first plurality of frames comprises a current frame;
providing the first plurality of frames of the motion capture data as input to a second machine learning model; and
generating a first predicted frame and a first local motion phase corresponding to the first predicted frame using the second machine learning model.
38. The system of claim 37, wherein the pose information comprises velocity information corresponding to joints of the entity.
39. The system of claim 37, wherein at least one of the first machine learning model or the second machine learning model is a neural network.
40. The system of claim 37, wherein the first local motion phase includes phase information for each joint and bone of the entity.
US18/329,339 2021-07-01 2023-06-05 Enhanced animation generation based on video with local phase Pending US20230394735A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/329,339 US20230394735A1 (en) 2021-07-01 2023-06-05 Enhanced animation generation based on video with local phase

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/305,229 US11670030B2 (en) 2021-07-01 2021-07-01 Enhanced animation generation based on video with local phase
US18/329,339 US20230394735A1 (en) 2021-07-01 2023-06-05 Enhanced animation generation based on video with local phase

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US17/305,229 Continuation US11670030B2 (en) 2021-07-01 2021-07-01 Enhanced animation generation based on video with local phase

Publications (1)

Publication Number Publication Date
US20230394735A1 true US20230394735A1 (en) 2023-12-07

Family

ID=84786204

Family Applications (2)

Application Number Title Priority Date Filing Date
US17/305,229 Active 2041-09-06 US11670030B2 (en) 2021-07-01 2021-07-01 Enhanced animation generation based on video with local phase
US18/329,339 Pending US20230394735A1 (en) 2021-07-01 2023-06-05 Enhanced animation generation based on video with local phase

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US17/305,229 Active 2041-09-06 US11670030B2 (en) 2021-07-01 2021-07-01 Enhanced animation generation based on video with local phase

Country Status (1)

Country Link
US (2) US11670030B2 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11562523B1 (en) 2021-08-02 2023-01-24 Electronic Arts Inc. Enhanced animation generation based on motion matching using local bone phases
US12028540B2 (en) * 2022-06-20 2024-07-02 International Business Machines Corporation Video size reduction by reconstruction
US11830159B1 (en) * 2022-12-08 2023-11-28 Flawless Holding Limited Generative films

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120029699A1 (en) * 2010-06-04 2012-02-02 Institute Of Automation, Chinese Academy Of Sciences System and method for robot trajectory generation with continuous accelerations
US20210335004A1 (en) * 2020-04-27 2021-10-28 Snap Inc. Texture-based pose validation

Family Cites Families (129)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5274801A (en) 1988-04-29 1993-12-28 International Business Machines Corp. Artifical intelligence delivery system
US5548798A (en) 1994-11-10 1996-08-20 Intel Corporation Method and apparatus for solving dense systems of linear equations with an iterative method that employs partial multiplications using rank compressed SVD basis matrices of the partitioned submatrices of the coefficient matrix
CN100452072C (en) 1995-02-13 2009-01-14 英特特拉斯特技术公司 Systems and methods for secure transaction management and electronic rights protection
US5982389A (en) 1996-06-17 1999-11-09 Microsoft Corporation Generating optimized motion transitions for computer animated objects
JP2918499B2 (en) 1996-09-17 1999-07-12 株式会社エイ・ティ・アール人間情報通信研究所 Face image information conversion method and face image information conversion device
US5999195A (en) 1997-03-28 1999-12-07 Silicon Graphics, Inc. Automatic generation of transitions between motion cycles in an animation
US6064808A (en) 1997-08-01 2000-05-16 Lucent Technologies Inc. Method and apparatus for designing interconnections and passive components in integrated circuits and equivalent structures by efficient parameter extraction
US6636219B2 (en) 1998-02-26 2003-10-21 Learn.Com, Inc. System and method for automatic animation generation
EP1037134A2 (en) 1999-03-16 2000-09-20 Matsushita Electric Industrial Co., Ltd. Virtual space control data receiving apparatus and method
EP1039417B1 (en) 1999-03-19 2006-12-20 Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. Method and device for the processing of images based on morphable models
JP3942335B2 (en) 2000-02-28 2007-07-11 東芝テック株式会社 Window design changing method and system
JP2002109560A (en) 2000-10-02 2002-04-12 Sharp Corp Animation reproducing unit, animation reproducing system, animation reproducing method, recording medium readable by computer storing program for executing animation reproducing method
US7363199B2 (en) 2001-04-25 2008-04-22 Telekinesys Research Limited Method and apparatus for simulating soft object movement
US6956582B2 (en) 2001-08-23 2005-10-18 Evans & Sutherland Computer Corporation System and method for auto-adjusting image filtering
JP2005516503A (en) 2002-01-24 2005-06-02 ニューポート コースト インヴェストメンツ エルエルシー Dynamic selection and scheduling of radio frequency (RF) communications
US7110598B2 (en) 2002-08-28 2006-09-19 Micron Technology, Inc. Automatic color constancy for image sensors
US7006090B2 (en) 2003-02-07 2006-02-28 Crytek Gmbh Method and computer program product for lighting a computer graphics image and a computer
US20040227761A1 (en) 2003-05-14 2004-11-18 Pixar Statistical dynamic modeling method and apparatus
US7944449B2 (en) 2003-05-14 2011-05-17 Pixar Methods and apparatus for export of animation data to non-native articulation schemes
US7307633B2 (en) 2003-05-14 2007-12-11 Pixar Statistical dynamic collisions method and apparatus utilizing skin collision points to create a skin collision response
US7788071B2 (en) 2004-12-03 2010-08-31 Telekinesys Research Limited Physics simulation apparatus and method
US7493244B2 (en) 2005-03-23 2009-02-17 Nvidia Corporation Computer simulation of body dynamics including a solver that solves in linear time for a set of constraints using vector processing
WO2006102599A2 (en) 2005-03-23 2006-09-28 Electronic Arts Inc. Computer simulation of body dynamics including a solver that solves in linear time for a set of constraints
US7415152B2 (en) 2005-04-29 2008-08-19 Microsoft Corporation Method and system for constructing a 3D representation of a face from a 2D representation
US7403202B1 (en) 2005-07-12 2008-07-22 Electronic Arts, Inc. Computer animation of simulated characters using combinations of motion-capture data and external force modelling or other physics models
US7616204B2 (en) 2005-10-19 2009-11-10 Nvidia Corporation Method of simulating dynamic objects using position based dynamics
US7545379B2 (en) 2005-10-28 2009-06-09 Dreamworks Animation Llc Artist directed volume preserving deformation and collision resolution for animation
EP2005342B1 (en) 2006-04-08 2019-06-05 Allan Millman Method and system for interactive simulation of materials
US8281281B1 (en) 2006-06-07 2012-10-02 Pixar Setting level of detail transition points
US20080049015A1 (en) 2006-08-23 2008-02-28 Baback Elmieh System for development of 3D content used in embedded devices
JP4709723B2 (en) 2006-10-27 2011-06-22 株式会社東芝 Attitude estimation apparatus and method
US8142282B2 (en) 2006-11-15 2012-03-27 Microsoft Corporation Console integrated downloadable game service
US20080111831A1 (en) 2006-11-15 2008-05-15 Jay Son Efficient Panoramic Image Generation
US8128476B1 (en) 2007-02-02 2012-03-06 Popcap Games, Inc. Electronic game, such as a computer game involving removing pegs
GB2447095B (en) 2007-03-01 2010-07-28 Sony Comp Entertainment Europe Entertainment device and method
JP5427343B2 (en) 2007-04-20 2014-02-26 任天堂株式会社 Game controller
US20080268961A1 (en) 2007-04-30 2008-10-30 Michael Brook Method of creating video in a virtual world and method of distributing and using same
US7868885B2 (en) 2007-06-22 2011-01-11 Microsoft Corporation Direct manipulation of subdivision surfaces using a graphics processing unit
US8154544B1 (en) 2007-08-03 2012-04-10 Pixar User specified contact deformations for computer graphics
US8390628B2 (en) 2007-09-11 2013-03-05 Sony Computer Entertainment America Llc Facial animation using motion capture data
US8197313B2 (en) 2007-10-29 2012-06-12 Microsoft Corporation User to user game referrals
GB2454681A (en) 2007-11-14 2009-05-20 Cybersports Ltd Selection of animation for virtual entity based on behaviour of the entity
US20090213138A1 (en) 2008-02-22 2009-08-27 Pixar Mesh transfer for shape blending
US8648863B1 (en) 2008-05-20 2014-02-11 Pixar Methods and apparatus for performance style extraction for quality control of animation
US8154524B2 (en) 2008-06-24 2012-04-10 Microsoft Corporation Physics simulation-based interaction for surface computing
JP4439572B2 (en) 2008-07-11 2010-03-24 任天堂株式会社 Digital data correction program and digital data correction apparatus
US8730245B2 (en) 2008-12-01 2014-05-20 Naturalmotion Ltd. Defining an animation of a virtual object within a virtual world
US8207971B1 (en) 2008-12-31 2012-06-26 Lucasfilm Entertainment Company Ltd. Controlling animated character expressions
AU2010229693B2 (en) 2009-03-27 2014-04-03 Russell Brands, Llc Monitoring of physical training events
US20100251185A1 (en) 2009-03-31 2010-09-30 Codemasters Software Company Ltd. Virtual object appearance control
CN101877137B (en) 2009-04-30 2013-01-02 国际商业机器公司 Method for highlighting thematic element and system thereof
WO2011004612A1 (en) 2009-07-10 2011-01-13 パナソニック株式会社 Marker display control device, integrated circuit, and marker display control method
US20110012903A1 (en) 2009-07-16 2011-01-20 Michael Girard System and method for real-time character animation
US9672646B2 (en) 2009-08-28 2017-06-06 Adobe Systems Incorporated System and method for image editing using visual rewind operation
KR101615719B1 (en) 2009-09-18 2016-04-27 삼성전자주식회사 Apparatus and method for extracting user's third dimension facial expression
US20110074807A1 (en) 2009-09-30 2011-03-31 Hitachi, Ltd. Method of color customization of content screen
US8406528B1 (en) 2009-10-05 2013-03-26 Adobe Systems Incorporated Methods and apparatuses for evaluating visual accessibility of displayable web based content and/or other digital images
US8795072B2 (en) 2009-10-13 2014-08-05 Ganz Method and system for providing a virtual presentation including a virtual companion and virtual photography
KR101098834B1 (en) 2009-12-02 2011-12-26 한국전자통신연구원 Apparatus and method for generating motion based on dynamics
FR2954986B1 (en) 2010-01-05 2012-02-10 St Microelectronics Grenoble 2 METHOD FOR DETECTION OF CONTOUR ORIENTATION.
US8599206B2 (en) 2010-05-25 2013-12-03 Disney Enterprises, Inc. Systems and methods for animating non-humanoid characters with human motion data
EP2601782A4 (en) * 2010-08-02 2016-09-28 Univ Beijing Representative motion flow extraction for effective video classification and retrieval
US8860732B2 (en) 2010-09-27 2014-10-14 Adobe Systems Incorporated System and method for robust physically-plausible character animation
US20120083330A1 (en) 2010-10-05 2012-04-05 Zynga Game Network, Inc. System and Method for Generating Achievement Objects Encapsulating Captured Event Playback
US20120115580A1 (en) 2010-11-05 2012-05-10 Wms Gaming Inc. Wagering game with player-directed pursuit of award outcomes
JP6302614B2 (en) 2011-02-25 2018-03-28 任天堂株式会社 Communication system, information processing apparatus, program, and information processing method
US8267764B1 (en) 2011-04-21 2012-09-18 Wms Gaming Inc. Wagering game having enhancements to queued outcomes
US9050538B2 (en) 2011-05-26 2015-06-09 Sony Corporation Collision detection and motion simulation in game virtual space
US8526763B2 (en) 2011-05-27 2013-09-03 Adobe Systems Incorporated Seamless image composition
JP5396434B2 (en) 2011-06-07 2014-01-22 株式会社ソニー・コンピュータエンタテインメント Image generating apparatus, image generating method, program, and information storage medium
JP5732353B2 (en) 2011-08-31 2015-06-10 株式会社キーエンス Magnification observation apparatus, magnification observation method, and magnification observation program
JP5754312B2 (en) 2011-09-08 2015-07-29 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
CN102509272B (en) 2011-11-21 2013-07-10 武汉大学 Color image enhancement method based on color constancy
MX2014008563A (en) 2012-01-19 2014-09-26 Microsoft Corp Simultaneous display of multiple content items.
US9449416B2 (en) 2012-02-29 2016-09-20 Zynga Inc. Animation processing of linked object parts
US9747495B2 (en) 2012-03-06 2017-08-29 Adobe Systems Incorporated Systems and methods for creating and distributing modifiable animated video messages
US9529486B2 (en) 2012-03-29 2016-12-27 FiftyThree, Inc. Methods and apparatus for providing a digital illustration system
US9304669B2 (en) 2012-05-15 2016-04-05 Capso Vision Inc. System and method for displaying annotated capsule images
US9251618B2 (en) 2012-06-27 2016-02-02 Pixar Skin and flesh simulation using finite elements, biphasic materials, and rest state retargeting
US9616329B2 (en) 2012-06-28 2017-04-11 Electronic Arts Inc. Adaptive learning system for video game enhancement
US9661296B2 (en) 2012-07-12 2017-05-23 Samsung Electronics Co., Ltd. Image processing apparatus and method
WO2014020641A1 (en) 2012-07-31 2014-02-06 Square Enix Co., Ltd. Content provision system, content provision device, content playback device, control method, program, and recording medium
LU92074B1 (en) 2012-09-18 2014-03-19 Iee Sarl Depth image enhancement method
SG11201503743QA (en) 2012-11-12 2015-06-29 Univ Singapore Technology & Design Clothing matching system and method
US9659397B2 (en) 2013-01-11 2017-05-23 Disney Enterprises, Inc. Rig-based physics simulation
US9892539B2 (en) 2013-01-11 2018-02-13 Disney Enterprises, Inc. Fast rig-based physics simulation
US9117134B1 (en) 2013-03-19 2015-08-25 Google Inc. Image merging with blending
US9317954B2 (en) 2013-09-23 2016-04-19 Lucasfilm Entertainment Company Ltd. Real-time performance capture with on-the-fly correctives
US9761033B2 (en) 2013-10-18 2017-09-12 Apple Inc. Object matching in a presentation application using a matching function to define match categories
GB2518019B (en) 2013-12-13 2015-07-22 Aveva Solutions Ltd Image rendering of laser scan data
US9418465B2 (en) 2013-12-31 2016-08-16 Dreamworks Animation Llc Multipoint offset sampling deformation techniques
US9990754B1 (en) 2014-02-04 2018-06-05 Electronic Arts Inc. System for rendering using position based finite element simulation
US9779775B2 (en) 2014-02-24 2017-10-03 Lyve Minds, Inc. Automatic generation of compilation videos from an original video based on metadata associated with the original video
CN104134226B (en) 2014-03-12 2015-08-19 Tencent Technology (Shenzhen) Co., Ltd. Speech simulation method, device and client device in a virtual scene
WO2015139231A1 (en) 2014-03-19 2015-09-24 Intel Corporation Facial expression and/or interaction driven avatar apparatus and method
WO2015181163A1 (en) 2014-05-28 2015-12-03 Thomson Licensing Method and system for touch input
EP2960905A1 (en) 2014-06-25 2015-12-30 Thomson Licensing Method and device of displaying a neutral facial expression in a paused video
KR102199218B1 (en) 2014-09-05 2021-01-07 Samsung Display Co., Ltd. Display apparatus, display control apparatus and display method
KR101643573B1 (en) 2014-11-21 2016-07-29 Korea Institute of Science and Technology Method for face recognition, recording medium and device for performing the method
KR102261422B1 (en) 2015-01-26 2021-06-09 Samsung Display Co., Ltd. A display apparatus
US9827496B1 (en) 2015-03-27 2017-11-28 Electronic Arts Inc. System for example-based motion synthesis
US10022628B1 (en) 2015-03-31 2018-07-17 Electronic Arts Inc. System for feature-based motion adaptation
GB2537636B (en) 2015-04-21 2019-06-05 Sony Interactive Entertainment Inc Device and method of selecting an object for 3D printing
US9858700B2 (en) 2015-05-13 2018-01-02 Lucasfilm Entertainment Company Ltd. Animation data transfer between geometric models and associated animation models
GB2539250B (en) 2015-06-12 2022-11-02 Okulo Ltd Methods and systems for testing aspects of vision
US10755466B2 (en) 2015-09-21 2020-08-25 TuringSense Inc. Method and apparatus for comparing two motions
US10792566B1 (en) 2015-09-30 2020-10-06 Electronic Arts Inc. System for streaming content within a game application environment
US9741146B1 (en) 2015-09-30 2017-08-22 Electronic Arts Inc. Kinetic energy smoother
US9818217B2 (en) 2015-11-10 2017-11-14 Disney Enterprises, Inc. Data driven design and animation of animatronics
US20170301316A1 (en) 2016-04-13 2017-10-19 James Paul Farell Multi-path graphics rendering
US9984658B2 (en) 2016-04-19 2018-05-29 Apple Inc. Displays with improved color accessibility
US10403018B1 (en) 2016-07-12 2019-09-03 Electronic Arts Inc. Swarm crowd rendering system
US10118097B2 (en) 2016-08-09 2018-11-06 Electronic Arts Inc. Systems and methods for automated image processing for images with similar luminosities
US9826898B1 (en) 2016-08-19 2017-11-28 Apple Inc. Color vision assessment for displays
US10726611B1 (en) 2016-08-24 2020-07-28 Electronic Arts Inc. Dynamic texture mapping using megatextures
GB2555605B (en) 2016-11-03 2020-10-21 Naturalmotion Ltd Animating a virtual object in a virtual world
WO2018112112A1 (en) 2016-12-13 2018-06-21 DeepMotion, Inc. Improved virtual reality system using multiple force arrays for a solver
US10417483B2 (en) 2017-01-25 2019-09-17 Imam Abdulrahman Bin Faisal University Facial expression recognition
US10096133B1 (en) 2017-03-31 2018-10-09 Electronic Arts Inc. Blendshape compression system
US10878540B1 (en) 2017-08-15 2020-12-29 Electronic Arts Inc. Contrast ratio detection and rendering system
US10535174B1 (en) 2017-09-14 2020-01-14 Electronic Arts Inc. Particle-based inverse kinematic rendering system
US10860838B1 (en) 2018-01-16 2020-12-08 Electronic Arts Inc. Universal facial expression translation and character rendering system
JP7190264B2 (en) 2018-03-19 2022-12-15 Ricoh Company, Ltd. Color vision test device, color vision test method, color vision test program and storage medium
US10198845B1 (en) 2018-05-29 2019-02-05 LoomAi, Inc. Methods and systems for animating facial expressions
GB201810309D0 (en) 2018-06-22 2018-08-08 Microsoft Technology Licensing Llc System for predicting articulated object feature location
US10314477B1 (en) 2018-10-31 2019-06-11 Capital One Services, Llc Systems and methods for dynamically modifying visual content to account for user visual impairment
US10861170B1 (en) * 2018-11-30 2020-12-08 Snap Inc. Efficient human pose tracking in videos
US20220254157A1 (en) * 2019-05-15 2022-08-11 Northeastern University Video 2D Multi-Person Pose Estimation Using Multi-Frame Refinement and Optimization
US10902618B2 (en) 2019-06-14 2021-01-26 Electronic Arts Inc. Universal body movement translation and character rendering system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120029699A1 (en) * 2010-06-04 2012-02-02 Institute Of Automation, Chinese Academy Of Sciences System and method for robot trajectory generation with continuous accelerations
US20210335004A1 (en) * 2020-04-27 2021-10-28 Snap Inc. Texture-based pose validation

Also Published As

Publication number Publication date
US11670030B2 (en) 2023-06-06
US20230005203A1 (en) 2023-01-05

Similar Documents

Publication Publication Date Title
US11836843B2 (en) Enhanced pose generation based on conditional modeling of inverse kinematics
US11670030B2 (en) Enhanced animation generation based on video with local phase
US11992768B2 (en) Enhanced pose generation based on generative modeling
US11532172B2 (en) Enhanced training of machine learning systems based on automatically generated realistic gameplay information
US9741146B1 (en) Kinetic energy smoother
US11113860B2 (en) Particle-based inverse kinematic rendering system
US11995754B2 (en) Enhanced animation generation based on motion matching using local bone phases
US20120218262A1 (en) Animation of photo-images via fitting of combined models
EP3657445A1 (en) Method and system for determining identifiers for tagging video frames with
US10885691B1 (en) Multiple character motion capture
US20230177755A1 (en) Predicting facial expressions using character motion states
US20220327755A1 (en) Artificial intelligence for capturing facial expressions and generating mesh data
US11830121B1 (en) Neural animation layering for synthesizing martial arts movements
Liu et al. Posetween: Pose-driven tween animation
US20230267668A1 (en) Joint twist generation for animation
US20240257429A1 (en) Neural animation layering for synthesizing movement
US20230310998A1 (en) Learning character motion alignment with periodic autoencoders
TWI814318B (en) Method for training a model using a simulated character for animating a facial expression of a game character and method for generating label values for facial expressions of a game character using three-dimensional (3D) image capture
US20240233230A9 (en) Automated system for generation of facial animation rigs
Lai et al. Tennis real play: an interactive tennis game with models from real videos
Furukawa et al. Interactive 3D animation creation and viewing system based on motion graph and pose estimation method
Kim et al. Interactive Locomotion Style Control for a Human Character based on Gait Cycle Features
Venkatrayappa et al. Survey of 3D Human Body Pose and Shape Estimation Methods for Contemporary Dance Applications
Agrawal et al. Trajectory Augmentation for Robust Neural Locomotion Controllers

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED