CN114757855A - Method, device, equipment and storage medium for correcting action data - Google Patents


Info

Publication number
CN114757855A
Authority
CN
China
Prior art keywords
data
motion data
action
smoothing
current frame
Prior art date
Legal status
Granted
Application number
CN202210677372.2A
Other languages
Chinese (zh)
Other versions
CN114757855B (en)
Inventor
刘舟
徐键滨
吴梓辉
雷紫娟
章郴
Current Assignee
Guangzhou Sanqi Jiyao Network Technology Co ltd
Original Assignee
Guangzhou Sanqi Jiyao Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Sanqi Jiyao Network Technology Co ltd filed Critical Guangzhou Sanqi Jiyao Network Technology Co ltd
Priority to CN202210677372.2A
Publication of CN114757855A
Application granted
Publication of CN114757855B
Legal status: Active
Anticipated expiration

Classifications

    (All within section G, PHYSICS; class G06, COMPUTING; CALCULATING OR COUNTING.)
    • G06T 5/70 Denoising; Smoothing (under G06T 5/00 Image enhancement or restoration)
    • G06N 3/04 Architecture, e.g. interconnection topology (under G06N 3/02 Neural networks)
    • G06T 17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 5/80 Geometric correction
    • G06T 7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G06T 2207/10016 Video; Image sequence
    • G06T 2219/2004 Aligning objects, relative positioning of parts
    • G06T 2219/2016 Rotation, translation, scaling


Abstract

The application discloses a method, an apparatus, a device, and a storage medium for correcting motion data. First motion data are acquired, the first motion data comprising three-dimensional spatial position information of bones; filtering and smoothing are applied to the first motion data to obtain second motion data; and differential correction is applied to the joint points of the second motion data to obtain third motion data. Specifically, the differential correction computes the difference of each joint point's position data between two adjacent frames: when the difference is smaller than an amplitude threshold, the current frame is deleted; when the difference is greater than or equal to the amplitude threshold, the current frame is kept. Applying joint-point-level differential correction to the filtered and smoothed motion data thus removes signal jitter while preserving the real motion.

Description

Method, device, equipment and storage medium for correcting action data
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for correcting motion data.
Background
With the rapid development of computer hardware and software and rising expectations for animation quality, motion capture has reached the practical stage: many manufacturers have released commercial motion-capture devices, which are used successfully in virtual reality, games, ergonomics research, simulation training, biomechanics research, and other fields. Technically, the essence of motion capture is to measure, track, and record the motion trajectory of an object in three-dimensional space. Colloquially, motion capture can be understood as capturing an actor's movements with professional equipment and transferring them to a character in a film, game, or virtual-reality scene. However, the motion-capture industry also faces problems: professional equipment is expensive, wearing it seriously interferes with the actor's performance, and the repeated workload is large.
AI motion capture technology emerged to address these problems. AI motion capture applies video-based capture techniques, reconstructing the three-dimensional pose of a virtual character by processing ordinary video data, with no need for the actor to wear expensive professional equipment. However, AI motion capture suffers from insufficiently smooth motion, noise, drift, and similar phenomena, so the motion data must be corrected. Related motion-data processing in the prior art typically relies on a single filtering-and-smoothing scheme, so the processed motion data are not accurate and smooth enough.
These problems in the related art therefore need to be addressed.
Disclosure of Invention
The present application aims to solve at least one of the technical problems existing in the related art to some extent.
Accordingly, an object of the embodiments of the present application is to provide a motion data correction method, apparatus, device, and storage medium that address at least one of the problems of insufficiently smooth motion, noise, and drift in motion-capture technology.
To achieve the above technical purpose, the technical solution adopted by the embodiments of the present application comprises the following aspects:
in one aspect, an embodiment of the present application provides a method for correcting motion data, where the method includes:
acquiring first motion data, wherein the first motion data comprises three-dimensional space position information of bones;
carrying out filtering smoothing processing on the first action data to obtain second action data;
performing differential correction on the second motion data to obtain third motion data, wherein performing differential correction on the second motion data to obtain the third motion data comprises:
computing the difference of each joint point's position data between two adjacent frames;
when the difference is smaller than an amplitude threshold, deleting the current frame; and when the difference is greater than or equal to the amplitude threshold, keeping the current frame.
In addition, the motion data correction method according to the above embodiment of the present application may further include the following additional features:
further, in an embodiment of the present application, the deleting the current frame when the differential value is smaller than the action amplitude threshold includes:
and when the difference value is smaller than the action amplitude threshold value, deleting the current frame and taking the previous frame of the current frame as the current frame.
Further, in one embodiment of the present application, the method further comprises:
calculating the standard deviation of all the differences from the differences of the joint point position data of two adjacent frames, and
determining the amplitude threshold based on the standard deviation.
Further, in one embodiment of the present application, the method further comprises:
the filtering and smoothing comprises moving-average smoothing; filtering and smoothing the first motion data to obtain the second motion data comprises:
acquiring action speed information of the first motion data, wherein the value of the action speed information represents how fast the action in the first motion data is;
and determining the smoothing degree of the moving-average smoothing according to the action speed information, the smoothing degree being inversely related to the action speed.
Further, in one embodiment of the present application, the method further comprises:
determining the action speed information from the change amplitude of the key-point position information of the first motion data within a preset time.
Further, in one embodiment of the present application, the method further comprises:
the moving-average smoothing comprises: computing a weighted sum of the current frame and several preceding frames and using the result as the new current frame.
Further, in an embodiment of the present application, the filtering and smoothing further comprises Kalman filtering and low-pass filtering.
On the other hand, an embodiment of the present application further provides an action data modification apparatus, including:
an acquisition unit configured to acquire first motion data including three-dimensional spatial position information of a bone;
a processing unit, configured to apply filtering and smoothing to the first motion data to obtain second motion data, and to apply differential correction to the second motion data to obtain third motion data; wherein applying differential correction to the second motion data to obtain the third motion data comprises: computing the difference of each joint point's position data between two adjacent frames, deleting the current frame when the difference is smaller than an amplitude threshold, and keeping the current frame when the difference is greater than or equal to the amplitude threshold.
On the other hand, an embodiment of the present application provides a terminal device, including:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one program causes the at least one processor to implement the above-described motion data modification method.
On the other hand, an embodiment of the present application further provides a computer-readable storage medium storing a processor-executable program which, when executed by a processor, implements the motion data correction method described above.
Advantages and benefits of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application:
the embodiment of the application discloses a motion data correction method, which comprises the steps of obtaining first motion data, wherein the first motion data comprises three-dimensional space position information of bones; carrying out filtering smoothing processing on the first action data to obtain second action data; and performing differential correction processing on the second motion data to obtain third motion data, wherein the differential correction processing is performed on the joint points in the second motion data. Specifically, the differential correction processing includes counting differential values of position data of the joint points of two adjacent frames; when the difference value is smaller than the amplitude threshold value, deleting the current frame; and when the differential value is larger than the amplitude threshold value, the current frame is reserved. Therefore, the motion data after filtering and smoothing by differential correction processing aiming at the joint points can eliminate signal jitter and keep real motion.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the embodiments or the related prior art are briefly described below. The drawings in the following description illustrate only some embodiments of the technical solutions of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of an AI motion capture technique;
fig. 2 is a schematic flowchart of an action data modification method provided in an embodiment of the present application;
fig. 3 is a schematic structural diagram of an action data modification apparatus provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
The present application is further described below with reference to the figures and specific embodiments. The described embodiments should not be taken as limiting the present application; all other embodiments obtained by those skilled in the art without creative effort fall within the scope of protection of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
With the rapid development of computer hardware and software and rising expectations for animation quality, motion capture has reached the practical stage: many manufacturers have released commercial motion-capture devices, which are used successfully in virtual reality, games, ergonomics research, simulation training, biomechanics research, and other fields. Technically, the essence of motion capture is to measure, track, and record the motion trajectory of an object in three-dimensional space. In general terms, motion capture can be understood as capturing an actor's movements with professional equipment and transferring them to a character in a film or game scene. However, the motion-capture industry also faces problems: professional equipment is expensive, wearing it seriously interferes with the actor's performance, and the repeated workload is large.
AI motion capture technology emerged to address these problems. AI motion capture applies video-based capture techniques, reconstructing the three-dimensional pose of a virtual character by processing the obtained video data, with no need for the actor to wear expensive professional equipment. Fig. 1 is a schematic flow chart of an AI motion capture technique, whose implementation flow comprises:
S100, acquiring a target video and inputting it into the motion-capture model;
S101, aligning video frame rates and transcoding;
S102, splitting the video, extracting target video frames, and performing data preprocessing such as data augmentation and normalization;
S103, predicting camera extrinsic parameters and the shape and pose parameters of the character 3D model using models such as deep neural networks;
S104, building the character 3D model;
S105, obtaining character vertex information and 3D key-point information;
S106, post-processing optimization;
S107, converting the spatial position information of the key points into rotation information using the motion-driving model to generate a BVH driven model;
S108, exporting the BVH (Biovision Hierarchy, a motion file format) action model to an FBX action model (an animation skeleton model file format), binding it to the character skeleton, and exporting a BIP skeleton model (another animation skeleton model file format);
S109, performing vertex matching between the BIP skeleton model and the character skin/skeleton model to complete the skinning operation.
In AI motion capture, phenomena such as insufficiently smooth motion, noise, and drift mean the motion data must be corrected, so the design of the post-processing optimization step is particularly important for solving these technical problems. In post-processing optimization, operations such as smoothing filtering and anomaly identification are applied to the video data, providing reliable data to the driving steps S107 and S108. Post-processing optimization is the key step determining whether the output animation is accurate and smooth; however, related motion-data processing in the prior art usually adopts a single filtering-and-smoothing scheme, so the processing result is not accurate and smooth enough.
Fig. 2 is a flowchart of a motion data correction method according to an embodiment of the present disclosure. The method may be executed by an electronic device that supports motion data correction, such as a smartphone, tablet computer, or cloud server, and optimizes the motion data. As shown in Fig. 2, the method comprises:
step S210, acquiring first motion data, wherein the first motion data comprise three-dimensional spatial position information of bones;
step S220, applying filtering and smoothing to the first motion data to obtain second motion data;
step S230, applying differential correction to the second motion data to obtain third motion data, wherein applying differential correction to the second motion data to obtain the third motion data comprises:
computing the difference of each joint point's position data between two adjacent frames;
when the difference is smaller than the amplitude threshold, deleting the current frame; and when the difference is greater than or equal to the amplitude threshold, keeping the current frame.
In step S210, the data to be corrected in the post-processing optimization step, i.e., the first motion data of this embodiment, are acquired. The first motion data are skeletal animation data and include 3D spatial position data of the skeleton. This embodiment performs data correction mainly on the basis of these position data.
In step S220, filtering and smoothing are applied to the first motion data to obtain second motion data, which serve as the target data of the differential correction.
Step S230 describes the differential correction applied to the second motion data. Each joint point is processed separately: every joint point has its own amplitude range, and the ranges differ between joints. This step addresses the drift caused when the amplitude of some joint point becomes too large. For example, for a local joint point such as the ankle, the differences between adjacent frames are computed and the standard deviation of these first-order differences is calculated. Because the first-order difference of a discrete signal reflects the trend and magnitude of its variation, the first-order difference between two adjacent frames reflects the change amplitude of the local joint point. Statistics of the change amplitude over the whole motion are thus obtained; signals whose change amplitude is too small relative to the standard deviation are then eliminated as jitter, while larger-amplitude information, i.e., the real motion, is kept.
Suppose the signal is [x(1), x(2), x(3), ..., x(n)], where n is a positive integer denoting the number of frames of motion data and x(i) is the position data of a given joint point in frame i. For example, x(1) represents the position data of ankle joint point A in the first frame.
The difference sequence [x(2) - x(1), x(3) - x(2), ..., x(n) - x(n-1)] is computed and each difference is compared with the amplitude threshold. If a difference is greater than or equal to the threshold, the amplitude change is large, the frame is not jitter noise, and it is kept; if it is smaller than the threshold, the amplitude change is small, the frame is jitter noise, and it is deleted. That is, when x(i) - x(i-1) is smaller than the amplitude threshold (i = 2, 3, ..., n), the amplitude of frame i barely differs from the previous frame, the frame is jitter noise, and frame i is deleted. For example, when x(3) - x(2) is smaller than the amplitude threshold, frame 3 is jitter relative to frame 2, and frame 3 is deleted.
In some embodiments, deleting the current frame when the difference is smaller than the amplitude threshold comprises:
when the difference is smaller than the amplitude threshold, deleting the current frame and using the frame preceding the current frame as the current frame.
In this embodiment, when the current frame is deleted it is replaced by the preceding frame, thereby removing the jitter noise. For example, frame i-1 is used as frame i.
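The delete-and-replace rule above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the comparison uses the absolute difference, and that each frame is compared against the possibly already-replaced previous frame (the text leaves both points open).

```python
def differential_correction(positions, amp_threshold):
    """Per-joint differential correction: a frame whose change relative to
    the previous (kept) frame is below the amplitude threshold is treated
    as jitter noise and replaced by the previous frame."""
    out = list(positions)
    for i in range(1, len(out)):
        # First-order difference against the (possibly replaced) previous frame.
        if abs(out[i] - out[i - 1]) < amp_threshold:
            # Amplitude change too small: jitter noise. "Delete" frame i by
            # carrying the previous frame forward.
            out[i] = out[i - 1]
        # Otherwise the change is >= the threshold: real motion, keep frame i.
    return out
```

Each joint point would be run through this routine independently, with its own threshold, since the amplitude ranges of different joints differ.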
In some embodiments, the method further comprises:
calculating a standard deviation of all the differential values according to the differential values of the position data of the joint points of two adjacent frames,
determining the amplitude threshold based on the standard deviation.
In this example, the standard deviation is e = std([x(2) - x(1), x(3) - x(2), ..., x(n) - x(n-1)]). Each difference in [x(2) - x(1), x(3) - x(2), ..., x(n) - x(n-1)] is compared with the amplitude threshold to decide whether the frame is jitter.
Using the standard deviation of all the differences as the basis for the amplitude threshold, rather than a fixed threshold, lets the threshold be set according to the characteristics of the motion data being optimized; the resulting value adapts to the scene characteristics of the current motion data, making jitter-noise detection more accurate and reliable.
It will be appreciated that when the second motion data contain more than one scene, for example a segment of a person running and a segment of a person walking, a scene-analysis step may be added before the amplitude threshold is set. For instance, by observation or automatic recognition the second motion data may be split into a first segment and a second segment whose scenes, or whose character motion states (or speed states), differ. An amplitude threshold may then be set separately for each segment: a first amplitude threshold for the first segment and a second amplitude threshold for the second. The first amplitude threshold is set from the data in the first segment in the same way as above, using the standard deviation of the differences of all frames in that segment as the reference; the second amplitude threshold is set likewise. The division is not limited to two segments: the second motion data may be divided into any number of segments, set flexibly according to actual requirements, for example one segment per scene when the data contain several scenes such as eating, sleeping, running, and walking.
In addition, the amplitude threshold may be set to the standard deviation multiplied by a preset coefficient. For example, with a preset coefficient of 1.5, each difference in [x(2) - x(1), x(3) - x(2), ..., x(n) - x(n-1)] is examined: if x(i) - x(i-1) >= 1.5 * e, x(i) is kept; if x(i) - x(i-1) < 1.5 * e, x(i) is set to x(i-1). The preset coefficient makes it convenient to derive the amplitude threshold from the standard deviation.
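The threshold rule can be sketched as below, under the assumption that the population standard deviation is meant (the text does not distinguish sample from population deviation):

```python
import statistics

def amplitude_threshold(positions, coeff=1.5):
    """Data-derived amplitude threshold: the (population) standard deviation
    of all first-order differences of one joint point's position signal,
    scaled by a preset coefficient (1.5 in the example above)."""
    diffs = [b - a for a, b in zip(positions, positions[1:])]
    return coeff * statistics.pstdev(diffs)
```

For multi-scene data, this would be evaluated once per segment, so that each segment gets a threshold adapted to its own motion characteristics.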
In some embodiments, the filtering and smoothing comprises moving-average smoothing, and filtering and smoothing the first motion data to obtain the second motion data comprises:
acquiring action speed information of the first motion data, wherein the value of the action speed information represents how fast the action in the first motion data is;
and determining the smoothing degree of the moving-average smoothing according to the action speed information, the smoothing degree being inversely related to the action speed.
It will be appreciated that the action speed information in this embodiment can also be interpreted as the scene information mentioned above, the scene information including speed information. In this embodiment, the action speed may be judged from the change amplitude of the key points within a preset time (per unit time). The degree of moving-average smoothing is not fixed but depends on the action speed: when the action is faster (the value is larger), the smoothing degree is lower, since otherwise the result would not match the real scene and fast actions would not be rendered; when the action is slower (the value is smaller), the smoothing degree is higher, since otherwise discontinuous motion easily appears.
The degree of moving-average smoothing may be selected by the user, or set automatically by the processing unit after the action speed information is calculated.
In some embodiments, the method further comprises: determining the action speed information from the change amplitude of the key-point position information of the first motion data within a preset time.
This embodiment provides a way to obtain the action speed information: the action speed of the first motion data is determined by the change amplitude of the key points within a preset time (or per unit time).
In some embodiments, the moving-average smoothing comprises: computing a weighted sum of the current frame and several preceding frames and using the result as the new current frame.
This embodiment gives a moving-average smoothing scheme: with smoothing window length w, frame j is computed by weighting frames j-w+1, j-w+2, ..., j as the result for frame j, i.e. x(j) = (1/w)*x(j-w+1) + (1/w)*x(j-w+2) + ... + (1/w)*x(j).
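The equal-weight window formula above can be sketched as follows. How the first w-1 frames (which lack a full window) are handled is not stated; this sketch averages over the frames actually available.

```python
def moving_average(x, w):
    """Uniform moving-average smoothing with window length w: frame j becomes
    the equal-weight (1/w) average of frames j-w+1 .. j. Near the start of
    the sequence, where fewer than w predecessors exist, the average is taken
    over the frames actually available (an assumption)."""
    out = []
    for j in range(len(x)):
        window = x[max(0, j - w + 1):j + 1]
        out.append(sum(window) / len(window))
    return out
```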
In some embodiments, the filtering and smoothing comprises moving-average smoothing, and filtering and smoothing the first motion data to obtain the second motion data comprises:
acquiring action speed information of the first motion data, wherein the value of the action speed information represents how fast the action in the first motion data is;
and determining the smoothing-window length of the moving-average smoothing according to the action speed information, the window length being inversely related to the action speed.
In this embodiment, the smoothing degree is determined by the smoothing-window length: the longer the window, the more frames participate in the calculation and the more pronounced the smoothing effect, i.e. the higher the smoothing degree. When the action is fast, a moving-average scheme with a low smoothing degree can be chosen, for example window length w1; when the action is slow, a scheme with a high smoothing degree can be chosen, for example window length w2, with w1 < w2. When computing frame j, frames (j-w1+1) to j or frames (j-w2+1) to j are weighted as the result for frame j, each frame having weight 1/w1 or 1/w2 respectively. Writing x(j) for frame j, the w1-window weighted result is x'(j) = (1/w1)*x(j-w1+1) + (1/w1)*x(j-w1+2) + ... + (1/w1)*x(j), and the w2-window weighted result is x'(j) = (1/w2)*x(j-w2+1) + (1/w2)*x(j-w2+2) + ... + (1/w2)*x(j).
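The speed-dependent window choice can be sketched as below. The speed metric (range of a key point's positions over the last few frames), the speed threshold, and the concrete window lengths 3 and 9 are all illustrative assumptions; the patent only requires that faster motion get a shorter window.

```python
def action_speed(positions, span):
    """Speed proxy for one key point: the range of its positions over the
    last `span` frames (a stand-in for the 'change amplitude within a
    preset time'; the exact metric is an assumption)."""
    tail = positions[-span:]
    return max(tail) - min(tail)

def choose_window(speed, speed_threshold, w_fast=3, w_slow=9):
    """Faster action -> shorter window w1 (lower smoothing degree);
    slower action -> longer window w2 (higher smoothing degree)."""
    return w_fast if speed >= speed_threshold else w_slow
```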
Optionally, in this embodiment, the filter smoothing process may further include Kalman filtering and low-pass filtering, applied to the first action data before the moving average smoothing. Kalman filtering first extracts as much useful information as possible; low-pass filtering then removes jitter by smoothing in the frequency domain; finally, the moving average smoothing removes jitter by smoothing in the time domain.
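The three-stage pipeline described above (Kalman filtering, then low-pass filtering, then moving-average smoothing) could be assembled from components like the following. These minimal one-dimensional filters, and all their parameter values, are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

def kalman_1d(z: np.ndarray, q: float = 1e-3, r: float = 1e-2) -> np.ndarray:
    """Minimal 1-D Kalman filter with a random-walk state model.

    z is a sequence of noisy measurements; q is the assumed process-noise
    variance and r the measurement-noise variance. Each step predicts,
    then blends in the new measurement in proportion to the Kalman gain.
    """
    x, p = float(z[0]), 1.0
    out = np.empty(len(z))
    for i, meas in enumerate(z):
        p += q                      # predict: uncertainty grows
        k = p / (p + r)             # Kalman gain
        x += k * (meas - x)         # update toward the measurement
        p *= 1.0 - k                # uncertainty shrinks after update
        out[i] = x
    return out

def low_pass(z: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """First-order (exponential) low-pass filter: frequency-domain smoothing."""
    out = np.empty(len(z))
    out[0] = z[0]
    for i in range(1, len(z)):
        out[i] = alpha * z[i] + (1.0 - alpha) * out[i - 1]
    return out
```

Per the text, each joint coordinate track would pass through the Kalman stage, then the low-pass stage, and finally the moving-average smoothing for time-domain jitter removal.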
Referring to fig. 3, an embodiment of the present application further discloses an action data modification apparatus, including:
an obtaining unit 310, configured to obtain first motion data, where the first motion data includes three-dimensional spatial position information of a bone;
the processing unit 320 is configured to perform filter smoothing on the first motion data to obtain second motion data, and to perform differential correction processing on the second motion data to obtain third motion data; wherein the differential correction processing includes: computing the difference value of the joint-point position data of two adjacent frames; deleting the current frame when the difference value is smaller than an amplitude threshold value; and retaining the current frame when the difference value is larger than the amplitude threshold value.
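The differential correction step can be sketched as below. The summed per-joint Euclidean distance as the difference measure, the interpretation of "taking the previous frame as the current frame" as comparing each frame against the last retained frame, and the factor k multiplying the standard deviation (cf. the threshold derivation in claim 3) are all assumptions:

```python
import numpy as np

def differential_correction(frames: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Delete frames whose joint positions barely differ from the last kept frame.

    frames: (num_frames, num_joints, dims) joint positions. The difference
    value between adjacent frames is summarised as the summed per-joint
    Euclidean distance; the amplitude threshold is k times the standard
    deviation of all such differences (the factor k is an assumption).
    Frames whose difference is below the threshold are deleted, and the
    previously kept frame remains the "current frame" for the next
    comparison; frames at or above the threshold are retained.
    """
    diffs = np.linalg.norm(np.diff(frames, axis=0), axis=-1).sum(axis=-1)
    threshold = k * diffs.std()
    kept = [frames[0]]
    for i in range(1, len(frames)):
        step = np.linalg.norm(frames[i] - kept[-1], axis=-1).sum()
        if step >= threshold:       # significant motion: retain the frame
            kept.append(frames[i])
    return np.array(kept)
```

On a track with one near-duplicate jitter frame, e.g. positions 0, 0.001, 1, 2, the jitter frame is dropped and the three moving frames survive.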
Referring to fig. 4, an embodiment of the present application further discloses a terminal device, including:
at least one processor 410;
at least one memory 420 for storing at least one program;
when the at least one program is executed by the at least one processor 410, the at least one processor 410 is caused to implement the embodiment of the motion data modification method shown in fig. 2.
The embodiment of the application also discloses a computer readable storage medium, wherein a program executable by a processor is stored, and the program executable by the processor is used for realizing the embodiment of the motion data correction method shown in fig. 2 when being executed by the processor.
It is to be understood that the contents of the motion data modification method embodiment shown in fig. 2 all apply to this computer readable storage medium embodiment; the functions implemented by the computer readable storage medium embodiment are the same as those of the motion data modification method embodiment shown in fig. 2, and the advantageous effects it achieves are likewise the same as those achieved by that method embodiment.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present application are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present application is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the functions and/or features may be integrated in a single physical device and/or software module, or one or more functions and/or features may be implemented in a separate physical device or software module. It will also be understood that a detailed discussion regarding the actual implementation of each module is not necessary for an understanding of the present application. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer given the nature, function, and interrelationships of the modules. Accordingly, those of ordinary skill in the art will be able to implement the present application as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the application, which is to be determined by the appended claims along with their full scope of equivalents.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a terminal device (which may be a personal computer, a game server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following technologies, which are well known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the foregoing description of the specification, reference to the description of "one embodiment/example," "another embodiment/example," or "certain embodiments/examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present application have been shown and described, it will be understood by those of ordinary skill in the art that: numerous changes, modifications, substitutions and variations can be made to the embodiments without departing from the principles and spirit of the application, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present application have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for correcting motion data, the method comprising:
acquiring first motion data, wherein the first motion data comprises three-dimensional space position information of bones;
carrying out filtering smoothing processing on the first action data to obtain second action data;
performing differential correction processing on the second motion data to obtain third motion data, wherein the performing differential correction processing on the second motion data to obtain the third motion data includes:
computing the difference value of the joint-point position data of two adjacent frames;
when the difference value is smaller than the amplitude threshold value, deleting the current frame; and when the difference value is greater than or equal to the amplitude threshold value, retaining the current frame.
2. The method as claimed in claim 1, wherein the deleting the current frame when the difference value is smaller than the amplitude threshold comprises:
when the difference value is smaller than the amplitude threshold value, deleting the current frame and taking the frame preceding the current frame as the current frame.
3. The motion data modification method according to any one of claims 1 to 2, further comprising:
calculating a standard deviation of all the difference values according to the difference values of the joint-point position data of two adjacent frames; and
determining the amplitude threshold value according to the standard deviation.
4. The motion data modification method according to claim 1, wherein the filter smoothing process includes a moving average smoothing process; the filtering and smoothing the first motion data to obtain second motion data includes:
acquiring action speed information of the first action data, wherein the numerical value of the action speed information represents the speed of the action in the first action data;
and determining the smoothing degree of the moving average smoothing processing according to the action speed information, wherein the smoothing degree is inversely proportional to the magnitude of the action speed information.
5. The motion data modification method according to claim 4, further comprising:
and determining the action speed information according to the range of change of the key-point position information of the first action data within a preset time.
6. The motion data modification method according to any one of claims 4 to 5, further comprising:
the moving average smoothing process includes: and weighting the previous frames and the current frame to obtain a result as a new current frame.
7. The motion data modification method according to claim 1, wherein the filter smoothing process includes a moving average smoothing process; the filtering and smoothing the first motion data to obtain second motion data includes:
acquiring action speed information of the first action data, wherein the numerical value of the action speed information represents the speed of the action in the first action data;
and determining the length of a smoothing window of the moving average smoothing processing according to the action speed information, wherein the length is inversely proportional to the size of the action speed information.
8. An operation data correction device, comprising:
an acquisition unit configured to acquire first motion data including three-dimensional spatial position information of a bone;
the processing unit is configured to perform filter smoothing on the first motion data to obtain second motion data, and to perform differential correction processing on the second motion data to obtain third motion data; wherein the differential correction processing includes: computing the difference value of the joint-point position data of two adjacent frames; deleting the current frame when the difference value is smaller than an amplitude threshold value; and retaining the current frame when the difference value is larger than the amplitude threshold value.
9. A terminal device, comprising:
at least one processor;
at least one memory for storing at least one program;
when executed by the at least one processor, the at least one program causes the at least one processor to implement the action data modification method of any one of claims 1-7.
10. A computer-readable storage medium in which a program executable by a processor is stored, characterized in that: the processor-executable program when executed by a processor is for implementing the method of action data modification of any one of claims 1 to 7.
CN202210677372.2A 2022-06-16 2022-06-16 Motion data correction method, device, equipment and storage medium Active CN114757855B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210677372.2A CN114757855B (en) 2022-06-16 2022-06-16 Motion data correction method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210677372.2A CN114757855B (en) 2022-06-16 2022-06-16 Motion data correction method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114757855A true CN114757855A (en) 2022-07-15
CN114757855B CN114757855B (en) 2022-09-23

Family

ID=82336466

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210677372.2A Active CN114757855B (en) 2022-06-16 2022-06-16 Motion data correction method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114757855B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115346640A (en) * 2022-10-14 2022-11-15 佛山科学技术学院 Intelligent monitoring method and system for closed-loop feedback of functional rehabilitation training

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120020440A1 (en) * 2010-07-26 2012-01-26 Huawei Device Co., Ltd. Method and device for determining smooth window length in channel estimation
US20130222565A1 (en) * 2012-02-28 2013-08-29 The Johns Hopkins University System and Method for Sensor Fusion of Single Range Camera Data and Inertial Measurement for Motion Capture
US20160125263A1 (en) * 2014-11-03 2016-05-05 Texas Instruments Incorporated Method to compute sliding window block sum using instruction based selective horizontal addition in vector processor
CN110363748A (en) * 2019-06-19 2019-10-22 平安科技(深圳)有限公司 Dithering process method, apparatus, medium and the electronic equipment of key point
CN112183153A (en) * 2019-07-01 2021-01-05 ***通信集团浙江有限公司 Object behavior detection method and device based on video analysis
CN113160295A (en) * 2021-04-25 2021-07-23 北京华捷艾米科技有限公司 Method and device for correcting joint point position
WO2021167394A1 (en) * 2020-02-20 2021-08-26 Samsung Electronics Co., Ltd. Video processing method, apparatus, electronic device, and readable storage medium
CN113706699A (en) * 2021-10-27 2021-11-26 腾讯科技(深圳)有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN114187639A (en) * 2021-12-15 2022-03-15 北京奇艺世纪科技有限公司 Face key point filtering method and device, electronic equipment and storage medium
CN114495274A (en) * 2022-01-25 2022-05-13 上海大学 System and method for realizing human motion capture by using RGB camera

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120020440A1 (en) * 2010-07-26 2012-01-26 Huawei Device Co., Ltd. Method and device for determining smooth window length in channel estimation
US20130222565A1 (en) * 2012-02-28 2013-08-29 The Johns Hopkins University System and Method for Sensor Fusion of Single Range Camera Data and Inertial Measurement for Motion Capture
US20160125263A1 (en) * 2014-11-03 2016-05-05 Texas Instruments Incorporated Method to compute sliding window block sum using instruction based selective horizontal addition in vector processor
CN110363748A (en) * 2019-06-19 2019-10-22 平安科技(深圳)有限公司 Dithering process method, apparatus, medium and the electronic equipment of key point
CN112183153A (en) * 2019-07-01 2021-01-05 ***通信集团浙江有限公司 Object behavior detection method and device based on video analysis
WO2021167394A1 (en) * 2020-02-20 2021-08-26 Samsung Electronics Co., Ltd. Video processing method, apparatus, electronic device, and readable storage medium
CN113160295A (en) * 2021-04-25 2021-07-23 北京华捷艾米科技有限公司 Method and device for correcting joint point position
CN113706699A (en) * 2021-10-27 2021-11-26 腾讯科技(深圳)有限公司 Data processing method and device, electronic equipment and computer readable storage medium
CN114187639A (en) * 2021-12-15 2022-03-15 北京奇艺世纪科技有限公司 Face key point filtering method and device, electronic equipment and storage medium
CN114495274A (en) * 2022-01-25 2022-05-13 上海大学 System and method for realizing human motion capture by using RGB camera

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XIE Wenjun et al.: "Semantic Intermediate Skeleton Driven Automatic Heterogeneous Motion Retargeting", Journal of Computer-Aided Design & Computer Graphics *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115346640A (en) * 2022-10-14 2022-11-15 佛山科学技术学院 Intelligent monitoring method and system for closed-loop feedback of functional rehabilitation training

Also Published As

Publication number Publication date
CN114757855B (en) 2022-09-23

Similar Documents

Publication Publication Date Title
JP4699564B2 (en) Visual background extractor
CN109376256B (en) Image searching method and device
CN111444828A (en) Model training method, target detection method, device and storage medium
CN103262119A (en) Method and system for segmenting an image
CN110443210A (en) A kind of pedestrian tracting method, device and terminal
KR20080066671A (en) Bi-directional tracking using trajectory segment analysis
CN107944381B (en) Face tracking method, face tracking device, terminal and storage medium
CN114757855B (en) Motion data correction method, device, equipment and storage medium
CN111079764A (en) Low-illumination license plate image recognition method and device based on deep learning
CN112560962B (en) Gesture matching method and device for bone animation, electronic equipment and storage medium
CN111079507A (en) Behavior recognition method and device, computer device and readable storage medium
CN113920109A (en) Medical image recognition model training method, recognition method, device and equipment
Wu et al. Traffic object detections and its action analysis
CN110516572A (en) A kind of method, electronic equipment and storage medium identifying competitive sports video clip
CN111899318B (en) Data processing method and device and computer readable storage medium
CN114419619A (en) Erythrocyte detection and classification method and device, computer storage medium and electronic equipment
CN114596440A (en) Semantic segmentation model generation method and device, electronic equipment and storage medium
CA2628553C (en) Method and system for line segment extraction
CN108876812A (en) Image processing method, device and equipment for object detection in video
CN115393532B (en) Face binding method, device, equipment and storage medium
CN111611917A (en) Model training method, feature point detection device, feature point detection equipment and storage medium
CN110008881A (en) The recognition methods of the milk cow behavior of multiple mobile object and device
CN107818287B (en) Passenger flow statistics device and system
CN114782287B (en) Motion data correction method, device, equipment and storage medium
CN114491410A (en) Motion mode identification method and system, intelligent wearable device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant