CN112131985B - Real-time lightweight human body posture estimation method based on OpenPose improvement - Google Patents
Real-time lightweight human body posture estimation method based on OpenPose improvement
- Publication number
- Publication number: CN112131985B (application CN202010953721.XA)
- Authority
- CN
- China
- Prior art keywords
- human body
- module
- body posture
- posture estimation
- estimation method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to a real-time lightweight human body posture estimation method based on an improved OpenPose. A 2D image frame is acquired and preprocessed, then input into a trained feature extraction module, where a MobileNetV2 lightweight module produces the initial feature F. F is input into an initial module, whose partial dual-branch structure produces a joint-point heat map S₁ and a part affinity field L₁ for the figure's pose. F, S₁ and L₁ are concatenated and input into a refining module, which uses continuous small-convolution-layer structures fused with a residual structure and a partial dual-branch structure to produce the refined heat map S₂ and part affinity field L₂. Finally, the detected joints and limbs are connected with a greedy parsing method and the result is visualized. The invention reduces the number of parameters and the computation required for human pose estimation, improves execution speed on both CPU and GPU while keeping accuracy at an acceptable level, and benefits the development and application of human pose estimation on portable and embedded devices.
Description
Technical Field
The invention belongs to the field of computer vision and particularly relates to a real-time lightweight human body pose estimation method based on an improved OpenPose.
Background
Human body pose estimation aims to predict and connect the positions of the joints of the human skeleton from visual or depth information, and is an important technical means for applications such as human action recognition, action teaching, human-computer interaction and smart cities. It opens new possibilities for artificial intelligence: by monitoring human behavior, a system can automatically recognize the meaning a target wants to convey and respond accordingly, making interaction between humans and machines more harmonious and efficient.
Human pose estimation algorithms can be classified in several ways: by the type of the estimated object, into 2D image pose estimation and 3D pose estimation with depth information; by the number of estimated subjects, into single-person and multi-person pose estimation; by algorithm model, into top-down and bottom-up methods; and by development stage, into traditional pose estimation and deep-learning-based pose estimation.
The bottom-up approach first detects and classifies all joint points in the image and then matches the joints to the corresponding persons with a connection algorithm; it therefore consists mainly of two parts, joint detection and joint clustering. Its advantage is that the recognition speed stays essentially constant regardless of the number of people. The top-down approach first detects the persons in the image and then detects the joints of each person. Its advantage is a high detection rate, but the recognition time grows linearly with the number of detected persons and becomes particularly slow when the image contains many people.
Traditional algorithms fall largely into two types: graph-structure-based models and feature-based direct regression. The main idea of graph-structure models is to decompose the appearance of the target into local part templates according to empirical geometric constraints between parts; after each part is parameterized by pixel position and orientation, the resulting structure can model the joints. The main idea of feature-based direct regression is to treat pose estimation directly as a classification or regression problem; its accuracy is only moderate, however, and it suits scenes with clean backgrounds.
Deep-learning-based human pose estimation mainly uses the convolutional neural network (CNN) and various CNN-derived networks to extract feature information from images. Compared with traditional methods, a CNN can extract feature vectors for the human joints and their context at multiple receptive fields and scales, and coordinate regression on these features yields the pose estimate for the human body in the image. However, such deep neural networks often involve millions or even billions of parameters, making it difficult to achieve the same performance without GPUs with high computing power; the deep and complex architectures also make the models very large and hard to deploy directly on portable mobile and embedded devices.
Disclosure of Invention
The invention aims to provide a real-time lightweight human body pose estimation method based on an improved OpenPose, achieving a significant compression of the network model in parameter count and computation and a significant improvement in execution speed, while keeping accuracy at a high level when the overlap between figures in the image is not high.
In order to achieve the above purpose, the invention adopts the following technical scheme:
An OpenPose-improvement-based real-time lightweight human body pose estimation method comprises the following steps:
S1, acquiring a 2D image frame and applying standardized preprocessing to the image;
S2, inputting the preprocessed image into a trained feature extraction module, where a MobileNetV2 lightweight module produces the initial feature F;
S3, inputting F into the initial module, whose partial dual-branch structure produces the joint-point heat map S₁ and the part affinity field L₁ of the figure's pose;
S4, concatenating F, S₁ and L₁ and inputting them into the refining module, which uses continuous small-convolution-layer structures fused with a residual structure and a partial dual-branch structure to produce the refined heat map S₂ and part affinity field L₂;
S5, connecting the detected joints and limbs with a greedy parsing method and visualizing the result.
Preferably, in S2: the input image is scaled to 256 pixels, and the scaled image is preprocessed according to the formula (img − mean) × scale, where img is the RGB value of the original image, mean is taken as 128, and scale as 1/256.
Preferably, in S2: the feature extraction module uses the first 7 bottleneck structures of MobileNetV2 and changes the stride of the 5th and 7th bottleneck structures from s = 2 to s = 1.
Preferably, in S3 and S4 the initial module and the refining module output the joint-point heat map and the part affinity field respectively; apart from the last two consecutive 1×1 convolution layers, which form the dual-branch output, all remaining convolution layers are merged (shared) convolutions.
Preferably, in S3: the initial feature F is input into three consecutive 3×3 convolution layers to extract features further, and the result is fed into two branches of two 1×1 convolution layers each, yielding the joint-point heat map S₁ and the part affinity field L₁, which satisfy S₁ = ρ₁(F) and L₁ = φ₁(F).
Preferably, in S4: each continuous small-convolution module with a fused residual structure in the refining module uses three consecutive 3×3 convolution layers, and the input and output of each small module are added through a parameter-free additive shortcut; the outputs satisfy S₂ = ρ₂(F, S₁, L₁) and L₂ = φ₂(F, S₁, L₁).
Preferably, in S3: the numbers of output joint-point heat maps and part affinity maps match the actual human structure: 19 joint-point maps and 38 part affinity maps.
Preferably, an encoder-decoder network is used for feature extraction and joint detection, ensuring that the visualized output image has the same size as the original input image.
Preferably, during training of the feature extraction module a validation subset is used to compute the accuracy, and the current model weights are saved whenever the current accuracy is the highest so far.
Preferably, after the refining module the detected joints and part affinities are output to a connection module, which selects the fewest limbs in the graph to connect as the trunk of a spanning tree, decomposes the optimal matching problem into a series of bipartite matching sub-problems, determines the matching of each node separately, and obtains the maximum limb matching.
Owing to the above technical scheme, the invention has the following advantages over the prior art:
The invention adopts a modified partial MobileNetV2 network, a partial dual-branch structure, and continuous small-convolution-layer structures fused with a residual structure, making the network model lightweight. Using a convolutional neural network and a greedy parsing algorithm, it detects and connects the joints and limbs of human figures in 2D images, effectively reducing the parameter count and computation of the model, saving memory and computing resources on hardware devices, while maintaining accuracy and good runtime behavior.
The lightweight improvement is based on OpenPose, fusing a lightweight network with a residual structure, which significantly improves execution speed on both CPU and GPU and makes deployment of pose estimation on embedded or portable devices in practical settings friendly. In particular, when the overlap between people in the image is low and the background is not excessively noisy, the method delivers accurate skeleton detection while remaining lightweight and fast.
Drawings
FIG. 1 is a flow chart of the method of the present embodiment;
FIG. 2 is a schematic illustration of a human body joint and limb connection used in the present embodiment;
FIG. 3 is a schematic diagram of a continuous small convolution layer module with a fused residual structure according to the present embodiment;
fig. 4 is a diagram of the human body posture estimation result according to the present embodiment.
Detailed Description
The following description of the embodiments of the invention is made clear and complete with reference to the accompanying drawings, which show some, but not all, embodiments of the invention. All other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the invention.
The OpenPose-improvement-based real-time lightweight human body pose estimation method shown in FIG. 1 specifically comprises the following steps:
2D image frames are acquired from a video, a picture, a camera, or a similar source, and standardized preprocessing is applied to the images.
The preprocessed image is input into the trained feature extraction module, and the MobileNetV2 lightweight module produces the initial feature F. The specific method is as follows:
The input image is scaled to 256 pixels, and the scaling ratio is stored so that the original size can be restored at output. The scaled image is then preprocessed according to (img − mean) × scale, where img is the RGB value of the original image, mean is taken as 128, and scale as 1/256.
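The normalization step above can be sketched as follows; this is a minimal NumPy illustration (the function name is ours, and the resize to 256 pixels with its stored ratio is assumed to happen before this call):

```python
import numpy as np

def preprocess(img, mean=128.0, scale=1.0 / 256.0):
    """Normalize an RGB frame as (img - mean) * scale.

    `img` is an HxWx3 uint8 array; resizing the longer side to
    256 px (and remembering the ratio so the visualization can be
    restored to the original size) is assumed to happen earlier.
    """
    return (img.astype(np.float32) - mean) * scale
```

With mean = 128 and scale = 1/256, pixel values land roughly in [−0.5, 0.5].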
During training of the feature extraction module, a validation subset is used to compute the accuracy, and the current model weights are saved whenever the accuracy is the highest so far. The training data use the COCO 2017 training and validation sets. The training period is set to 100 epochs, and the Adam algorithm is chosen as the optimizer of the loss function. The feature extraction module uses the first 7 bottleneck structures of MobileNetV2 and changes the stride of the 5th and 7th bottleneck structures from s = 2 to s = 1. The detection results are shown in FIG. 4.
The initial feature F is input into the initial module, whose partial dual-branch structure produces the joint-point heat map S₁ and the part affinity field L₁ of the figure's pose. The specific method is as follows:
will initiate the featureFThe features are further extracted by inputting the features into three continuous 3×3 convolution layers, and the features are respectively input into two branches to perform two-layer 1×1 convolution, so as to obtain the joint point heat map respectivelyS 1 And partial affinity graphL 1 Respectively satisfy the formulaS 1 =ρ 1 (F),L 1 =φ 1 (F)。
Because the weights and feature maps of the two branches at the same intermediate layer are quite similar, merging most of the dual-branch structure into a single branch saves further space and reduces the resources consumed by computation. At the end of each branch an L2 loss function computes the difference between the predicted and true values. Because some data sets do not annotate every person, a binary parameter W(p) is added: when the position of pixel p is not annotated, W(p) is zero, which avoids penalizing predictions that are actually correct and thus distorting the trained parameters.
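A minimal NumPy sketch of this masked L2 loss; the function name and array layout are illustrative, but the role of the binary mask W(p) follows the text:

```python
import numpy as np

def masked_l2_loss(pred, target, w):
    """L2 loss with the binary mask W(p): pixels whose annotation is
    missing (w == 0) contribute nothing, so correct predictions at
    unlabeled positions are not penalized."""
    return float(np.sum(w * (pred - target) ** 2))
```

In practice one such term would be accumulated per branch (heat maps and affinity fields) and per stage.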
According to the structure of the human body, as shown in FIG. 2, 19 joint-point heat maps are output: nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle, right ankle, neck, and background. The number of part affinity maps is 38.
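The 19 heat-map channels enumerated above can be written out as a small constant table; note that reading the 38 affinity maps as 19 limb connections times two vector components follows the standard OpenPose formulation and is our assumption, since the text only gives the total:

```python
# The 19 heat-map channels from the text (18 body parts + background).
KEYPOINTS = [
    "nose", "left eye", "right eye", "left ear", "right ear",
    "left shoulder", "right shoulder", "left elbow", "right elbow",
    "left wrist", "right wrist", "left hip", "right hip",
    "left knee", "right knee", "left ankle", "right ankle",
    "neck", "background",
]
NUM_PAF_MAPS = 38  # assumed: 19 limb connections x (x, y) components
```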
F, S₁ and L₁ are concatenated and input into the refining module, which uses continuous small-convolution-layer structures fused with a residual structure and a partial dual-branch structure to produce the joint-point heat map S₂ and part affinity field L₂. The specific method is as follows:
The input to the refining stage is obtained by concatenating the initial feature F, the heat map S₁ and the part affinity field L₁; the outputs satisfy S₂ = ρ₂(F, S₁, L₁) and L₂ = φ₂(F, S₁, L₁).
In a deep convolutional network the receptive field grows with each layer's output feature map. To maintain the size of the receptive field while reducing the parameters and computation, the original single 7×7 convolution layer is replaced by three consecutive 3×3 convolution layers. Meanwhile, because the added depth risks a sudden drop in accuracy, a residual structure is introduced; as shown in FIG. 3, a total of 5 continuous small-convolution-layer modules fused with the residual structure are convolved in series in the network model.
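One such refining unit (three 3×3 convolutions plus a parameter-free additive shortcut) might look like the following PyTorch sketch; the channel width and activation placement are our assumptions:

```python
import torch
import torch.nn as nn

class ResidualTripleConv(nn.Module):
    """Sketch of one refining-stage unit: three consecutive 3x3
    convolutions replacing a single 7x7 (same receptive field, fewer
    parameters), with a parameter-free additive skip connection."""

    def __init__(self, channels=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        # identity shortcut: addition adds no parameters
        return torch.relu(self.body(x) + x)
```

Per the text, the refining module would chain five of these in series, e.g. `nn.Sequential(*(ResidualTripleConv() for _ in range(5)))`.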
After the refining module, the detected joints and part affinities are output to the connection module, which selects the fewest limbs in the graph to connect as the trunk of a spanning tree, decomposes the optimal matching problem into a series of bipartite matching sub-problems, determines the matching of each node separately, and finds the maximum limb matching with the Hungarian algorithm.
The detected joints and limbs are connected bottom-up with a greedy parsing method, and the result is visualized. The principle of limb connection is that a given joint of one type cannot be connected to two joints of another type at the same time. Because the bottom-up representation of joints and part affinities fully encodes global context, the greedy parsing method greatly shortens the solution time while achieving high precision.
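A much-simplified sketch of this parsing step: each candidate limb is scored by sampling the part affinity field along the segment between two joint candidates, and pairs are then matched greedily so that no joint is used twice. The sampling count and the greedy tie-breaking below are illustrative simplifications of the spanning-tree and bipartite-matching formulation in the text:

```python
import numpy as np

def limb_score(paf_x, paf_y, p1, p2, n_samples=10):
    """Average the part-affinity vectors sampled along the segment
    p1->p2, projected onto the limb direction (points are (x, y))."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    v = p2 - p1
    norm = np.linalg.norm(v)
    if norm == 0:
        return 0.0
    u = v / norm  # unit vector along the candidate limb
    score = 0.0
    for t in np.linspace(0.0, 1.0, n_samples):
        x, y = (p1 + t * v).round().astype(int)
        score += paf_x[y, x] * u[0] + paf_y[y, x] * u[1]
    return score / n_samples

def greedy_connect(scores):
    """Greedily keep the highest-scoring pairs, each joint used once
    (the same joint never joins two joints of the other type)."""
    pairs, used_a, used_b = [], set(), set()
    for a, b in sorted(np.ndindex(scores.shape),
                       key=lambda ab: -scores[ab]):
        if scores[a, b] > 0 and a not in used_a and b not in used_b:
            pairs.append((a, b))
            used_a.add(a)
            used_b.add(b)
    return pairs
```

A perfectly aligned field (affinity vectors parallel to the limb) yields a score of 1.0, which is why thresholding such scores separates real limbs from spurious pairings.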
This embodiment combines the three elements above, the lightweight network and the residual structure, and modifies part of the OpenPose network. Testing shows that the execution speed of human pose estimation improves markedly on both CPU and GPU, and the memory and computing resources required by the model drop significantly. When lighting is sufficient and the overlap between people is not high, accuracy remains at a high level with good runtime behavior, which benefits the development and application of pose estimation on portable or embedded devices.
The above embodiments are provided to illustrate the technical concept and features of the present invention and are intended to enable those skilled in the art to understand the content of the present invention and implement the same, and are not intended to limit the scope of the present invention. All equivalent changes or modifications made in accordance with the spirit of the present invention should be construed to be included in the scope of the present invention.
Claims (8)
1. A real-time lightweight human body posture estimation method based on OpenPose improvement, characterized by comprising:
S1, acquiring a 2D image frame and applying standardized preprocessing to the image;
S2, inputting the preprocessed image into a trained feature extraction module, where a MobileNetV2 lightweight module produces the initial feature F;
S3, inputting F into an initial module, whose partial dual-branch structure produces the joint-point heat map S₁ and the part affinity field L₁ of the figure's pose;
wherein F is input into three consecutive 3×3 convolution layers to extract features further, and the result is fed into two branches of two 1×1 convolution layers each, yielding S₁ and L₁, which satisfy S₁ = ρ₁(F) and L₁ = φ₁(F);
S4, concatenating F, S₁ and L₁ and inputting them into a refining module, which uses continuous small-convolution-layer structures fused with a residual structure and a partial dual-branch structure to produce the heat map S₂ and part affinity field L₂;
wherein the initial module and the refining module output the joint-point heat map and the part affinity field respectively, and apart from the last two consecutive 1×1 convolution layers, which form the dual-branch output, all remaining convolution layers are merged convolutions;
S5, connecting the detected joints and limbs with a greedy parsing method and visualizing the result.
2. The OpenPose-improvement-based real-time lightweight human body posture estimation method according to claim 1, characterized in that in S2: the input image is scaled to 256 pixels, and the scaled image is preprocessed according to the formula (img − mean) × scale, where img is the RGB value of the original image, mean is taken as 128, and scale as 1/256.
3. The OpenPose-improvement-based real-time lightweight human body posture estimation method according to claim 1, characterized in that in S2: the feature extraction module uses the first 7 bottleneck structures of MobileNetV2 and changes the stride of the 5th and 7th bottleneck structures from s = 2 to s = 1.
4. The OpenPose-improvement-based real-time lightweight human body posture estimation method according to claim 1, characterized in that in S4: each continuous small-convolution module with a fused residual structure in the refining module uses three consecutive 3×3 convolution layers, the input and output of each small module are added through a parameter-free additive shortcut, and the outputs satisfy S₂ = ρ₂(F, S₁, L₁) and L₂ = φ₂(F, S₁, L₁).
5. The OpenPose-improvement-based real-time lightweight human body posture estimation method according to claim 1, characterized in that in S3: the numbers of output joint points and part affinity maps match the actual human structure.
6. The OpenPose-improvement-based real-time lightweight human body posture estimation method according to claim 1, characterized in that an encoder-decoder network is used for feature extraction and joint detection.
7. The OpenPose-improvement-based real-time lightweight human body posture estimation method according to claim 1, characterized in that during training of the feature extraction module a validation subset is used to compute the accuracy, and the current model weights are saved whenever the current accuracy is the highest.
8. The OpenPose-improvement-based real-time lightweight human body posture estimation method according to claim 1, characterized in that after the refining module the detected joints and part affinities are output to a connection module, which selects the fewest limbs in the graph to connect as the trunk of a spanning tree, decomposes the optimal matching problem into a series of bipartite matching sub-problems, determines the matching of each node separately, and obtains the maximum limb matching.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010953721.XA CN112131985B (en) | 2020-09-11 | 2020-09-11 | Real-time light human body posture estimation method based on OpenPose improvement |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010953721.XA CN112131985B (en) | 2020-09-11 | 2020-09-11 | Real-time light human body posture estimation method based on OpenPose improvement |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112131985A CN112131985A (en) | 2020-12-25 |
CN112131985B true CN112131985B (en) | 2024-01-09 |
Family
ID=73846193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010953721.XA Active CN112131985B (en) | 2020-09-11 | 2020-09-11 | Real-time light human body posture estimation method based on OpenPose improvement |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112131985B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113177432B (en) * | 2021-03-16 | 2023-08-29 | 重庆兆光科技股份有限公司 | Head posture estimation method, system, equipment and medium based on multi-scale lightweight network |
CN113191242A (en) * | 2021-04-25 | 2021-07-30 | 西安交通大学 | Embedded lightweight driver leg posture estimation method based on OpenPose improvement |
CN113221924A (en) * | 2021-06-02 | 2021-08-06 | 福州大学 | Portrait shooting system and method based on OpenPose |
CN113368487A (en) * | 2021-06-10 | 2021-09-10 | 福州大学 | OpenPose-based 3D private fitness system and working method thereof |
CN113420676B (en) * | 2021-06-25 | 2023-06-02 | 华侨大学 | 3D human body posture estimation method of two-way feature interlacing fusion network |
CN113255597B (en) * | 2021-06-29 | 2021-09-28 | 南京视察者智能科技有限公司 | Transformer-based behavior analysis method and device and terminal equipment thereof |
CN113838131A (en) * | 2021-09-17 | 2021-12-24 | 佛山科学技术学院 | Lightweight two-dimensional human body posture estimation method and device, storage medium and equipment |
CN114140828B (en) * | 2021-12-06 | 2024-02-02 | 西北大学 | Real-time lightweight 2D human body posture estimation method |
CN114882526A (en) * | 2022-04-24 | 2022-08-09 | 华南师范大学 | Human back acupuncture point identification method, human back acupuncture point identification device and computer storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110222665A (en) * | 2019-06-14 | 2019-09-10 | 电子科技大学 | Human motion recognition method in a kind of monitoring based on deep learning and Attitude estimation |
EP3547211A1 (en) * | 2018-03-30 | 2019-10-02 | Naver Corporation | Methods for training a cnn and classifying an action performed by a subject in an inputted video using said cnn |
CN110433471A (en) * | 2019-08-13 | 2019-11-12 | 宋雅伟 | A kind of badminton track monitoring analysis system and method |
WO2019222383A1 (en) * | 2018-05-15 | 2019-11-21 | Northeastern University | Multi-person pose estimation using skeleton prediction |
CN110738154A (en) * | 2019-10-08 | 2020-01-31 | 南京熊猫电子股份有限公司 | pedestrian falling detection method based on human body posture estimation |
CN110929584A (en) * | 2019-10-28 | 2020-03-27 | 九牧厨卫股份有限公司 | Network training method, monitoring method, system, storage medium and computer equipment |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10430966B2 (en) * | 2017-04-05 | 2019-10-01 | Intel Corporation | Estimating multi-person poses using greedy part assignment |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3547211A1 (en) * | 2018-03-30 | 2019-10-02 | Naver Corporation | Methods for training a cnn and classifying an action performed by a subject in an inputted video using said cnn |
WO2019222383A1 (en) * | 2018-05-15 | 2019-11-21 | Northeastern University | Multi-person pose estimation using skeleton prediction |
CN110222665A (en) * | 2019-06-14 | 2019-09-10 | 电子科技大学 | Human motion recognition method in a kind of monitoring based on deep learning and Attitude estimation |
CN110433471A (en) * | 2019-08-13 | 2019-11-12 | 宋雅伟 | A kind of badminton track monitoring analysis system and method |
CN110738154A (en) * | 2019-10-08 | 2020-01-31 | 南京熊猫电子股份有限公司 | pedestrian falling detection method based on human body posture estimation |
CN110929584A (en) * | 2019-10-28 | 2020-03-27 | 九牧厨卫股份有限公司 | Network training method, monitoring method, system, storage medium and computer equipment |
Non-Patent Citations (1)
Title |
---|
A Survey of Progress in Pose Estimation for Sports Video Analysis; Zong Libo; Song Yifan; Wang Yiming; Ma Bo; Wang Dongyang; Li Yingjie; Zhang Peng; Journal of Chinese Computer Systems (No. 08); full text *
Also Published As
Publication number | Publication date |
---|---|
CN112131985A (en) | 2020-12-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112131985B (en) | Real-time light human body posture estimation method based on OpenPose improvement | |
CN112800903B (en) | Dynamic expression recognition method and system based on space-time diagram convolutional neural network | |
WO2020078119A1 (en) | Method, device and system for simulating user wearing clothing and accessories | |
CN111383638A (en) | Signal processing device, signal processing method and related product | |
CN108805058B (en) | Target object change posture recognition method and device and computer equipment | |
CN112446302B (en) | Human body posture detection method, system, electronic equipment and storage medium | |
CN112232106B (en) | Two-dimensional to three-dimensional human body posture estimation method | |
CN111709289B (en) | Multitask deep learning model for improving human body analysis effect | |
CN112101262B (en) | Multi-feature fusion sign language recognition method and network model | |
CN111680550B (en) | Emotion information identification method and device, storage medium and computer equipment | |
CN112258555A (en) | Real-time attitude estimation motion analysis method, system, computer equipment and storage medium | |
CN111191630A (en) | Performance action identification method suitable for intelligent interactive viewing scene | |
CN111401116B (en) | Bimodal emotion recognition method based on enhanced convolution and space-time LSTM network | |
Wei et al. | Learning to infer semantic parameters for 3D shape editing | |
CN112836755B (en) | Sample image generation method and system based on deep learning | |
Zhao et al. | Human action recognition based on improved fusion attention CNN and RNN | |
CN117635897A (en) | Three-dimensional object posture complement method, device, equipment, storage medium and product | |
CN112199994A (en) | Method and device for detecting interaction between 3D hand and unknown object in RGB video in real time | |
CN113255514B (en) | Behavior identification method based on local scene perception graph convolutional network | |
CN113192186B (en) | 3D human body posture estimation model establishing method based on single-frame image and application thereof | |
Usman et al. | Skeleton-based motion prediction: A survey | |
CN112634411B (en) | Animation generation method, system and readable medium thereof | |
Wang et al. | Design of static human posture recognition algorithm based on CNN | |
Li et al. | Multimodal information-based broad and deep learning model for emotion understanding | |
Zhong et al. | Facial Expression recognition method based on convolution neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||