CN110688980A - Human body posture classification method based on computer vision - Google Patents


Info

Publication number
CN110688980A
CN110688980A
Authority
CN
China
Prior art keywords
target
human
data
posture
posture classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910966746.0A
Other languages
Chinese (zh)
Other versions
CN110688980B (en)
Inventor
张剑书
卢阿丽
杨炼鑫
樊英泽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN201910966746.0A priority Critical patent/CN110688980B/en
Publication of CN110688980A publication Critical patent/CN110688980A/en
Application granted granted Critical
Publication of CN110688980B publication Critical patent/CN110688980B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a human body posture classification method based on computer vision. The method collects video surveillance data through a monitoring camera; constructs a training data set for human posture classification; screens out effective human posture classification features; and, based on a neural network algorithm, selects a loss function and an optimization algorithm in combination with the training data set and the screened features to train a human posture classification model. Target detection and recognition are then performed on the video surveillance data, posture estimation is applied to each region recognized as a human target, human posture feature data are calculated from the estimation result and fed into the classification model, and the posture of each person appearing in the video is determined. The method requires no sensors or optical markers to be worn on the target object, does not impair freedom of movement, and offers low data acquisition cost, strong real-time performance and high processing efficiency.

Description

Human body posture classification method based on computer vision
Technical Field
The invention relates to a human body posture classification method based on computer vision.
Background
Existing human body posture classification mainly relies on wearable devices, but such methods require the target object to wear various sensors or optical markers, which interferes with the target's motion. For the same reason, wearable-device-based posture classification is difficult to apply widely in public places.
At present, surveillance cameras installed in public places cover a very large area and record nearly every corner of daily life. Traditional video surveillance systems rely on manual viewing to spot anomalies in the video, which incurs considerable labor cost and is prone to missed detections.
The above-mentioned problems should be considered and solved in the human posture classification process.
Disclosure of Invention
The invention aims to provide a human body posture classification method based on computer vision, solving the problem in the prior art that wearable-device-based posture classification methods require the target object to wear various sensors or optical markers and thereby interfere with its motion.
The invention further aims to provide, on the basis of target detection and target recognition applied to video surveillance data, a computer-vision method for judging the postures of all human targets within the video surveillance coverage, which is an important link in intelligent video surveillance technology.
The technical solution of the invention is as follows:
a human body posture classification method based on computer vision comprises the following steps,
s1, deploying a monitoring camera in the scene, and acquiring video monitoring data through the monitoring camera;
s2, analyzing the target class labels, target position labels, picture description labels and human-target joint point labels in the open-source image data set Microsoft COCO, screening out pictures and label data from the data set accordingly, and constructing a training data set for human posture classification;
s3, constructing an original feature set based on the labeled data in the training data set obtained in the step S2, calculating high-order features of the data on the basis, and screening out effective human posture classification features through feature pruning;
s4, based on a neural network algorithm, selecting a loss function and an optimization algorithm by combining the training data set obtained in the step S2 and the effective human posture classification characteristics screened in the step S3, and training a human posture classification model;
and S5, carrying out target detection and recognition operation on the video monitoring data obtained in the step S1, carrying out posture estimation on the region recognized as the human target, calculating effective human posture characteristic data based on the posture estimation result, importing the calculated characteristic data into the human posture classification model obtained in the step S4, and judging the posture of the human appearing in the video.
Optionally, in step S2, a step of constructing a training data set for human body posture classification, specifically,
s21, selecting pictures only containing one human target in the data set based on the class labels of the targets in the open source image data set Microsoft COCO, and taking the selected pictures and various corresponding label data thereof as alternative training data sets;
s22, analyzing the picture description labels corresponding to each picture in the alternative training data set obtained in step S21, keeping the pictures whose descriptions contain posture-related keywords, adding posture labels to those pictures, and deleting the rest;
s23, checking the human target joint point labels corresponding to the pictures in the alternative training data set, deleting the pictures with incomplete joint point labels, and taking the rest pictures and label data thereof as a training data set for human posture classification.
Optionally, in step S3, screening out effective human posture classification features, specifically,
s31, reconstructing a plane rectangular coordinate system by taking the lower left corner of the surrounding frame where the target is located as the origin of coordinates of the coordinate data of each target in the training data set obtained in the step S2, repositioning the surrounding frame coordinates and the joint point coordinate data corresponding to the target in the training data set, and taking the coordinates as original characteristics;
s32, calculating, for each target, the aspect ratio of the bounding box in which it is located and the joint angle formed by every group of 3 adjacent joint points, and taking these data as the target's high-order features;
s33, calculating the variance among the high-order features, and screening out a set number of features from the high-order features as effective human body posture classification features based on the principle that the feature similarity among the postures of the same type is large and the feature similarity among the postures of different types is small.
Optionally, in step S4, a human posture classification model is trained, specifically,
s41, constructing a five-layer neural network model, wherein the first layer is an input layer, each neuron of which represents one feature, so that the number of input neurons equals the number of screened features; the second layer is hidden layer I, which performs multi-level abstraction on the input features so that different classes of data become linearly separable, its activation function being the ReLU function, i.e. the linear rectification function; the third layer is a dropout layer, i.e. a random deactivation layer, which randomly deletes part of the neurons of hidden layer I during training so as to reduce overfitting; the fourth layer is hidden layer II, which performs multi-level abstraction on the features extracted by the third layer, its activation function also being the ReLU function; and the fifth layer is an output layer, which outputs the probabilities that the target's posture belongs to the different categories;
and S42, dividing the training data set obtained in the step S2 into a training set and a verification set, inputting the training set and the human posture classification features obtained in the step S3 into the neural network model constructed in the step S41, adopting a mean square error function as a loss function, optimizing the model by using a small batch gradient descent method, and stopping training when the model converges to obtain the human posture classification model.
Optionally, in step S5, the posture of each person appearing in the video is determined, specifically,
s51, for the video monitoring data captured by the monitoring camera deployed in the step S1, firstly, an effective target is found through a target detection and target identification technology, and category information and position information of the target are extracted from the effective target;
s52, screening out the bounding boxes whose recognition result is a human target from the target data obtained in step S51, analyzing the area covered by each bounding box with a human posture estimation algorithm, extracting the joint point position data of the target, and calculating from it the effective human posture feature data screened in step S33;
and S53, analyzing the effective human posture feature data of each human target with the human posture classification model trained in step S4 to obtain the posture information of each human target in the video.
The invention has the beneficial effects that: compared with the traditional human posture classification method based on wearable equipment, the human posture classification method based on computer vision does not need a target object to wear various sensors or optical marks, data collection is completely completed through an external monitoring camera, the movement comfort is not affected, and the data collection cost is low. According to the human body posture classification method based on computer vision, the posture classification of the target can be realized only by analyzing the target information in the current frame of the video monitoring data, and the behavior of the target in the previous frames does not need to be analyzed, so that the method is a posture classification method with high real-time performance and has high processing efficiency.
Drawings
FIG. 1 is a flowchart illustrating a human body posture classification method based on computer vision according to an embodiment of the present invention.
FIG. 2 is a schematic illustration of the positions of 17 joint point marker data of the human target in an embodiment.
FIG. 3 is an explanatory diagram of high-order features of a partial human body posture in the embodiment.
Fig. 4 is an explanatory diagram of a neural network structure constructed in the embodiment.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Examples
A human body posture classification method based on computer vision, as shown in figure 1, comprises the following steps,
and S1, deploying a monitoring camera in the scene and acquiring video data through the monitoring camera. The camera height is preferably between 2 m and 2.5 m, at a downward-looking angle, and the camera angle is also adjusted according to the placement of fixed objects in the scene so as to reduce monitoring blind spots.
S2, analyzing a target class label, a target position label, a picture description label and a human target joint point label in the open source image data set Microsoft COCO, screening out picture and label data from the open source image data set Microsoft COCO, and constructing a training data set for human posture classification, wherein the specific implementation steps are as follows:
s21, traversing all picture files in the Microsoft COCO data set, finding the label data corresponding to each picture in the instances label file in the annotation folder, and selecting the pictures that contain exactly one human target as the candidate training data set.
S22, finding, in the captions.json file (the picture description labels) in the annotation folder, the 5 picture description sentences corresponding to each picture in the candidate training data set obtained in step S21; if posture-related keywords appear in these sentences, the picture is retained in the training data set and a posture label is added to the person in the picture; otherwise, the picture is deleted from the candidate training data set.
S23, looking up, in the keypoints.json file (the key point labels), the joint point position data of the human target in each picture remaining in the candidate training data set obtained in step S22. The labels and positions of the 17 joint points of the human target are shown in FIG. 2. If there is no occlusion, the joint point label data of each person should include the positions of 17 key points: "nose", "left/right eye", "left/right ear", "left/right shoulder", "left/right elbow", "left/right wrist", "left/right hip", "left/right knee" and "left/right ankle". Pictures whose joint point labeling is incomplete due to occlusion or viewing angle, i.e. with fewer than 17 valid key points, are deleted; key point labels are then added to the people in the remaining pictures, which together with their label data form the training data set for human posture classification.
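The three filters of steps S21-S23 can be sketched as a single selection function over COCO-style annotation records. The posture keyword list and the helper name below are illustrative assumptions; the patent does not enumerate the keywords it matches against.

```python
# Illustrative sketch of the S21-S23 filters over COCO-style annotations.
# POSTURE_KEYWORDS is an assumed example list, not taken from the patent.
POSTURE_KEYWORDS = ("standing", "sitting", "lying")


def select_sample(person_anns, captions):
    """Return a posture label if the image passes all three filters, else None.

    person_anns -- the image's person annotations in COCO keypoints format
    captions    -- the image's description sentences (strings)
    """
    # S21: keep only images containing exactly one human target
    if len(person_anns) != 1:
        return None
    # S22: keep only images whose captions contain a posture-related keyword
    text = " ".join(captions).lower()
    label = next((kw for kw in POSTURE_KEYWORDS if kw in text), None)
    if label is None:
        return None
    # S23: require all 17 COCO key points to be labelled
    # ("keypoints" is a flat list [x1, y1, v1, ..., x17, y17, v17];
    #  a visibility flag v > 0 marks a labelled point)
    visible = sum(1 for v in person_anns[0]["keypoints"][2::3] if v > 0)
    if visible < 17:
        return None
    return label
```

In practice the annotation files (instances, captions, person_keypoints) would be loaded with a library such as pycocotools and the function applied image by image.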
S3, constructing an original feature set from the label data in the training data set obtained in step S2, namely basic data such as the coordinates and size of the target bounding box and the coordinates of the target's 17 joint points, calculating high-order features of the data on this basis, and extracting effective human posture classification features;
and S31, reconstructing a plane rectangular coordinate system by taking the lower left corner of the surrounding frame where the target is located as the origin of coordinates of the coordinate data of each target in the training data set obtained in the step S2, repositioning the coordinates of the surrounding frame corresponding to the target in the training data set and the coordinate data of preferably 17 joint points, and taking the coordinates as original features.
S32, calculating, from the original features obtained in step S31, the aspect ratio of the bounding box in which the human target is located and the joint angle formed by every group of 3 adjacent joint points, and taking these data as high-order features; the high-order features of some human postures are marked in FIG. 3.
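Steps S31-S32 can be sketched as follows. Two assumptions are made for illustration: the bounding box uses the COCO convention (x, y, width, height) with (x, y) at the top-left corner, and joint adjacency is supplied explicitly as index triples, since the patent's notion of "adjacent" presumably follows the skeleton connectivity of FIG. 2.

```python
import math


def joint_angle(a, b, c):
    """Interior angle (degrees) at joint b, formed by segments b-a and b-c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    cos = ((v1[0] * v2[0] + v1[1] * v2[1])
           / (math.hypot(*v1) * math.hypot(*v2)))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))


def posture_features(bbox, joints, triples):
    """bbox: (x, y, w, h), COCO convention with (x, y) = top-left corner.
    joints: list of (x, y) joint coordinates in image coordinates.
    triples: index triples (i, j, k) of adjacent joints; the angle at j is taken.
    """
    x, y, w, h = bbox
    # S31: re-express the joints in a frame whose origin is the bounding box
    # lower-left corner (image y grows downward, so the vertical axis flips)
    local = [(jx - x, (y + h) - jy) for jx, jy in joints]
    # S32: high-order features = bbox aspect ratio plus the joint angles
    return [w / h] + [joint_angle(local[i], local[j], local[k])
                      for i, j, k in triples]
```

A triple such as (shoulder, elbow, wrist) yields the elbow angle; the actual triples would follow the 17-joint skeleton of FIG. 2.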
And S33, for the high-order features of all human targets obtained in step S32, calculating the variance of each high-order feature among targets of different postures and among targets of the same posture; constructing an influence-factor index for each feature on the principle that the intra-class variance should be small and the inter-class variance should be large; and screening out the set number of features with the highest influence factors, preferably 10 high-order features, as the effective human posture features.
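The screening of S33 can be sketched as below. The patent does not give a formula for the influence factor, so the ratio of inter-class variance (variance of per-class means) to mean intra-class variance used here is an assumed form that follows the stated principle.

```python
import statistics
from collections import defaultdict


def select_features(samples, k=10):
    """samples: list of (posture_label, feature_vector) pairs.
    Returns the indices of the k features with the highest influence factor
    (assumed form: inter-class variance / mean intra-class variance)."""
    n_feat = len(samples[0][1])
    by_class = defaultdict(list)
    for label, feats in samples:
        by_class[label].append(feats)
    scores = []
    for j in range(n_feat):
        # intra-class variance, averaged over the posture classes
        within = statistics.mean(
            statistics.pvariance([f[j] for f in vecs])
            for vecs in by_class.values())
        # inter-class variance of the per-class feature means
        means = [statistics.mean(f[j] for f in vecs)
                 for vecs in by_class.values()]
        between = statistics.pvariance(means)
        scores.append(between / (within + 1e-9))
    return sorted(range(n_feat), key=lambda j: scores[j], reverse=True)[:k]
```

A feature that is nearly constant within each posture class but differs between classes receives a high score and is kept.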
S4, based on a neural network algorithm, selecting a loss function and an optimization algorithm by combining the training data set obtained in the step S2 and the effective human posture classification characteristics screened in the step S3, and training a human posture classification model;
s41, constructing a neural network model with a five-layer structure, as shown in FIG. 4. The first layer is the input layer; each of its neurons represents one feature, so the number of input neurons equals the number of screened features, and since 10 features are screened out, the input layer has 10 neurons. The second layer is hidden layer I, which abstracts the input features over multiple levels so that different classes of data become more linearly separable; it has 15 neurons with the ReLU (linear rectification) activation function. The third layer is a dropout layer, i.e. a random deactivation layer, which randomly deletes part of the neurons of hidden layer I during training and thereby effectively reduces overfitting; the drop probability is set to 0.3. The fourth layer is hidden layer II, which further abstracts the features extracted by the third layer; it has 12 neurons with the ReLU activation function. The last layer is the output layer with the softmax activation function; it outputs the probabilities that the target's posture belongs to each category, and since the output covers three class labels, the output layer has 3 neurons.
And S42, importing the training data set obtained in step S2 and the effective human posture features obtained in step S3 into the neural network model, with 70% of the data used as the training set and 30% as the verification set; adopting the mean square error function as the loss function and optimizing the parameters of the neural network with the mini-batch gradient descent method; calculating the accuracy of the model on the verification set every 1000 iterations; and stopping training when the model converges, which yields the human posture classification model.
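The architecture of S41 can be sketched as a plain NumPy forward pass. The patent does not name a framework, and the weights below are random placeholders; in S42 they would be fitted by mini-batch gradient descent on the mean-squared-error loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from S41: 10 input features -> 15 (ReLU) -> dropout p = 0.3
# -> 12 (ReLU) -> 3 (softmax). Weights here are random placeholders.
W1, b1 = rng.standard_normal((10, 15)) * 0.1, np.zeros(15)
W2, b2 = rng.standard_normal((15, 12)) * 0.1, np.zeros(12)
W3, b3 = rng.standard_normal((12, 3)) * 0.1, np.zeros(3)


def forward(x, train=False, p_drop=0.3):
    h1 = np.maximum(0.0, x @ W1 + b1)              # hidden layer I, ReLU
    if train:                                      # dropout acts only in training
        mask = rng.random(h1.shape) >= p_drop
        h1 = h1 * mask / (1.0 - p_drop)            # inverted-dropout scaling
    h2 = np.maximum(0.0, h1 @ W2 + b2)             # hidden layer II, ReLU
    z = h2 @ W3 + b3
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=-1, keepdims=True)       # per-class probabilities


probs = forward(rng.standard_normal((4, 10)))      # 4 samples, 10 features each
```

Each row of `probs` sums to 1 and gives the probability of the three posture classes for one target, matching the output layer described in S41.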
And S5, carrying out target detection and recognition operation on the video monitoring data obtained in the step S1, carrying out posture estimation operation on the coverage area of the bounding box recognized as the human target in the video monitoring data after the target detection and recognition operation is finished, importing the effective human posture characteristic data obtained by calculation into a human posture classification model, and judging the posture of the human appearing in the video. The method comprises the following specific steps:
and S51, analyzing the video monitoring data captured by the monitoring camera through a target detection and target recognition technology, finding a target in the video monitoring data, and extracting the category information and the position information of the target.
S52, screening the surrounding frame of which the recognition result is the human target from the target data obtained in the step S51, analyzing the covering area of the surrounding frame by adopting a human posture estimation algorithm, extracting the joint point position data of the target, and calculating 10 effective human posture characteristic data screened in the step S33 based on the joint point position data.
S53, respectively inputting the effective human body posture characteristic data corresponding to each human target obtained by calculation in the step S52 into the human body posture classification model obtained by training in the step S4, and obtaining the posture information of each human target.
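Steps S51-S53 can be tied together as in the sketch below. All four callables are hypothetical placeholders rather than the patent's components: a target detector returning (class, bounding box) pairs, a human pose estimator returning joint coordinates, the feature computation of S3, and the trained classifier of S4.

```python
def classify_frame(frame, detect, estimate_pose, classify, feature_fn):
    """Glue sketch for S51-S53; every callable is a hypothetical placeholder."""
    postures = []
    for cls, bbox in detect(frame):          # S51: target detection + recognition
        if cls != "person":                  # keep only human targets
            continue
        joints = estimate_pose(frame, bbox)  # S52: pose estimation inside bbox
        feats = feature_fn(bbox, joints)     # the screened posture features
        postures.append(classify(feats))     # S53: posture classification
    return postures
```

Only the current frame is consulted, which is the property the patent credits for the method's real-time performance.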
The human body posture classification method based on computer vision of this embodiment collects video surveillance data through a monitoring camera; analyzes the pictures and label data in the open-source image data set Microsoft COCO, selects pictures that contain exactly one human target, a definite target posture description sentence and complete joint point position data for that target, and constructs a training data set for human posture classification; extracts label data from the training set to build an original feature set and screens out effective human posture features on that basis; trains a human posture classification model with a neural network algorithm using the training set and the screened features; and performs target detection and target recognition on the video surveillance data, finds and locates each human target in the video, estimates its posture, calculates the effective human posture feature data and judges the posture of each person appearing in the video through the classification model.
Compared with the traditional human posture classification method based on wearable equipment, the human posture classification method based on computer vision does not need a target object to wear various sensors or optical marks, data collection is completely completed through an external monitoring camera, the comfort of movement is not affected, and the data collection cost is low.
The method comprises the steps of constructing a training data set by screening targets and labeled data in a Microsoft COCO data set, and acquiring effective human body posture characteristic data by combining with characteristic engineering; training a human posture classification model through a neural network algorithm; the model can be directly applied to an intelligent video monitoring system, and the postures of all human targets in the video monitoring coverage range are judged by combining target detection, recognition and human posture estimation algorithms.
Original posture data of the people in the video are obtained through target detection, target recognition and posture estimation algorithms, and classification of the target postures is then achieved with a neural network algorithm. A deep-learning-based target detection and recognition algorithm discovers the human targets and their position data in the video surveillance data. For each human target, a human posture estimation algorithm locates the important joint points of its skeleton, and the target's posture category is distinguished on that basis, realizing human posture classification based on computer vision.
The human body posture classification method based on computer vision can realize the posture classification of the target only by analyzing the target information in the current frame without analyzing the behavior of the target in the previous frames, is a posture classification method with high real-time performance, and has higher processing efficiency.
The human body posture classification based on computer vision aims to obtain the posture of a person in a video by analyzing video data, and based on the posture, dangerous behaviors in the video are found, so that the human body posture classification is an important link in an intelligent video monitoring system and is an important research problem in the field of computer vision. According to the method, the gesture category of the human target in the video is judged by using a machine learning algorithm according to the video monitoring data in the public place, and a good basis is provided for timely finding out dangerous behaviors in a monitoring scene.

Claims (5)

1. A human body posture classification method based on computer vision, characterized by comprising the following steps:
s1, deploying a monitoring camera in the scene, and acquiring video monitoring data through the monitoring camera;
s2, analyzing the target class labels, target position labels, picture description labels and human-target joint point labels in the open-source image data set Microsoft COCO, screening out pictures and label data from the data set accordingly, and constructing a training data set for human posture classification;
s3, constructing an original feature set based on the labeled data in the training data set obtained in the step S2, calculating high-order features of the data on the basis, and screening out effective human posture classification features through feature pruning;
s4, based on a neural network algorithm, selecting a loss function and an optimization algorithm by combining the training data set obtained in the step S2 and the effective human posture classification characteristics screened in the step S3, and training a human posture classification model;
and S5, carrying out target detection and recognition operation on the video monitoring data obtained in the step S1, carrying out posture estimation on the region recognized as the human target, calculating effective human posture characteristic data based on the posture estimation result, importing the calculated characteristic data into the human posture classification model obtained in the step S4, and judging the posture of the human appearing in the video.
2. The computer vision based human body posture classification method of claim 1, characterized in that: in step S2, a training data set for human body posture classification is constructed, specifically,
s21, selecting pictures only containing one human target in the data set based on the class labels of the targets in the open source image data set Microsoft COCO, and taking the selected pictures and various corresponding label data thereof as alternative training data sets;
s22, analyzing the picture description labels corresponding to each picture in the alternative training data set obtained in step S21, keeping the pictures whose descriptions contain posture-related keywords, adding posture labels to those pictures, and deleting the rest;
s23, checking the human target joint point labels corresponding to the pictures in the alternative training data set, deleting the pictures with incomplete joint point labels, and taking the rest pictures and label data thereof as a training data set for human posture classification.
3. The computer vision based human body posture classification method of claim 1, characterized in that: in step S3, effective human body posture classification features are screened out, specifically,
s31, reconstructing a plane rectangular coordinate system by taking the lower left corner of the surrounding frame where the target is located as the origin of coordinates of the coordinate data of each target in the training data set obtained in the step S2, repositioning the surrounding frame coordinates and the joint point coordinate data corresponding to the target in the training data set, and taking the coordinates as original characteristics;
s32, calculating, for each target, the aspect ratio of the bounding box in which it is located and the joint angle formed by every group of 3 adjacent joint points, and taking these data as the target's high-order features;
s33, calculating the variance among the high-order features, and screening out a set number of features from the high-order features as effective human body posture classification features based on the principle that the feature similarity among the postures of the same type is large and the feature similarity among the postures of different types is small.
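The original and higher-order features of steps S31 and S32 reduce to a few lines of coordinate arithmetic. The sketch below assumes an `(x_min, y_min, width, height)` box format and reports joint angles in degrees; both conventions are illustrative choices, not fixed by the claim.

```python
# Illustrative feature computations for steps S31-S32: coordinates shifted
# to a box-local origin, the box aspect ratio, and the angle at the middle
# of three adjacent joint points.
import math

def relocate(points, box):
    """S31: re-express joint coordinates with the box's lower-left corner
    as the origin. box = (x_min, y_min, width, height)."""
    x0, y0, _, _ = box
    return [(x - x0, y - y0) for (x, y) in points]

def aspect_ratio(box):
    """S32: length-width (height/width) ratio of the bounding box."""
    _, _, w, h = box
    return h / w

def joint_angle(a, b, c):
    """S32: angle at joint b formed by segments b->a and b->c, in degrees."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))  # clamp rounding error
```

Box-local coordinates make the features translation-invariant, and angles are scale-invariant, which is presumably why the claim prefers them over raw pixel positions.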
4. The computer-vision-based human body posture classification method of any one of claims 1-3, characterized in that in step S4 the human posture classification model is trained as follows:
S41, constructing a five-layer neural network model, wherein the first layer is the input layer, each of whose neurons represents one feature, so the number of input neurons equals the number of screened features; the second layer is hidden layer I, which abstracts the input features over multiple levels so that data of different classes become linearly separable, its activation function being relu, i.e. the rectified linear unit; the third layer is a dropout layer, i.e. a random-deactivation layer, which randomly discards part of the neurons of hidden layer I during training to reduce overfitting; the fourth layer is hidden layer II, which further abstracts the features passed on by the third layer, again with the relu activation function; the fifth layer is the output layer, which outputs the probabilities that a target's posture belongs to each of the posture categories;
S42, dividing the training data set obtained in step S2 into a training set and a validation set, feeding the training set, restricted to the human posture classification features obtained in step S3, into the neural network model constructed in step S41, using the mean squared error as the loss function, optimizing the model with mini-batch gradient descent, and stopping training once the model converges, thereby obtaining the human posture classification model.
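As a rough sketch, the S41 architecture can be written as a numpy forward pass; the layer sizes below are assumed for illustration, and the S42 training details (MSE loss, mini-batch gradient descent) are noted in comments rather than implemented.

```python
# Minimal numpy sketch of the five-layer network of step S41 (sizes assumed).
# Per step S42, the weights would be fitted on the training split with a
# mean-squared-error loss and mini-batch gradient descent.
import numpy as np

rng = np.random.default_rng(0)

N_FEATURES, H1, H2, N_CLASSES = 12, 32, 16, 4   # illustrative sizes
DROPOUT_P = 0.5                                  # fraction of neurons dropped

W1 = rng.normal(0, 0.1, (N_FEATURES, H1)); b1 = np.zeros(H1)
W2 = rng.normal(0, 0.1, (H1, H2));         b2 = np.zeros(H2)
W3 = rng.normal(0, 0.1, (H2, N_CLASSES));  b3 = np.zeros(N_CLASSES)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))  # stabilized
    return e / e.sum(axis=1, keepdims=True)

def forward(x, training=False):
    h1 = relu(x @ W1 + b1)                        # layer 2: hidden layer I (relu)
    if training:                                  # layer 3: dropout, training only
        mask = rng.random(h1.shape) >= DROPOUT_P
        h1 = h1 * mask / (1.0 - DROPOUT_P)        # inverted-dropout scaling
    h2 = relu(h1 @ W2 + b2)                       # layer 4: hidden layer II (relu)
    return softmax(h2 @ W3 + b3)                  # layer 5: class probabilities
```

Disabling the dropout mask at inference time (the `training=False` default) matches the claim's statement that dropout is applied during training to curb overfitting.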
5. The computer-vision-based human body posture classification method of any one of claims 1-3, characterized in that in step S5 the posture of each person appearing in the video is determined as follows:
S51, for the video surveillance data captured by the surveillance camera deployed in step S1, first finding the valid targets through target detection and target recognition techniques, and extracting the category information and position information of each target;
S52, screening out, from the target data obtained in step S51, the bounding boxes recognized as human targets, analyzing the area covered by each bounding box with a human posture estimation algorithm to extract the target's joint-point position data, and from these data calculating the effective human posture feature data screened out in step S33;
S53, analyzing the effective human posture feature data of each human target with the human posture classification model trained in step S4, thereby obtaining the posture information of every human target in the video.
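Since the claim fixes the S51-S53 pipeline but not the concrete detector or pose estimator, the flow can be shown as a composition of injected callables. Every name below is a placeholder for illustration, not a real API.

```python
# Schematic of the S51-S53 inference pipeline. The detector, pose estimator,
# feature extractor, and classifier are passed in as callables because the
# claim leaves the concrete algorithms open.

def classify_frame(frame, detect, estimate_pose, extract_features, classify):
    """Run one surveillance frame through detection -> pose estimation ->
    feature extraction -> posture classification."""
    postures = []
    for target in detect(frame):                       # S51: detection + recognition
        if target["category"] != "person":             # keep human targets only
            continue
        joints = estimate_pose(frame, target["box"])   # S52: joint-point positions
        feats = extract_features(target["box"], joints)  # S52: effective features
        postures.append(classify(feats))               # S53: posture classification
    return postures
```

In a deployment, `detect` might wrap any off-the-shelf object detector and `classify` the trained model from step S4; the composition itself is what the claim specifies.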
CN201910966746.0A 2019-10-12 2019-10-12 Human body posture classification method based on computer vision Active CN110688980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910966746.0A CN110688980B (en) 2019-10-12 2019-10-12 Human body posture classification method based on computer vision

Publications (2)

Publication Number Publication Date
CN110688980A true CN110688980A (en) 2020-01-14
CN110688980B CN110688980B (en) 2023-04-07

Family

ID=69112635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910966746.0A Active CN110688980B (en) 2019-10-12 2019-10-12 Human body posture classification method based on computer vision

Country Status (1)

Country Link
CN (1) CN110688980B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368806A (en) * 2020-04-01 2020-07-03 大连理工大学 Worker construction state monitoring method based on artificial intelligence
CN111539377A (en) * 2020-05-11 2020-08-14 浙江大学 Human body movement disorder detection method, device and equipment based on video
CN112329571A (en) * 2020-10-27 2021-02-05 同济大学 Self-adaptive human body posture optimization method based on posture quality evaluation
CN114998803A (en) * 2022-06-13 2022-09-02 北京理工大学 Body-building movement classification and counting method based on video
CN116645732A (en) * 2023-07-19 2023-08-25 厦门工学院 Site dangerous activity early warning method and system based on computer vision

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130090571A1 (en) * 2011-10-06 2013-04-11 The Board Of Regents Of The University Of Texas System Methods and systems for monitoring and preventing pressure ulcers
CN110222634A (en) * 2019-06-04 2019-09-10 河海大学常州校区 A kind of human posture recognition method based on convolutional neural networks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant