CN110688980B - Human body posture classification method based on computer vision - Google Patents


Info

Publication number
CN110688980B
CN110688980B CN201910966746.0A CN201910966746A
Authority
CN
China
Prior art keywords
target
human
data
human body
posture
Prior art date
Legal status
Active
Application number
CN201910966746.0A
Other languages
Chinese (zh)
Other versions
CN110688980A (en)
Inventor
张剑书
卢阿丽
杨炼鑫
樊英泽
Current Assignee
Nanjing Institute of Technology
Original Assignee
Nanjing Institute of Technology
Priority date
Filing date
Publication date
Application filed by Nanjing Institute of Technology filed Critical Nanjing Institute of Technology
Priority to CN201910966746.0A priority Critical patent/CN110688980B/en
Publication of CN110688980A publication Critical patent/CN110688980A/en
Application granted granted Critical
Publication of CN110688980B publication Critical patent/CN110688980B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a human body posture classification method based on computer vision, which comprises: collecting video surveillance data through a monitoring camera; constructing a training data set for human posture classification; screening out effective posture classification features; training a human posture classification model based on a neural network algorithm, combining the training data set and the screened features with a chosen loss function and optimization algorithm; and performing target detection and recognition on the video surveillance data, estimating the posture of each region recognized as a human target, computing posture feature data from the estimation result, feeding the computed features into the classification model, and determining the posture of each person appearing in the video. The method requires no sensors or optical markers to be worn on the target object, does not affect freedom of movement, and offers low data acquisition cost, high real-time performance, and high processing efficiency.

Description

Human body posture classification method based on computer vision
Technical Field
The invention relates to a human body posture classification method based on computer vision.
Background
Existing human posture classification mainly relies on wearable devices, which require the target to wear various sensors or optical markers that can interfere with the target's movement. Such wearable-device-based methods are also difficult to deploy widely in public places.
At present, surveillance cameras installed in public places cover a very large area and record nearly every corner of daily life. Traditional video surveillance systems rely on manual viewing to spot anomalies in the video, which incurs considerable labor cost and is prone to missed detections.
These problems should be considered and solved in the human posture classification process.
Disclosure of Invention
The invention aims to provide a human body posture classification method based on computer vision, solving the problem that traditional wearable-device-based posture classification methods interfere with the target's movement because multiple sensors or optical markers must be worn on the target object.
The invention further aims to provide, on the basis of target detection and target recognition applied to video surveillance data, a posture classification method that determines the postures of all human targets within the surveillance coverage area, an important link in intelligent video surveillance technology.
The technical solution of the invention is as follows:
a human body posture classification method based on computer vision comprises the following steps,
S1, deploying a monitoring camera in the scene and acquiring video surveillance data through it;
S2, parsing the target class labels, target position labels, picture description labels and human joint point labels in the open-source image dataset Microsoft COCO, screening suitable pictures and their annotation data from the dataset, and constructing a training data set for human posture classification;
S3, constructing an original feature set based on the annotation data in the training data set obtained in step S2, calculating high-order features of the data on this basis, and screening out effective human posture classification features through feature selection;
S4, based on a neural network algorithm, selecting a loss function and an optimization algorithm in combination with the training data set obtained in step S2 and the effective posture classification features screened out in step S3, and training a human posture classification model;
and S5, performing target detection and recognition on the video surveillance data obtained in step S1, estimating the posture of each region recognized as a human target, computing effective posture feature data from the estimation result, feeding the computed features into the posture classification model obtained in step S4, and determining the posture of each person appearing in the video.
Optionally, in step S2, the training data set for human posture classification is constructed as follows:
S21, based on the class labels of targets in the open-source image dataset Microsoft COCO, selecting pictures that contain exactly one human target, and taking the selected pictures and their corresponding annotation data as a candidate training data set;
S22, parsing the picture description labels of the pictures in the candidate training data set obtained in step S21, keeping the pictures whose descriptions contain posture-related keywords, adding posture labels to them, and deleting the remaining pictures;
S23, checking the human joint point annotations of the pictures in the candidate training data set, deleting pictures whose joint point annotations are incomplete, and taking the remaining pictures and their annotation data as the training data set for human posture classification.
Optionally, in step S3, the effective human posture classification features are screened out as follows:
S31, for each target in the training data set obtained in step S2, reconstructing a planar rectangular coordinate system with the lower-left corner of the target's bounding box as the origin of coordinates, relocating the bounding box coordinates and joint point coordinates of the target accordingly, and taking these coordinates as the original features;
S32, calculating the aspect ratio of the bounding box of each target and the joint angles formed by every set of 3 adjacent joint points, and taking these data as the target's high-order features;
and S33, calculating the variances of the high-order features and, following the principle that effective features are similar within a posture class and dissimilar across posture classes, screening a set number of the high-order features as the effective human posture classification features.
Optionally, in step S4, the human posture classification model is trained as follows:
S41, constructing a five-layer neural network model: the first layer is the input layer, in which each neuron represents one feature, so the number of input neurons equals the number of screened features; the second layer is hidden layer I, which abstracts the input features over multiple levels so that different classes of data become linearly separable, with a ReLU (rectified linear unit) activation function; the third layer is a dropout layer (random deactivation layer), which randomly drops some neurons of hidden layer I during training to reduce overfitting; the fourth layer is hidden layer II, which further abstracts the features extracted by the third layer, again with a ReLU activation function; the fifth layer is the output layer, which outputs the probabilities that the target's posture belongs to the different categories;
and S42, dividing the training data set obtained in step S2 into a training set and a validation set, inputting the training set and the posture classification features obtained in step S3 into the neural network model constructed in step S41, adopting the mean squared error as the loss function, optimizing the model with mini-batch gradient descent, and stopping training when the model converges to obtain the human posture classification model.
Optionally, in step S5, the posture of each person appearing in the video is determined as follows:
S51, for the video surveillance data captured by the monitoring camera deployed in step S1, first finding valid targets through target detection and target recognition techniques, and extracting each target's class information and position information;
S52, screening out the bounding boxes recognized as human targets from the target data obtained in step S51, analyzing each bounding box region with a human posture estimation algorithm, extracting the target's joint point positions, and computing from them the effective posture feature data screened out in step S33;
and S53, analyzing the effective posture feature data of each human target with the posture classification model trained in step S4 to obtain the posture information of every human target in the video.
The beneficial effects of the invention are as follows. Compared with traditional wearable-device-based posture classification methods, the computer-vision-based method requires no sensors or optical markers on the target object: data collection is done entirely by an external surveillance camera, movement comfort is unaffected, and data acquisition cost is low. Moreover, the method classifies a target's posture by analyzing only the target information in the current frame of the surveillance video, without analyzing the target's behavior in preceding frames, so it is a posture classification method with high real-time performance and high processing efficiency.
Drawings
FIG. 1 is a flowchart illustrating a human body posture classification method based on computer vision according to an embodiment of the present invention.
FIG. 2 is a schematic illustration of the positions of the 17 annotated joint points of the human target in the embodiment.
FIG. 3 is an explanatory diagram of high-order features of some human body postures in the embodiment.
Fig. 4 is an explanatory diagram of a neural network structure constructed in the embodiment.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
Examples
A human body posture classification method based on computer vision, as shown in figure 1, comprises the following steps,
S1, deploying a monitoring camera in the scene and acquiring video data through it. The camera is preferably mounted at a height between 2 and 2.5 meters, angled downward, and its orientation is adjusted according to the placement of fixed objects in the scene to reduce blind spots.
S2, parsing the target class labels, target position labels, picture description labels and human joint point labels in the open-source image dataset Microsoft COCO, screening suitable pictures and annotation data from the dataset, and constructing a training data set for human posture classification. The specific implementation steps are as follows:
S21, traversing all picture files in the Microsoft COCO data set, looking up the annotation data for each picture in the instance annotation files of the annotations folder, and selecting the pictures containing exactly one human target as the candidate training data set.
S22, for each picture in the candidate training data set obtained in step S21, looking up its five caption sentences in the captions annotation file of the annotations folder; if posture-related keywords such as "stand", "sit" and "lie" appear in them, the picture is kept in the training data set and a posture label is added to the person in the picture; otherwise the picture is deleted from the candidate training data set.
S23, for the pictures remaining in the candidate training data set after step S22, looking up the joint point positions of the human target in the keypoints annotation file of the annotations folder; the labels and positions of the 17 joint points of a human target are shown in FIG. 2. Without occlusion, the keypoint annotation of each person includes the positions of 17 key points: "nose", "left/right eye", "left/right ear", "left/right shoulder", "left/right elbow", "left/right wrist", "left/right hip", "left/right knee" and "left/right ankle". Pictures whose joint point annotation is incomplete because of occlusion or viewing angle, i.e. with fewer than 17 valid key points, are deleted from the training data set; keypoint labels are then added to the people in the remaining pictures, forming the training data set for human body posture classification.
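Steps S21 to S23 amount to a filtering pass over COCO-style annotation records. The sketch below operates on small in-memory mappings shaped like the relevant COCO annotation fields; the helper names and the substring keyword matching are illustrative assumptions, not part of the patent (for the real dataset, the same counts can be read from the instances, captions, and person_keypoints annotation files, e.g. with `pycocotools`).

```python
# Sketch of steps S21-S23: filter COCO-style records down to single-person
# images that have a posture keyword in a caption and 17 labelled keypoints.
POSE_KEYWORDS = ("lie", "sit", "stand")          # keywords from step S22

def caption_keyword(sentences):
    """Return the first posture keyword found in any caption, or None.
    Plain substring matching; a real pipeline would use stemming."""
    text = " ".join(sentences).lower()
    for kw in POSE_KEYWORDS:
        if kw in text:
            return kw
    return None

def select_training_images(person_counts, captions, keypoint_counts):
    """person_counts: image id -> number of human targets (step S21);
    captions: image id -> caption sentences (step S22);
    keypoint_counts: image id -> number of valid keypoints (step S23).
    Returns {image id: posture label} for the retained images."""
    selected = {}
    for img_id, n_people in person_counts.items():
        if n_people != 1:                        # S21: exactly one person
            continue
        kw = caption_keyword(captions.get(img_id, []))
        if kw is None:                           # S22: need a posture keyword
            continue
        if keypoint_counts.get(img_id, 0) < 17:  # S23: all 17 joints present
            continue
        selected[img_id] = kw
    return selected
```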
S3, constructing an original feature set from the annotation data in the training data set obtained in step S2, starting from basic data such as the target bounding box coordinates, the bounding box size, and the coordinates of the target's 17 joint points, calculating high-order features on this basis, and extracting the effective human posture classification features.
S31, for each target in the training data set obtained in step S2, reconstructing a planar rectangular coordinate system with the lower-left corner of the target's bounding box as the origin of coordinates, relocating the bounding box coordinates and the 17 joint point coordinates of the target accordingly, and taking these coordinates as the original features.
S32, based on the original features obtained in step S31, calculating the aspect ratio of the bounding box of each human target and the joint angle formed by every 3 adjacent joint points, and taking these data as high-order features; high-order features of some human postures are marked in FIG. 3.
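The coordinate relocation of step S31 and the high-order features of step S32 reduce to simple plane geometry. A minimal numpy sketch (the function names are illustrative):

```python
import numpy as np

def relocate(points, x_min, y_min):
    """Step S31: re-express coordinates relative to the lower-left
    corner of the target's bounding box."""
    return np.asarray(points, dtype=float) - [x_min, y_min]

def bbox_aspect_ratio(w, h):
    """Step S32: aspect ratio of the bounding box; typically well below 1
    for a standing person and well above 1 for a lying person."""
    return w / h

def joint_angle(a, b, c):
    """Step S32: angle in degrees at joint b formed by three adjacent
    joints a-b-c, from the dot product of the two bone vectors."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```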
S33, for the high-order features of all human targets obtained in step S32, calculating the variance of each feature among targets of different postures and among targets of the same posture; constructing an influence factor for each feature on the principle that a discriminative feature has small intra-class variance and large inter-class variance; and screening the features with the highest influence factors, preferably 10 high-order features, as the effective human posture features.
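The influence factor of step S33 can be read as a Fisher-style score: variance of the per-class means divided by the mean within-class variance, so features with small intra-class and large inter-class spread score high. The exact score definition below is our interpretation, not spelled out in the patent:

```python
import numpy as np

def influence_factors(X, y):
    """X: (n_samples, n_features) high-order features; y: posture labels.
    Score per feature = variance of per-class means (inter-class spread)
    divided by mean within-class variance (intra-class spread)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    within = np.array([X[y == c].var(axis=0) for c in classes]).mean(axis=0)
    between = means.var(axis=0)
    return between / (within + 1e-12)        # epsilon avoids div-by-zero

def select_features(X, y, k=10):
    """Indices of the k features with the highest influence factor
    (k=10 in the embodiment)."""
    scores = influence_factors(X, y)
    return np.argsort(scores)[::-1][:k]
```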
S4, based on a neural network algorithm, selecting a loss function and an optimization algorithm by combining the training data set obtained in the step S2 and the effective human body posture classification characteristics screened in the step S3, and training a human body posture classification model;
S41, constructing a neural network model with a five-layer structure, as shown in FIG. 4. The first layer is the input layer; each of its neurons represents one feature, so its neuron count equals the number of screened features, here 10. The second layer is hidden layer I, which abstracts the input features over multiple levels so that different classes of data become more linearly separable; it has 15 neurons with a ReLU (rectified linear unit) activation function. The third layer is a dropout layer (random deactivation layer), which randomly drops some neurons of hidden layer I during training, effectively reducing overfitting; the drop probability is set to 0.3. The fourth layer is hidden layer II, which further abstracts the features extracted by the third layer; it has 12 neurons with a ReLU activation function. The last layer is the output layer with a softmax activation, which outputs the probability that the target's posture belongs to each category; its neuron count equals the number of classification labels, so it has 3 neurons.
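The five-layer structure of step S41 (10 inputs, 15 hidden units, dropout 0.3, 12 hidden units, 3 softmax outputs) can be sketched as a plain numpy forward pass. Weight initialization values and the training loop are placeholders; dropout is only active when `train=True`, since random deactivation applies during training, not inference.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from step S41: 10 input features, 15 and 12 hidden units,
# 3 posture classes. Initialization is an arbitrary small-random sketch.
W1, b1 = rng.standard_normal((10, 15)) * 0.1, np.zeros(15)
W2, b2 = rng.standard_normal((15, 12)) * 0.1, np.zeros(12)
W3, b3 = rng.standard_normal((12, 3)) * 0.1, np.zeros(3)

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))   # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, train=False, p_drop=0.3):
    h1 = relu(x @ W1 + b1)                          # hidden layer I
    if train:                                       # dropout layer: active
        mask = rng.random(h1.shape) >= p_drop       # only during training
        h1 = h1 * mask / (1.0 - p_drop)             # inverted dropout
    h2 = relu(h1 @ W2 + b2)                         # hidden layer II
    return softmax(h2 @ W3 + b3)                    # class probabilities
```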
S42, importing the training data set obtained in step S2 and the effective posture features obtained in step S3 into the neural network model, using 70% of the training data set as the training set and 30% as the validation set; adopting the mean squared error as the loss function, optimizing the parameters of the neural network with mini-batch gradient descent, computing the model's accuracy on the validation set every 1000 iterations, and stopping training when the model converges, yielding the human posture classification model.
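The optimization procedure of step S42 (70/30 split, mean-squared-error loss, mini-batch gradient descent, periodic validation checks) is illustrated below on a linear model for brevity; applying the same update rule to the five-layer network requires backpropagated gradients. The synthetic data and hyperparameter values are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic, noise-free regression task standing in for the real features.
X = rng.standard_normal((300, 10))
w_true = rng.standard_normal(10)
y = X @ w_true

n_train = int(0.7 * len(X))                     # 70/30 train/validation split
X_tr, y_tr = X[:n_train], y[:n_train]
X_val, y_val = X[n_train:], y[n_train:]

w = np.zeros(10)
lr, batch = 0.05, 32
for step in range(2000):
    idx = rng.integers(0, n_train, size=batch)  # draw one mini-batch
    Xb, yb = X_tr[idx], y_tr[idx]
    grad = 2.0 * Xb.T @ (Xb @ w - yb) / batch   # gradient of the MSE loss
    w -= lr * grad                              # gradient descent update
    if (step + 1) % 1000 == 0:                  # periodic validation check
        val_mse = float(np.mean((X_val @ w - y_val) ** 2))
```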
S5, performing target detection and recognition on the video surveillance data obtained in step S1; after detection and recognition, running posture estimation on the region covered by each bounding box recognized as a human target, importing the computed effective posture feature data into the posture classification model, and determining the posture of each person appearing in the video. The specific steps are as follows:
S51, analyzing the video surveillance data captured by the monitoring camera with target detection and target recognition techniques, finding the targets in the data, and extracting each target's class information and position information.
S52, screening out the bounding boxes whose recognition result is a human target from the target data obtained in step S51, analyzing each bounding box region with a human posture estimation algorithm, extracting the target's joint point positions, and computing from them the 10 effective posture features screened out in step S33.
S53, feeding the effective posture feature data of each human target computed in step S52 into the posture classification model trained in step S4, obtaining the posture information of each human target.
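Steps S51 to S53 chain three components per frame. The sketch below wires them together with injected callables; `detect`, `estimate_pose`, `extract_features`, `model`, and the class order in `POSTURES` are all hypothetical stand-ins (in practice a deep-learning person detector, a pose estimation library, the step-S33 feature computation, and the trained network would fill these roles).

```python
POSTURES = ["stand", "sit", "lie"]          # assumed order of the 3 classes

def classify_frame(frame, detect, estimate_pose, extract_features, model):
    """Steps S51-S53 for one video frame.
    detect(frame) -> [(class name, bounding box), ...]      (S51)
    estimate_pose(frame, box) -> joint point positions      (S52)
    extract_features(box, joints) -> feature vector         (S52)
    model(features) -> per-class probabilities              (S53)"""
    results = []
    for cls, box in detect(frame):          # S51: class + position info
        if cls != "person":                 # S52: keep human targets only
            continue
        joints = estimate_pose(frame, box)  # S52: joint point positions
        feats = extract_features(box, joints)
        probs = model(feats)                # S53: class probabilities
        best = max(range(len(POSTURES)), key=probs.__getitem__)
        results.append((box, POSTURES[best]))
    return results
```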
The computer-vision-based human posture classification method of this embodiment collects video surveillance data through a monitoring camera; parses the pictures and annotation data in the open-source image dataset Microsoft COCO, selecting pictures that contain a single human target, a clear posture description sentence, and complete joint point position data, to construct a training data set for posture classification; builds an original feature set from the annotation data and screens effective posture features from it; trains a posture classification model with a neural network algorithm using the training data set and the screened features; and performs target detection and recognition on the surveillance data, locates the human targets in the video, estimates their postures, computes the effective posture feature data, and on that basis determines each person's posture with the classification model.
Compared with traditional wearable-device-based posture classification methods, this computer-vision-based method requires no sensors or optical markers on the target object: data collection is done entirely by an external surveillance camera, movement comfort is unaffected, and data acquisition cost is low.
The method constructs a training data set by screening targets and annotation data in the Microsoft COCO dataset and obtains effective posture feature data through feature engineering; it trains a posture classification model with a neural network algorithm; and the model can be applied directly in an intelligent video surveillance system, judging the postures of all human targets within the surveillance coverage area in combination with target detection, recognition, and human pose estimation algorithms.
Raw posture data of the people in the video is obtained through target detection, target recognition, and pose estimation algorithms, and the target postures are classified on this basis with a neural network algorithm. Deep-learning-based detection and recognition discover the human targets and their position data in the surveillance video; for each human target, a pose estimation algorithm analyzes the positions of the important joints of its skeleton, from which the posture category is distinguished, realizing computer-vision-based human posture classification.
Because the method classifies a target's posture by analyzing only the target information in the current frame, without analyzing the target's behavior in preceding frames, it is a posture classification method with high real-time performance and high processing efficiency.
Computer-vision-based human posture classification aims to obtain the postures of the people in a video by analyzing the video data and, on that basis, to discover dangerous behaviors; it is therefore an important link in intelligent video surveillance systems and an important research problem in the field of computer vision. The method judges the posture category of the human targets in public-place surveillance video with a machine learning algorithm, providing a good basis for discovering dangerous behaviors in the monitored scene in time.

Claims (5)

1. A human body posture classification method based on computer vision, characterized in that it comprises the following steps:
s1, deploying a monitoring camera in a scene, and acquiring video monitoring data through the monitoring camera;
S2, parsing the target class labels, target position labels, picture description labels and human joint point labels in the open-source image dataset Microsoft COCO, screening suitable pictures and their annotation data from the dataset, and constructing a training data set for human posture classification;
S3, constructing an original feature set based on the annotation data in the training data set obtained in step S2, calculating high-order features of the data in the original feature set on this basis, and screening out effective human posture classification features through feature selection;
S4, based on a neural network algorithm, selecting a loss function and an optimization algorithm in combination with the training data set obtained in step S2 and the effective posture classification features screened out in step S3, and training a human posture classification model;
and S5, performing target detection and recognition on the video surveillance data obtained in step S1, estimating the posture of each region recognized as a human target, computing effective posture feature data from the estimation result, feeding the computed features into the posture classification model obtained in step S4, and determining the posture of each person appearing in the video.
2. The computer-vision-based human body posture classification method of claim 1, characterized in that: in step S2, the training data set for human posture classification is constructed as follows:
S21, based on the class labels of targets in the open-source image dataset Microsoft COCO, selecting pictures that contain exactly one human target, and taking the selected pictures and their corresponding annotation data as a candidate training data set;
S22, parsing the picture description labels of the pictures in the candidate training data set obtained in step S21, keeping the pictures whose descriptions contain posture-related keywords, adding posture labels to them, and deleting the remaining pictures;
S23, checking the human joint point annotations of the pictures in the candidate training data set, deleting pictures whose joint point annotations are incomplete, and taking the remaining pictures and their annotation data as the training data set for human posture classification.
3. The computer vision-based human body posture classification method of claim 1, characterized in that: in step S3, effective human body posture classification features are screened out, specifically,
S31, for the coordinate data of each target in the training data set obtained in step S2, reconstructing a plane rectangular coordinate system with the lower-left corner of the bounding box containing the target as the coordinate origin, relocating the bounding box coordinates and the joint point coordinates of the target in this coordinate system, and taking these coordinates as original features;
S32, calculating the aspect ratio of the bounding box containing each target and the joint angles formed by every 3 adjacent joint points, and taking these feature data as high-order features of the target;
and S33, calculating the variance among the high-order features, and, based on the principle that features should be highly similar within the same posture class and clearly dissimilar across different posture classes, screening a set number of features from the high-order features as the effective human body posture classification features.
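Under the coordinate convention of S31 and the feature definitions of S32-S33, the computation can be sketched as follows. The joint names, the small epsilon guards, and the Fisher-style between/within-class variance score in `screen_features` are assumptions filled in for illustration: the claim states only the similarity principle, not an exact scoring formula.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b, formed by the adjacent joints a-b-c."""
    v1, v2 = np.asarray(a, float) - b, np.asarray(c, float) - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def higher_order_features(box, joints, chains):
    """box = (x_min, y_min, x_max, y_max); joints maps name -> (x, y);
    chains lists (a, b, c) triples of adjacent joint names."""
    x0, y0, x1, y1 = box
    # S31: re-origin all coordinates at a corner of the bounding box
    local = {k: (x - x0, y - y0) for k, (x, y) in joints.items()}
    # S32: bounding-box aspect ratio plus all adjacent joint angles
    feats = [(x1 - x0) / (y1 - y0)]
    feats += [joint_angle(local[a], local[b], local[c]) for a, b, c in chains]
    return np.array(feats)

def screen_features(X, labels, k):
    """S33 sketch: keep the k features with the largest ratio of
    between-class variance to within-class variance (a Fisher-style
    score chosen here to embody the stated similarity principle)."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    classes = np.unique(labels)
    mu = X.mean(axis=0)
    between = sum((X[labels == c].mean(axis=0) - mu) ** 2 for c in classes)
    within = sum(X[labels == c].var(axis=0) for c in classes) + 1e-9
    return np.argsort(between / within)[::-1][:k]
```

Angles are computed on the relocated coordinates, although they are translation-invariant either way; the relocation matters for the original (coordinate) features of S31.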
4. The computer vision-based human body posture classification method of any one of claims 1-3, characterized in that: in step S4, a human posture classification model is trained, specifically,
S41, constructing a five-layer neural network model, wherein the first layer is the input layer, each neuron of which represents one feature, so that the number of input neurons equals the number of screened features; the second layer is hidden layer I, which abstracts the input features at multiple levels so that different classes of data become linearly separable, with the ReLU (rectified linear unit) function as the activation function; the third layer is a dropout layer (random deactivation layer), which randomly deletes part of the neurons of hidden layer I during model training, reducing the occurrence of over-fitting; the fourth layer is hidden layer II, which further abstracts the features produced by the third layer so that different classes of data become linearly separable, again with the ReLU activation function; and the fifth layer is the output layer, which outputs the probabilities that the target's posture belongs to the different categories;
and S42, dividing the training data set obtained in step S2 into a training set and a validation set, inputting the training set together with the human posture classification features obtained in step S3 into the neural network model constructed in step S41, using the mean squared error function as the loss function, optimizing the model by mini-batch gradient descent, and stopping training when the model converges, thereby obtaining the human posture classification model.
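A forward pass through the five-layer architecture of S41 can be sketched in plain NumPy. The hidden-layer widths, the weight initialization, the 0.5 dropout rate, and the softmax at the output are illustrative assumptions; the claim fixes only the layer types, the ReLU activations, and that the output layer emits per-class probabilities. Note that the dropout mask is applied only while training, consistent with its purpose of reducing over-fitting.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def make_model(n_features, n_hidden, n_classes):
    """Random initialization for the three weighted layers of the
    five-layer network (hidden I, hidden II, output)."""
    return {
        "W1": rng.normal(0, 0.1, (n_features, n_hidden)), "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 0.1, (n_hidden, n_hidden)),   "b2": np.zeros(n_hidden),
        "W3": rng.normal(0, 0.1, (n_hidden, n_classes)),  "b3": np.zeros(n_classes),
    }

def forward(model, X, drop_rate=0.5, training=False):
    # layer 1: input layer (one neuron per screened feature)
    h1 = np.maximum(0.0, X @ model["W1"] + model["b1"])   # layer 2: hidden I, ReLU
    if training:                                          # layer 3: dropout, applied
        mask = rng.random(h1.shape) >= drop_rate          # only during training
        h1 = h1 * mask / (1.0 - drop_rate)                # (inverted-dropout scaling)
    h2 = np.maximum(0.0, h1 @ model["W2"] + model["b2"])  # layer 4: hidden II, ReLU
    z = h2 @ model["W3"] + model["b3"]                    # layer 5: output layer
    e = np.exp(z - z.max(axis=1, keepdims=True))          # softmax -> per-class
    return e / e.sum(axis=1, keepdims=True)               # probabilities
```

Training per S42 would pair this forward pass with the mean-squared-error loss against one-hot posture labels and mini-batch gradient descent; in practice one would use an autodiff framework rather than hand-written backpropagation.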
5. The computer vision-based human body posture classification method of any one of claims 1-3, characterized in that: in step S5, the posture of each person appearing in the video is determined, specifically,
S51, for the video monitoring data captured by the monitoring camera deployed in step S1, first finding valid targets through target detection and target recognition, and extracting the category information and position information of each target;
S52, screening out from the target data obtained in step S51 the bounding boxes whose recognition result is a human target, analyzing the area covered by each bounding box with a human posture estimation algorithm, extracting the joint point position data of the target, and, based on the joint point position data, calculating the effective human posture feature data screened out in step S33;
and S53, analyzing the effective human posture feature data of each human target with the human posture classification model trained in step S4, thereby obtaining the posture information of each human target in the video.
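Steps S51-S53 glue the three trained components together; the sketch below fixes only that control flow. The `detector`, `pose_estimator`, `features_fn` and `classifier` callables, the `det` dictionary layout, and the label list are stand-ins for whatever concrete models steps S1-S4 produced.

```python
def classify_frame(frame, detector, pose_estimator, features_fn, classifier, labels):
    """Run the S51-S53 pipeline on one video frame and return
    (bounding_box, posture_label) pairs for every human target."""
    results = []
    for det in detector(frame):                     # S51: valid targets with
        if det["category"] != "person":             #      category + position
            continue                                # S52: keep human targets only
        joints = pose_estimator(frame, det["box"])  # S52: joints inside the box
        feats = features_fn(det["box"], joints)     # S52: screened posture features
        probs = classifier(feats)                   # S53: per-class probabilities
        results.append((det["box"], labels[probs.index(max(probs))]))
    return results
```

Each frame of the monitoring stream would be passed through this function; non-human detections (vehicles, animals) fall out at the category check, so pose estimation runs only on human bounding boxes.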
CN201910966746.0A 2019-10-12 2019-10-12 Human body posture classification method based on computer vision Active CN110688980B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910966746.0A CN110688980B (en) 2019-10-12 2019-10-12 Human body posture classification method based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910966746.0A CN110688980B (en) 2019-10-12 2019-10-12 Human body posture classification method based on computer vision

Publications (2)

Publication Number Publication Date
CN110688980A CN110688980A (en) 2020-01-14
CN110688980B true CN110688980B (en) 2023-04-07

Family

ID=69112635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910966746.0A Active CN110688980B (en) 2019-10-12 2019-10-12 Human body posture classification method based on computer vision

Country Status (1)

Country Link
CN (1) CN110688980B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111368806B (en) * 2020-04-01 2023-06-13 大连理工大学 Worker construction state monitoring method based on artificial intelligence
CN111539377A (en) * 2020-05-11 2020-08-14 浙江大学 Human body movement disorder detection method, device and equipment based on video
CN112329571B (en) * 2020-10-27 2022-12-16 同济大学 Self-adaptive human body posture optimization method based on posture quality evaluation
CN114998803A (en) * 2022-06-13 2022-09-02 北京理工大学 Body-building movement classification and counting method based on video
CN116645732B (en) * 2023-07-19 2023-10-10 厦门工学院 Site dangerous activity early warning method and system based on computer vision

Citations (1)

Publication number Priority date Publication date Assignee Title
CN110222634A (en) * 2019-06-04 2019-09-10 河海大学常州校区 A kind of human posture recognition method based on convolutional neural networks

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20130090571A1 (en) * 2011-10-06 2013-04-11 The Board Of Regents Of The University Of Texas System Methods and systems for monitoring and preventing pressure ulcers

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN110222634A (en) * 2019-06-04 2019-09-10 河海大学常州校区 A kind of human posture recognition method based on convolutional neural networks

Also Published As

Publication number Publication date
CN110688980A (en) 2020-01-14

Similar Documents

Publication Publication Date Title
CN110688980B (en) Human body posture classification method based on computer vision
Nath et al. Deep learning for site safety: Real-time detection of personal protective equipment
WO2019232894A1 (en) Complex scene-based human body key point detection system and method
Wu et al. Metric learning based structural appearance model for robust visual tracking
Wang et al. Lying pose recognition for elderly fall detection
CN111539276B (en) Method for detecting safety helmet in real time in power scene
CN110728252B (en) Face detection method applied to regional personnel motion trail monitoring
Sidig et al. KArSL: Arabic sign language database
Budiman et al. Student attendance with face recognition (LBPH or CNN): Systematic literature review
CN112541403B (en) Indoor personnel falling detection method by utilizing infrared camera
Li et al. Improved YOLOv4 network using infrared images for personnel detection in coal mines
Manaf et al. Computer vision-based survey on human activity recognition system, challenges and applications
CN113920326A (en) Tumble behavior identification method based on human skeleton key point detection
CN113065515A (en) Abnormal behavior intelligent detection method and system based on similarity graph neural network
Najafizadeh et al. A feasibility study of using Google street view and computer vision to track the evolution of urban accessibility
CN114170672A (en) Classroom student behavior identification method based on computer vision
Dou et al. An improved yolov5s fire detection model
Karim et al. A region-based deep learning algorithm for detecting and tracking objects in manufacturing plants
Naseer et al. Multimodal Objects Categorization by Fusing GMM and Multi-layer Perceptron
Mazzamuto et al. Weakly supervised attended object detection using gaze data as annotations
CN117541994A (en) Abnormal behavior detection model and detection method in dense multi-person scene
Zhang et al. A Multiple Instance Learning and Relevance Feedback Framework for Retrieving Abnormal Incidents in Surveillance Videos.
Yadav et al. Human Illegal Activity Recognition Based on Deep Learning Techniques
CN115830635A (en) PVC glove identification method based on key point detection and target identification
Banerjee et al. Multimodal behavior analysis in computer-enabled laboratories using nonverbal cues

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant