CN112766091A - Video unsafe behavior recognition system and method based on human skeleton key points - Google Patents
- Publication number
- Publication number: CN112766091A (application number CN202110006822.0A)
- Authority
- CN
- China
- Prior art keywords
- video
- key points
- detection
- module
- falling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/04—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
- G08B21/0407—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis
- G08B21/043—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis detecting an emergency event, e.g. a fall
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/04—Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
- G08B21/0438—Sensor means for detecting
- G08B21/0476—Cameras to detect unsafe condition, e.g. video cameras
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Emergency Management (AREA)
- Data Mining & Analysis (AREA)
- Social Psychology (AREA)
- Gerontology & Geriatric Medicine (AREA)
- Business, Economics & Management (AREA)
- Psychiatry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Life Sciences & Earth Sciences (AREA)
- Psychology (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Alarm Systems (AREA)
Abstract
The invention discloses a video unsafe behavior recognition system and method based on human skeleton key points. The invention improves the accuracy of detecting personnel falling behavior and can be applied to intelligent video monitoring of high-risk production and operation sites such as oil and gas facilities. It enables 24-hour real-time supervision of operators for unsafe behaviors such as not wearing a safety helmet or falling down, removes the need for the manual monitoring required by traditional detection methods, and greatly reduces labor cost.
Description
Technical Field
The invention relates to the technical field of monitoring systems, and in particular to a system and method for identifying unsafe behaviors in video based on human skeleton key points.
Background
High-risk production and operation sites such as oil and gas facilities are flammable, explosive and hazardous, so monitoring systems are deployed in working areas to observe and identify the operating behaviors of on-site personnel and to prevent unsafe behaviors such as working without a safety helmet or falling down. However, traditional monitoring relies on human observers and cannot provide effective 24-hour real-time supervision. The invention therefore provides a video unsafe behavior identification method based on human skeleton key points, which analyzes, identifies and raises alarms for dangerous and non-compliant operating behaviors in real time, thereby safeguarding production safety.
Disclosure of Invention
To address the defects in the prior art, the video unsafe behavior recognition system and method based on human skeleton key points provide 24-hour real-time supervision of operators for unsafe behaviors such as not wearing a safety helmet or falling down.
To achieve this purpose, the invention adopts the following technical scheme: a video unsafe behavior recognition system based on human skeleton key points comprises an IPC camera, a video intelligent behavior analysis system, a streaming media service system and a display terminal, wherein the IPC camera is connected through a switch to the video intelligent behavior analysis system and to a user computer, the video intelligent behavior analysis system is connected to the streaming media service system and to the display terminal, the streaming media service system is connected to the display terminal, and the display terminal runs on the user computer;
the IPC camera is used for monitoring personnel operating behaviors at high-risk production and operation sites and forwarding the monitoring video stream to the switch;
the switch is used for transmitting the video data stream to the video intelligent behavior analysis system;
the video intelligent behavior analysis system is used for analyzing and collecting statistics on the personnel operating behaviors at high-risk production and operation sites captured in the video stream, and for feeding the resulting statistics and detection video stream results back to the streaming media service system through the switch;
the streaming media service system is used for pushing the summary of various analysis data and the detection video streaming result to a display terminal;
the display terminal is used for configuring detection and management parameters of the video intelligent behavior analysis system and performing access, viewing and management through a user computer.
Further: the video intelligent behavior analysis system comprises a video intelligent analysis module and an alarm statistic analysis module;
the video intelligent analysis module rapidly analyzes video image information through an algorithm based on human skeleton key points, including detection of personnel falling and of personnel not wearing safety helmets;
the alarm statistic analysis module is used for carrying out statistics and analysis on violation behaviors of the operating personnel.
Further: the display terminal comprises an equipment management module, an alarm manual auditing module, an alarm processing module, a parameter setting module and an information data uploading module;
the equipment management module is used for effectively managing the whole equipment and the video source;
the alarm manual auditing module is used for manually analyzing and judging the violation behaviors of the operators;
the alarm processing module is used for displaying and counting the violation behaviors of the operators;
the parameter setting module is used for setting the detection parameters of the violation behaviors of the operators;
the information data uploading module is used for uploading violation warning information data of the operating personnel.
A video unsafe behavior identification method based on human skeleton key points comprises the following steps:
S1, frame extraction is performed on the 1080P video stream according to a set rule, and blurred and redundant images are cleaned out of the extracted image files;
s2, preprocessing the processed effective image file, and enhancing the training data;
S3, the preprocessed images are imported into the COCO Annotator labeling platform; the position region and category of each detection target are labeled for the main classifier, and the key point positions and regions of sub-targets are labeled for the sub-classifier;
S4, a target detection model is trained and fine-tuned using the manually labeled target regions and positions until its accuracy on test images exceeds a set expected value; the preprocessed images are then fed into the target detection model for target detection;
S5, an improved OpenPose network is used as the human body key point detection network; the images obtained from target detection are fed into this network, and key point detection yields human skeleton key point images;
and S6, the human skeleton key point image is passed through a two-class classification network, and whether the operator has fallen is judged by combining the network output with the descent speed of the waist key point and the ratio of the waist-to-feet height difference to the eyes-to-feet height difference.
Further: the preprocessing operations in step S2 include flipping, cropping, random brightness, random contrast, and scaling.
Further: the specific steps of step S5 are: and (3) taking MobileV1 as a basic structure and fine tuning, respectively outputting PAFs (body key points) related to all human body key points in the image and PCM (confidence coefficient maps) of the body key points, and combining the PAFs and the PCM to obtain a human skeleton key point image.
Further: the specific steps of step S6 are:
S61, the human skeleton key point images are divided into a training set and a testing set; fallen and non-fallen folders are created in each, the corresponding skeleton images are placed in them, and the fall state detection model is trained and fine-tuned;
S62, the skeleton image is fed into a fully connected layer, which judges the image and outputs whether the human body is in a fallen state;
S63, the returned skeleton images are simultaneously processed in real time; the descent speed V of the waist key point is computed once every 10 adjacent frames, and the first fall feature is detected when V exceeds a threshold V_T;
S64, at the same time, with the two-foot key points taken as the ground baseline, the height difference between the waist key point and the two-foot key points and the height difference between the left and right eye key points and the two-foot key points are computed; the second fall feature is detected if the ratio K of the two height differences is less than a preset fixed value K_T;
and S65, when the classifier output is the fallen state and the first or the second fall feature is detected, the final fallen state is confirmed.
The invention has the beneficial effects that it improves the accuracy of detecting personnel falling behavior and can be applied to intelligent video monitoring of high-risk production and operation sites such as oil and gas facilities; it enables 24-hour real-time supervision of operators for unsafe behaviors such as not wearing a safety helmet or falling down; and it removes the need for the manual monitoring required by traditional detection methods, greatly reducing labor cost.
Drawings
FIG. 1 is a block diagram of an unsafe behavior identification system of the present invention;
FIG. 2 is a data processing flow diagram of a method for unsafe behavior identification according to the present invention;
fig. 3 is a flowchart of a fall recognition method for a person in the present invention.
Detailed Description
The following description of embodiments is provided to help those skilled in the art understand the invention, but it should be understood that the invention is not limited to the scope of these embodiments. To those skilled in the art, various changes within the spirit and scope of the invention as defined by the appended claims are apparent, and all matter produced using the inventive concept is protected.
As shown in fig. 1, a video unsafe behavior recognition system based on human skeleton key points comprises an IPC camera, a video intelligent behavior analysis system, a streaming media service system and a display terminal, wherein the IPC camera is connected through a switch to the video intelligent behavior analysis system and to a user computer, the video intelligent behavior analysis system is connected to the streaming media service system and to the display terminal, the streaming media service system is connected to the display terminal, and the display terminal runs on the user computer;
the IPC camera is used for monitoring personnel operating behaviors at high-risk production and operation sites and forwarding the monitoring video stream to the switch;
the switch is used for transmitting the video data stream to the video intelligent behavior analysis system;
the video intelligent behavior analysis system is used for analyzing and collecting statistics on the personnel operating behaviors at high-risk production and operation sites captured in the video stream, and for feeding the resulting statistics and detection video stream results back to the streaming media service system through the switch;
the video intelligent behavior analysis system comprises a video intelligent analysis module and an alarm statistic analysis module;
the video intelligent analysis module rapidly analyzes video image information through an algorithm based on human skeleton key points, including detection of personnel falling and of personnel not wearing safety helmets;
the alarm statistic analysis module is used for carrying out statistics and analysis on violation behaviors of the operating personnel.
The streaming media service system is used for pushing the summary of various analysis data and the detection video streaming result to a display terminal;
the display terminal is used for configuring detection and management parameters of the video intelligent behavior analysis system and performing access, viewing and management through a user computer. The display terminal comprises an equipment management module, an alarm manual auditing module, an alarm processing module, a parameter setting module and an information data uploading module;
the equipment management module is used for effectively managing the whole equipment and the video source;
the alarm manual auditing module is used for manually reviewing and judging operator violations;
the alarm processing module is used for displaying and alarming the violation behaviors of the operators;
the parameter setting module is used for setting the detection parameters of the violation behaviors of the operators;
the information data uploading module is used for uploading violation warning information data of the operating personnel.
As shown in fig. 2, a video unsafe behavior identification method based on human skeleton key points includes the following steps:
S1, frame extraction is performed on the 1080P video stream according to a set rule, and blurred and redundant images are cleaned out of the extracted image files;
S2, preprocessing operations such as flipping, cropping and scaling are performed on the cleaned image files to augment the training data, including random mirror flipping, random rotation between −30° and +30°, random scaling between 0.5× and 1.2×, random cropping, random brightness and random contrast.
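As an illustrative sketch (not part of the original disclosure), the S2-style augmentations can be implemented roughly as follows. The parameter ranges mirror the text; the function name is illustrative, and the resize-by-indexing shortcut is a simplification used to keep the example dependency-free (rotation and cropping are omitted for brevity):

```python
import random
import numpy as np

def augment(image, rng=random.Random(0)):
    """Apply S2-style augmentations: random mirror flip, random scaling
    in [0.5, 1.2], and brightness/contrast jitter.
    `image` is an HxWx3 uint8 array; a fixed-seed RNG keeps it repeatable."""
    img = image.astype(np.float32)
    if rng.random() < 0.5:                      # random mirror inversion
        img = img[:, ::-1, :]
    scale = rng.uniform(0.5, 1.2)               # random scaling
    h, w = img.shape[:2]
    # nearest-neighbour resize via index sampling (avoids an OpenCV dependency)
    ys = (np.arange(int(h * scale)) / scale).astype(int).clip(0, h - 1)
    xs = (np.arange(int(w * scale)) / scale).astype(int).clip(0, w - 1)
    img = img[ys][:, xs]
    alpha = rng.uniform(0.8, 1.2)               # random contrast
    beta = rng.uniform(-20, 20)                 # random brightness
    img = np.clip(alpha * img + beta, 0, 255)
    return img.astype(np.uint8)
```

In practice a library such as OpenCV or torchvision would supply the rotation and interpolated resizing; the sketch only shows the shape of the pipeline.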
S3, the preprocessed images are imported into the COCO Annotator labeling platform; the position region and category of each detection target are labeled for the main classifier, and the key point positions and regions of sub-targets are labeled for the sub-classifier;
S4, a target detection model is trained and fine-tuned using the manually labeled target regions and positions until its accuracy on test images exceeds a set expected value; the preprocessed images are then fed into the target detection model for target detection;
S5, an improved OpenPose network is used as the human body key point detection network; the images obtained from target detection are fed into this network, and key point detection yields human skeleton key point images. MobileNetV1 is used as the base structure and fine-tuned; two branches respectively output the Part Affinity Fields (PAFs) of all human bodies in the image and the confidence maps (PCMs) of all body key points. The PCM and PAF outputs are concatenated, and the network finally outputs the human skeleton key point images, which serve as the training and testing sets.
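As a simplified illustration of how key point coordinates can be read out of the confidence maps described above: each keypoint location is taken as the argmax of its confidence map. This plain argmax is a simplification of the actual multi-person peak extraction, and the function name and threshold are illustrative, not from the original:

```python
import numpy as np

def keypoints_from_pcm(pcm, threshold=0.1):
    """pcm: (K, H, W) array of per-keypoint confidence maps (PCMs).
    Returns a list of (x, y) pixel coordinates, or None where the peak
    confidence falls below `threshold` (keypoint treated as undetected)."""
    points = []
    for heatmap in pcm:
        y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
        points.append((int(x), int(y)) if heatmap[y, x] >= threshold else None)
    return points
```

For multi-person scenes, OpenPose additionally uses the PAFs to group detected peaks into per-person skeletons; that grouping step is omitted here.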
The network selects MobileNetV1 as its base structure, which greatly improves computation speed. The 7×7 convolution kernel is replaced by 3 consecutive 3×3 convolution kernels, with dilated convolution (dilation 2) used to maintain the receptive field, thereby reducing the amount of computation while keeping the field of view. Per output element, the former requires 2×7²−1 = 97 operations, whereas the latter requires only 51. The outputs of the 3 convolution kernels are cascaded, adding two nonlinear layers, so the network retains both lower-level and higher-level features.
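The operation counts quoted above can be checked directly: a k×k convolution needs k² multiplications and k²−1 additions per output element, so the comparison reduces to a one-line formula.

```python
def conv_ops(k):
    """Operation count of one k x k convolution per output element:
    k*k multiplications plus k*k - 1 additions."""
    return 2 * k * k - 1

ops_7x7 = conv_ops(7)            # 2*49 - 1 = 97 operations
ops_three_3x3 = 3 * conv_ops(3)  # 3 * (2*9 - 1) = 51 operations
```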
After the input image is analyzed by the CNN backbone, a set of feature maps F is generated and used as the input of the first stage. At this stage, the network outputs the Part Affinity Fields (PAFs) of all human bodies in the image.
The second stage predicts the PCM. After the T_P PAF iterations, starting from the latest PAF prediction, the iterative process is repeated for confidence map detection:
S^t = ρ^t(F, L^{T_P}, S^{t−1}), for T_P < t ≤ T_P + T_C,
where ρ^t denotes the CNN used for inference at stage t, and T_C denotes the total number of confidence map stages.
S6, enabling the human skeleton key point image to pass through a two-classification network, and judging whether the operator falls down by combining the output result of the two-classification network with the descending speed of the waist key point, the ratio of the height difference between the waist key point and the two-foot key point and the height difference between the left and right key points and the two-foot key point;
the specific steps of step S6 are shown in fig. 3:
S61, fallen and non-fallen folders are created in the training set and testing set, the corresponding skeleton images are placed in them, and the fall state detection model is trained and fine-tuned;
S62, the skeleton image is fed into a fully connected layer, which judges the image and outputs whether the human body is in a fallen state;
S63, the returned skeleton images are simultaneously processed in real time; the descent speed V of the waist key point is computed once every 10 adjacent frames, and the first fall feature is detected when V exceeds the threshold V_T. Based on experimental test results, 1.3 m/s is selected as the threshold for the descent speed of the waist key point; once it is exceeded, the first fall feature is considered detected.
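A minimal sketch of the S63 check, assuming the waist key point height is available in metres per frame (after camera calibration) and the frame rate is known; the helper names and the calibration assumption are illustrative, not from the original:

```python
V_T = 1.3  # m/s descent-speed threshold quoted in the text

def waist_descent_speed(waist_y, fps, window=10):
    """Descent speed V of the waist key point over `window` adjacent frames.
    `waist_y` holds waist heights in metres (larger = higher); a positive
    result means the waist is moving downward."""
    if len(waist_y) < window:
        return 0.0
    drop = waist_y[-window] - waist_y[-1]   # height lost over the window
    return drop * fps / (window - 1)        # metres per second

def first_fall_feature(waist_y, fps):
    """S63: the first fall feature fires when V exceeds V_T."""
    return waist_descent_speed(waist_y, fps) > V_T
```

For example, a waist dropping 0.9 m over 10 frames at 25 fps gives V = 2.5 m/s, well above the 1.3 m/s threshold.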
S64, at the same time, with the two-foot key points taken as the ground baseline, the height difference between the waist key point and the two-foot key points and the height difference between the left and right eye key points and the two-foot key points are computed; the second fall feature is detected if the ratio K of the two height differences is less than a preset fixed value K_T. Based on experimental test results, 0.25 is chosen as the ratio threshold; once K falls below it, the second fall feature is considered detected.
And S65, when the classifier output is the fallen state and the first or the second fall feature is detected, the final fallen state is confirmed.
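The S64 ratio test and the S65 combination rule can be sketched as follows, assuming key point heights measured upward from the ground baseline defined by the two-foot key points; the helper names and the classifier interface are assumptions for illustration:

```python
K_T = 0.25  # ratio threshold quoted in the text

def second_fall_feature(waist_y, eye_y, feet_y):
    """S64: compare the waist-to-feet height difference with the
    eye-to-feet height difference. A small ratio K means the waist
    has collapsed toward the ground baseline."""
    eye_height = eye_y - feet_y
    if eye_height <= 0:
        return True  # eyes at or below foot level: clearly not upright
    k = (waist_y - feet_y) / eye_height
    return k < K_T

def fall_decision(classifier_says_fall, feature1, feature2):
    """S65: the final fallen state requires the two-class network to
    output 'fallen' AND at least one geometric fall feature to fire."""
    return classifier_says_fall and (feature1 or feature2)
```

Requiring agreement between the learned classifier and a geometric feature is what suppresses false alarms from either source alone.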
Claims (7)
1. A video unsafe behavior recognition system based on human skeleton key points is characterized by comprising an IPC camera, a video intelligent behavior analysis system, a streaming media service system and a display terminal, wherein the IPC camera is respectively connected with the video intelligent behavior analysis system and a user computer through a switch;
the IPC camera is used for monitoring the personnel operation behaviors in the high-risk production operation places and switching the monitoring video stream to the switch;
the switch is used for transmitting the video data stream to the video intelligent behavior analysis system;
the video intelligent behavior analysis system is used for analyzing and counting the personnel operation behaviors of high-risk production operation places in the video stream and feeding back the analyzed and counted data and the detected video stream result to the streaming media service system through the switch;
the streaming media service system is used for pushing the summary of various analysis data and the detection video streaming result to a display terminal;
the display terminal is used for configuring detection and management parameters of the video intelligent behavior analysis system and performing access, viewing and management through a user computer.
2. The human skeletal key point-based video unsafe behavior recognition system of claim 1, wherein the video intelligent behavior analysis system comprises a video intelligent analysis module and an alarm statistic analysis module;
the video intelligent analysis module rapidly analyzes video image information through an algorithm based on human skeleton key points, wherein the algorithm comprises falling detection of people and detection of the fact that the people do not wear a safety helmet;
the alarm statistic analysis module is used for carrying out statistics and analysis on violation behaviors of the operating personnel.
3. The human skeleton key point-based video unsafe behavior recognition system of claim 1, wherein the display terminal comprises an equipment management module, an alarm manual review module, an alarm processing module, a parameter setting module and an information data uploading module;
the equipment management module is used for effectively managing the whole equipment and the video source;
the alarm manual auditing module is used for manually analyzing and judging the violation behaviors of the operators;
the alarm processing module is used for displaying and counting the violation behaviors of the operators;
the parameter setting module is used for setting the detection parameters of the violation behaviors of the operators;
the information data uploading module is used for uploading violation warning information data of the operating personnel.
4. A video unsafe behavior identification method based on human skeleton key points is characterized by comprising the following steps:
S1, frame extraction is performed on the 1080P video stream according to a set rule, and blurred and redundant images are cleaned out of the extracted image files;
s2, preprocessing the processed effective image file, and enhancing the training data;
S3, the preprocessed images are imported into the COCO Annotator labeling platform; the position region and category of each detection target are labeled for the main classifier, and the key point positions and regions of sub-targets are labeled for the sub-classifier;
s4, training and fine-tuning a target detection model according to the manually marked detection target area and position until the accuracy of a training model test image is greater than a set expected value, and transmitting a preprocessed image into the target detection model for target detection processing;
S5, an improved OpenPose network is used as the human body key point detection network; the images obtained from target detection are fed into this network, and key point detection yields the human skeleton key point images;
and S6, the human skeleton key point image is passed through a two-class classification network, and whether the operator has fallen is judged by combining the network output with the descent speed of the waist key point and the ratio of the waist-to-feet height difference to the eyes-to-feet height difference.
5. The method for identifying unsafe behavior of videos based on key points of human bones as claimed in claim 4, wherein the preprocessing operations in step S2 include flipping, cropping, random brightness, random contrast and scaling.
6. The method for identifying unsafe behaviors in videos based on human skeleton key points as claimed in claim 4, wherein the specific steps of step S5 are: MobileNetV1 is used as the base structure and fine-tuned; the network outputs the Part Affinity Fields (PAFs) relating all human body key points in the image and the confidence maps (PCMs) of the body key points, and the PAFs and PCMs are combined to obtain the human skeleton key point image.
7. The method for identifying unsafe behaviors of videos based on key points of human bones as claimed in claim 4, wherein the specific steps of the step S6 are as follows:
s61, taking the human skeleton key point images as a training set and a testing set, creating falling and non-falling folders in the training set and the testing set, putting the corresponding skeleton images, and training and finely adjusting a falling state detection model;
s62, transmitting the skeleton image into a full-connection layer, judging the skeleton image through the full-connection layer, and outputting whether the human body state falls or not;
s63, simultaneously processing the returned skeleton images in real time, calculating the descending speed V of the waist key point once every 10 adjacent frames, and detecting a first falling feature when V is greater than a threshold value VT;
s64, simultaneously taking the key points of the two feet as base lines relative to the ground, calculating the height difference between the key points of the waist and the two feet and the height difference between the key points of the left and right eyes and the key points of the two feet, and detecting a second falling feature if the ratio K of the height difference to the key points of the left and right eyes is less than a preset fixed value KT;
and S65, when the output human body state is a falling state and the first falling feature or the second falling feature is detected, the final falling state is judged.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110006822.0A CN112766091B (en) | 2021-01-05 | 2021-01-05 | Video unsafe behavior recognition system and method based on human skeleton key points |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112766091A true CN112766091A (en) | 2021-05-07 |
CN112766091B CN112766091B (en) | 2023-09-29 |
Family
ID=75699246
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110006822.0A Active CN112766091B (en) | 2021-01-05 | 2021-01-05 | Video unsafe behavior recognition system and method based on human skeleton key points |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112766091B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104159088A (en) * | 2014-08-23 | 2014-11-19 | 中科院成都信息技术股份有限公司 | System and method of remote monitoring of intelligent vehicle |
CN110022466A (en) * | 2019-04-24 | 2019-07-16 | 中科院成都信息技术股份有限公司 | A kind of video analysis platform and its control method based on wisdom big data |
CN111144263A (en) * | 2019-12-20 | 2020-05-12 | 山东大学 | Construction worker high-fall accident early warning method and device |
CN111209848A (en) * | 2020-01-03 | 2020-05-29 | 北京工业大学 | Real-time fall detection method based on deep learning |
CN111274954A (en) * | 2020-01-20 | 2020-06-12 | 河北工业大学 | Embedded platform real-time falling detection method based on improved attitude estimation algorithm |
- 2021-01-05: application CN202110006822.0A granted as patent CN112766091B (status: Active)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104159088A (en) * | 2014-08-23 | 2014-11-19 | 中科院成都信息技术股份有限公司 | System and method of remote monitoring of intelligent vehicle |
CN110022466A (en) * | 2019-04-24 | 2019-07-16 | 中科院成都信息技术股份有限公司 | Video analysis platform based on smart big data and control method therefor |
CN111144263A (en) * | 2019-12-20 | 2020-05-12 | 山东大学 | Construction worker high-fall accident early warning method and device |
CN111209848A (en) * | 2020-01-03 | 2020-05-29 | 北京工业大学 | Real-time fall detection method based on deep learning |
CN111274954A (en) * | 2020-01-20 | 2020-06-12 | 河北工业大学 | Embedded-platform real-time fall detection method based on an improved pose estimation algorithm |
Non-Patent Citations (4)
Title |
---|
DIAN Songyi; CHENG Peng; WANG Kai; LUO Ruisen: "Fall behavior recognition based on bidirectional recurrent neural networks", Computer Engineering and Design (计算机工程与设计) * |
DU Qiliang; HUANG Liguang; TIAN Lianfang; HUANG Dizhen; JIN Shoujie; LI Miao: "Abnormal behavior recognition of escalator passengers based on video surveillance", Journal of South China University of Technology (Natural Science Edition) * |
DUAN Junchen; LIANG Meixiang; WANG Rui: "Human posture recognition based on skeleton key point detection and a multilayer perceptron", Electronic Measurement Technology (电子测量技术) * |
CAI Wenyu; ZHENG Xuechen; GUO Jiahao; RUAN Zhixiang: "Visual fall detection algorithm based on an SVM-MultiCNN model", Journal of Hangzhou Dianzi University (Natural Science Edition) * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113660455A (en) * | 2021-07-08 | 2021-11-16 | 深圳宇晰科技有限公司 | Method, system and terminal for fall detection based on DVS data |
CN113660455B (en) * | 2021-07-08 | 2023-04-07 | 深圳宇晰科技有限公司 | Method, system and terminal for fall detection based on DVS data |
CN113903051A (en) * | 2021-07-23 | 2022-01-07 | 南方科技大学 | DVS camera data-based human body posture detection method and terminal equipment |
CN113903051B (en) * | 2021-07-23 | 2022-12-27 | 南方科技大学 | DVS camera data-based human body posture detection method and terminal equipment |
CN115146903A (en) * | 2022-05-09 | 2022-10-04 | 中国中煤能源集团有限公司 | Coal mine unsafe behavior recognition early warning system based on key skeleton points |
Also Published As
Publication number | Publication date |
---|---|
CN112766091B (en) | 2023-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112766091A (en) | Video unsafe behavior recognition system and method based on human skeleton key points | |
EP1131802B1 (en) | Smoke detection | |
CN109829429B (en) | Security sensitive article detection method based on YOLOv3 under monitoring scene | |
CN109447168A (en) | Safety helmet wearing detection method based on deep features and video object detection | |
CN110188807A (en) | Tunnel pedestrian target detection method based on a cascaded super-resolution network and improved Faster R-CNN | |
CN110659391A (en) | Video detection method and device | |
CN111709661B (en) | Risk processing method, device, equipment and storage medium for business data | |
CN105913022A (en) | Method and system for determining handheld phone-call state based on video analysis | |
CN112036327A (en) | SSD-based lightweight safety helmet detection method | |
CN114373162A (en) | Dangerous area personnel intrusion detection method and system for transformer substation video monitoring | |
CN107301373A (en) | Data processing method, device and storage medium | |
CN115909212A (en) | Real-time early warning method for typical violation behaviors of power operation | |
CN110427894A (en) | Crowd abnormal motion monitoring method, system, device and medium | |
CN111666916B (en) | Kitchen violation identification method based on self-learning technology | |
CN116822929A (en) | Alarm method, alarm device, electronic equipment and storage medium | |
CN113609925A (en) | LNG loading and unloading operation safety management and control system and illegal behavior identification method | |
Krishnan et al. | Automatic Detection of Anomalies in Video Surveillance using Artificial Intelligence | |
Setyadi et al. | Deep Learning Approaches to Social Distancing Compliance and Mask Detection in Dining Environment | |
CN115345465B (en) | Multicolor level management method and system based on basic level treatment event | |
Pathade et al. | Recognition of crowd abnormal activities using fusion of handcrafted and deep features | |
CN114707856B (en) | Risk identification analysis and early warning system based on computer vision | |
CN108664478A (en) | Object search method and device | |
Shanthi et al. | Automatic social distance monitoring system using deep learning algorithms | |
Biswas et al. | Real-Time Construction Safety Gear Detection Using YOLOv4 with Darknet | |
Jiao et al. | Abnormal Crowd Behavior Detection Based on the Fusion of Macro and Micro Features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||