CN114360209B - Video behavior recognition security system based on deep learning - Google Patents

Video behavior recognition security system based on deep learning

Info

Publication number: CN114360209B
Authority: CN (China)
Prior art keywords: unit, child, joint point, max, image
Legal status: Active (granted)
Application number: CN202210050744.9A
Other languages: Chinese (zh)
Other versions: CN114360209A
Inventor: 刘忠杰
Current Assignee: Changzhou College of Information Technology CCIT
Original Assignee: Changzhou College of Information Technology CCIT
Application filed by Changzhou College of Information Technology CCIT
Priority application: CN202210050744.9A
Publication of application: CN114360209A
Application granted; publication of grant: CN114360209B

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/02 - Alarms for ensuring the safety of persons
    • G08B 21/04 - Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B 21/0407 - Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis
    • G08B 21/043 - Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis detecting an emergency event, e.g. a fall
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Abstract

The invention discloses a video behavior recognition security system based on deep learning, comprising an image acquisition processing module, a target image acquisition module, a system management center, a static analysis module, a dynamic trend prediction module, a danger alarm module and a security terminal. The image acquisition processing module acquires real-time images and depth maps of children on a climbing net and recognizes the images with a convolutional neural network recognition algorithm; the target image acquisition module acquires two-dimensional images of the children's skeletal joint points; the static analysis module locates the joint points and analyzes and extracts the relative distance from a child to the edge of the climbing net; the dynamic trend prediction module selects some key joint points and predicts the child's climbing direction from the change trajectories of those joint points; and the danger alarm module sends out an alarm signal when a possible fall is predicted. The system thereby achieves safety monitoring of the children and improves climbing safety.

Description

Video behavior recognition security system based on deep learning
Technical Field
The invention relates to the technical field of video behavior security, and in particular to a video behavior recognition security system based on deep learning.
Background
Games and movement are basic forms of children's daily activities, and exercise promotes the development of children's motor skills, cognition and self-concept. Climbing is a basic form of exercise for children that trains balance and improves spatial orientation. Amusement parks contain many climbing attractions, such as climbing nets, which are attractive to children and which exercise and extend their physical abilities, enhancing limb movement ability and coordination.
However, there are many dangerous situations when children play on climbing nets. First, when a child climbs to an edge area and both feet accidentally leave the net, the child may fall from the edge of the climbing net because of insufficient upper-limb strength; the climbing net has a certain height and children's bones are weaker than adults', so a fall from height can easily cause injuries such as fractures. Second, more than one child usually plays on the climbing net at a time and collisions are unavoidable; if a child is knocked over in a dangerous area, a fall can also occur. In the prior art, although the safety of climbing-net equipment is continually improved, potential hazards to children remain, and the motion state of children on the climbing net needs to be monitored and recognized in real time so that safety protection can be provided.
Therefore, a video behavior recognition security system based on deep learning is needed to solve the above problems.
Disclosure of Invention
The invention aims to provide a video behavior recognition security system based on deep learning so as to solve the problems in the background technology.
In order to solve the above technical problems, the invention provides the following technical solution: a video behavior recognition security system based on deep learning, characterized in that the system comprises: an image acquisition processing module, a target image acquisition module, a system management center, a static analysis module, a dynamic trend prediction module, a danger alarm module and a security terminal;
the image acquisition processing module acquires real-time images of the child on the climbing net with a depth camera, recognizes the images with a convolutional neural network recognition algorithm, and outputs each image and its recognition result to the target image acquisition module, which is used for acquiring a two-dimensional image of the child's skeletal joint points; the static analysis module is used for positioning the joint points and, through a support vector machine, analyzing and extracting the relative distance from the child to the edge of the climbing net; the dynamic trend prediction module is used for acquiring some key joint points and predicting the child's climbing direction from the change trajectories of those key joint points in the image; the danger alarm module is used for sending out an alarm signal through a danger alarm when a possible fall of the child is predicted; and the security terminal is used for receiving the alarm signal and carrying out security protection.
Further, the image acquisition processing module comprises an image acquisition unit, a main processing unit and an image recognition output unit, wherein the image acquisition unit is used for acquiring the child climbing depth map shot by the depth camera; the main processing unit is used for recognizing the image with a convolutional neural network recognition algorithm; and the image recognition output unit is used for outputting the processed image to the target image acquisition module.
Further, the target image acquisition module comprises a skeleton data extraction unit, a climbing net modeling unit and a joint point projection unit, wherein the skeleton data extraction unit is used for extracting skeleton joint point data of children in the image; the climbing net modeling unit is used for establishing a two-dimensional coordinate system by taking the climbing net as a center; the joint point projection unit is used for projecting the joint points into the two-dimensional model through the perspective projection camera, and obtaining a two-dimensional image of the bone joint points of the children in the model.
Further, the static analysis module comprises a joint point positioning unit and a relative distance extraction unit, wherein the joint point positioning unit is used for acquiring the coordinates of the bone joint points in the two-dimensional model; the relative distance extraction unit is used for analyzing and extracting the relative distance from the child to the edge of the climbing net according to the overall joint point coordinates.
Further, the dynamic trend prediction module comprises a key joint point analysis unit, a deflection angle measurement unit and a moving direction prediction unit. Provided that the relative distance is still safe, the key joint point analysis unit is used for acquiring key joint points according to the relative distance, a key joint point being the joint point on the side closest to the edge of the climbing net; the deflection angle measurement unit is used for measuring the deflection angle of the overall movement trajectory of a single joint point; and the moving direction prediction unit is used for predicting the overall movement direction of the child according to the deflection angle. The danger alarm module comprises a falling danger prediction unit and a voice reminding unit, wherein the falling danger prediction unit is used for analyzing, according to the prediction data, whether the child is in danger of falling from the climbing net; when a danger is predicted, the voice reminding unit is used for reminding security personnel to protect the corresponding child, and the prediction result is transmitted to the system management center. The system management center comprises a danger frequency statistics unit, which counts the number of times that children are in or heading toward the corresponding dangerous area in the prediction results; a danger frequency threshold is set and the statistics are transmitted to the voice reminding unit, and when the number of times exceeds the threshold, the voice reminding unit is used for notifying security personnel to carry out safety reinforcement of the corresponding area.
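As an illustration only, the following Python sketch shows one way the danger frequency statistics unit's counting and threshold logic could work; the class name, method name and threshold value are assumptions for illustration and are not taken from the patent.

```python
# Minimal sketch (illustrative, not the patented implementation) of the danger frequency
# statistics unit: count predictions of a child being in or heading toward a dangerous
# area and flag when the count W exceeds a configured threshold W_threshold.
class DangerFrequencyCounter:
    def __init__(self, threshold: int):
        self.threshold = threshold   # W_threshold: allowed number of dangerous predictions
        self.count = 0               # W: dangerous predictions observed so far

    def record(self, in_or_toward_danger: bool) -> bool:
        """Record one prediction result; return True when safety reinforcement is needed."""
        if in_or_toward_danger:
            self.count += 1
        return self.count > self.threshold

# Example: with a threshold of 5, the sixth dangerous prediction triggers reinforcement.
counter = DangerFrequencyCounter(threshold=5)
alerts = [counter.record(True) for _ in range(6)]
print(alerts)  # [False, False, False, False, False, True]
```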
Further, the image is identified by using a convolutional network identification algorithm, which comprises the following steps:
s1: acquiring a child skeleton action sequence from the acquired images;
s2: visualizing the sequence into a series of bone color maps, and overlaying the bone color maps to obtain a bone energy map;
s3: and extracting the space-time characteristics of the climbing action from the images of the two groups of channels of the bone energy map and the depth map by using a convolutional neural network model.
Further, in step S3: a multiple convolutional neural network model is established; the input sequence I_m is converted into a series of color images {I_c, c = 1, 2, ..., C}; each image is normalized to N pixels and mean removal is applied to all input images; each color image I_c is then processed by a CNN, giving an output Y_c, after which the posterior probability after normalization is obtained according to the formula

prob(l | I_c) = exp(Y_c(l)) / Σ_{k=1}^{L} exp(Y_c(k)),

where l denotes the l-th action category and L the total number of action categories. All CNN outputs are then averaged according to the category score formula

score(l) = (1/C) Σ_{c=1}^{C} prob(l | I_c),

where prob(l | I_c) denotes the probability that image I_c belongs to the l-th action category. The L action categories are stored in the system management center; the action category of the images is judged from the average of all CNN outputs and matched against the action categories stored in the system management center, the spatio-temporal features of the climbing action are extracted, and the matched images and their features are output. Here CNN refers to a convolutional neural network. Building a two-stream convolutional neural network model that combines the depth map with the skeleton energy map enriches the extracted feature information; the convolutional neural network can take simply pre-processed images directly as input, extract features and output a recognition and classification result, and Softmax normalization produces the probability that a sample belongs to each category in the multi-class task, which improves the accuracy of recognizing children's climbing actions.
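The following Python/PyTorch sketch illustrates the per-image softmax followed by score averaging described above. The network architecture, class count, image size, all names, and the fusion of the two streams by simple score averaging are assumptions for illustration and not the patent's actual model.

```python
# Sketch: average softmax posteriors over a sequence of images, one CNN per stream.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10          # L: total number of action categories (assumed value)
IMAGE_SIZE = 64           # images normalized to N x N pixels (assumed value)

class SimpleCNN(nn.Module):
    """Minimal CNN producing a score vector Y_c for one input image I_c."""
    def __init__(self, in_channels: int, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (IMAGE_SIZE // 4) ** 2, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

def sequence_scores(images: torch.Tensor, model: nn.Module) -> torch.Tensor:
    """images: (C, channels, N, N) mean-removed images from one sequence.
    Returns the class-score vector obtained by averaging softmax posteriors over images."""
    with torch.no_grad():
        posteriors = F.softmax(model(images), dim=1)   # prob(l | I_c) for every image
    return posteriors.mean(dim=0)                      # average over all CNN outputs

# Two streams as in the description: skeleton energy maps (3-channel) and depth maps (1-channel).
energy_net, depth_net = SimpleCNN(3), SimpleCNN(1)
energy_images = torch.randn(8, 3, IMAGE_SIZE, IMAGE_SIZE)   # stand-in data
depth_images = torch.randn(8, 1, IMAGE_SIZE, IMAGE_SIZE)
combined = (sequence_scores(energy_images, energy_net)
            + sequence_scores(depth_images, depth_net)) / 2  # assumed fusion: score averaging
predicted_category = int(combined.argmax())
```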
Further, the skeleton data extraction unit is used for extracting the skeletal joint point data of the child in the image, and the climbing net modeling unit is used for establishing a two-dimensional coordinate system with the center of the climbing net as the origin; the length of the climbing net is a and its width is b. The dangerous area in the horizontal direction is set as the region more than a_boundary from the center of the net, and the dangerous area in the vertical direction as the region more than b_boundary from the center. The joint point projection unit projects the joint points into the two-dimensional coordinate system, and the joint point positioning unit obtains the set of position coordinates of the child's skeletal joint points as (X, Y) = {(X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n)}. The joint point with the largest absolute abscissa is selected, with coordinates (X_max, Y_j), and a_boundary is compared with X_max: if |X_max| < a_boundary, it is judged that the child has not entered the dangerous area in the horizontal direction; if |X_max| ≥ a_boundary, it is judged that the child has entered the dangerous area in the horizontal direction. The joint point with the largest absolute ordinate is selected, with coordinates (X_i, Y_max), and b_boundary is compared with Y_max: if |Y_max| < b_boundary, it is judged that the child has not entered the dangerous area in the vertical direction; if |Y_max| ≥ b_boundary, it is judged that the child has entered the dangerous area in the vertical direction, and voice reminder information is sent to the security terminal. Whether the child is currently in a dangerous area of the climbing net is measured through the relative distance between the child's skeletal joint points and the net, giving static behavior recognition; the purpose is to help remind security personnel of dangerous situations in time, achieving safety monitoring of the child, and at the same time to provide the positions of the key joint points for the subsequent prediction of the child's moving direction, facilitating dynamic safety monitoring of the child.
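The following is a minimal Python sketch of the static boundary check just described; the function name is an illustrative assumption, while the sample joint coordinates and boundary values are taken from embodiment one later in the description.

```python
# Sketch of the static boundary check: take the joint with the largest |x| and the joint
# with the largest |y| and compare them with the horizontal/vertical boundary distances,
# measured in a coordinate system whose origin is the climbing-net center.
from typing import List, Tuple

def static_danger_check(joints: List[Tuple[float, float]],
                        a_boundary: float, b_boundary: float) -> Tuple[bool, bool]:
    """Return (in_horizontal_danger, in_vertical_danger) for one frame of joint points."""
    x_max, _ = max(joints, key=lambda p: abs(p[0]))   # joint with largest |abscissa|
    _, y_max = max(joints, key=lambda p: abs(p[1]))   # joint with largest |ordinate|
    return abs(x_max) >= a_boundary, abs(y_max) >= b_boundary

# Values taken from embodiment one of the description:
joints = [(0.0, 0.4), (0.1, 0.3), (0.0, 0.2), (0.4, 0.0), (0.5, 0.2)]
print(static_danger_check(joints, a_boundary=0.6, b_boundary=0.47))  # (False, False)
```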
Further, when it is judged that the child has not entered a dangerous area on the climbing net, the joint point with coordinates (X_max, Y_j) is taken as the key joint point in the horizontal direction and the joint point with coordinates (X_i, Y_max) as the key joint point in the vertical direction. The key joint point analysis unit is used for obtaining the movement change trajectories of the corresponding key joint points, and the vectors connecting the start and end positions of the key joint point trajectories in the horizontal and vertical directions are obtained as (x, y) and (x', y') respectively. The deflection angle measurement unit is used for measuring the deflection angle of the overall movement trajectory of each key joint point: the deflection angle α of the key joint point in the horizontal direction is calculated according to the formula

α = arccos( x / √(x² + y²) ),

and the deflection angle β of the key joint point in the vertical direction is calculated according to the formula

β = arccos( y' / √(x'² + y'²) ).

The deflection angles of the overall movement trajectories of the key joint points are transmitted to the movement direction prediction unit.
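A short Python sketch of the deflection-angle computation above: the angle between the trajectory's start-to-end vector and the positive x-axis (horizontal key joint) or positive y-axis (vertical key joint). The formulas are reconstructed to match the numbers in embodiment two, and the function name is an illustrative assumption.

```python
# Sketch of the deflection-angle formulas, checked against embodiment two.
import math

def deflection_angles(horizontal_vec, vertical_vec):
    """horizontal_vec = (x, y), vertical_vec = (x', y'); returns (alpha, beta) in degrees."""
    x, y = horizontal_vec
    xp, yp = vertical_vec
    alpha = math.degrees(math.acos(x / math.hypot(x, y)))    # angle from the positive x-axis
    beta = math.degrees(math.acos(yp / math.hypot(xp, yp)))  # angle from the positive y-axis
    return alpha, beta

# Embodiment two: (x, y) = (-0.3, -0.2), (x', y') = (0.2, -0.1)
alpha, beta = deflection_angles((-0.3, -0.2), (0.2, -0.1))
print(round(alpha), round(beta))  # approximately 146 and 117 degrees
```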
Further, the movement direction prediction unit predicts the child's movement direction: if X_max > 0 and α < 90°, or X_max < 0 and α > 90°, the child is predicted to be moving toward the dangerous area in the horizontal direction of the climbing net; if X_max > 0 and α > 90°, or X_max < 0 and α < 90°, the child is predicted to be moving away from the dangerous area in the horizontal direction; if Y_max > 0 and β < 90°, or Y_max < 0 and β > 90°, the child is predicted to be moving toward the dangerous area in the vertical direction; if Y_max > 0 and β > 90°, or Y_max < 0 and β < 90°, the child is predicted to be moving away from the dangerous area in the vertical direction. When the child is predicted to be moving toward a dangerous area, the voice reminding unit sends an alarm signal to the security terminal, the danger frequency statistics unit counts the number of times W that children are in or heading toward the corresponding dangerous area in the prediction results, and a danger frequency threshold W_threshold is set; if W > W_threshold, the number of dangerous events has exceeded the threshold and the voice reminding unit notifies security personnel to reinforce the corresponding area. The deflection angles are all measured with respect to the positive directions. The overall climbing direction of the child is obtained from the change trajectories of the child's key joint points; the purpose of calculating the deflection angles of the key joint points in the horizontal and vertical directions is to predict whether the child is approaching a dangerous area, so that security personnel can be reminded in advance to guide the child back to the central part of the net and a fall from the dangerous area is avoided. The overall movement change is thus predicted from the change of a single joint point, and inferring the whole from local changes reduces the difficulty of the behavior recognition work.
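As a sketch, the direction rule above can be written as a single predicate; the function name is an illustrative assumption, and the values plugged in below come from embodiment two later in the description.

```python
# Sketch of the rule-based movement-direction prediction described above.
def moving_toward_danger(coord: float, angle_deg: float) -> bool:
    """coord is X_max (paired with alpha) or Y_max (paired with beta); angle_deg is the
    trajectory deflection from the corresponding positive axis. Returns True when the
    child is predicted to be moving toward the dangerous area in that direction."""
    return (coord > 0 and angle_deg < 90) or (coord < 0 and angle_deg > 90)

# Values from embodiment two: X_max = 0.5, alpha ≈ 146°; Y_max = 0.4, beta ≈ 117°.
print(moving_toward_danger(0.5, 146))  # False: moving away from the horizontal danger area
print(moving_toward_danger(0.4, 117))  # False: moving away from the vertical danger area
```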
Compared with the prior art, the invention has the following beneficial effects:
according to the invention, the image of the child on the climbing net is obtained through shooting by the depth camera, and the image is identified through a double-flow convolutional neural network identification algorithm: the double-flow convolutional neural network model is established by combining the depth map and the bone energy map, the extracted characteristic information is enriched, the convolutional neural network can directly input the image subjected to simple pretreatment and extract the characteristics, the recognition classification result is output, the probability that the sample belongs to each category is generated under the multi-classification task by using Softmax normalization, and the accuracy of child climbing action recognition is improved; the method comprises the steps of extracting skeleton joint point data of children, measuring whether the children are currently in a dangerous area of a climbing net or not through the relative distance between the skeleton joint points of the children and the climbing net so as to conduct static behavior recognition, acquiring key joint points, acquiring the whole climbing direction of the children through the change track of the key joint points of the children, predicting the whole movement change from one joint point change, presuming the whole through the local change, relieving the behavior recognition work difficulty, helping security defenders remind the children to play in a net center in time, avoiding falling down from the dangerous area, and improving the climbing safety.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a block diagram of a video behavior recognition security system based on deep learning of the present invention;
FIG. 2 is a block diagram of a video behavior recognition security system based on deep learning according to the present invention;
FIG. 3 is a behavior recognition flow chart of the video behavior recognition security system based on deep learning.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Referring to FIGS. 1-3, the present invention provides the following technical solution: a video behavior recognition security system based on deep learning, characterized in that the system comprises: an image acquisition processing module, a target image acquisition module, a system management center, a static analysis module, a dynamic trend prediction module, a danger alarm module and a security terminal;
the image acquisition processing module acquires a real-time image of the child on the climbing net by using the depth camera, and recognizes the image by using a convolutional neural network recognition algorithm; outputting the corresponding image and the identification result thereof to a target image acquisition module, wherein the target image acquisition module is used for acquiring a two-dimensional image of the bone joint point of the child; the static analysis module is used for positioning the joint points, and analyzing and extracting the relative distance from the child to the edge of the climbing net through the support vector machine; the dynamic trend prediction module is used for acquiring part of key nodes and predicting the climbing movement direction of the child according to the change track of the part of key nodes in the image; the danger alarm module is used for sending out an alarm signal through the danger alarm when the possibility of falling of the child is predicted; the security terminal is used for receiving the alarm signal and performing security.
The image acquisition processing module comprises an image acquisition unit, a main processing unit and an image recognition output unit, wherein the image acquisition unit is used for acquiring the child climbing depth map shot by the depth camera; the main processing unit is used for recognizing the image with a convolutional neural network recognition algorithm; and the image recognition output unit is used for outputting the processed image to the target image acquisition module.
The target image acquisition module comprises a skeleton data extraction unit, a climbing net modeling unit and a joint point projection unit, wherein the skeleton data extraction unit is used for extracting skeleton joint point data of children in the image; the climbing net modeling unit is used for establishing a two-dimensional coordinate system by taking the climbing net as a center; the joint point projection unit is used for projecting joint points into the two-dimensional model through the perspective projection camera, and obtaining a two-dimensional image of the bone joint points of the children in the model.
The static analysis module comprises a joint point positioning unit and a relative distance extraction unit, wherein the joint point positioning unit is used for acquiring the coordinates of the bone joint points in the two-dimensional model; the relative distance extraction unit is used for extracting the relative distance from the child to the edge of the climbing net according to the overall joint point coordinate analysis.
The dynamic trend prediction module comprises a key joint point analysis unit, a deflection angle measurement unit and a movement direction prediction unit. Provided that the relative distance is still safe, the key joint point analysis unit is used for acquiring key joint points according to the relative distance, a key joint point being the joint point on the side closest to the edge of the climbing net; the deflection angle measurement unit is used for measuring the deflection angle of the overall movement trajectory of a single joint point; and the movement direction prediction unit is used for predicting the overall movement direction of the child according to the deflection angle. The danger alarm module comprises a falling danger prediction unit and a voice reminding unit, wherein the falling danger prediction unit analyzes, according to the prediction data, whether the child is in danger of falling from the climbing net. When a danger is predicted, the voice reminding unit reminds security personnel to protect the child; after receiving the reminder, the security personnel go to the position of the child shown in the image and check the situation: if the child has not fallen from the dangerous area but is still in it, the child is reminded to play in the center of the climbing net; if the child has fallen from the dangerous area, the child is examined in time and any abnormal condition is treated. The prediction results are transmitted to the system management center, which comprises a danger frequency statistics unit; this unit counts the number of times that children are in or heading toward the corresponding dangerous area in the prediction results, a danger frequency threshold is set, and the statistics are transmitted to the voice reminding unit; when the number of times exceeds the threshold, the voice reminding unit notifies security personnel to carry out safety reinforcement of the corresponding area.
The image is identified by a convolutional network identification algorithm, which comprises the following steps:
s1: acquiring a child skeleton action sequence from the acquired images;
s2: visualizing the sequence into a series of bone color maps, and overlaying the bone color maps to obtain a bone energy map;
s3: and extracting the space-time characteristics of the climbing action from the images of the two groups of channels of the bone energy map and the depth map by using a convolutional neural network model.
In step S3: establishing a multiple convolution neural network model, and inputting a sequence I m Obtaining a series of color image sets as
Figure BDA0003474173630000061
Figure BDA0003474173630000062
Normalizing each image to N pixels, applying mean removal to all input images, then each color image is processed by CNN for image +.>
Figure BDA0003474173630000071
Output Y c Thereafter, according to the formula:
Figure BDA0003474173630000072
obtaining posterior probability after normalization, wherein L represents the first action category, L represents the total number of action categories, and the posterior probability is obtained according to a category score formula
Figure BDA0003474173630000073
Average all CNN outputs, prob (l|l c ) Representing an image
Figure BDA0003474173630000074
The probability of belonging to the first action category, L action categories are stored in the system management center, and the image is judged according to the average value of all CNN outputs +.>
Figure BDA0003474173630000075
The method comprises the steps of matching the action categories with action categories stored in a system management center, extracting space-time characteristics of climbing actions, outputting images and characteristics thereof after matching, wherein CNN refers to a convolutional neural network, establishing a double-flow convolutional neural network model by combining a depth map and a skeleton energy map, enriching extracted characteristic information, directly inputting the images subjected to simple pretreatment by the convolutional neural network, extracting characteristics, outputting a recognition classification result, generating probability that samples belong to each category under multi-classification tasks by using Softmax normalization, and effectively improving the accuracy of child climbing action recognition.
The skeleton data extraction unit is used to extract the skeletal joint point data of the child in the image, and the climbing net modeling unit is used to establish a two-dimensional coordinate system with the center of the climbing net as the origin; the length of the climbing net is a and the width is b. The dangerous area in the horizontal direction is set as the region more than a_boundary from the center of the net, and the dangerous area in the vertical direction as the region more than b_boundary from the center. The joint point projection unit projects the joint points into the two-dimensional coordinate system, and the joint point positioning unit obtains the set of position coordinates of the child's skeletal joint points as (X, Y) = {(X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n)}. The joint point with the largest absolute abscissa is selected, with coordinates (X_max, Y_j), and a_boundary is compared with X_max: if |X_max| < a_boundary, it is judged that the child has not entered the dangerous area in the horizontal direction; if |X_max| ≥ a_boundary, it is judged that the child has entered the dangerous area in the horizontal direction. The joint point with the largest absolute ordinate is selected, with coordinates (X_i, Y_max), and b_boundary is compared with Y_max: if |Y_max| < b_boundary, it is judged that the child has not entered the dangerous area in the vertical direction; if |Y_max| ≥ b_boundary, it is judged that the child has entered the dangerous area in the vertical direction, and voice reminder information is sent to the security terminal. Whether the child is currently in a dangerous area of the climbing net is measured through the relative distance between the child's skeletal joint points and the net, giving static behavior recognition; the purpose is to help remind security personnel of dangerous situations in time, achieving safety monitoring of the child, and at the same time to provide the positions of the key joint points for the subsequent prediction of the child's moving direction, facilitating dynamic safety monitoring of the child.
When it is judged that the child has not entered a dangerous area on the climbing net, the joint point with coordinates (X_max, Y_j) is taken as the key joint point in the horizontal direction and the joint point with coordinates (X_i, Y_max) as the key joint point in the vertical direction. The key joint point analysis unit is used to acquire the movement change trajectory of each corresponding key joint point, and the vectors connecting the start and end positions of the key joint point trajectories in the horizontal and vertical directions are obtained as (x, y) and (x', y') respectively. The deflection angle measurement unit measures the deflection angle of the overall movement trajectory of each key joint point: the deflection angle α of the key joint point in the horizontal direction is calculated as

α = arccos( x / √(x² + y²) ),

and the deflection angle β of the key joint point in the vertical direction is calculated as

β = arccos( y' / √(x'² + y'²) ).

The deflection angles of the overall movement trajectories of the key joint points are transmitted to the movement direction prediction unit.
The movement direction prediction unit is used to predict the child's movement direction: if X_max > 0 and α < 90°, or X_max < 0 and α > 90°, the child is predicted to be moving toward the dangerous area in the horizontal direction of the climbing net; if X_max > 0 and α > 90°, or X_max < 0 and α < 90°, the child is predicted to be moving away from the dangerous area in the horizontal direction; if Y_max > 0 and β < 90°, or Y_max < 0 and β > 90°, the child is predicted to be moving toward the dangerous area in the vertical direction; if Y_max > 0 and β > 90°, or Y_max < 0 and β < 90°, the child is predicted to be moving away from the dangerous area in the vertical direction. When the child is predicted to be moving toward a dangerous area, the voice reminding unit sends an alarm signal to the security terminal, the danger frequency statistics unit counts the number of times W that children are in or heading toward the corresponding dangerous area in the prediction results, and a danger frequency threshold W_threshold is set; if W > W_threshold, the number of dangerous events has exceeded the threshold and the voice reminding unit notifies security personnel to reinforce the corresponding area. The deflection angles are all measured with respect to the positive directions. The overall climbing direction of the child is obtained from the change trajectories of the child's key joint points; the purpose of calculating the deflection angles of the key joint points in the horizontal and vertical directions is to predict whether the child is approaching a dangerous area, so that security personnel can be reminded in advance to guide the child back to the central part of the net and falls from the dangerous area are avoided. The overall movement change is thus predicted from the change of a single joint point, and inferring the whole from local changes effectively reduces the difficulty of the behavior recognition work.
Embodiment one: the skeleton data extraction unit extracts the skeletal joint point data of the child in the image, and the climbing net modeling unit establishes a two-dimensional coordinate system with the center of the climbing net as the origin. The length of the climbing net is a = 2 and the width is b = 1.5 (in meters). The dangerous area in the horizontal direction is set as the region more than a_boundary = 0.6 from the center of the net, and the dangerous area in the vertical direction as the region more than b_boundary = 0.47 from the center. The joint point projection unit projects the joint points into the two-dimensional coordinate system, and the joint point positioning unit obtains the set of position coordinates of the child's skeletal joint points as (X, Y) = {(X_1, Y_1), (X_2, Y_2), (X_3, Y_3), (X_4, Y_4), (X_5, Y_5)} = {(0, 0.4), (0.1, 0.3), (0, 0.2), (0.4, 0), (0.5, 0.2)}. The joint point with the largest absolute abscissa is selected, with coordinates (X_max, Y_j) = (0.5, 0.2); comparing a_boundary with X_max gives |X_max| < a_boundary, so it is judged that the child has not entered the dangerous area in the horizontal direction. The joint point with the largest absolute ordinate is selected, with coordinates (X_i, Y_max) = (0, 0.4); comparing b_boundary with Y_max gives |Y_max| < b_boundary, so it is judged that the child has not entered the dangerous area in the vertical direction.
Embodiment two: when it is judged that the child has not entered a dangerous area on the climbing net, the joint point with coordinates (X_max, Y_j) = (0.5, 0.2) is taken as the key joint point in the horizontal direction and the joint point with coordinates (X_i, Y_max) = (0, 0.4) as the key joint point in the vertical direction. The key joint point analysis unit obtains the movement change trajectories of the corresponding key joint points, and the vectors connecting the start and end positions of the key joint point trajectories in the horizontal and vertical directions are obtained as (x, y) = (-0.3, -0.2) and (x', y') = (0.2, -0.1) respectively. The deflection angle measurement unit measures the deflection angles of the overall movement trajectories of the key joint points: from α = arccos( x / √(x² + y²) ) the deflection angle of the key joint point in the horizontal direction is α ≈ 146°, and from β = arccos( y' / √(x'² + y'²) ) the deflection angle of the key joint point in the vertical direction is β ≈ 117°. The deflection angles of the overall movement trajectories are transmitted to the movement direction prediction unit, which predicts the child's movement direction: X_max > 0 and α > 90°, so the child is predicted to be moving away from the dangerous area in the horizontal direction of the climbing net; Y_max > 0 and β > 90°, so the child is predicted to be moving away from the dangerous area in the vertical direction of the climbing net.
Finally, it should be noted that the foregoing is merely a preferred embodiment of the present invention and is not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or replace some of the technical features with equivalents. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the scope of protection of the present invention.

Claims (7)

1. A video behavior recognition security system based on deep learning, characterized in that the system comprises: an image acquisition processing module, a target image acquisition module, a system management center, a static analysis module, a dynamic trend prediction module, a danger alarm module and a security terminal;
the image acquisition processing module acquires a real-time image of the child on the climbing net by using a depth camera, and recognizes the image by using a convolutional neural network recognition algorithm; outputting the corresponding image and the identification result thereof to the target image acquisition module, wherein the target image acquisition module is used for acquiring a two-dimensional image of the bone joint point of the child; the static analysis module is used for positioning the joint points, analyzing and extracting the relative distance from the child to the edge of the climbing net; the dynamic trend prediction module is used for acquiring part of key nodes and predicting the climbing movement direction of the child according to the change track of the part of key nodes in the image; the danger alarm module is used for sending out an alarm signal when the possibility of falling of the child is predicted; the security protection terminal is used for receiving the alarm signal and performing security protection;
the image acquisition processing module comprises an image acquisition unit, a main processing unit and an image recognition output unit, wherein the image acquisition unit is used for acquiring the child climbing depth map shot by the depth camera; the main processing unit is used for recognizing the image with a convolutional neural network recognition algorithm; and the image recognition output unit is used for outputting the processed image to the target image acquisition module;
the image is identified by a convolutional network identification algorithm, which comprises the following steps:
s1: acquiring a child skeleton action sequence from the acquired images;
s2: visualizing the sequence into a series of bone color maps, and overlaying the bone color maps to obtain a bone energy map;
s3: extracting space-time characteristics of climbing actions from images of two groups of channels of a bone energy map and a depth map by using a convolutional neural network model;
in step S3: a multiple convolutional neural network model is established, an input sequence I_m is converted into a series of color images {I_c, c = 1, 2, ..., C}, each image is normalized to N pixels, mean removal is applied to all input images, and each color image I_c is then processed by a CNN to give an output Y_c, after which the posterior probability after normalization is obtained according to the formula

prob(l | I_c) = exp(Y_c(l)) / Σ_{k=1}^{L} exp(Y_c(k)),

wherein l denotes the l-th action category and L denotes the total number of action categories; all CNN outputs are averaged according to the category score formula

score(l) = (1/C) Σ_{c=1}^{C} prob(l | I_c),

wherein prob(l | I_c) denotes the probability that image I_c belongs to the l-th action category; the L action categories are stored in the system management center, the action category of the images is judged according to the average of all CNN outputs and matched against the action categories stored in the system management center, the spatio-temporal features of the climbing action are extracted, and the matched images and their features are output.
2. The video behavior recognition security system based on deep learning of claim 1, wherein: the target image acquisition module comprises a skeleton data extraction unit, a climbing net modeling unit and a joint point projection unit, wherein the skeleton data extraction unit is used for extracting skeleton joint point data of children in an image; the climbing net modeling unit is used for establishing a two-dimensional coordinate system by taking the climbing net as a center; the joint point projection unit is used for projecting joint points into the two-dimensional model and obtaining a two-dimensional image of the bone joint points of the children in the model.
3. The video behavior recognition security system based on deep learning of claim 2, wherein: the static analysis module comprises a joint point positioning unit and a relative distance extraction unit, wherein the joint point positioning unit is used for acquiring the coordinates of a bone joint point in the two-dimensional model; the relative distance extraction unit is used for analyzing and extracting the relative distance from the child to the edge of the climbing net according to the overall joint point coordinates.
4. The video behavior recognition security system based on deep learning of claim 1, wherein: the dynamic trend prediction module comprises a key joint point analysis unit, a deflection angle measurement unit and a movement direction prediction unit, wherein the key joint point analysis unit is used for acquiring a key joint point according to the relative distance on the premise of safety of the relative distance, and the key joint point is a side joint point closest to the edge of a climbing net; the deflection angle measuring unit is used for measuring the deflection angle of the whole movement track of the single articulation point; the movement direction prediction unit is used for predicting the overall movement direction of the child according to the deflection angle; the system management center comprises a danger frequency statistics unit, the number of times that the child is in or goes to a corresponding danger area in the predicted result is counted by the danger frequency statistics unit, a danger number threshold is set, the counted result is transmitted to the voice reminding unit, and when the number of times that the child is in or goes to the corresponding danger area exceeds the threshold, the voice reminding unit is used for informing the safety protection personnel of safety reinforcement to the corresponding area.
5. The video behavior recognition security system based on deep learning of claim 3, wherein: the skeleton data extraction unit is used for extracting the skeletal joint point data of the child in the image, and the climbing net modeling unit is used for establishing a two-dimensional coordinate system with the center of the climbing net as the origin; the length of the climbing net is a and the width is b; the dangerous area in the horizontal direction is set as the region more than a_boundary from the center of the climbing net, and the dangerous area in the vertical direction as the region more than b_boundary from the center; the joint point projection unit projects the joint points into the two-dimensional coordinate system, and the joint point positioning unit obtains the set of position coordinates of the child's skeletal joint points as (X, Y) = {(X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n)}; the joint point with the largest absolute abscissa is selected, with coordinates (X_max, Y_j), and a_boundary is compared with X_max: if |X_max| < a_boundary, it is judged that the child has not entered the dangerous area in the horizontal direction; if |X_max| ≥ a_boundary, it is judged that the child has entered the dangerous area in the horizontal direction; the joint point with the largest absolute ordinate is selected, with coordinates (X_i, Y_max), and b_boundary is compared with Y_max: if |Y_max| < b_boundary, it is judged that the child has not entered the dangerous area in the vertical direction; if |Y_max| ≥ b_boundary, it is judged that the child has entered the dangerous area in the vertical direction, and voice reminder information is sent to the security terminal.
6. The video behavior recognition security system based on deep learning of claim 4, wherein: when it is judged that the child has not entered a dangerous area on the climbing net, the joint point with coordinates (X_max, Y_j) is taken as the key joint point in the horizontal direction and the joint point with coordinates (X_i, Y_max) as the key joint point in the vertical direction; the key joint point analysis unit is used for obtaining the movement change trajectories of the corresponding key joint points, the connection vector coordinates of the start and end positions of the key joint point trajectories in the horizontal and vertical directions are obtained as (x, y) and (x', y') respectively, and the deflection angle measurement unit is used for measuring the deflection angle of the overall movement trajectory of each key joint point: the deflection angle α of the key joint point in the horizontal direction is calculated according to the formula

α = arccos( x / √(x² + y²) ),

the deflection angle β of the key joint point in the vertical direction is calculated according to the formula

β = arccos( y' / √(x'² + y'²) ),

and the deflection angles of the overall movement trajectories of the key joint points are transmitted to the movement direction prediction unit.
7. The video behavior recognition security system based on deep learning of claim 6, wherein: the movement direction prediction unit is used for predicting the child's movement direction: if X_max > 0 and α < 90°, or X_max < 0 and α > 90°, the child is predicted to be moving toward the dangerous area in the horizontal direction of the climbing net; if X_max > 0 and α > 90°, or X_max < 0 and α < 90°, the child is predicted to be moving away from the dangerous area in the horizontal direction; if Y_max > 0 and β < 90°, or Y_max < 0 and β > 90°, the child is predicted to be moving toward the dangerous area in the vertical direction; if Y_max > 0 and β > 90°, or Y_max < 0 and β < 90°, the child is predicted to be moving away from the dangerous area in the vertical direction; when the child is predicted to be moving toward a dangerous area, the voice reminding unit sends an alarm signal to the security terminal, the danger frequency statistics unit counts the number of times W that children are in or heading toward the corresponding dangerous area in the prediction results, and a danger frequency threshold W_threshold is set; if W > W_threshold, the number of dangerous events has exceeded the threshold, and the voice reminding unit notifies the safety protection personnel to carry out safety reinforcement of the corresponding area.
CN202210050744.9A 2022-01-17 2022-01-17 Video behavior recognition security system based on deep learning Active CN114360209B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210050744.9A CN114360209B (en) 2022-01-17 2022-01-17 Video behavior recognition security system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210050744.9A CN114360209B (en) 2022-01-17 2022-01-17 Video behavior recognition security system based on deep learning

Publications (2)

Publication Number Publication Date
CN114360209A CN114360209A (en) 2022-04-15
CN114360209B (en) 2023-06-23

Family

ID=81092265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210050744.9A Active CN114360209B (en) 2022-01-17 2022-01-17 Video behavior recognition security system based on deep learning

Country Status (1)

Country Link
CN (1) CN114360209B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115862267B (en) * 2022-12-01 2023-10-13 深圳市特区建发科技园区发展有限公司 Intelligent property early warning reminding management system and method based on artificial intelligence
CN115761902B (en) * 2022-12-08 2023-07-21 厦门农芯数字科技有限公司 Inlet disinfection identification method based on human skeleton joint point identification

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201820332U (en) * 2010-07-29 2011-05-04 上海理工大学 Warning device capable of indicating children at dangerous positions
CN102074095A (en) * 2010-11-09 2011-05-25 无锡中星微电子有限公司 System and method for monitoring infant behaviors
JP2017046209A (en) * 2015-08-27 2017-03-02 富士通株式会社 Image processing apparatus, image processing method, and image processing program
CN106874863A (en) * 2017-01-24 2017-06-20 南京大学 Vehicle based on depth convolutional neural networks is disobeyed and stops detection method of driving in the wrong direction
JP2018067203A (en) * 2016-10-20 2018-04-26 学校法人 埼玉医科大学 Danger notification device, danger notification method, and calibration method for danger notification device
CN108830252A (en) * 2018-06-26 2018-11-16 哈尔滨工业大学 A kind of convolutional neural networks human motion recognition method of amalgamation of global space-time characteristic
CN109949540A (en) * 2019-04-03 2019-06-28 合肥科塑信息科技有限公司 A kind of artificial intelligence early warning system
CN111815898A (en) * 2019-04-12 2020-10-23 易程(苏州)电子科技股份有限公司 Infant behavior monitoring and alarming system and method
CN111968343A (en) * 2020-08-25 2020-11-20 方艳梅 Indoor infant safety protection system

Also Published As

Publication number Publication date
CN114360209A (en) 2022-04-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant