CN114155595A - Behavior detection monitoring method, intelligent camera and intelligent monitoring system - Google Patents


Info

Publication number: CN114155595A
Authority: CN (China)
Application number: CN202111155799.8A
Other languages: Chinese (zh)
Inventors: 张意通, 乔国坤
Current Assignee: Shenzhen Aishen Yingtong Information Technology Co Ltd
Original Assignee: Shenzhen Aishen Yingtong Information Technology Co Ltd
Prior art keywords: detection, target, image, monitoring, key points
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Application filed by Shenzhen Aishen Yingtong Information Technology Co Ltd
Priority: CN202111155799.8A
Publication: CN114155595A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Alarm Systems (AREA)

Abstract

The application relates to a behavior detection monitoring method, an intelligent camera and an intelligent monitoring system. The method comprises: acquiring a detection sequence; determining a target set based on the detection images; determining attack key points and stress key points based on the target characteristic regions; and comparing the distance between an attack key point of one target characteristic region in the target set and a stress key point of another target characteristic region in the set, and performing an alarm step accordingly. By measuring the distance between attack key points and stress key points, the method estimates whether two persons are in physical contact and, in turn, whether improper behavior is occurring; if so, an early-warning signal is output to trigger the alarm step. Through image analysis, the behavior of persons can be evaluated in time, improper campus behavior can be discovered and stopped promptly, the persons involved can be protected in time, and the retained images provide evidence for subsequent punishment of the improper behavior.

Description

Behavior detection monitoring method, intelligent camera and intelligent monitoring system
Technical Field
The present application relates to the field of behavior detection, and in particular, to a behavior detection monitoring method, an intelligent camera, and an intelligent monitoring system.
Background
In recent years, as network monitoring has become widespread, more videos and pictures of campus incidents have been exposed. The improper behavior of some students often injures other students, while many teachers and parents, busy with work, find it difficult to notice such behavior or its effect on the students. How to discover improper behavior in time, protect students promptly, and provide evidence for the subsequent punishment and education of those responsible is therefore a problem that needs to be solved.
Disclosure of Invention
A first object of the present application is to provide a behavior detection monitoring method, which has the characteristic of detecting the improper behavior of persons in time.
The above object of the present invention is achieved by the following technical solutions:
the behavior detection monitoring method comprises the following steps:
acquiring a detection sequence; wherein the detection sequence comprises a plurality of frames of detection images associated with persons who may engage in improper behavior;
determining a target set based on the detection image; wherein the target set comprises at least two target characteristic regions, each corresponding one-to-one to a person in the detection image;
determining attack key points and stress key points based on the target characteristic regions; wherein the attack key points reflect the body parts with which a person launches an attack action, and the stress key points reflect the body parts on which a person receives an attack action;
and comparing the distance between an attack key point of one target characteristic region in the target set and a stress key point of another target characteristic region in the set, and performing an alarm step accordingly.
By adopting this technical scheme, a plurality of target characteristic regions are extracted from the detection image to reflect the distribution of a plurality of persons. Analyzing the attack key points in a target characteristic region determines the position of the body parts, such as the hands, with which a person can launch an attack, while analyzing the stress key points determines the position of the body parts, such as the face, that can be attacked. By measuring the distance between attack key points and stress key points, it can be estimated whether one person's attack-key-point body part is in contact with another person's stress-key-point body part, and in turn whether improper behavior is occurring; if so, an early-warning signal is output to trigger the alarm step. Through image analysis, the behavior of persons can be evaluated in time, improper behavior on campus can be discovered in time, supervisors can conveniently stop it in time, the persons involved are protected promptly, and the retained images provide evidence for the subsequent punishment and education of those responsible.
Optionally, in a specific method for obtaining a detection sequence, the method includes:
acquiring a monitoring sequence; wherein the monitoring sequence comprises a plurality of frames of continuous monitoring images;
determining an original characteristic region based on the monitoring image; wherein the original feature region is used for indicating a person in the monitoring image;
carrying out potential danger judgment based on each original characteristic region in the monitoring image, and determining a preliminary image according to the judgment result; wherein the judgment condition of the potential danger judgment is whether the distance between any two original characteristic regions is smaller than a first safety distance;
and determining n consecutive frames of preliminary images in the monitoring sequence as a detection sequence, wherein n is a positive integer.
By adopting this technical scheme, the distance between different persons in the monitoring image is reflected by the distance between their original characteristic regions. When the distance between any two persons remains too close for n frames, the condition for improper behavior between them can be considered met; conversely, when two persons are far apart, physical interaction is unlikely, subsequent calculation and analysis are unnecessary, and the probability of misjudgment is reduced.
Optionally, in a specific method for determining an original feature region based on a monitored image, the method includes:
performing target detection based on the monitoring image, and determining the original characteristic regions; wherein the backbone network for target detection is mobilenetv2, and the branches of the backbone network have strides step = {8, 16, 24}.
By adopting the technical scheme, a plurality of branches are set, so that the target with larger area in the image due to the close distance and the target with smaller area in the image due to the far distance can be considered, and the analysis accuracy is improved.
Optionally, the Aspect ratio of the target detection is Aspect ratios = {1, 0.5, 0.33, 0.25 }.
By adopting this technical scheme, a plurality of aspect ratios are set, making the detection suitable for persons in various postures such as standing, squatting and sitting, and improving the analysis accuracy.
Optionally, in the method for determining a loss function of target detection, the method includes:
determining simulation key points, and converting the simulation key points into the maximum activation points in the matrix through Gaussian distribution; the simulation key points are key points used for indicating characteristic parts of the human body;
performing an L2 loss calculation based on the matrix output by the backbone network to determine the loss function.
By adopting the technical scheme, the simulated key points are converted into the maximum activation points to form a feature map capable of reflecting a plurality of local features of the human body, and the plurality of local features of the human body can have a mutual correlation relationship, so that the training model can learn context semantic information in the image, the probability of overfitting can be effectively reduced, and more accurate key points can be predicted in subsequent analysis.
Optionally, the method further includes:
an alarming step: sending the detection image to a monitoring background, so that the detection image is forwarded to a supervisor via short message or the network for confirmation.
By adopting this technical scheme, the information is pushed to the supervisor by short message or over the network, reminding the supervisor to confirm the improper behavior or to intervene and stop it.
The second purpose of the application is to provide a behavior detection monitoring device which has the characteristic of being capable of detecting the improper behavior of personnel in time.
The second objective of the present invention is achieved by the following technical solution, namely a behavior detection monitoring device comprising:
the image acquisition module is used for acquiring a detection sequence; wherein the detection sequence comprises a plurality of frames of detection images associated with a person capable of causing misbehaviour;
the target selection module is used for determining a target set based on the detection image; the target set comprises at least two target characteristic areas, and the target characteristic areas correspond to one person in the detection image one by one;
the characteristic extraction module is used for determining attack key points and stress key points based on the target characteristic region; the attack key points are used for reflecting human body parts of the person who sends attack actions, and the stress key points are used for reflecting human body parts of the person who receives the attack actions;
and the alarm triggering module is used for comparing the distance between the attack key point of one target characteristic region in the target set and the stress key point of the other target characteristic region in the target set and executing the alarm step.
The third purpose of this application is to provide an intelligent camera, has the characteristics that can in time detect personnel's improper action.
The third object of the invention is achieved by the following technical scheme:
an intelligent camera comprises a memory and a processor, wherein the memory is stored with a computer program which can be loaded by the processor and executes the behavior detection monitoring method.
The fourth purpose of this application is to provide an intelligent monitoring system, has the characteristics that can in time detect personnel's improper behavior.
The fourth object of the present invention is achieved by the following technical solutions:
the utility model provides an intelligent monitoring system, above-mentioned intelligent camera still includes:
the background host is used for receiving and processing the detection image sent by the intelligent camera and sending alarm information to the remote terminal;
the communication device is used for providing network communication for the background host and the intelligent camera;
and the training device is used for carrying out model training so as to provide a target detection model for the intelligent camera.
Drawings
Fig. 1 is a schematic flow chart of a behavior detection monitoring method according to the present application.
Fig. 2 is a schematic diagram of the image acquisition module in a photographing state.
Fig. 3 is a schematic view of a sub-flow of acquiring a detection image of the behavior detection monitoring method of the present application.
Fig. 4 is a sub-flow diagram of the behavior detection monitoring method of the present application for generating a detection sequence.
Fig. 5 is a block diagram of the behavior detection monitoring apparatus of the present application.
FIG. 6 is a block schematic diagram of the intelligent monitoring system of the present application.
In the figure, 1, an image acquisition module; 2. a target selection module; 3. a feature extraction module; 4. an alarm triggering module; 5. an early warning module; 6. a background host; 7. a communication device; 8. a training device.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the reference numerals of the steps in this embodiment are only for convenience of description, and do not represent the limitation of the execution sequence of the steps, and in actual application, the execution sequence of the steps may be adjusted or performed simultaneously as needed, and these adjustments or substitutions all belong to the protection scope of the present invention.
Embodiments of the present application are described in further detail below with reference to figures 1-6 of the specification.
The first embodiment is as follows:
the embodiment of the application provides a behavior detection monitoring method, and the main flow of the method is described as follows.
Referring to fig. 1, S1, a detection sequence is acquired.
The detection sequence comprises a plurality of frames of detection images, and at least two persons with the possibility of improper behaviors and limb conflicts are shot in each detection image. The detection images are sequentially arranged according to the increasing sequence of the shooting time.
Referring to fig. 2 and 3, in step S1, the method includes:
and S11, acquiring a monitoring sequence in real time.
The monitoring sequence comprises a plurality of continuous monitoring images, and the monitoring images are images obtained by the image acquisition module 1 through real-time continuous shooting of a preset monitoring area.
And S12, determining the original characteristic region based on the monitoring image.
The original feature region refers to a region in the monitoring image used to indicate a human body. In this embodiment, the system extracts preselected boxes containing human bodies from the monitoring image by means of a target detection algorithm, preferably the SSD target detection algorithm, to determine the original characteristic regions. The target detection algorithm generates preselected boxes over the input image according to preset step sizes and aspect ratios; the backbone network then outputs fixed-length vectors from which the box and category of each target are learned, and in the inference stage the positions and categories of all targets are output in a single pass.
In this embodiment, the SSD object detection algorithm learns the feature vectors by using the network structure of mobilenetv2 as the backbone network. The network structure of mobilenetv2 is provided with 3 branches, step = {8, 16, 24 }. In the process of shooting an image by the image acquisition module 1, the smaller the distance between a human body and the image acquisition module 1 is, the larger the imaging area of the human body in the image is, the larger the original characteristic area is, and the larger the detected and identified target is; the larger the distance between the human body and the image acquisition module 1 is, the smaller the imaging area of the human body in the image is, the smaller the original characteristic area is, and the smaller the target to be detected and identified is. The 3 grades of branches are arranged, so that a large target with a short distance and a small target with a slightly long distance can be considered, and the accuracy of target detection is improved.
The preselected box for SSD target detection is set to a rectangle, and the aspect ratio is set to Aspect ratios = {1, 0.5, 0.33, 0.25}. While the image acquisition module 1 is shooting, persons can take various poses, such as standing, squatting, sitting and walking; setting a plurality of different aspect ratios suits human bodies in these various postures and improves the accuracy of target detection.
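The branch strides and aspect ratios above can be sketched as a prior-box generator. This is a minimal illustration, not the patent's implementation: the input image size, box scale, and function name are assumed values chosen so the three strides divide the image evenly.

```python
import itertools

def generate_priors(image_size=288, steps=(8, 16, 24),
                    aspect_ratios=(1.0, 0.5, 0.33, 0.25), scale=0.2):
    """Tile rectangular preselected boxes (cx, cy, w, h), normalized to [0, 1],
    over the grid of each branch; coarser strides give larger cells suited to
    nearer, larger targets."""
    priors = []
    for step in steps:
        fmap = image_size // step                  # feature-map cells per side
        for i, j in itertools.product(range(fmap), repeat=2):
            cx = (j + 0.5) * step / image_size     # cell-center x
            cy = (i + 0.5) * step / image_size     # cell-center y
            for ar in aspect_ratios:               # ar = width / height
                priors.append((cx, cy, scale * ar ** 0.5, scale / ar ** 0.5))
    return priors
```

Each grid cell thus yields four boxes, one per aspect ratio, and the tall ratios (0.33, 0.25) match upright human silhouettes.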
Specifically, the loss function of the training model of the SSD target detection algorithm is shown in equation (1):

L = Lconf + a · Lloc    (1)

wherein Lconf is the classification loss error, Lloc is the positioning loss error, and a is a scale factor, preset to 1, for adjusting the ratio between the classification loss error and the positioning loss error.
The calculation process of the positioning loss error is shown in equation (2):

Lloc = Σ_i Σ_m | l_i^m − g_i^m |,  m ∈ {cx, cy, w, h}    (2)

wherein i indexes the i-th sample; m ranges over cx, cy, w and h, with cx and cy the x and y coordinates of the center point of the preselected box and w and h its width and height; l_i^m is the predicted value of the training model for the i-th sample, and g_i^m is the annotated value of the i-th sample. The positioning error is thus the absolute value of the difference between the predicted value and the annotated value.

The classification loss error is shown in equation (3):

Lconf = − Σ_i log p(x_i)    (3)

where x_i is the output value of the i-th positive sample and p(x_i) is its output probability after softmax processing. Calculating the loss from the model's output probability for the positive samples allows the classification error of the model to be optimized.
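A minimal numeric sketch of equations (1)-(3), assuming the per-sample predicted and annotated box values and the softmax probabilities of the positive samples are already available; the function names and data layout are illustrative, not from the patent.

```python
import math

def positioning_loss(preds, labels):
    """Eq. (2): sum over samples of |prediction - annotation| for each of
    the four box values m in {cx, cy, w, h}."""
    return sum(abs(p - g)
               for box_p, box_g in zip(preds, labels)
               for p, g in zip(box_p, box_g))

def classification_loss(pos_probs):
    """Eq. (3): negative sum of log softmax probabilities of the positive
    samples."""
    return -sum(math.log(p) for p in pos_probs)

def total_loss(preds, labels, pos_probs, a=1.0):
    """Eq. (1): classification loss plus a times positioning loss, with the
    scale factor a preset to 1."""
    return classification_loss(pos_probs) + a * positioning_loss(preds, labels)
```

A perfectly localized box contributes zero positioning loss, and a positive sample predicted with probability 1 contributes zero classification loss.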
In a method for determining a loss function for target detection, comprising:
and determining simulation key points, converting the simulation key points into the maximum activation points in the matrix through Gaussian distribution, then performing L2 loss calculation based on the matrix output by the backbone network, and determining a loss function according to the calculation result.
The simulation key points are key points used to indicate characteristic parts of the human body. In this embodiment there are 21 simulation key points, distributed over a plurality of parts of the human body. Each simulation key point is converted through a Gaussian distribution into the maximum activation point of a 64 × 64 matrix, so the labels form a 21 × 64 × 64 heat map; the network likewise outputs a 21 × 64 × 64 matrix, and the model is optimized by taking the L2 loss between the network output and the label heat map as the loss function.
The simulated key points are converted into maximum activation points to form a feature map capable of reflecting a plurality of local features of the human body, and the plurality of local features of the human body can have a mutual correlation relationship, so that the training model can learn context semantic information in the image, the probability of overfitting can be effectively reduced, and more accurate key points can be predicted in subsequent analysis.
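This conversion can be sketched as follows: one simulated key point is rendered as a Gaussian peak on a 64 × 64 grid so that the key point location becomes the maximum activation point, and the network output is compared to the label with an L2 loss. The sigma value is an assumption; the patent does not specify the Gaussian width.

```python
import math

def keypoint_heatmap(kx, ky, size=64, sigma=2.0):
    """Render one simulated key point at (kx, ky) as a Gaussian on a
    size x size grid; the key-point cell receives the maximum activation."""
    return [[math.exp(-((x - kx) ** 2 + (y - ky) ** 2) / (2 * sigma ** 2))
             for x in range(size)] for y in range(size)]

def l2_loss(pred, label):
    """Sum of squared differences between predicted and label heat maps."""
    return sum((p - t) ** 2
               for row_p, row_t in zip(pred, label)
               for p, t in zip(row_p, row_t))
```

Stacking one such map per key point yields the 21 × 64 × 64 label tensor described above.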
And S13, performing potential danger judgment based on each original characteristic region in the monitored image, and determining a preparation image according to the judgment result.
Wherein the potential hazard judgment is used to estimate the risk that an inappropriate behavior will occur or may occur in the monitored image.
The condition for judging the potential danger comprises a first danger judging condition, wherein the first danger judging condition is as follows: whether the distance between any two original characteristic regions is smaller than a preset first safety distance or not; if the distance between at least two original characteristic regions in the monitored image is smaller than the first safety distance, the fact that the distance between two human bodies in the monitored region is short is indicated, the basic condition that improper behaviors occur is met, and the monitored image is determined to meet the first danger judgment condition.
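The first danger judgment condition can be sketched as a pairwise distance test, assuming each original characteristic region is reduced to its box center and distances are measured in pixels; this helper is illustrative, not from the patent.

```python
def first_danger_condition(centers, first_safe_dist):
    """True if any two original-characteristic-region centers are closer
    than the first safety distance."""
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            (x1, y1), (x2, y2) = centers[i], centers[j]
            if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < first_safe_dist:
                return True
    return False
```

A monitoring image for which this returns True would be marked as satisfying the first danger judgment condition.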
In this embodiment, the conditions for potential danger judgment further include a second danger judgment condition: whether the human body in any original characteristic region exhibits potentially dangerous behavior, including making a fist, waving a hand, lifting a leg, slapping, holding a stick, holding a knife, and the like. Posture recognition is performed on the human body in each original characteristic region, and if potentially dangerous behavior exists in any frame of monitoring image, that monitoring image is determined to meet the second danger judgment condition.
If the monitored image at least meets one of the first danger judgment condition and the second danger judgment condition, the monitored image meets the condition of potential danger judgment, and the monitored image is determined to be a prepared image; and if the monitoring image does not meet any one of the first danger judgment condition and the second danger judgment condition, the monitoring image does not meet the condition of potential danger judgment.
And according to the image sequence in the monitoring sequence, carrying out potential danger judgment and detection on each monitoring image in sequence, and determining the preparation image in sequence.
And S14, determining the continuous n frames of preliminary images in the monitoring sequence as a detection sequence.
When n consecutive preliminary images exist, the potential danger condition has held for a longer time and the risk of improper behavior is higher, so the n preliminary images can serve as detection images and form a detection sequence; if n consecutive preliminary images do not exist, the condition has held only briefly and the risk of improper behavior is small. In the present embodiment, n is set to 5.
Referring to fig. 3 and 4, in step S14, the method includes:
s141, judging whether a preparation image exists in the database, if so, executing S142; if not, S145 is executed.
Wherein the database is used for storing the preliminary image. In step S13, the system sequentially obtains multiple frames of preliminary images and sequentially executes step S141, and if no preliminary image exists in the database, the preliminary image needs to be added to the database, otherwise, step S142 is executed.
S142, judging whether the continuity requirement is met, if so, executing S143; if not, S146.
The continuity requirement judges whether the preliminary images in the database are consecutive: based on the position of the current preliminary image in the monitoring sequence and the position of the most recently added preliminary image in the database, it is determined whether the current preliminary image continues the sequence on the time line. If so, S143 is executed; otherwise, after the current preliminary image is added to the database, if the database does not contain n consecutive frames of images, step S146 is executed.
S143, judging whether m frames of preliminary images exist in the database, if so, executing S144; if not, the database is emptied and S146 is performed.
Wherein m = n-1. If m frames of preliminary images exist in the database, n frames of images exist in the database after the current preliminary image is newly added.
S144, all the preliminary images in the database are determined as detection images, a detection sequence is generated based on all the detection images, and S2 is executed.
The number of the preliminary images in the database reaches n, and all the preliminary images in the database are continuous, so that the preliminary images can be acquired as detection images, and a detection sequence can be generated.
S145, add the current preliminary image to the database, and return to S13.
If there is no preliminary image in the database, the current preliminary image needs to be added to the database. If the database cannot meet the continuity requirement relying only on the current preliminary image, the database needs to be reset and the next frame's preliminary image selected; likewise, if a detection sequence has already been extracted from the database, the database needs to be reset and the next frame's preliminary image selected.
S146, emptying the database, and returning to S145.
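Steps S141-S146 amount to maintaining a buffer (the "database") of consecutive preliminary frames that is cleared whenever continuity breaks or a detection sequence has been extracted. A sketch under that reading, with an illustrative frame representation not taken from the patent:

```python
def collect_detection_sequences(frames, n=5):
    """Yield detection sequences of n consecutive preliminary frames.
    frames: iterable of (frame_index, is_preliminary) pairs in monitoring order."""
    database = []                                  # buffered preliminary frame indices
    for idx, is_prelim in frames:
        if not is_prelim:
            database.clear()                       # potential danger not established
            continue
        if database and idx != database[-1] + 1:   # continuity requirement broken
            database.clear()                       # S146: empty the database
        database.append(idx)                       # S145: add current preliminary image
        if len(database) == n:                     # m = n - 1 frames were already buffered
            yield list(database)                   # S144: emit a detection sequence
            database.clear()                       # reset after extraction
```

With n = 5, a gap of even one non-preliminary frame restarts the count, matching the requirement that the danger condition hold continuously.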
And S2, determining a target set based on the detection image.
The target set comprises at least two target characteristic regions, the target characteristic regions correspond to one person in the detection image one by one, and the target characteristic regions are used for indicating human body regions in the detection image. Thus, the set of targets may indicate all human body regions in the same detected image.
In this embodiment, the system extracts a preselected frame including a human body from the detected image through an SSD object detection algorithm, so as to determine the object feature region, the specific content of the SSD object detection algorithm is consistent with the content of the SSD object detection algorithm described in step S12, and the specific analysis can refer to the related description of the foregoing method steps, which will not be described herein again.
And S3, determining attack key points and stress key points based on the target characteristic region.
The attack key points are simulation key points located at individual characteristic parts of a human body and used for reflecting human body parts of the person sending attack actions. The stress key points are simulation key points located at individual characteristic parts of a human body and used for reflecting the human body parts of the person subjected to the attack action.
S4, judging whether a detection image meeting the limb contact condition exists in the detection sequence, if so, executing S5; otherwise, the process is finished.
Wherein, the limb contact conditions are as follows: if the minimum distance between the attack key point and the stress key point of one target feature region in the target set is smaller than the second safety distance, the human body part corresponding to the attack key point is in contact with the human body part corresponding to the stress key point, and the risk of improper behavior is high; and if the minimum distance between the attack key point and the stress key point is greater than or equal to the second safety distance, the risk of the improper behavior is smaller.
And when any one frame of detection image in the detection sequence meets the limb contact condition, triggering an alarm step.
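The limb contact condition of S4 can be sketched as a minimum pairwise distance test between one region's attack key points and another region's stress key points; coordinates and the threshold unit are illustrative assumptions.

```python
def limb_contact(attack_pts, stress_pts, second_safe_dist):
    """True when the minimum distance from any attack key point of one target
    characteristic region to any stress key point of another region is below
    the second safety distance."""
    dmin = min(((ax - sx) ** 2 + (ay - sy) ** 2) ** 0.5
               for ax, ay in attack_pts
               for sx, sy in stress_pts)
    return dmin < second_safe_dist
```

Running this over every ordered pair of target characteristic regions in a detection image reproduces the per-frame check that triggers the alarm step.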
S5, alarming: and sending the detection image to a monitoring background.
The monitoring background generates alarm information based on the detection image, and sends the alarm information and the detection image to a supervisor for determination through a short message or a network mode.
The supervisor can check alarm information and detection images through a remote terminal such as a smart phone, and the supervisor can broadcast through a sound box or directly go to the site to stop the occurrence of improper behaviors.
The implementation principle of the embodiment of the application is as follows: a plurality of target characteristic regions are extracted from the detection image to reflect the distribution of a plurality of persons. Analyzing the attack key points in a target characteristic region determines the position of the body parts, such as the hands, with which a person can launch an attack, while analyzing the stress key points determines the position of the body parts, such as the face, that can be attacked. By measuring the distance between attack key points and stress key points, it can be estimated whether one person's attack-key-point body part is in contact with another person's stress-key-point body part, and in turn whether improper behavior is occurring; if so, an early-warning signal is output to trigger the alarm step. Through image analysis, the behavior of persons can be evaluated in time, improper behavior on campus can be discovered in time, supervisors can conveniently stop it in time, the persons involved are protected promptly, and the retained images provide evidence for the subsequent punishment and education of those responsible.
The system acquires multiple frames of detection images, uploads them to the monitoring background, and sends them to a supervisor by short message or over the network for further confirmation. If improper behavior is confirmed and the circumstances are minor, it can be dissuaded through voice communication; if the circumstances are serious, the image acquisition module 1 can lock onto the alarm area and personnel can promptly reach the site to stop it.
Example two:
referring to fig. 5, in an embodiment, a behavior detection monitoring apparatus is provided, which corresponds one-to-one to the behavior detection monitoring method in the first embodiment. The apparatus includes an image acquisition module 1, a target selection module 2, a feature extraction module 3, an alarm triggering module 4 and an early warning module 5. The functional modules are explained in detail as follows:
The image acquisition module 1 is used for acquiring a detection sequence, where the detection sequence includes a plurality of frames of detection images associated with persons capable of causing improper behavior.
The target selection module 2 is used for determining a target set based on the detection image. The target set includes at least two target characteristic regions, and each target characteristic region corresponds one-to-one to a person in the detection image.
The feature extraction module 3 is used for determining attack key points and stress key points based on the target characteristic regions. The attack key points reflect the body parts with which a person initiates an attack action, and the stress key points reflect the body parts of a person that receive the attack action.
The alarm triggering module 4 is used for comparing the distance between an attack key point of one target characteristic region in the target set and a stress key point of another target characteristic region in the target set, and executing the alarm step.
The early warning module 5 is used for sending the detection image to the monitoring background.
The behavior detection monitoring apparatus provided in this embodiment achieves the same technical effects as the foregoing embodiment by virtue of the functions of its modules and the logical connections between them; for the principle analysis, reference may be made to the related description of the method steps, which will not be repeated here.
Example three:
in one embodiment, an intelligent camera is provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor. The memory stores the training data, algorithm formulas, filtering mechanisms and the like of the training model. The processor provides computing and control capability and, when executing the computer program, implements the following steps:
S1, acquiring a detection sequence.
Step S1 includes:
S11, acquiring a monitoring sequence in real time.
S12, determining original characteristic regions based on the monitoring images.
S13, performing potential-danger judgment based on each original characteristic region in a monitoring image, and determining a preliminary image according to the judgment result.
S14, determining n consecutive frames of preliminary images in the monitoring sequence as a detection sequence.
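As a purely illustrative sketch of the potential-danger judgment in S13 (the centre-to-centre distance measure and the 80-pixel first safety distance used below are assumptions, not values from this application), a frame becomes a preliminary image when any two person regions come closer than the first safety distance:

```python
import itertools
import math

def is_preliminary(region_centres, first_safety_distance=80.0):
    """A monitoring image is judged potentially dangerous (a preliminary
    image) when any two original characteristic regions are closer than
    the first safety distance."""
    for (x1, y1), (x2, y2) in itertools.combinations(region_centres, 2):
        if math.hypot(x1 - x2, y1 - y2) < first_safety_distance:
            return True
    return False

# Two of the three persons are 50 pixels apart, closer than 80 -> danger.
print(is_preliminary([(0, 0), (50, 0), (300, 300)]))  # True
```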
Step S14 includes:
S141, judging whether a preliminary image exists in the database; if so, executing S142; if not, executing S145.
S142, judging whether the continuity requirement is met; if so, executing S143; if not, executing S146.
S143, judging whether m frames of preliminary images exist in the database; if so, executing S144; if not, executing S146.
S144, determining all the preliminary images in the database as detection images, generating a detection sequence based on all the detection images, and executing S2.
S145, adding the current preliminary image to the database, and returning to S13.
S146, emptying the database, and returning to S145.
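The buffering logic of S141 to S146 can be sketched as follows. This is only one possible reading for illustration: the frame-index continuity test and the value of m are assumptions, not details specified by this application.

```python
class DetectionBuffer:
    """Minimal sketch of the preliminary-image database used in S141-S146."""

    def __init__(self, m=5):
        self.m = m          # frames needed to form a detection sequence
        self.database = []  # buffered (frame_index, image) pairs

    def push(self, frame_index, image):
        """Add a preliminary image; return a detection sequence of m
        consecutive frames once one is complete, otherwise None."""
        # S141/S142: a gap in frame indices breaks the continuity requirement.
        if self.database and frame_index != self.database[-1][0] + 1:
            self.database.clear()                    # S146: empty the database
        self.database.append((frame_index, image))   # S145: buffer the image
        if len(self.database) >= self.m:             # S143: m frames present?
            sequence = [img for _, img in self.database]  # S144
            self.database.clear()
            return sequence
        return None

buf = DetectionBuffer(m=3)
print(buf.push(1, "f1"))  # None
print(buf.push(2, "f2"))  # None
print(buf.push(3, "f3"))  # ['f1', 'f2', 'f3']
```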
S2, determining a target set based on the detection image.
S3, determining attack key points and stress key points based on the target characteristic regions.
S4, judging whether a detection image meeting the limb contact condition exists in the detection sequence; if so, executing S5; otherwise, ending the process.
S5, alarming: sending the detection image to the monitoring background.
The supervisor can deploy the intelligent camera in environments such as schools, youth training bases and street parks; deployment is simple, efficient and effective. The intelligent camera can monitor 24 hours a day, is little affected by the weather, transmits alarm information promptly and has a wide recognition range; the user can effectively monitor ranges at different distances according to the hardware parameters of the intelligent camera, and functions such as face snapshot and blacklist recognition alarm can also be added.
In the intelligent camera provided by this embodiment, when the computer program in the memory runs on the processor, the steps of the foregoing embodiment are implemented, so the same technical effects can be achieved; for the principle analysis, reference may be made to the related description of the foregoing method steps, which will not be repeated here.
Example four:
in an embodiment, referring to fig. 6, an intelligent monitoring system is provided, which includes the above-mentioned intelligent camera and further includes:
The background host 6 is used for receiving and processing the detection images sent by the intelligent camera and sending alarm information to a remote terminal. The remote terminal includes mobile smart devices such as a smartphone or a tablet computer.
The communication device 7 is used for providing network communication between the background host 6 and the intelligent camera.
The training device 8 is used for performing model training to provide the target detection model for the intelligent camera.
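For the loss-function construction mentioned in this application (converting simulated key points into the maximum activation point of a matrix through a Gaussian distribution, then computing an L2 loss against the backbone output), a minimal illustrative sketch follows; the grid size and sigma are assumptions, not values from this application:

```python
import numpy as np

def keypoint_to_heatmap(x, y, height, width, sigma=2.0):
    """Place a 2-D Gaussian on the simulated key point so that it becomes
    the maximum activation point of the output matrix."""
    ys, xs = np.mgrid[0:height, 0:width]
    return np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))

def l2_loss(predicted, target):
    """Mean squared (L2) error between the network output and the target."""
    return float(np.mean((predicted - target) ** 2))

# A key point at column 8, row 4 of a 16x16 matrix.
target = keypoint_to_heatmap(8, 4, 16, 16)
peak = np.unravel_index(target.argmax(), target.shape)  # row 4, column 8
```

During training, `predicted` would be the matrix produced by the backbone network for the same key point, and `l2_loss` would drive it toward this target heatmap.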
The above embodiments are preferred embodiments of the present application, and the protection scope of the present application is not limited by them; therefore, all equivalent variations made according to the methods and principles of the present application shall be covered by the protection scope of the present application.

Claims (9)

1. A behavior detection monitoring method, characterized by comprising the following steps:
acquiring a detection sequence; wherein the detection sequence comprises a plurality of frames of detection images associated with a person capable of causing misbehaviour;
determining a target set based on the detection image; the target set comprises at least two target characteristic areas, and the target characteristic areas correspond to one person in the detection image one by one;
determining attack key points and stress key points based on the target characteristic region; the attack key points are used for reflecting human body parts of the person who sends attack actions, and the stress key points are used for reflecting human body parts of the person who receives the attack actions;
and comparing the distance between the attack key point of one target characteristic region in the target set and the stress key point of the other target characteristic region in the target set, and performing an alarm step.
2. The behavior detection monitoring method according to claim 1, wherein the specific method for obtaining the detection sequence comprises:
acquiring a monitoring sequence; wherein the monitoring sequence comprises a plurality of frames of continuous monitoring images;
determining an original characteristic region based on the monitoring image; wherein the original feature region is used for indicating a person in the monitoring image;
carrying out potential-danger judgment based on each original characteristic region in the monitoring image, and determining a preliminary image according to the judgment result; wherein the judgment condition of the potential-danger judgment is whether the distance between any two original characteristic regions is smaller than a first safety distance;
and determining n continuous frames of preliminary images in the monitoring sequence as a detection sequence, wherein n is a positive integer.
3. The behavior detection monitoring method according to claim 2, wherein in a specific method of determining the original feature region based on the monitored image, the method comprises:
performing target detection based on the monitored image, and determining an original characteristic region; the backbone network for target detection comprises mobilenetv2, and the branches of the backbone network are step = {8, 16, 24 }.
4. The behavior detection monitoring method according to claim 3, characterized in that: the aspect ratios of the target detection are Aspect ratios = {1, 0.5, 0.33, 0.25 }.
5. The behavior detection monitoring method according to claim 4, wherein the loss function determination method for target detection comprises:
determining simulation key points, and converting the simulation key points into the maximum activation points in the matrix through Gaussian distribution; the simulation key points are key points used for indicating characteristic parts of the human body;
l2 loss calculation is performed based on the matrix of the backbone network output to determine the loss function.
6. The behavior detection monitoring method according to claim 1, further comprising:
an alarming step: and sending the detection image to a monitoring background so that the detection image is sent to a supervisor for determination through a short message or a network mode.
7. A behavior detection monitoring device, characterized by comprising:
an image acquisition module (1) for acquiring a detection sequence; wherein the detection sequence comprises a plurality of frames of detection images associated with a person capable of causing misbehaviour;
the target selection module (2) is used for determining a target set based on the detection image; the target set comprises at least two target characteristic areas, and the target characteristic areas correspond to one person in the detection image one by one;
the characteristic extraction module (3) is used for determining attack key points and stress key points based on the target characteristic region; the attack key points are used for reflecting human body parts of the person who sends attack actions, and the stress key points are used for reflecting human body parts of the person who receives the attack actions;
and the alarm triggering module (4) is used for comparing the distance between the attack key point of one target characteristic region in the target set and the stress key point of the other target characteristic region in the target set and executing an alarm step.
8. A smart camera, characterized in that it comprises a memory and a processor, the memory having stored thereon a computer program that can be loaded by the processor to execute the method according to any one of claims 1 to 6.
9. An intelligent monitoring system, comprising the intelligent camera of claim 8, and further comprising:
the background host (6) is used for receiving and processing the detection image sent by the intelligent camera and sending alarm information to the remote terminal;
the communication device (7) is used for providing network communication for the background host (6) and the intelligent camera;
and the training device (8) is used for carrying out model training so as to provide a target detection model for the intelligent camera.
CN202111155799.8A 2021-09-30 2021-09-30 Behavior detection monitoring method, intelligent camera and intelligent monitoring system Pending CN114155595A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111155799.8A CN114155595A (en) 2021-09-30 2021-09-30 Behavior detection monitoring method, intelligent camera and intelligent monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111155799.8A CN114155595A (en) 2021-09-30 2021-09-30 Behavior detection monitoring method, intelligent camera and intelligent monitoring system

Publications (1)

Publication Number Publication Date
CN114155595A true CN114155595A (en) 2022-03-08

Family

ID=80462285

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111155799.8A Pending CN114155595A (en) 2021-09-30 2021-09-30 Behavior detection monitoring method, intelligent camera and intelligent monitoring system

Country Status (1)

Country Link
CN (1) CN114155595A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821808A (en) * 2022-05-18 2022-07-29 湖北大学 Attack behavior early warning method and system
CN114863352A (en) * 2022-07-07 2022-08-05 光谷技术有限公司 Personnel group behavior monitoring method based on video analysis
CN117392616A (en) * 2023-12-13 2024-01-12 深圳鲲云信息科技有限公司 Method and device for identifying supervision behaviors of garbage throwing, electronic equipment and medium
CN117392616B (en) * 2023-12-13 2024-04-02 深圳鲲云信息科技有限公司 Method and device for identifying supervision behaviors of garbage throwing, electronic equipment and medium

Similar Documents

Publication Publication Date Title
CN114155595A (en) Behavior detection monitoring method, intelligent camera and intelligent monitoring system
Lestari et al. Fire hotspots detection system on CCTV videos using you only look once (YOLO) method and tiny YOLO model for high buildings evacuation
WO2021082112A1 (en) Neural network training method, skeleton diagram construction method, and abnormal behavior monitoring method and system
WO2021095351A1 (en) Monitoring device, monitoring method, and program
CN109815813A (en) Image processing method and Related product
CN111753643B (en) Character gesture recognition method, character gesture recognition device, computer device and storage medium
CN112733690A (en) High-altitude parabolic detection method and device and electronic equipment
CN112562159B (en) Access control method and device, computer equipment and storage medium
CN111832434B (en) Campus smoking behavior recognition method under privacy protection and processing terminal
KR20220000226A (en) A system for providing a security surveillance service based on edge computing
KR102511287B1 (en) Image-based pose estimation and action detection method and appratus
CN114359976A (en) Intelligent security method and device based on person identification
CN115798047A (en) Behavior recognition method and apparatus, electronic device, and computer-readable storage medium
CN113628172A (en) Intelligent detection algorithm for personnel handheld weapons and smart city security system
CN113505643A (en) Violation target detection method and related device
CN114821486B (en) Personnel identification method in power operation scene
CN116152745A (en) Smoking behavior detection method, device, equipment and storage medium
CN111310647A (en) Generation method and device for automatic identification falling model
CN113743293B (en) Fall behavior detection method and device, electronic equipment and storage medium
CN115049988A (en) Edge calculation method and device for power distribution network monitoring and prejudging
CN112818929B (en) Method and device for detecting people fighting, electronic equipment and storage medium
CN114782883A (en) Abnormal behavior detection method, device and equipment based on group intelligence
CN113963202A (en) Skeleton point action recognition method and device, electronic equipment and storage medium
CN114170677A (en) Network model training method and equipment for detecting smoking behavior
CN111723741A (en) Temporary fence movement detection alarm system based on visual analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination