CN115346170B - Intelligent monitoring method and device for gas facility area - Google Patents

Intelligent monitoring method and device for gas facility area

Info

Publication number
CN115346170B
Authority
CN
China
Prior art keywords
personnel
detection
area
model
detection frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210960279.2A
Other languages
Chinese (zh)
Other versions
CN115346170A (en)
Inventor
李勇
王亮
邢琳琳
祁丽荣
黄冬虹
李夏喜
董新利
徐怡兮
李玮昊
张琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Gas Group Co Ltd
Original Assignee
Beijing Gas Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Gas Group Co Ltd filed Critical Beijing Gas Group Co Ltd
Priority to CN202210960279.2A priority Critical patent/CN115346170B/en
Publication of CN115346170A publication Critical patent/CN115346170A/en
Application granted granted Critical
Publication of CN115346170B publication Critical patent/CN115346170B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/762 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/765 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects, using rules for classification or partitioning the feature space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Emergency Alarm Devices (AREA)
  • Alarm Systems (AREA)

Abstract

The invention provides an intelligent monitoring method and device for a gas facility area. The method comprises the following steps: constructing a personnel detection model based on an artificial neural network, wherein the model outputs personnel positions and personnel types including staff and non-staff; acquiring a monitoring video image of a gas facility area in real time, and inputting the image into a trained personnel detection model to obtain the position and the category of personnel; and detecting abnormal behaviors of the personnel based on the positions and the categories of the personnel, and giving an alarm if the abnormal behaviors exist. According to the invention, the personnel category entering the monitoring area can be automatically identified by constructing the personnel detection model; by detecting abnormal behaviors of personnel entering the monitoring area, potential hazards existing in the monitoring area can be automatically detected and alarmed.

Description

Intelligent monitoring method and device for gas facility area
Technical Field
The invention belongs to the technical field of safety monitoring, and particularly relates to an intelligent monitoring method and device for a gas facility area.
Background
Gas is a daily necessity: on the one hand it improves people's quality of life, while on the other hand it carries potential risks. Improper use of gas facilities and deliberate damage to them may cause facility failure and gas leakage, leading to poisoning, fire and even gas explosion, and seriously threatening the safety of the gas facilities and of the people nearby. Gas accidents not only cause huge losses to the country but also pose a serious threat to people's lives and property. According to statistics, 539 gas accidents occurred in China in 2020, causing 88 deaths and 496 injuries. Ensuring the stable operation of gas facilities is therefore of great significance for building a safe living environment.
With the continuous development and update of electronic, communication and video monitoring technologies, video monitoring systems are widely used in various fields. In order to realize the safety monitoring of a gas facility area, the invention provides an intelligent monitoring method and device based on video images.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides an intelligent monitoring method and device for a gas facility area.
In order to achieve the above object, the present invention adopts the following technical scheme.
In a first aspect, the present invention provides an intelligent monitoring method for a gas facility area, including the steps of:
constructing a personnel detection model based on an artificial neural network, wherein the model outputs personnel positions and personnel types including staff and non-staff;
acquiring a monitoring video image of a gas facility area in real time, and inputting the image into a trained personnel detection model to obtain the position and the category of personnel;
and detecting abnormal behaviors of the personnel based on the positions and the categories of the personnel, and giving an alarm if the abnormal behaviors exist.
Further, the personnel detection model adopts an improved YOLOv4-Tiny network structure: a larger-size feature layer is added to the original two feature layers and is used for predicting small targets; the width and height dimensions of the person images in the current data set are calculated with a K-means++ clustering algorithm to determine the number and dimensions of the detection frames; and an attention mechanism module is introduced, with the feature fusion weights of different scales determined through training.
Further, the abnormal behavior detection includes a carried foreign matter detection, and the method is as follows:
constructing a foreign matter detection model, and training the model;
inputting the video image acquired in real time into the trained model to obtain a foreign matter detection frame;
calculating the ratio K of the intersection area of the personnel detection frame and the foreign matter detection frame to the area of the foreign matter detection frame:
K = S_{A∩B} / S_B
wherein S_{A∩B} is the intersection area of the personnel detection frame and the foreign matter detection frame, and S_B is the area of the foreign matter detection frame;
and if K is larger than the set threshold value, the person is considered to carry foreign matters.
Further, the abnormal behavior detection comprises detection of person wander, and the method comprises the following steps:
inputting a video image acquired in real time into a personnel detection model, identifying personnel types and acquiring a personnel detection frame;
tracking the person, and estimating the movement speed V of the detection frame;
calculating the time T for the person to pass through the gas facility area;
if T is greater than a set threshold, the person is considered to have loitering behavior, the threshold being inversely related to V, i.e., the greater V the smaller the threshold.
Further, the method also comprises the step of generating a video monitoring interface, wherein the monitoring interface comprises a partition display interface, a detailed information inquiring interface, a device distribution display interface, a device state display interface, a real-time monitoring picture interface and a scene display interface.
In a second aspect, the present invention provides an intelligent monitoring device for a gas facility area, comprising:
the model building module is used for building a personnel detection model based on the artificial neural network, and the model outputs personnel positions and personnel types including staff and non-staff;
the personnel detection module is used for acquiring monitoring video images of the gas facility area in real time, inputting the images into the trained personnel detection model, and obtaining the positions and the categories of the personnel;
the abnormal detection module is used for detecting abnormal behaviors of the personnel based on the positions and the categories of the personnel, and giving an alarm if the abnormal behaviors exist.
Further, the personnel detection model adopts an improved YOLOv4-Tiny network structure, and a characteristic layer with a larger size is added on the basis of the original two characteristic layers and is used for predicting a small target; calculating the width and height dimensions of the personnel images in the current data set by using a K-means++ clustering algorithm, so as to determine the number and the dimension of the detection frames; an attention mechanism module is introduced, and feature fusion weights of different scales are determined through training.
Further, the abnormal behavior detection includes a carried foreign matter detection, and the method is as follows:
constructing a foreign matter detection model, and training the model;
inputting the video image acquired in real time into the trained model to obtain a foreign matter detection frame;
calculating the ratio K of the intersection area of the personnel detection frame and the foreign matter detection frame to the area of the foreign matter detection frame:
K = S_{A∩B} / S_B
wherein S_{A∩B} is the intersection area of the personnel detection frame and the foreign matter detection frame, and S_B is the area of the foreign matter detection frame;
and if K is larger than the set threshold value, the person is considered to carry foreign matters.
Further, the abnormal behavior detection comprises detection of person wander, and the method comprises the following steps:
inputting a video image acquired in real time into a personnel detection model, identifying personnel types and acquiring a personnel detection frame;
tracking the person, and estimating the movement speed V of the detection frame;
calculating the time T for the person to pass through the gas facility area;
if T is greater than a set threshold, the person is considered to have loitering behavior, the threshold being inversely related to V, i.e., the greater V the smaller the threshold.
Further, the device also comprises a display module for generating a video monitoring interface, wherein the monitoring interface comprises a partition display interface, a detailed information inquiring interface, a device distribution display interface, a device state display interface, a real-time monitoring picture interface and a scene display interface.
Compared with the prior art, the invention has the following beneficial effects.
The invention constructs a personnel detection model based on an artificial neural network, the model outputting the positions of persons and their categories (staff and non-staff). A monitoring video image of the gas facility area is acquired in real time and input into the trained personnel detection model to obtain the positions and categories of the persons; abnormal behavior detection is then performed based on those positions and categories, and an alarm is given if abnormal behavior exists, so that automatic monitoring and alarming of the gas facility area are realized. By constructing the personnel detection model, the invention can automatically identify the category of persons entering the monitored area; by detecting abnormal behaviors of persons entering the monitored area, potential hazards in the monitored area can be automatically detected and alarmed.
Drawings
Fig. 1 is a flowchart of an intelligent monitoring method for a gas facility area according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a person carrying a foreign matter detection.
Fig. 3 is a block diagram of an intelligent monitoring device for a gas facility area according to an embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the drawings and the detailed description below, in order to make the objects, technical solutions and advantages of the present invention more apparent. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a flowchart of an intelligent monitoring method for a gas facility area according to an embodiment of the present invention, including the following steps:
step 101, constructing a personnel detection model based on an artificial neural network, wherein the model outputs personnel positions and personnel types including staff and non-staff;
102, acquiring a monitoring video image of a gas facility area in real time, and inputting the image into a trained personnel detection model to obtain the position and the category of personnel;
and 103, detecting abnormal behaviors of the personnel based on the positions and the categories of the personnel, and giving an alarm if the abnormal behaviors exist.
In this embodiment, step 101 is mainly used for constructing the personnel detection model. The model is built based on an artificial neural network, and the person categories to be identified include staff and non-staff. Distinguishing staff from non-staff is important: the same action performed by staff and by non-staff has a clearly different impact on the gas facilities, so the personnel category needs to be identified and must be indicated when an alarm is given. Constructing the personnel detection model involves first determining the network structure of the model, then building a training data set from collected historical monitoring video images, and finally training the model on that data set to determine the model parameters. The numbers of staff and non-staff images in the training data set should be comparable. One obvious difference between staff and non-staff is clothing: staff typically wear uniforms with the same pattern and color, whereas non-staff clothing has poor regularity and consistency.
In this embodiment, step 102 is mainly used for personnel category recognition. The monitoring video image of the gas facility area, acquired in real time, is input into the trained personnel detection model to obtain the positions and categories of the persons in it. The real-time monitoring video of the gas facility area is obtained from cameras erected on site, generally IP network cameras. An IP network camera is a digital device based on network transmission: besides the ordinary composite video signal output interface (BNC), it also has a network output interface, so the camera can be connected directly to a local area network, and each IP network camera has its own IP address. For example, a Huacheng D3220P camera can be selected, with a resolution up to 1080P, a frame rate up to 32 FPS and a transmission bandwidth up to 5 Mbps, and cloud transmission can be adopted instead of the traditional transmission mode. A sketch of frame acquisition from such a camera is given below.
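The patent does not describe how the video stream is read in code; as one common approach (the RTSP URL, credentials and the detection call are placeholder assumptions, not taken from the patent), frames could be pulled from an IP network camera roughly as follows:

# Sketch of real-time frame acquisition from an IP network camera over RTSP.
# The URL and credentials are placeholders; OpenCV (cv2) is assumed to be available.
import cv2

def monitor_stream(rtsp_url: str, detect_fn):
    cap = cv2.VideoCapture(rtsp_url)  # e.g. "rtsp://user:pass@192.168.1.10:554/stream1"
    if not cap.isOpened():
        raise RuntimeError("Cannot open video stream: " + rtsp_url)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # stream dropped; a real system would try to reconnect here
            detections = detect_fn(frame)  # trained personnel detection model
            # ...pass detections on to the abnormal-behavior checks and alarm logic
    finally:
        cap.release()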
In this embodiment, step 103 is mainly used for detecting abnormal behaviors of personnel. What counts as abnormal behavior depends on the actual monitoring environment; different monitoring environments and purposes define abnormal behavior differently. Since the monitoring object of this embodiment is gas facilities and the purpose is to keep them safe, abnormal behavior is defined by whether an action may pose a threat to the gas facilities. For example, a person who enters the gas facility area carrying a foreign object may damage the facilities, so entering the area while carrying foreign matter can be directly regarded as abnormal behavior that needs to be alarmed and recorded in time. Likewise, kicking and loitering are both regarded as abnormal behaviors. In this embodiment, abnormal behavior detection is performed on top of personnel category recognition, and an alarm is raised promptly when abnormal behavior is detected; the personnel category (for example, "staff carrying foreign matter") is stated in the alarm so that different countermeasures can be taken for different personnel categories. Abnormal behavior detection is still based on detection models built on artificial neural networks, with different judgment rules adopted for different abnormal behaviors.
According to the embodiment, the personnel category entering the monitoring area can be automatically identified by constructing the personnel detection model; by detecting abnormal behaviors of personnel entering the monitoring area, potential hazards existing in the monitoring area can be automatically detected and alarmed.
As an alternative embodiment, the personnel detection model adopts an improved YOLOv4-Tiny network structure, and a larger-size feature layer is added on the basis of the original two feature layers for predicting a small target; calculating the width and height dimensions of the personnel images in the current data set by using a K-means++ clustering algorithm, so as to determine the number and the dimension of the detection frames; an attention mechanism module is introduced, and feature fusion weights of different scales are determined through training.
This embodiment provides the network structure of the personnel detection model. The model targets persons entering the gas facility area and must detect their positions and corresponding identity categories (staff or non-staff) accurately and quickly, so the real-time requirement is high. To balance detection accuracy and real-time performance, this embodiment adopts YOLOv4-Tiny, a target detection model with high running efficiency, and improves it. The main improvements are as follows:
in partial gas facility areas, the field of view of monitoring equipment is wider, and the situation that personnel occupy smaller parts on a monitoring picture exists. In order to improve the detection effect of the model, a large-size feature layer is added on the basis of the original two feature layers, and is responsible for predicting the small target, so that the detection capability of the model on the small target is improved.
Because the objects the model needs to detect are persons entering the gas facility area, the proportions of the person detection frames are relatively fixed. The aspect ratios occupied by persons in the current data set are therefore analysed with a K-means++ clustering algorithm, which determines the width and height dimensions that best represent all the data, and the dimensions of the detection frames are adjusted accordingly, as in the sketch below.
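The patent does not give the exact clustering procedure, so the following is only a minimal sketch of how the detection-frame (anchor) dimensions could be derived from the labelled box widths and heights with K-means++, assuming scikit-learn is available and that boxes_wh holds the width and height of every person annotation in the data set:

# Minimal sketch: derive detection-frame (anchor) sizes with K-means++.
# Assumptions: scikit-learn is installed; boxes_wh is an (N, 2) array of
# annotated person-box widths and heights.
import numpy as np
from sklearn.cluster import KMeans

def cluster_anchor_sizes(boxes_wh: np.ndarray, n_anchors: int = 9) -> np.ndarray:
    """Cluster (width, height) pairs and return n_anchors representative sizes."""
    km = KMeans(n_clusters=n_anchors, init="k-means++", n_init=10, random_state=0)
    km.fit(boxes_wh)
    centers = km.cluster_centers_
    # Sort anchors by area so the smallest go to the high-resolution feature layer.
    return centers[np.argsort(centers[:, 0] * centers[:, 1])]

With the three feature layers described above (the added larger-size layer plus the original two), the sorted anchor sizes could then be split into three groups, one group per scale.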
An attention mechanism module is introduced, and the feature fusion weights of the different scales are learned during training, so that important features are attended to more accurately. This improves the fine-grained detection performance of the model and strengthens its ability to distinguish the features of different personnel categories. A sketch of one possible weighting scheme follows.
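The patent does not specify which attention module is used; as one illustration only (PyTorch is assumed, and the module name is hypothetical), the sketch below learns a normalised weight per scale and fuses the multi-scale features with those weights:

# Illustrative sketch only: learnable per-scale fusion weights, one simple way
# to realise "feature fusion weights determined through training". PyTorch assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedScaleFusion(nn.Module):
    def __init__(self, num_scales: int, channels: int):
        super().__init__()
        # One trainable scalar weight per scale, normalised with softmax.
        self.scale_logits = nn.Parameter(torch.zeros(num_scales))
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, features):  # features: list of tensors of shape (B, C, H, W)
        w = F.softmax(self.scale_logits, dim=0)
        fused = sum(wi * f for wi, f in zip(w, features))
        return self.fuse(fused)

Feature maps from different scales would first have to be resized and projected to a common shape, for example with F.interpolate and 1x1 convolutions, before being passed to this module.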
As an alternative embodiment, the abnormal behavior detection includes carrying foreign matter detection, and the method is as follows:
constructing a foreign matter detection model, and training the model;
inputting the video image acquired in real time into the trained model to obtain a foreign matter detection frame;
calculating the ratio K of the intersection area of the personnel detection frame and the foreign matter detection frame to the area of the foreign matter detection frame:
K = S_{A∩B} / S_B
wherein S_{A∩B} is the intersection area of the personnel detection frame and the foreign matter detection frame, and S_B is the area of the foreign matter detection frame;
and if K is larger than the set threshold value, the person is considered to carry foreign matters.
This embodiment provides the technical scheme for detecting carried foreign matter. First, a foreign matter detection model is built and trained; the video image acquired in real time is input into the trained model to obtain a foreign matter detection frame, as shown in fig. 2. Then the overlapping area of the personnel detection frame and the foreign matter detection frame, i.e. the intersection area S_{A∩B} of the two frames, is calculated, and the ratio K of S_{A∩B} to the area S_B of the foreign matter detection frame is computed. The size of K reflects how close the person and the foreign matter are: the larger K is, the closer they are. Finally, K is compared with the set threshold, and if K is larger than the threshold the person is considered to be carrying foreign matter. The magnitude of the threshold directly influences the detection result, so it should be chosen carefully, generally according to industry experience or repeated experiments. A sketch of this computation is given below.
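As an illustration only (the box format and the 0.5 threshold are assumptions, not taken from the patent), the ratio K and the carry decision could be computed as follows:

# Sketch of the carried-foreign-matter check: ratio K of the intersection area
# of the person box and the foreign-matter box to the foreign-matter box area.
# Boxes are assumed to be (x1, y1, x2, y2) pixel coordinates.

def intersection_area(box_a, box_b) -> float:
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    h = max(0.0, min(ay2, by2) - max(ay1, by1))
    return w * h

def carries_foreign_matter(person_box, foreign_box, threshold: float = 0.5) -> bool:
    s_b = max(1e-9, (foreign_box[2] - foreign_box[0]) * (foreign_box[3] - foreign_box[1]))
    k = intersection_area(person_box, foreign_box) / s_b  # K = S_{A∩B} / S_B
    return k > threshold

For example, a foreign-matter box lying largely inside the person box gives K close to 1 and triggers the alarm, while a box far from the person gives K = 0.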
As an optional embodiment, the abnormal behavior detection includes a person loitering detection, and the method includes the following steps:
inputting a video image acquired in real time into a personnel detection model, identifying personnel types and acquiring a personnel detection frame;
tracking the person, and estimating the movement speed V of the detection frame;
calculating the time T for the person to pass through the gas facility area;
if T is greater than a set threshold, the person is considered to have loitering behavior, the threshold being inversely related to V, i.e., the greater V the smaller the threshold.
This embodiment provides the technical scheme for loitering detection. For a person entering the gas facility area, normal behavior is to pass through the area along a certain direction; loitering behaviors such as looking around or frequently changing walking direction are regarded as abnormal. There are many loitering detection methods; the principle used in this embodiment is that loitering through the area takes longer than walking through it normally. Whether loitering behavior exists can therefore be determined by calculating the time T a person takes to pass through the gas facility area and comparing it with a set threshold. Since T is related to walking speed (the greater the speed, the smaller T), this embodiment uses a dynamic detection threshold to improve accuracy: the threshold is adjusted according to the person's walking speed, and the greater the speed, the smaller the threshold. The walking speed can be obtained by calculating the distance the center point of the person detection frame moves over a short period of time. A sketch of this logic follows.
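As a rough sketch under assumptions the patent does not state (pixel-based speed, a base dwell-time threshold and a reference speed for the inverse adjustment are all illustrative), the dynamic-threshold loitering check could look like this:

# Sketch of loitering detection with a speed-dependent threshold. All constants
# (frame rate, base threshold, reference speed) are illustrative assumptions only.
from collections import deque

FPS = 25.0  # assumed camera frame rate

class LoiterDetector:
    def __init__(self, base_threshold_s: float = 30.0, window: int = 10):
        self.base_threshold_s = base_threshold_s
        self.centers = deque(maxlen=window)  # recent detection-frame center points
        self.frames_in_area = 0

    def update(self, center_xy) -> bool:
        """Feed the person-box center for one frame; return True if loitering."""
        self.centers.append(center_xy)
        self.frames_in_area += 1
        t_in_area = self.frames_in_area / FPS  # time T spent in the area
        if len(self.centers) < 2:
            return False
        # Estimate the speed V (pixels per second) from center-point motion.
        (x0, y0), (x1, y1) = self.centers[0], self.centers[-1]
        v = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / ((len(self.centers) - 1) / FPS)
        # Threshold inversely related to V: the faster the person, the smaller it is.
        threshold = self.base_threshold_s / max(v / 50.0, 1.0)  # 50 px/s reference
        return t_in_area > threshold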
As an optional embodiment, the method further comprises generating a video monitoring interface, wherein the monitoring interface comprises a partition display interface, a detailed information query interface, a device distribution display interface, a device state display interface, a real-time monitoring picture interface and a scene display interface.
The embodiment provides a technical scheme for displaying a video monitoring picture. The embodiment monitors and displays the video by generating a video monitoring interface on the display screen. The monitoring interface includes a plurality of interfaces, which are described below.
Partition display interface: mainly used to display the status of the devices in different areas, for example whether an abnormal situation has occurred in an area, the types of abnormal events that have occurred, and the total number of occurrences. This helps staff focus their monitoring on a particular situation.
Query detailed information interface: mainly used when an abnormal-condition alarm occurs in an area. By clicking the detailed-information query function, a worker can view the picture captured by the camera, check the details of the abnormal condition at that moment, such as the personnel type and the specific position, verify whether the alarm is a false alarm, and then carry out the next processing step.
Device distribution display interface: mainly used to display the positions of the current devices. Once a device in an area becomes abnormal it is no longer displayed here, so a worker can judge from this picture whether the devices in each area are working normally.
Device status display interface: mainly used to display the condition of each key area, such as whether an abnormal situation has occurred and how many people are present in the current scene.
Real-time monitoring picture interface: mainly used to observe more intuitively whether an abnormal situation is occurring at the current moment and what type it is. Once an abnormal situation occurs, a text message appears in the warning information column on the right to remind workers that an abnormal situation has occurred, so that it can be handled immediately.
Scene screenshot interface: mainly used to automatically save the current processed frame and video frame images when an abnormal condition is detected, providing a historical retrieval function for staff so that monitoring gaps can be reviewed in time.
Fig. 3 is a schematic diagram of an intelligent monitoring device for a gas facility area according to an embodiment of the present invention, where the device includes:
the model building module 11 is used for building a personnel detection model based on an artificial neural network, and the model outputs personnel positions and personnel types including staff and non-staff;
the personnel detection module 12 is used for acquiring monitoring video images of the gas facility area in real time, inputting the images into a trained personnel detection model, and obtaining the positions and the categories of the personnel;
the abnormality detection module 13 is configured to detect abnormal behaviors of the person based on the position and the category of the person, and if there is an abnormal behavior, alarm the person.
The device of this embodiment may be used to implement the technical solution of the method embodiment shown in fig. 1; its implementation principle and technical effects are similar and are not described here again. The same applies to the later embodiments, which are likewise not explained again.
As an alternative embodiment, the personnel detection model adopts an improved YOLOv4-Tiny network structure, and a larger-size feature layer is added on the basis of the original two feature layers for predicting a small target; calculating the width and height dimensions of the personnel images in the current data set by using a K-means++ clustering algorithm, so as to determine the number and the dimension of the detection frames; an attention mechanism module is introduced, and feature fusion weights of different scales are determined through training.
As an alternative embodiment, the abnormal behavior detection includes carrying foreign matter detection, and the method is as follows:
constructing a foreign matter detection model, and training the model;
inputting the video image acquired in real time into the trained model to obtain a foreign matter detection frame;
calculating the ratio K of the intersection area of the personnel detection frame and the foreign matter detection frame to the area of the foreign matter detection frame:
K = S_{A∩B} / S_B
wherein S_{A∩B} is the intersection area of the personnel detection frame and the foreign matter detection frame, and S_B is the area of the foreign matter detection frame;
and if K is larger than the set threshold value, the person is considered to carry foreign matters.
As an optional embodiment, the abnormal behavior detection includes a person loitering detection, and the method includes the following steps:
inputting a video image acquired in real time into a personnel detection model, identifying personnel types and acquiring a personnel detection frame;
tracking the person, and estimating the movement speed V of the detection frame;
calculating the time T for the person to pass through the gas facility area;
if T is greater than a set threshold, the person is considered to have loitering behavior, the threshold being inversely related to V, i.e., the greater V the smaller the threshold.
As an optional embodiment, the apparatus further includes a display module, configured to generate a video monitoring interface, where the monitoring interface includes a partition display interface, a query detailed information interface, a device distribution display interface, a device status display interface, a real-time monitoring screen interface, and a scene display interface.
The foregoing is merely illustrative of the present invention, and the present invention is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the scope of the present invention should be included in the present invention. Therefore, the protection scope of the invention is subject to the protection scope of the claims.

Claims (8)

1. An intelligent monitoring method for a gas facility area is characterized by comprising the following steps:
constructing a personnel detection model based on an artificial neural network, wherein the model outputs personnel positions and personnel types including staff and non-staff;
acquiring a monitoring video image of a gas facility area in real time, and inputting the image into a trained personnel detection model to obtain the position and the category of personnel;
detecting abnormal behaviors of the personnel based on the positions and the categories of the personnel, and giving an alarm if the abnormal behaviors exist; the abnormal behavior detection comprises detection of carried foreign matters, and the method comprises the following steps:
constructing a foreign matter detection model, and training the model;
inputting the video image acquired in real time into the trained model to obtain a foreign matter detection frame;
calculating the ratio K of the intersection area of the personnel detection frame and the foreign matter detection frame to the area of the foreign matter detection frame:
K = S_{A∩B} / S_B
wherein S_{A∩B} is the intersection area of the personnel detection frame and the foreign matter detection frame, and S_B is the area of the foreign matter detection frame;
and if K is larger than the set threshold value, the person is considered to carry foreign matters.
2. The intelligent monitoring method of a gas facility area according to claim 1, wherein the personnel detection model adopts an improved YOLOv4-Tiny network structure, and a feature layer with a larger size is added on the basis of original two feature layers for predicting a small target; calculating the width and height dimensions of the personnel images in the current data set by using a K-means++ clustering algorithm, so as to determine the number and the dimension of the detection frames; an attention mechanism module is introduced, and feature fusion weights of different scales are determined through training.
3. The intelligent monitoring method of a gas facility area according to claim 1, wherein the abnormal behavior detection comprises a person loitering detection, the method comprising:
inputting a video image acquired in real time into a personnel detection model, identifying personnel types and acquiring a personnel detection frame;
tracking the person, and estimating the movement speed V of the detection frame;
calculating the time T for the person to pass through the gas facility area;
if T is greater than a set threshold, the person is considered to have loitering behavior, the threshold being inversely related to V, i.e., the greater V the smaller the threshold.
4. The intelligent monitoring method of a gas facility area of claim 1, further comprising generating a video monitoring interface comprising a zone display interface, a query details interface, a device distribution display interface, a device status display interface, a real-time monitoring screen interface, and a scene display interface.
5. An intelligent monitoring device for a gas facility area, comprising:
the model building module is used for building a personnel detection model based on the artificial neural network, and the model outputs personnel positions and personnel types including staff and non-staff;
the personnel detection module is used for acquiring monitoring video images of the gas facility area in real time, inputting the images into the trained personnel detection model, and obtaining the positions and the categories of the personnel;
the abnormal detection module is used for detecting abnormal behaviors of the personnel based on the positions and the categories of the personnel, and giving an alarm if the abnormal behaviors exist; the abnormal behavior detection comprises detection of carried foreign matters, and the method comprises the following steps:
constructing a foreign matter detection model, and training the model;
inputting the video image acquired in real time into the trained model to obtain a foreign matter detection frame;
calculating the ratio K of the intersection area of the personnel detection frame and the foreign matter detection frame to the area of the foreign matter detection frame:
K = S_{A∩B} / S_B
wherein S_{A∩B} is the intersection area of the personnel detection frame and the foreign matter detection frame, and S_B is the area of the foreign matter detection frame;
and if K is larger than the set threshold value, the person is considered to carry foreign matters.
6. The intelligent monitoring device for a gas facility area according to claim 5, wherein the personnel detection model adopts an improved YOLOv4-Tiny network structure, and a feature layer with a larger size is added on the basis of original two feature layers for predicting a small target; calculating the width and height dimensions of the personnel images in the current data set by using a K-means++ clustering algorithm, so as to determine the number and the dimension of the detection frames; an attention mechanism module is introduced, and feature fusion weights of different scales are determined through training.
7. The intelligent monitoring device of a gas facility area according to claim 5, wherein the abnormal behavior detection comprises a person loitering detection, by the following method:
inputting a video image acquired in real time into a personnel detection model, identifying personnel types and acquiring a personnel detection frame;
tracking the person, and estimating the movement speed V of the detection frame;
calculating the time T for the person to pass through the gas facility area;
if T is greater than a set threshold, the person is considered to have loitering behavior, the threshold being inversely related to V, i.e., the greater V the smaller the threshold.
8. The intelligent monitoring apparatus of a gas facility area of claim 5, further comprising a display module for generating a video monitoring interface comprising a zone display interface, a query detailed information interface, a device distribution display interface, a device status display interface, a real-time monitoring screen interface, and a scene display interface.
CN202210960279.2A 2022-08-11 2022-08-11 Intelligent monitoring method and device for gas facility area Active CN115346170B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210960279.2A CN115346170B (en) 2022-08-11 2022-08-11 Intelligent monitoring method and device for gas facility area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210960279.2A CN115346170B (en) 2022-08-11 2022-08-11 Intelligent monitoring method and device for gas facility area

Publications (2)

Publication Number Publication Date
CN115346170A CN115346170A (en) 2022-11-15
CN115346170B true CN115346170B (en) 2023-05-30

Family

ID=83952663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210960279.2A Active CN115346170B (en) 2022-08-11 2022-08-11 Intelligent monitoring method and device for gas facility area

Country Status (1)

Country Link
CN (1) CN115346170B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117061703A (en) * 2023-08-28 2023-11-14 瀚能科技有限公司 Park safety production monitoring method and related device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110689107A (en) * 2019-09-29 2020-01-14 速飞得(中国)自动化科技有限公司 Intelligent foreign matter detection method
WO2020164282A1 (en) * 2019-02-14 2020-08-20 平安科技(深圳)有限公司 Yolo-based image target recognition method and apparatus, electronic device, and storage medium
CN112098997A (en) * 2020-09-18 2020-12-18 欧必翼太赫兹科技(北京)有限公司 Three-dimensional holographic imaging security inspection radar image foreign matter detection method
CN113792578A (en) * 2021-07-30 2021-12-14 北京智芯微电子科技有限公司 Method, device and system for detecting abnormity of transformer substation

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013229666A (en) * 2012-04-24 2013-11-07 Toshiba Corp Abnormality inspection device and remote monitoring inspection system
CN108805859A (en) * 2018-04-20 2018-11-13 深圳博脑医疗科技有限公司 A kind of image detecting method, image detection device and terminal device
ES2908944B2 (en) * 2018-07-24 2023-01-09 Fund Centro Tecnoloxico De Telecomunicacions De Galicia A COMPUTER IMPLEMENTED METHOD AND SYSTEM FOR DETECTING SMALL OBJECTS IN AN IMAGE USING CONVOLUTIONAL NEURAL NETWORKS
CN110909604B (en) * 2019-10-23 2024-04-19 深圳市重投华讯太赫兹科技有限公司 Security check image detection method, terminal equipment and computer storage medium
CN112597877A (en) * 2020-12-21 2021-04-02 中船重工(武汉)凌久高科有限公司 Factory personnel abnormal behavior detection method based on deep learning
CN113095132B (en) * 2021-03-04 2022-08-02 北京市燃气集团有限责任公司 Neural network based gas field identification method, system, terminal and storage medium
CN113887445A (en) * 2021-10-08 2022-01-04 山东可信云信息技术研究院 Method and system for identifying standing and loitering behaviors in video
CN114120019B (en) * 2021-11-08 2024-02-20 贵州大学 Light target detection method
CN114038048A (en) * 2021-12-02 2022-02-11 中电云数智科技有限公司 Identity type recognition system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020164282A1 (en) * 2019-02-14 2020-08-20 平安科技(深圳)有限公司 Yolo-based image target recognition method and apparatus, electronic device, and storage medium
CN110689107A (en) * 2019-09-29 2020-01-14 速飞得(中国)自动化科技有限公司 Intelligent foreign matter detection method
CN112098997A (en) * 2020-09-18 2020-12-18 欧必翼太赫兹科技(北京)有限公司 Three-dimensional holographic imaging security inspection radar image foreign matter detection method
CN113792578A (en) * 2021-07-30 2021-12-14 北京智芯微电子科技有限公司 Method, device and system for detecting abnormity of transformer substation

Also Published As

Publication number Publication date
CN115346170A (en) 2022-11-15

Similar Documents

Publication Publication Date Title
CN111507308B (en) Transformer substation safety monitoring system and method based on video identification technology
CN101751744B (en) Detection and early warning method of smoke
KR102149832B1 (en) Automated Violence Detecting System based on Deep Learning
CN111488799A (en) Falling object identification method and system based on image identification
CN113011833A (en) Safety management method and device for construction site, computer equipment and storage medium
CN114973140A (en) Dangerous area personnel intrusion monitoring method and system based on machine vision
CN110867046A (en) Intelligent car washer video monitoring and early warning system based on cloud computing
CN112184773A (en) Helmet wearing detection method and system based on deep learning
CN115346170B (en) Intelligent monitoring method and device for gas facility area
KR20200017594A (en) Method for Recognizing and Tracking Large-scale Object using Deep learning and Multi-Agent
CN111553305B (en) System and method for identifying illegal videos
CN112288320A (en) Subway operation risk monitoring and management system
CN113223046A (en) Method and system for identifying prisoner behaviors
CN113506416A (en) Engineering abnormity early warning method and system based on intelligent visual analysis
CN210222962U (en) Intelligent electronic fence system
Zhang Safety management of civil engineering construction based on artificial intelligence and machine vision technology
CN116246416A (en) Intelligent analysis early warning platform and method for security protection
CN116862244B (en) Industrial field vision AI analysis and safety pre-warning system and method
CN113314230A (en) Intelligent epidemic prevention method, device, equipment and storage medium based on big data
CN116259013B (en) Intrusion detection system
CN104240432B (en) Mn-rich slag production safety based on information fusion monitoring method
CN116958900A (en) Visual fire data monitoring system and monitoring method thereof
CN116665305A (en) Method and system for detecting worker behaviors based on computer vision and knowledge graph
CN115841730A (en) Video monitoring system and abnormal event detection method
CN112419091B (en) Intelligent video safety control method for field operation of power distribution network driven by knowledge graph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant