CN114399719A - Transformer substation fire video monitoring method - Google Patents


Info

Publication number
CN114399719A
Authority: CN (China)
Prior art keywords: fire, smoke, transformer substation, network, open fire
Prior art date
Legal status
Granted
Application number: CN202210297974.5A
Other languages: Chinese (zh)
Other versions: CN114399719B (en)
Inventors: 刘术娟 (Liu Shujuan), 张洁 (Zhang Jie)
Current Assignee
Hefei Zhongke Rongdao Intelligent Technology Co ltd
Original Assignee
Hefei Zhongke Rongdao Intelligent Technology Co ltd
Application filed by Hefei Zhongke Rongdao Intelligent Technology Co ltd
Priority: CN202210297974.5A
Publication of CN114399719A
Application granted
Publication of CN114399719B
Legal status: Active

Classifications

    • G — Physics
    • G06 — Computing; Calculating or Counting
    • G06F — Electric Digital Data Processing
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/25 — Fusion techniques
    • G06F18/253 — Fusion techniques of extracted features
    • G06N — Computing Arrangements Based on Specific Computational Models
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/045 — Combinations of networks
    • G08B — Signalling or Calling Systems; Order Telegraphs; Alarm Systems
    • G08B17/00 — Fire alarms; alarms responsive to explosion
    • G08B17/12 — Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B17/125 — Actuation by using a video camera to detect fire or smoke

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a transformer substation fire video monitoring method that addresses two shortcomings of existing approaches: low recognition accuracy for small-scale open fire and smoke in substation surveillance video images, and inaccurate fire early warning. The method comprises the following steps: acquiring video monitoring data of the transformer substation; constructing and training a multi-scale regional image extraction network; constructing and training a transformer substation fire detection model; acquiring real-time video data of the transformer substation; and monitoring and early warning with the transformer substation fire video. The multi-scale feature fusion network effectively learns the rich small-scale open-fire and smoke region features in substation video images, and the cascaded regional convolutional neural network accurately determines whether a video image contains open fire or smoke, together with the position and size of each open-fire and smoke region.

Description

Transformer substation fire video monitoring method
Technical Field
The invention relates to the technical field of substation fire monitoring, in particular to a substation fire video monitoring method.
Background
Traditional open-fire detection in transformer substations mostly relies on sensors that collect data such as flame smoke particles, ambient temperature and relative humidity, on which judgement and fire alarms are based. However, sensor-based fire early warning requires the sensor to be placed near the open fire and to be free of strong environmental interference, so sensor-based detection is poorly suited to the wide, complex spaces in which substations sit. In addition, sensor-based detection can hardly report information such as the position and size of a fire, which hinders timely fire prevention and fire fighting.
With the development of video monitoring systems, combining computer vision technology with the existing monitoring infrastructure can accomplish the substation fire monitoring and early warning tasks while reducing cost and improving interference resistance, and can adapt well to the large, draughty and complex environment of a transformer substation.
Most existing computer-vision fire monitoring methods extract regions of interest with traditional image processing and then use a classifier to distinguish open fire from smoke. Current open-fire detection methods need further improvement: early-stage substation fires have a very small flame extent that is hard to detect, so the window for rescue can be missed; and there is no mechanism for judging whether a detection actually warrants a disaster alarm, so alarm resources are wasted on controllable fires. Traditional video recognition technology cannot identify early flames and has a low recognition rate for smoke, and therefore cannot meet practical requirements.
Therefore, designing an image recognition method capable of early fire warning in a transformer substation has become an urgent technical problem to be solved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, namely low recognition accuracy for small-scale open fire and smoke in substation surveillance video images and inaccurate fire early warning, and provides a transformer substation fire video monitoring method to solve these problems.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a fire video monitoring method for a transformer substation comprises the following steps:
acquiring video monitoring data of the transformer substation: acquiring video data of a transformer substation, extracting static image data of each frame, randomly selecting an open fire or smoke image, selecting open fire or smoke area position coordinate information and a corresponding label corresponding to the selected image, and taking the open fire or smoke area position coordinate information and the corresponding label as a training set;
constructing and training a multi-scale area image extraction network: constructing a multi-scale regional image extraction network based on the feature extraction network, inputting the obtained static image of the transformer substation into the multi-scale regional image extraction network for training, and extracting a feature image of an open fire or smoke image from the static image;
constructing and training a fire detection model of the transformer substation: constructing a transformer substation fire detection model based on the cascaded regional convolutional neural network, and training the transformer substation fire detection model;
acquiring real-time video data of the transformer substation: acquiring real-time video data of a transformer substation and preprocessing the real-time video data;
monitoring and early warning of fire videos of the transformer substation: and after extracting characteristic images from the preprocessed real-time video data of the transformer substation through a multi-scale regional image extraction network, inputting the characteristic images into a trained fire detection model of the transformer substation to perform fire monitoring and early warning.
The method for constructing and training the multi-scale regional image extraction network comprises the following steps:
setting the multi-scale regional image extraction network to comprise a deep residual network and a multi-scale feature fusion network, wherein the deep residual network serves as the backbone of the image feature extraction network, and the open-fire image features extracted by the deep residual network are input into the multi-scale feature fusion network so as to extract rich small-scale and large-scale feature information of open-fire or smoke images in the transformer substation area;
setting the multi-scale feature fusion network to comprise three parallel network branches,
wherein the first network branch is a standard convolution with a 1×1 kernel followed by a dilated convolution with a 3×3 kernel and dilation rate 1;
the second network branch is a standard convolution with a 3×3 kernel followed by a dilated convolution with a 3×3 kernel and dilation rate 3;
the third network branch is a standard convolution with a 5×5 kernel followed by a dilated convolution with a 3×3 kernel and dilation rate 5;
inputting the static images of the transformer substation into the multi-scale regional image extraction network for training:
the static image of the transformer substation is input into the deep residual network, which outputs the extracted open-fire image features;
the open-fire image features are input into the multi-scale feature fusion network, and the feature maps output by its three network branches are added pixel-wise to realize feature fusion, yielding the feature image of the open-fire or smoke image.
The method for constructing and training the transformer substation fire detection model comprises the following steps:
setting the transformer substation fire detection model on a cascaded regional convolutional neural network structure, wherein the cascaded structure is formed by cascading two stages of regional convolutional neural networks;
setting the first-stage regional convolutional neural network of the cascaded structure to comprise one dilated convolutional layer with a 3×3 kernel and dilation rate 3, followed by parallel classification and regression layers implemented as 1×1 convolutional layers, wherein the classification layer outputs 2 neurons indicating whether the image contains an open-fire or smoke region and the regression layer outputs 4 neurons giving the position coordinates of the open-fire or smoke region; the open-fire or smoke regions of interest in the image are obtained from the classification layer of the first-stage regional convolutional neural network and its output;
setting the input of the second-stage regional convolutional neural network to be a region of interest of open fire or smoke in the image, and its output to be the specific category of the region of interest, namely open fire or smoke;
setting the second-stage regional convolutional neural network to comprise one adaptive convolutional layer and two parallel layers, a classification layer and a regression layer, wherein the classification layer adopts a 3×3 convolution kernel with 2 output channels for the confidence of the open-fire and smoke categories, and the regression layer adopts a 1×1 convolution kernel with 4 output channels for the position of the open-fire and smoke regions;
inputting feature images of open-fire or smoke images into the transformer substation fire detection model for training, wherein training the cascaded regional convolutional neural network model comprises determining positive and negative training samples, defining the loss function, and learning the model parameters;
determining training samples by matching on the intersection-over-union (IoU) ratio:
for the first-stage regional convolutional neural network, a sample is positive when the IoU between its bounding box and the truly labelled bounding box is greater than the threshold 0.5, and negative otherwise; for the second-stage regional convolutional neural network, the IoU threshold is raised to 0.7;
calculating the loss value L between the network prediction and the real samples with the loss function of the cascaded regional convolutional neural network model shown below:

L = L_cls + λ (L_reg(1) + L_reg(2))

wherein L_reg(1) denotes the regression loss of the first stage, L_reg(2) denotes the regression loss of the second stage, L_cls denotes the classification loss, and the parameter λ is a hyperparameter balancing the classification loss and the regression loss of the cascaded regional convolutional neural network model;

the classification loss adopts the cross-entropy loss function L_cls, defined as:

L_cls = −(1/N) Σ_{i=1..N} [ y_i log(p_i) + (1 − y_i) log(1 − p_i) ]

wherein N denotes the number of training samples, y_i denotes the positive/negative label of the i-th sample (1 for a positive sample, 0 for a negative sample), and p_i denotes the predicted confidence that the sample is an open-fire region;

the regression loss adopts the intersection-over-union (IoU) loss function L_reg, defined as:

L_reg = 1 − |P ∩ G| / |P ∪ G|

wherein P denotes the open-fire bounding box predicted by the cascaded regional convolutional neural network model, G denotes the truly labelled open-fire bounding box, P ∩ G denotes the intersection between P and G, and P ∪ G denotes the union between P and G;

training the cascaded regional convolutional neural network with the BP algorithm, learning the network weights W and bias parameters B and iterating N times until the network parameters are optimal:

W(l) ← W(l) − η ∂L/∂W(l),  B(l) ← B(l) − η ∂L/∂B(l)

wherein l denotes the cascade stage index of the cascaded regional convolutional neural network, W(l) denotes the weight matrix of stage l, B(l) denotes the bias parameters of stage l, and η denotes the learning rate.
The monitoring and early warning of the fire video of the transformer substation comprises the following steps:
extracting a characteristic image from the preprocessed real-time video data of the transformer substation through a multi-scale regional image extraction network;
inputting the extracted characteristic images into a trained fire detection model of the transformer substation, and outputting classification information of open fire or smoke regions (c, s),
WhereincThe indication is an open flame or smoke category,sconfidence of corresponding classification and corresponding location information: (x, y, w, h) Wherein (A) isx, y) And (a)w, h) Respectively representing the coordinates of the central point of the corresponding open fire or smoke area and the length and width of the bounding box thereof;
calculating the area of the open fire or smoke region in two continuous frames of static images in the open fire video of the transformer substation according to the acquired position information of the open fire or smoke region
Figure DEST_PATH_IMAGE023
Which respectively represent the areas of the corresponding open fire or smoke regions in the current frame and the next frame, wherein the area a of the open fire or smoke is calculated from the information of its region bounding box as follows:
Figure 649302DEST_PATH_IMAGE024
calculating the fire growth rate of the open fire and smoke regions of successive frames
Figure DEST_PATH_IMAGE025
Figure 640392DEST_PATH_IMAGE026
Judging whether to alarm the fire according to the growth rate condition of the continuous f frames, which comprises the following steps:
if the area of the fire or smoke region increases at a rate
Figure DEST_PATH_IMAGE027
Greater than threshold T, variable C plus 1, the formula is defined as:
Figure 480172DEST_PATH_IMAGE028
wherein for an open fire T is set to 1.2 and smoke is set to 1.5;
Figure DEST_PATH_IMAGE029
when C = f, namely the area growth rate of the area of the continuous f frames of open fire or smoke is larger than a set threshold value, the video monitoring site can be judged to contain fire, and fire alarm is triggered; otherwise, only carrying out fire warning prompt in the monitoring background.
Advantageous effects
Compared with the prior art, the transformer substation fire video monitoring method effectively learns the rich small-scale open-fire and smoke region features in substation video images through the multi-scale feature fusion network, and accurately determines with the cascaded regional convolutional neural network whether a video image contains open fire or smoke, together with the position and size of each open-fire and smoke region.
The invention grades the severity of a fire by detecting open fire and smoke in the video and processes the result accordingly, judging whether a fire alarm is required or whether only the video monitoring background should issue a fire warning; this improves the reliability and real-time performance of fire early warning and reduces the damage caused by fires.
Drawings
FIG. 1 is a sequence diagram of the method of the present invention;
fig. 2 is a sequence diagram of a method for monitoring and warning a fire disaster video of a transformer substation in the invention.
Detailed Description
So that the above-recited features of the present invention can be clearly understood, a more particular description of the invention, briefly summarized above, is given below by reference to embodiments, some of which are illustrated in the appended drawings:
as shown in fig. 1, the substation fire video monitoring method of the present invention includes the following steps:
the method comprises the following steps of firstly, obtaining video monitoring data of a transformer substation.
The method comprises the steps of obtaining video data of the transformer substation, extracting static image data of each frame, randomly selecting open fire or smoke images, selecting open fire or smoke area position coordinate information and corresponding labels corresponding to the selected images, and taking the open fire or smoke area position coordinate information and the corresponding labels as a training set. In the laboratory stage, the selected static image samples of the transformer substation area can be randomly selected according to the ratio of 8:2 and respectively used as a training set and a testing set.
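The 8:2 split described above can be sketched as follows; the function and file names are illustrative, not part of the patent:

```python
import random

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Randomly split annotated substation frames into training and
    test sets at the 8:2 ratio described above. `samples` may be any
    list of frames or (image, annotation) pairs."""
    rng = random.Random(seed)      # fixed seed for a reproducible split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# hypothetical frame names standing in for annotated substation images
frames = [f"frame_{i:04d}.jpg" for i in range(100)]
train, test = split_dataset(frames)
print(len(train), len(test))  # 80 20
```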
And secondly, constructing and training the multi-scale regional image extraction network. A multi-scale regional image extraction network is constructed on the basis of the feature extraction network; the obtained static images of the transformer substation are input into it for training, and feature images of open-fire or smoke images are extracted from the static images. Smoke or open fire in a substation fire video is inconspicuous at the initial stage; that is, the open-fire or smoke region is small in scale and therefore difficult to recognize and detect.
The specific steps of constructing and training the multi-scale area image extraction network are as follows:
(1) The multi-scale regional image extraction network is set to comprise a deep residual network and a multi-scale feature fusion network, wherein the deep residual network serves as the backbone of the image feature extraction network, and the open-fire image features extracted by the deep residual network are input into the multi-scale feature fusion network so as to extract rich small-scale and large-scale feature information of open-fire or smoke images in the transformer substation area.
(2) The multi-scale feature fusion network is set to comprise three parallel network branches,
wherein the first network branch is a standard convolution with a 1×1 kernel followed by a dilated convolution with a 3×3 kernel and dilation rate 1;
the second network branch is a standard convolution with a 3×3 kernel followed by a dilated convolution with a 3×3 kernel and dilation rate 3;
the third network branch is a standard convolution with a 5×5 kernel followed by a dilated convolution with a 3×3 kernel and dilation rate 5.
(3) Inputting the static image of the transformer substation into a multi-scale regional image extraction network for training:
A1) the static image of the transformer substation is input into the deep residual network, which outputs the extracted open-fire image features;
A2) the open-fire image features are input into the multi-scale feature fusion network, and the feature maps output by its three network branches are added pixel-wise to realize feature fusion, yielding the feature image of the open-fire or smoke image.
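Steps A1)–A2) can be illustrated with a minimal NumPy sketch. It implements only a naive single-channel dilated convolution and the pixel-wise addition of the three branch outputs; the preceding standard convolutions, the residual backbone, and all learned weights are omitted, and the averaging kernel is purely illustrative:

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """Naive 'same'-padded 2-D dilated convolution on one channel.
    With rate=1 this is a standard convolution; larger rates enlarge
    the receptive field without adding parameters."""
    k = kernel.shape[0]
    span = rate * (k - 1)          # effective kernel extent minus 1
    pad = span // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            patch = xp[i:i + span + 1:rate, j:j + span + 1:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

feat = np.random.rand(16, 16)      # stand-in for a backbone feature map
k3 = np.full((3, 3), 1 / 9.0)      # illustrative 3×3 averaging kernel
# three parallel branches with dilation rates 1, 3 and 5, as in the text
branches = [dilated_conv2d(feat, k3, r) for r in (1, 3, 5)]
fused = branches[0] + branches[1] + branches[2]  # pixel-wise addition
```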
Thirdly, constructing and training a fire detection model of the transformer substation: and constructing a transformer substation fire detection model based on the cascade region convolutional neural network, and training the transformer substation fire detection model.
In an actual transformer substation scene, the scale of open fire or smoke in the monitored video is usually small, so background regions far outnumber open-fire or smoke regions and there is a severe sample-imbalance problem between background and target foreground.
The method for constructing and training the fire detection model of the transformer substation comprises the following steps:
(1) setting a fire detection model of the transformer substation based on a cascade region convolutional neural network structure, wherein the cascade region convolutional neural network structure is formed by cascading two-stage region convolutional neural networks.
(2) The first-stage regional convolutional neural network of the cascaded structure comprises one dilated convolutional layer with a 3×3 kernel and dilation rate 3, followed by parallel classification and regression layers implemented as 1×1 convolutional layers: the classification layer outputs 2 neurons indicating whether the image contains an open-fire or smoke region, and the regression layer outputs 4 neurons giving the position coordinates of the open-fire or smoke region. The open-fire or smoke regions of interest in the image are obtained from the classification layer of the first stage and its output.
(3) The input of the second-stage regional convolutional neural network is a region of interest of open fire or smoke in the image, and its output is the specific category of the region of interest, namely open fire or smoke.
The second stage specifically comprises one adaptive convolutional layer and two parallel layers, a classification layer and a regression layer: the classification layer adopts a 3×3 convolution kernel with 2 output channels for the confidence of the open-fire and smoke categories, and the regression layer adopts a 1×1 convolution kernel with 4 output channels for the position of the open-fire and smoke regions. To improve detection quality progressively across the two cascaded stages, the IoU threshold t of the second-stage network is set higher than that of the first stage, so that higher-quality training samples are selected.
(4) Feature images of open-fire or smoke images are input into the transformer substation fire detection model for training; training the cascaded regional convolutional neural network model comprises determining positive and negative training samples, defining the loss function, and learning the model parameters.
B1) Training samples are determined by matching on the intersection-over-union (IoU) ratio: for the first-stage regional convolutional neural network, a sample is positive when the IoU between its bounding box and the truly labelled bounding box is greater than the threshold 0.5, and negative otherwise; for the second-stage regional convolutional neural network, the IoU threshold is raised to 0.7 to select higher-quality training samples.
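The IoU matching rule in B1) can be sketched directly; the helper names are illustrative, and boxes are assumed to be in (x1, y1, x2, y2) corner format:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def label_sample(pred_box, gt_box, stage=1):
    """Positive/negative assignment at the stage-specific IoU threshold
    (0.5 for the first stage, 0.7 for the second, as stated above)."""
    threshold = 0.5 if stage == 1 else 0.7
    return 1 if iou(pred_box, gt_box) > threshold else 0
```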
B2) The loss value L between the network prediction and the real samples is calculated with the loss function of the cascaded regional convolutional neural network model:

L = L_cls + λ (L_reg(1) + L_reg(2))

wherein L_reg(1) denotes the regression loss of the first stage, L_reg(2) the regression loss of the second stage, L_cls the classification loss, and λ a hyperparameter balancing the classification loss and the regression loss of the cascaded regional convolutional neural network model.
the classification loss function adopts a cross entropy loss function
Figure 694805DEST_PATH_IMAGE047
The definition is as follows:
Figure DEST_PATH_IMAGE051
wherein N represents the amount of training samples,
Figure 224924DEST_PATH_IMAGE053
is shown as
Figure 491957DEST_PATH_IMAGE055
Positive and negative labels of the samples, positive sample is 1, negative sample is 0,
Figure 690857DEST_PATH_IMAGE057
representing the confidence of the sample prediction as an open fire region;
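A minimal Python version of this cross-entropy loss, assuming plain lists of labels and confidences (the eps clipping is an implementation detail added for numerical safety, not part of the patent):

```python
import math

def cross_entropy_loss(labels, confidences, eps=1e-12):
    """Binary cross-entropy over N samples:
    L_cls = -(1/N) * sum(y*log(p) + (1-y)*log(1-p))."""
    n = len(labels)
    total = 0.0
    for y, p in zip(labels, confidences):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / n
```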
the regression loss function adopts an intersection-to-parallel ratio loss function
Figure DEST_PATH_IMAGE059
The definition is as follows:
Figure DEST_PATH_IMAGE061
wherein, P represents the boundary box of the open fire area predicted by the convolution neural network model of the cascade area, G represents the boundary box of the open fire area really marked,
Figure DEST_PATH_IMAGE063
indicates the intersection between P and G,
Figure DEST_PATH_IMAGE065
denotes the union between P and G.
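The IoU regression loss can be sketched as follows, again assuming (x1, y1, x2, y2) corner boxes; this implements the 1 − IoU form given above:

```python
def iou_regression_loss(pred, gt):
    """IoU regression loss L_reg = 1 - |P ∩ G| / |P ∪ G|:
    0 when the boxes coincide, 1 when they are disjoint."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    union = area_p + area_g - inter
    return 1.0 - (inter / union if union > 0 else 0.0)
```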
B3) The cascaded regional convolutional neural network is trained with the BP algorithm; the network weights W and bias parameters B are learned and iterated N times until the network parameters are optimal:

W(l) ← W(l) − η ∂L/∂W(l),  B(l) ← B(l) − η ∂L/∂B(l)

wherein l denotes the cascade stage index, W(l) denotes the weight matrix of stage l, B(l) denotes the bias parameters of stage l, and η denotes the learning rate.
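The per-stage update rule in B3) amounts to one gradient-descent step per cascade stage; a sketch with NumPy arrays standing in for the stage weight matrices. The gradients are assumed to be given, since the full back-propagation through the cascade is outside this fragment:

```python
import numpy as np

def sgd_step(weights, biases, grads_w, grads_b, lr=0.01):
    """One BP/gradient-descent update per cascade stage l:
    W(l) <- W(l) - lr * dL/dW(l),  B(l) <- B(l) - lr * dL/dB(l)."""
    new_w = [w - lr * g for w, g in zip(weights, grads_w)]
    new_b = [b - lr * g for b, g in zip(biases, grads_b)]
    return new_w, new_b

# toy one-stage example with assumed gradients
weights = [np.ones((2, 2))]
biases = [np.zeros(2)]
grads_w = [np.full((2, 2), 0.5)]
grads_b = [np.full(2, 0.5)]
new_w, new_b = sgd_step(weights, biases, grads_w, grads_b, lr=0.1)
```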
And step four, acquiring real-time video data of the transformer substation: and acquiring real-time video data of the transformer substation, and performing traditional preprocessing work according to the actual acquisition condition of the video data.
Fifthly, monitoring and early warning of fire videos of the transformer substation: as shown in fig. 2, after extracting characteristic images from the preprocessed real-time video data of the transformer substation through a multi-scale regional image extraction network, inputting the extracted characteristic images into a trained fire detection model of the transformer substation to perform fire monitoring and early warning. The method comprises the following specific steps:
(1) and extracting the characteristic image from the preprocessed real-time video data of the transformer substation through a multi-scale regional image extraction network.
(2) The extracted feature images are input into the trained transformer substation fire detection model, which outputs the classification information (c, s) of each open-fire or smoke region, wherein c indicates the open-fire or smoke category and s the confidence of the corresponding classification, together with the corresponding location information (x, y, w, h), wherein (x, y) denote the centre coordinates of the corresponding open-fire or smoke region and (w, h) the width and height of its bounding box.
(3) Calculating, from the acquired position information, the areas a_t and a_{t+1} of the open fire or smoke region in two consecutive static frames of the transformer substation video, representing the areas of the corresponding region in the current frame and the next frame respectively, wherein the area a of an open fire or smoke region is calculated from its bounding box information as:

a = w × h
(4) Calculating the fire growth rate v of the open fire or smoke region across consecutive frames:

v = a_{t+1} / a_t
(5) Judging whether to raise a fire alarm according to the growth rate over f consecutive frames, as follows:

C1) if the area growth rate v of the open fire or smoke region is greater than the threshold T, the counter C is incremented by 1, otherwise it is reset to 0:

C = C + 1 if v > T, otherwise C = 0

wherein T is set to 1.2 for open fire and 1.5 for smoke;
C2) when C = f, namely when the area growth rate of the open fire or smoke region exceeds the set threshold for f consecutive frames, the monitored site is judged to contain a fire and a fire alarm is triggered; otherwise, only a fire warning prompt is issued in the monitoring background.
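For illustration only (not part of the claimed method), the alarm logic of steps (3)–(5) can be sketched as follows. The box sizes are hypothetical, and the thresholds follow the text (T = 1.2 for open fire, 1.5 for smoke):

```python
def region_area(w, h):
    """Area a of an open-fire/smoke region from its bounding-box size."""
    return w * h

def fire_alarm(boxes, category="fire", f=5):
    """Trigger an alarm when the area growth rate v = a(t+1)/a(t)
    exceeds threshold T for f consecutive frames.

    boxes: list of (w, h) for the detected region in successive frames."""
    T = 1.2 if category == "fire" else 1.5  # per-category threshold from the text
    C = 0
    for (w0, h0), (w1, h1) in zip(boxes, boxes[1:]):
        v = region_area(w1, h1) / region_area(w0, h0)
        C = C + 1 if v > T else 0   # reset when growth stalls
        if C == f:
            return True             # fire alarm triggered
    return False                    # background warning only

# A region whose width and height grow 30% per frame (area grows 1.69x)
growing = [(10 * 1.3 ** i, 10 * 1.3 ** i) for i in range(7)]
static = [(10, 10)] * 7
```

A static region never trips the counter, while the growing region reaches C = f after five consecutive above-threshold transitions.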
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above, which are merely illustrative of the principles of the invention, but that various changes and modifications may be made without departing from the spirit and scope of the invention, which fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (4)

1. A fire video monitoring method for a transformer substation is characterized by comprising the following steps:
11) acquiring video monitoring data of the transformer substation: acquiring video data of the transformer substation, extracting the static image data of each frame, randomly selecting open fire or smoke images, annotating the position coordinate information of the open fire or smoke regions and the corresponding labels for the selected images, and taking these as the training set;
12) constructing and training a multi-scale area image extraction network: constructing a multi-scale regional image extraction network based on the feature extraction network, inputting the obtained static image of the transformer substation into the multi-scale regional image extraction network for training, and extracting a feature image of an open fire or smoke image from the static image;
13) constructing and training a fire detection model of the transformer substation: constructing a transformer substation fire detection model based on the cascaded regional convolutional neural network, and training the transformer substation fire detection model;
14) acquiring real-time video data of the transformer substation: acquiring real-time video data of a transformer substation and preprocessing the real-time video data;
15) monitoring and early warning of fire videos of the transformer substation: and after extracting characteristic images from the preprocessed real-time video data of the transformer substation through a multi-scale regional image extraction network, inputting the characteristic images into a trained fire detection model of the transformer substation to perform fire monitoring and early warning.
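For illustration only (not part of the claimed method), the five steps of claim 1 can be sketched as a frame-by-frame pipeline. The `preprocess`, `extractor`, and `detector` below are hypothetical stand-ins for the preprocessing step, the trained multi-scale regional image extraction network, and the trained cascaded fire detection model:

```python
def preprocess(frame):
    # Placeholder preprocessing (e.g. resizing/normalisation); hypothetical
    return frame

def monitor_substation(video_stream, extractor, detector):
    """Sketch of the claim-1 pipeline: extract features per frame with the
    multi-scale network, then run the cascaded fire detector on them."""
    for frame in video_stream:
        features = extractor(preprocess(frame))
        yield detector(features)   # (c, s, (x, y, w, h)) per region

# Toy stand-ins for the trained networks (hypothetical)
frames = [[1, 2], [3, 4]]
extractor = lambda f: [x * 2 for x in f]
detector = lambda feats: ("fire", 0.9, tuple(feats))
results = list(monitor_substation(frames, extractor, detector))
```

In practice the detector output would feed the growth-rate alarm logic of claim 4.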
2. The substation fire video monitoring method according to claim 1, wherein the constructing and training of the multi-scale regional image extraction network comprises the following steps:
21) setting the multi-scale regional image extraction network to comprise a deep residual network and a multi-scale feature fusion network, wherein the deep residual network serves as the base network for image feature extraction, and the open fire image features extracted by the deep residual network are input into the multi-scale feature fusion network so as to extract rich small-scale and large-scale feature information of the open fire or smoke images of the transformer substation region;
22) setting the multi-scale feature fusion network to comprise three parallel network branches, wherein the first network branch consists of a standard convolution with a 1×1 convolution kernel and a dilated convolution with a 3×3 convolution kernel and a dilation rate of 1; the second network branch consists of a standard convolution with a 3×3 convolution kernel and a dilated convolution with a 3×3 convolution kernel and a dilation rate of 3; the third network branch consists of a standard convolution with a 5×5 convolution kernel and a dilated convolution with a 3×3 convolution kernel and a dilation rate of 5;
23) inputting the static images of the transformer substation into the multi-scale regional image extraction network for training:
231) the static images of the transformer substation are input into the deep residual network, which outputs the extracted open fire image features;
232) the open fire image features are input into the multi-scale feature fusion network, and the transformer substation region open fire image feature maps output by its three network branches are added pixel-wise to realize feature fusion, thereby obtaining the characteristic image of the open fire or smoke image.
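For illustration only (not part of the claimed method), the three-branch fusion of steps 22)–23) can be sketched with a naive dilated convolution and pixel-wise addition. This numpy-only sketch uses averaging kernels as stand-ins for the learned weights; the kernel sizes and dilation rates follow the claim:

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Naive 'same'-padded 2-D dilated convolution (cross-correlation)."""
    kh, kw = kernel.shape
    eh = kh + (kh - 1) * (dilation - 1)   # effective kernel height
    ew = kw + (kw - 1) * (dilation - 1)   # effective kernel width
    xp = np.pad(x, ((eh // 2, eh // 2), (ew // 2, ew // 2)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # Sample the padded input at dilated offsets
            patch = xp[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def fuse_branches(feat):
    """Three parallel branches (standard conv 1x1/3x3/5x5, each followed by a
    3x3 dilated conv with rates 1/3/5), fused by adding corresponding pixels."""
    k3 = np.full((3, 3), 1 / 9)  # averaging stand-in for learned weights
    b1 = dilated_conv2d(dilated_conv2d(feat, np.ones((1, 1))), k3, dilation=1)
    b2 = dilated_conv2d(dilated_conv2d(feat, k3), k3, dilation=3)
    b3 = dilated_conv2d(dilated_conv2d(feat, np.full((5, 5), 1 / 25)), k3, dilation=5)
    return b1 + b2 + b3   # pixel-wise feature fusion

feat = np.random.default_rng(0).random((16, 16))
fused = fuse_branches(feat)
```

Because all three branches preserve the spatial size, the fused map keeps the input resolution while mixing small and large receptive fields.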
3. The substation fire video monitoring method according to claim 1, wherein the building and training of the substation fire detection model comprises the following steps:
31) setting a fire detection model of the transformer substation based on a cascade region convolutional neural network structure, wherein the cascade region convolutional neural network structure is formed by cascading two-stage region convolutional neural networks;
32) setting the first-stage regional convolutional neural network of the cascaded structure to comprise one dilated convolutional layer with a 3×3 convolution kernel and a dilation rate of 3, together with parallel classification and regression layers implemented as 1×1 convolutional layers, which respectively output 2 neurons indicating whether the image contains an open fire or smoke region and 4 neurons indicating the position coordinate information of the open fire or smoke region; the open fire or smoke region of interest in the image is obtained from the output of the first-stage classification layer;
33) setting the input of the second-stage regional convolutional neural network as the open fire or smoke region of interest in the image, and its output as the specific category of the region of interest, namely open fire or smoke;
setting the second-stage regional convolutional neural network to comprise one adaptive convolutional layer and two parallel classification and regression layers, wherein the classification layer adopts a 3×3 convolution kernel whose output channels give the category confidence information of open fire and smoke, and the regression layer adopts a 1×1 convolution kernel whose output channels give the position information of the open fire and smoke regions;
34) inputting characteristic images of open fire or smoke images into a fire detection model of the transformer substation for training, wherein the training of the cascade region convolutional neural network model comprises the determination of positive and negative training samples, the definition of a loss function and the learning of model parameters;
341) determining the training samples by an intersection-over-union (IoU) matching method:
for the first-stage regional convolutional neural network, when the IoU between the bounding box of a training sample and the ground-truth bounding box is greater than the threshold 0.5, the sample is a positive sample, otherwise a negative sample; for the second-stage regional convolutional neural network, the IoU threshold between the bounding box of a training sample and the ground-truth bounding box is raised to 0.7;
342) calculating the loss value L between the network predictions and the real samples by the loss function of the cascaded regional convolutional neural network model shown below:

L = L_cls + λ·(L_reg1 + L_reg2)

wherein L_reg1 denotes the regression loss of the first stage, L_reg2 denotes the regression loss of the second stage, L_cls denotes the classification loss, and the parameter λ is a hyperparameter balancing the classification loss and the regression loss of the cascaded regional convolutional neural network model;

the classification loss function L_cls adopts the cross-entropy loss function, defined as:

L_cls = −(1/N) Σ_{i=1}^{N} [ y_i·log(p_i) + (1 − y_i)·log(1 − p_i) ]

wherein N represents the number of training samples, y_i denotes the positive/negative label of the i-th sample (1 for a positive sample, 0 for a negative sample), and p_i represents the predicted confidence that the sample is an open fire region;

the regression loss function L_reg adopts the intersection-over-union loss function, defined as:

L_reg = 1 − |P ∩ G| / |P ∪ G|

wherein P represents the open fire region bounding box predicted by the cascaded regional convolutional neural network model, G represents the ground-truth open fire region bounding box, P ∩ G denotes the intersection of P and G, and P ∪ G denotes the union of P and G;
343) training the cascaded regional convolutional neural network by the BP algorithm, learning the network weights W and bias parameters B and iterating N times until the network parameters reach their optimal values:

W^(l) ← W^(l) − η·∂L/∂W^(l),   B^(l) ← B^(l) − η·∂L/∂B^(l)

wherein l represents the stage index of the cascaded regional convolutional neural network, W^(l) denotes the weight matrix of the l-th stage, B^(l) denotes the bias parameters of the l-th stage, and η denotes the learning rate.
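For illustration only (not part of the claimed method), the sample matching and loss terms of steps 341)–342) can be sketched as follows. Boxes are written as (x1, y1, x2, y2) corners for simplicity, and 1 − IoU is used as one common form of the intersection-over-union loss:

```python
import numpy as np

def iou(p, g):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(p[0], g[0]), max(p[1], g[1])
    ix2, iy2 = min(p[2], g[2]), min(p[3], g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (p[2] - p[0]) * (p[3] - p[1])
    area_g = (g[2] - g[0]) * (g[3] - g[1])
    return inter / (area_p + area_g - inter)

def label_samples(boxes, gt, threshold=0.5):
    """IoU matching of 341): positive (1) if IoU > threshold, else negative (0).
    The threshold is 0.5 for the first stage and 0.7 for the second."""
    return [1 if iou(b, gt) > threshold else 0 for b in boxes]

def cross_entropy(y, p):
    """Classification loss of 342): mean binary cross-entropy."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def iou_loss(p, g):
    """Regression loss: 1 - IoU(P, G)."""
    return 1.0 - iou(p, g)

def total_loss(l_cls, l_reg1, l_reg2, lam=1.0):
    """Combined loss: classification plus lambda-weighted stage regressions."""
    return l_cls + lam * (l_reg1 + l_reg2)
```

A perfectly predicted box gives zero regression loss, and the hyperparameter `lam` trades classification accuracy against localisation accuracy.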
4. The substation fire video monitoring method according to claim 1, wherein the monitoring and early warning of the substation fire video comprises the following steps:
41) extracting a characteristic image from the preprocessed real-time video data of the transformer substation through a multi-scale regional image extraction network;
42) inputting the extracted characteristic images into the trained transformer substation fire detection model, which outputs classification information (c, s) for each open fire or smoke region, wherein c denotes the open fire or smoke category and s denotes the confidence of the corresponding classification, together with the corresponding position information (x, y, w, h), wherein (x, y) and (w, h) respectively denote the coordinates of the centre point of the corresponding open fire or smoke region and the width and height of its bounding box;
43) calculating, from the acquired position information, the areas a_t and a_{t+1} of the open fire or smoke region in two consecutive static frames of the transformer substation video, representing the areas of the corresponding region in the current frame and the next frame respectively, wherein the area a of an open fire or smoke region is calculated from its bounding box information as a = w × h;
44) calculating the fire growth rate v of the open fire or smoke region across consecutive frames as v = a_{t+1} / a_t;
45) judging whether to raise a fire alarm according to the growth rate over f consecutive frames, as follows:
451) if the area growth rate v of the open fire or smoke region is greater than the threshold T, the counter C is incremented by 1, otherwise it is reset to 0: C = C + 1 if v > T, otherwise C = 0, wherein T is set to 1.2 for open fire and 1.5 for smoke;
452) when C = f, namely when the area growth rate of the open fire or smoke region exceeds the set threshold for f consecutive frames, the monitored site is judged to contain a fire and a fire alarm is triggered; otherwise, only a fire warning prompt is issued in the monitoring background.
CN202210297974.5A 2022-03-25 2022-03-25 Transformer substation fire video monitoring method Active CN114399719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210297974.5A CN114399719B (en) 2022-03-25 2022-03-25 Transformer substation fire video monitoring method


Publications (2)

Publication Number Publication Date
CN114399719A true CN114399719A (en) 2022-04-26
CN114399719B CN114399719B (en) 2022-06-17

Family

ID=81234149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210297974.5A Active CN114399719B (en) 2022-03-25 2022-03-25 Transformer substation fire video monitoring method

Country Status (1)

Country Link
CN (1) CN114399719B (en)


Citations (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180068198A1 (en) * 2016-09-06 2018-03-08 Carnegie Mellon University Methods and Software for Detecting Objects in an Image Using Contextual Multiscale Fast Region-Based Convolutional Neural Network
US20190080604A1 (en) * 2017-09-08 2019-03-14 Connaught Electronics Ltd. Freespace detection in a driver assistance system of a motor vehicle with a neural network
CN109670452A (en) * 2018-12-20 2019-04-23 北京旷视科技有限公司 Method for detecting human face, device, electronic equipment and Face datection model
CN109711288A (en) * 2018-12-13 2019-05-03 西安电子科技大学 Remote sensing ship detecting method based on feature pyramid and distance restraint FCN
CN109840905A (en) * 2019-01-28 2019-06-04 山东鲁能软件技术有限公司 Power equipment rusty stain detection method and system
CN109903507A (en) * 2019-03-04 2019-06-18 上海海事大学 A kind of fire disaster intelligent monitor system and method based on deep learning
CN110348390A (en) * 2019-07-12 2019-10-18 创新奇智(重庆)科技有限公司 A kind of training method, computer-readable medium and the system of fire defector model
CN110516609A (en) * 2019-08-28 2019-11-29 南京邮电大学 A kind of fire video detection and method for early warning based on image multiple features fusion
US20200167586A1 (en) * 2018-11-26 2020-05-28 Shanghai United Imaging Intelligence Co., Ltd. Systems and methods for detecting region of interest in image
CN111310615A (en) * 2020-01-23 2020-06-19 天津大学 Small target traffic sign detection method based on multi-scale information and residual error network
CN111444828A (en) * 2020-03-25 2020-07-24 腾讯科技(深圳)有限公司 Model training method, target detection method, device and storage medium
CN111462451A (en) * 2019-11-01 2020-07-28 武汉纺织大学 Straw burning detection alarm system based on video information
CN111639620A (en) * 2020-06-08 2020-09-08 深圳航天智慧城市***技术研究院有限公司 Fire disaster analysis method and system based on visible light image recognition
CN111967480A (en) * 2020-09-07 2020-11-20 上海海事大学 Multi-scale self-attention target detection method based on weight sharing
CN112465821A (en) * 2020-12-22 2021-03-09 中国科学院合肥物质科学研究院 Multi-scale pest image detection method based on boundary key point perception
CN112686190A (en) * 2021-01-05 2021-04-20 北京林业大学 Forest fire smoke automatic identification method based on self-adaptive target detection
CN112861635A (en) * 2021-01-11 2021-05-28 西北工业大学 Fire and smoke real-time detection method based on deep learning
CN112950634A (en) * 2021-04-22 2021-06-11 内蒙古电力(集团)有限责任公司内蒙古电力科学研究院分公司 Method, equipment and system for identifying damage of wind turbine blade based on unmanned aerial vehicle routing inspection
CN113469951A (en) * 2021-06-08 2021-10-01 燕山大学 Hub defect detection method based on cascade region convolutional neural network
EP3940454A1 (en) * 2020-07-16 2022-01-19 Goodrich Corporation Helicopter search light and method for detection and tracking of anomalous or suspicious behaviour
CN114048789A (en) * 2021-07-19 2022-02-15 青岛科技大学 Winebottle fault detection based on improved Cascade R-CNN
CN114092785A (en) * 2021-10-20 2022-02-25 杭州电子科技大学 Outdoor fire smoke detection method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ERIC KE WANG et al.: "Multi-Path Dilated Residual Network for Nuclei Segmentation and Detection", Cells, 23 May 2019 (2019-05-23), pages 1-19 *
ZHANG Xiaoya et al.: "Cascade-structured remote sensing object detection algorithm", Journal of Computer-Aided Design & Computer Graphics, vol. 33, no. 10, 31 October 2021 (2021-10-31), pages 1524-1531 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115050159A (en) * 2022-06-08 2022-09-13 重庆工商大学 Chemical laboratory fire prevention early warning method and system
CN115909645A (en) * 2023-01-09 2023-04-04 西安易诺敬业电子科技有限责任公司 Workshop production safety early warning system and early warning method
CN116311751A (en) * 2023-05-16 2023-06-23 陕西开来机电设备制造有限公司 Underground coal mine use fire prevention and extinguishment electric control system
CN116311751B (en) * 2023-05-16 2023-12-08 陕西开来机电设备制造有限公司 Underground coal mine use fire prevention and extinguishment electric control system

Also Published As

Publication number Publication date
CN114399719B (en) 2022-06-17

Similar Documents

Publication Publication Date Title
CN114399719B (en) Transformer substation fire video monitoring method
CN108537215B (en) Flame detection method based on image target detection
CN111126136B (en) Smoke concentration quantification method based on image recognition
CN110688925B (en) Cascade target identification method and system based on deep learning
CN111444939B (en) Small-scale equipment component detection method based on weak supervision cooperative learning in open scene of power field
CN111368690B (en) Deep learning-based video image ship detection method and system under influence of sea waves
Su et al. RCAG-Net: Residual channelwise attention gate network for hot spot defect detection of photovoltaic farms
CN113469050B (en) Flame detection method based on image fine classification
CN111401419A (en) Improved RetinaNet-based employee dressing specification detection method
CN110827505A (en) Smoke segmentation method based on deep learning
CN112861635A (en) Fire and smoke real-time detection method based on deep learning
CN112163572A (en) Method and device for identifying object
CN114648714A (en) YOLO-based workshop normative behavior monitoring method
CN115439458A (en) Industrial image defect target detection algorithm based on depth map attention
CN114155474A (en) Damage identification technology based on video semantic segmentation algorithm
Yandouzi et al. Investigation of combining deep learning object recognition with drones for forest fire detection and monitoring
CN115984543A (en) Target detection algorithm based on infrared and visible light images
CN115526852A (en) Molten pool and splash monitoring method in selective laser melting process based on target detection and application
KR102602439B1 (en) Method for detecting rip current using CCTV image based on artificial intelligence and apparatus thereof
CN114596273B (en) Intelligent detection method for multiple defects of ceramic substrate by using YOLOV4 network
CN116994161A (en) Insulator defect detection method based on improved YOLOv5
CN111191575B (en) Naked flame detection method and system based on flame jumping modeling
CN115311601A (en) Fire detection analysis method based on video analysis technology
CN112967335A (en) Bubble size monitoring method and device
CN112465821A (en) Multi-scale pest image detection method based on boundary key point perception

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant