CN113128422B - Image smoke and fire detection method and system for deep neural network - Google Patents

Image smoke and fire detection method and system for deep neural network

Info

Publication number
CN113128422B
CN113128422B (application CN202110441498.5A)
Authority
CN
China
Prior art keywords
image
sample
network
module
pyrotechnic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110441498.5A
Other languages
Chinese (zh)
Other versions
CN113128422A (en)
Inventor
陈秀祥
张大福
李秋华
胡俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Helpsoft Industry Co ltd
Original Assignee
Chongqing Helpsoft Industry Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Helpsoft Industry Co ltd filed Critical Chongqing Helpsoft Industry Co ltd
Priority to CN202110441498.5A priority Critical patent/CN113128422B/en
Publication of CN113128422A publication Critical patent/CN113128422A/en
Application granted granted Critical
Publication of CN113128422B publication Critical patent/CN113128422B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • G06V20/188Vegetation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/80Geometric correction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30188Vegetation; Agriculture
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/10Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
    • Y02A40/28Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture specially adapted for farming

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Fire-Detection Mechanisms (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention relates to the field of image processing, and in particular to an image smoke and fire detection method and system based on a deep neural network. The method comprises the following steps: draw the boundary points of the smoke and fire regions on the sample images in an image sample set, generate the corresponding mask images, and multiply each mask image with its sample image to obtain region images containing only the smoke and fire regions; construct a smoke and fire target simulation framework based on a conditional deep convolutional generative adversarial network, and feed the region images into the network for training, fitting it to generate new smoke and fire samples; obtain initial images of the forest fire monitoring system that contain no smoke and fire target, and fuse them with the smoke and fire samples of step S2 to construct a training data set. The system comprises a camera module, an image set module, an image operation module, a processing module, a neural network building module and a data operation module. With the invention, no manual ignition is needed to acquire smoke and fire samples when forest fire monitoring is first deployed, which is more convenient, avoids the risk of starting a forest fire by manual ignition, and saves cost.

Description

Image smoke and fire detection method and system for deep neural network
Technical Field
The invention relates to the technical field of image processing, in particular to an image smoke and fire detection method and system of a deep neural network.
Background
Forest resources are vital natural resources, and the losses caused by forest fires every year are enormous, so forest fire prevention is critically important. Traditional forest fire prevention relies mainly on manual inspection, which suffers from poor timeliness, low efficiency and high cost. To overcome these shortcomings, detection systems for forest fire prevention based on computer technology and photoelectric sensors, which can scan and monitor forests over large areas, for long periods and at high speed, have been deployed on a large scale. Accordingly, automatic forest smoke and fire detection and positioning techniques based on digital image and video processing are widely applied, greatly improving the efficiency and precision of forest smoke and fire inspection and early warning.
However, traditional digital image processing based on hand-crafted features suffers from poor adaptability and a high false alarm rate when processing forest inspection images and videos: it adapts poorly to different seasons and illumination conditions, and has difficulty distinguishing cloud, fog, smoke and fire.
Given this poor adaptability, deep neural networks, with their strong fitting capability, have been widely applied to image processing. Applying a deep neural network to smoke and fire detection in forest inspection video images greatly improves detection precision and reduces the false alarm rate. During training, however, a deep neural network needs a large number of smoke and fire image samples from different scenes to improve its generalization capability. Acquiring large numbers of smoke and fire images of actual scenes is very difficult; in particular, for newly deployed monitoring scenes, manual ignition is often relied on to acquire smoke and fire samples, at extremely high risk and cost. At deployment time, a conventional deep neural network also demands large computing resources, typically a GPU server; this is costly and makes it difficult to integrate the network into the camera front end for low-cost, real-time, efficient end-to-end detection.
Disclosure of Invention
The invention aims to provide an image smoke and fire detection method based on a deep neural network, in order to eliminate the risk and high cost of acquiring smoke and fire samples by manual ignition.
The image smoke and fire detection method of the deep neural network in the scheme comprises the following steps of:
Step S1: acquire a number of sample images containing smoke and fire targets and form them into an image sample set; draw the boundary points of the smoke and fire region on each sample image, generate the corresponding mask image, and multiply the mask image with the sample image to obtain a region image containing only the smoke and fire region.
Step S2: construct a smoke and fire target simulation framework based on a conditional deep convolutional generative adversarial network; feed the region images into the network for training, fitting it to generate new smoke and fire samples.
Step S3: obtain initial images of the forest fire monitoring system that contain no smoke and fire target, and fuse them with the smoke and fire samples of step S2 to construct a training data set.
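As a minimal sketch of the masking operation in step S1 (NumPy assumed; for simplicity the binary mask is given directly rather than rasterized from drawn boundary points):

```python
import numpy as np

def region_image(sample: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Multiply an H x W binary mask with an H x W x 3 sample image,
    keeping only the pixels inside the smoke/fire boundary."""
    return sample * mask[..., None]  # broadcast the mask over the channels

# toy 4x4 RGB image with a 2x2 "smoke/fire" region marked in the mask
img = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1
region = region_image(img, mask)
```

Pixels outside the mask become zero, so the result contains only the smoke and fire region.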
The beneficial effect of this scheme is:
the method comprises the steps of forming an image sample set by using an obtained sample image containing smoke, namely, shooting an image of a fire disaster in an actual environment, generating a mask image by drawing out boundary points of a smoke and fire area from the sample image, multiplying the mask image by the sample image to obtain an area image of the smoke and fire area, training to obtain a smoke and fire sample, fusing the smoke and fire sample to an initial image which does not generate a fire disaster to obtain a training data set, wherein the initial image is an image of the to-be-detected forest environment when the fire disaster is not transmitted, taking the image in the training data set as the smoke and fire sample, and acquiring the smoke and fire sample without manual ignition at the forest environment needing to monitor the fire disaster.
Further, the step S2 includes the following sub-steps:
s2.1: constructing a generation network in a generation countermeasure network by adopting a deconvolution neural network, inputting random noise, and outputting a simulation sample;
s2.2: the sample image and the generated simulation sample are used for training a discrimination network in the generation countermeasure network, and the probability that the simulation sample belongs to the smoke and fire type is output;
s2.3: updating and generating network parameters through a back propagation algorithm, and generating a group of new simulation samples for judging training of the network;
s2.4: repeating S2.1, S2.2 and S2.3, training the generating network and the judging network to enable the data distribution of the generated simulation sample and the data distribution of the sample image to be 90% identical, and enabling the judging network to be incapable of distinguishing the simulation sample and the sample image so as to randomly generate the simulation image of the pyrotechnic target.
The beneficial effects are that: after the network parameters are updated and generated through a back propagation algorithm and the simulation samples are generated, training of a discrimination network is performed, the discrimination network is utilized to judge the simulation samples, and a simulation image of the pyrotechnic target is generated, so that accuracy of the simulation image of the pyrotechnic target is improved.
Further, the step S3 includes the following substeps:
s3.1: scaling sample images in the image sample set sequentially according to actual scaling parameters of the camera;
s3.2: overlapping the zoomed sample image with an initial image which is recorded by a monitoring system and does not contain the pyrotechnic target at a randomly selected position, and recording the regional information of the pyrotechnic target so as to complete the image fusion operation;
s3.3: and after the fusion operation of all the images in sequence, the construction of the training data set is completed.
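A minimal sketch of the overlay in S3.2, assuming NumPy and treating zero pixels of the region image as transparent; the recorded box is the "region information" of the smoke and fire target:

```python
import numpy as np

rng = np.random.default_rng(42)

def fuse(background: np.ndarray, region: np.ndarray):
    """Paste an already-scaled smoke/fire region image onto the
    background at a random position; return the fused image and the
    recorded region information (x, y, w, h)."""
    H, W = background.shape[:2]
    h, w = region.shape[:2]
    y = int(rng.integers(0, H - h + 1))
    x = int(rng.integers(0, W - w + 1))
    fused = background.copy()
    patch = fused[y:y + h, x:x + w]
    nonzero = region.sum(axis=-1) > 0   # only overlay real smoke/fire pixels
    patch[nonzero] = region[nonzero]
    return fused, (x, y, w, h)

bg = np.zeros((32, 32, 3), dtype=np.uint8)     # initial image, no fire
fire = np.full((8, 8, 3), 255, dtype=np.uint8)  # scaled region image
fused, box = fuse(bg, fire)
```

Repeating this over all initial and region images yields the fused training data set of S3.3.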
The beneficial effects are that: and scaling the sample image according to the actual scaling parameters of the camera, and then performing superposition fusion, so that the accuracy of the pyrotechnic region after the image fusion is improved.
Further, the method also comprises the following steps:
s4, scaling the fused image in the training data set according to actual scaling parameters of the camera;
s5, performing photometric distortion operation on the scaled fusion image to generate brightness samples under different illumination;
s6, carrying out geometric distortion operation on the brightness sample subjected to the optical distortion to obtain a processed sample, wherein the geometric distortion operation comprises stretching, rotating and translating operations;
step S7, randomly extracting two processed samples to be fused according to a preset proportionality coefficient, and repeating the fusion operation for a plurality of times;
and S8, transmitting the training data set processed in the steps S4-S7 into a preset depth network for smoke and fire detection training to obtain a complete weight model.
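Steps S4 to S7 can be sketched as simple array operations (NumPy assumed; rotation and stretching are omitted for brevity, and `lam` stands in for the preset proportionality coefficient):

```python
import numpy as np

def photometric(img, gain):
    # S5: vary brightness to simulate different illumination
    return np.clip(img.astype(np.float32) * gain, 0, 255)

def geometric(img, dy, dx):
    # S6: a translation; stretching and rotation would be added similarly
    return np.roll(img, (dy, dx), axis=(0, 1))

def mix(a, b, lam):
    # S7: fuse two processed samples by a preset proportionality coefficient
    return lam * a + (1.0 - lam) * b

img = np.full((16, 16, 3), 100, dtype=np.uint8)  # a fused image from S3
bright = photometric(img, 1.5)                   # brightness sample
shifted = geometric(bright, 2, 3)                # processed sample
mixed = mix(bright, shifted, 0.5)                # randomly paired fusion
```

Applying these transforms repeatedly, with random parameters, widens the coverage of smoke and fire appearances before the data set is sent for detection training in S8.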
The beneficial effects are that: the images in the training data set are fusion images, the fusion images in the training data set are processed and then repeatedly fused, so that the coverage degree of the firework samples with different brightness to the actual firework forms is higher, and the accuracy of detecting the firework by the complete weight model is improved.
Further, the method also comprises the following steps:
s9, compressing the 16-bit floating point precision of the complete weight model in the step S8 to 8-bit integer data precision;
and S10, reconstructing and optimizing a preset depth network structure.
The beneficial effects are that: and the complete weight model is compressed, so that the operation speed is improved, and the operation cost is reduced.
Further, the step S10 includes the following substeps:
s10.1: by analyzing the preset depth network model, useless output layers in the preset depth network are eliminated;
s10.2: merging a convolution layer, a batch normalization layer and a rectification linear unit in a preset depth network into one layer, and vertically integrating a network structure;
s10.3: and merging layers which are input into the same tensor and execute the same operation in the preset depth network, and horizontally combining the network structure.
The beneficial effects are that: and the accuracy of fire detection is improved through reconstruction and optimization of a preset depth network.
The image smoke and fire detection system of the deep neural network comprises a camera module, an image set module, an image operation module, a processing module, a neural network building module and a data operation module;
the system comprises an image acquisition module, an image collection module, a processing module, a neural network construction module, a processing module, a condition depth convolution module, a training data set and a full-scale weight model, wherein the image acquisition module is used for shooting an initial image of a forest which does not contain a firework target, the image collection module is used for acquiring a plurality of sample images with fireworks, the image operation module is used for drawing boundary points of a firework area on the sample images and generating a mask image, the processing module is used for multiplying the mask image with the sample images to obtain an area image which only contains the firework area, the neural network construction module is used for constructing a condition depth convolution to generate a firework target simulation frame of an countermeasure network, the neural network construction module is used for adding the area image into the condition depth convolution to generate the countermeasure network to train and fitting to generate a new firework sample, the processing module is used for acquiring the initial image and fusing with the firework sample to construct a training data set, the data set is used for scaling the fused image in the training data set with the training data set, the data operation module is used for scaling the fused image in the training data set is performed with the actual scaling parameters of the camera, the fused image after scaling is subjected to perform photometric distortion operation to generate brightness samples under different illumination, the brightness samples after the photometric distortion operation are subjected to geometry distortion, the brightness samples are subjected to geometry distortion to obtain the processing samples, the processing samples are randomly acquire the processing samples, and the two processing samples are fused according to preset scale factors 
and the preset parameters are fused according to the preset scale factors, and the preset.
The beneficial effect of this scheme is:
the method comprises the steps of obtaining a sample image containing a firework target through an image set module, obtaining an original image of a forest fire monitoring fire which does not occur at an initial moment through a camera module, obtaining an area image of an independent firework area from the sample image, adding the area image to a condition depth volume data to generate a firework sample in an countermeasure network, fusing the initial image and the firework sample to form a training data set, sequentially performing scaling, photometric distortion and geometric distortion treatment on a fused image in the training data set, fusing and training two random treatment samples to obtain a complete weight model, and not needing to be ignited to form the firework sample at the initial monitoring moment of the forest fire.
Further, the processing module compresses the 16-bit floating point precision of the complete weight model to 8-bit integer data precision, and reconstructs and optimizes the structure of the preset depth network.
The beneficial effects are that: the processing module compresses the complete weight model accurately, and carries out structural reconstruction and optimization of a preset depth network, so that the accuracy of subsequent fire detection is improved.
Drawings
FIG. 1 is a block flow diagram of a method for detecting smoke and fire in an image of a deep neural network according to an embodiment of the present invention;
FIG. 2 is a schematic block diagram of an image smoke and fire detection system of a deep neural network in accordance with an embodiment of the present invention;
FIG. 3 is a schematic block diagram of a training data set training process in an image smoke and fire detection system of a deep neural network according to an embodiment of the present invention;
FIG. 4 is a flowchart of training the YOLO V4 deep neural network in an image smoke and fire detection method of the deep neural network according to an embodiment of the present invention.
Detailed Description
Further details are provided below with reference to the specific embodiments.
Example 1
An image smoke and fire detection system of a deep neural network is shown in fig. 2. The system comprises a camera module, an image set module, an image operation module, a processing module, a neural network building module and a data operation module. The camera module can use existing camera equipment for forest fire monitoring; the image set module can use existing hardware storage equipment; the image operation module can use existing software; the processing module can use an existing four-core 64-bit ARM CPU with a 128-core integrated NVIDIA GPU and 4 GB of LPDDR4 memory.
The image collection module is used for acquiring a plurality of sample images with fireworks; the image operation module is used for drawing out boundary points of a smoke and fire area on the sample image and generating a mask image, and can process the sample image through the existing image processing software; the processing module multiplies the mask image with the sample image to obtain a region image containing only the pyrotechnic region.
The neural network building module is used for constructing a smoke and fire target simulation framework based on a conditional deep convolutional generative adversarial network; it feeds the region images into the network for training, fitting it to generate new smoke and fire samples. The processing module acquires the initial images and fuses them with the smoke and fire samples to form a training data set. The data operation module scales the fused images in the training data set according to the actual zoom parameters of the camera, performs photometric distortion on the scaled fused images to generate brightness samples under different illumination, and performs geometric distortion on the photometrically distorted brightness samples to obtain processed samples. The processing module repeatedly extracts two processed samples at random and fuses them according to a preset proportionality coefficient, then sends the resulting training data set into the preset deep network for smoke and fire detection training to obtain a complete weight model. Finally, the processing module compresses the 16-bit floating point precision of the complete weight model to 8-bit integer data precision and reconstructs and optimizes the structure of the preset deep network.
The image smoke and fire detection method of the image smoke and fire detection system based on the depth neural network comprises the following steps as shown in fig. 1:
step S1, acquiring a plurality of sample images with pyrotechnic objects, forming an image sample set from the sample images containing the pyrotechnic objects, drawing out boundary points of pyrotechnic areas on the sample images in the image sample set, generating corresponding mask images, and multiplying the mask images with the sample images to obtain an area image only containing the pyrotechnic areas;
Step S2: construct a smoke and fire target simulation framework based on a conditional deep convolutional generative adversarial network; feed the region images into the network for training, fitting it to generate new smoke and fire samples. This comprises the following substeps. S2.1: build the generation network of the generative adversarial network with a deconvolutional neural network; input random noise and output simulation samples. S2.2: use the sample images and the generated simulation samples to train the discrimination network of the generative adversarial network, which outputs the probability that a simulation sample belongs to the smoke and fire class. S2.3: update the generation network parameters by backpropagation and generate a new group of simulation samples for training the discrimination network. S2.4: repeat S2.1, S2.2 and S2.3, training the generation and discrimination networks until the data distribution of the generated simulation samples is 90% identical to that of the sample images and the discrimination network cannot distinguish them, so that simulation images of the smoke and fire target can be generated at random.
Step S3: obtain initial images of the forest fire monitoring system that contain no smoke and fire target, and fuse them with the smoke and fire samples of step S2 to construct the training data set, as shown in FIG. 3. This comprises the following substeps. S3.1: scale the sample images in the image sample set in turn according to the actual zoom parameters of the camera. S3.2: overlay the scaled sample image at a randomly selected position on an initial image, recorded by the monitoring system, that contains no smoke and fire target, and record the region information of the smoke and fire target to complete the image fusion operation. S3.3: after all images have been fused in sequence, the construction of the training data set is complete.
s4, scaling the fused image in the training data set according to actual scaling parameters of the camera;
s5, performing photometric distortion operation on the scaled fusion image to generate brightness samples under different illumination;
s6, carrying out geometric distortion operation on the brightness sample subjected to the optical distortion to obtain a processed sample, wherein the geometric distortion operation comprises stretching, rotating and translating operations;
step S7, randomly extracting two processed samples to be fused according to a preset proportionality coefficient, and repeating the fusion operation for a plurality of times;
Step S8: send the training data set processed in steps S4 to S7 into the preset deep network, as shown in FIG. 4, for smoke and fire detection training to obtain a complete weight model. The preset deep network is the existing YOLO V4 deep neural network, whose structure consists of the backbone network CSPDarknet53, a Neck structure and a Head structure. The backbone network continuously extracts feature maps, the Neck splices the extracted feature maps, and the Head predicts the class and position of the detected object.
s9, compressing the 16-bit floating point precision of the complete weight model in the step S8 to 8-bit integer data precision;
step S10, reconstructing and optimizing a preset depth network structure, and during reconstruction and optimization, performing the following substeps:
s10.1: analyzing a preset depth network model, wherein the analysis is realized by using the self-contained function of the existing software, and a useless output layer in the preset depth network is eliminated;
s10.2: merging a convolution layer, a batch normalization layer and a rectification linear unit in a preset depth network into one layer, and vertically integrating a network structure;
s10.3: and merging layers which are input into the same tensor and execute the same operation in the preset depth network, and horizontally combining the network structure.
With the forest smoke and fire detection of the first embodiment, no manual ignition is needed to create smoke and fire samples when detecting in a new area, which saves labor, makes use more convenient, and achieves low-cost, real-time and efficient detection.
Example two
The difference from the first embodiment is that the image smoke and fire detection system of the deep neural network further includes a color recognition module. The color recognition module recognizes color information at a number of preset position points in the sample image and sends it to the processing module; existing PS software can be used for the recognition. The preset position points are placed at the intersections of a grid, whose density is set so that each grid cell is one square millimeter. The processing module receives the color information and counts the color types, finds the color type with the maximum count, compares it with preset types, and thereby judges the shooting time of the sample image, which is either daytime or nighttime: for example, if the color type with the maximum count is green, daytime is judged; if it is black, nighttime is judged. The processing module then checks whether the shooting times of the sample images include both daytime and nighttime. If they do, the sample images are formed into the image sample set and sent to the image operation module; if only daytime or only nighttime is present, forming the sample images into the image sample set is suspended until both are present.
In step S1, the sample image is acquired, comprising the sub-steps of:
s1.1, identifying color information at a plurality of preset position points in a sample image, arranging the preset position points in a grid mode, counting color types of the color information to obtain count values, comparing count values of different color types, judging color types corresponding to the maximum count value, comparing the color types corresponding to the maximum count value with the preset types, and judging shooting time information of the sample image;
s1.2, forming the sample images into an image sample set when the shooting time information of the plurality of sample images contains both the daytime time and the nighttime time; and suspending formation of the image sample set when the shooting time information contains only the daytime time or only the nighttime time, until the shooting time information contains both the daytime time and the nighttime time, whereupon the sample images are formed into the image sample set.
Because the brightness, color and other information of the pyrotechnic target in a photographed sample image differ between daytime and nighttime, subsequent processing of the pyrotechnic target would otherwise diverge from actual smoke and fire. According to the method, the color information of a plurality of position points in the sample image is identified, the count of each color type is determined, the shooting time information is determined from the color type with the maximum count, and whether the sample images are formed into an image sample set is judged according to that time information. This improves the completeness of the pyrotechnic target forms contained in the sample images obtained before smoke and fire detection, so that the processed simulated smoke and fire more closely resembles actual smoke and fire.
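As an illustrative sketch (not part of the claimed invention), the day/night judgment described above can be expressed in a few lines. The color thresholds and the `classify_color` helper are assumptions for illustration, since the embodiment specifies only that green indicates daytime and black indicates nighttime:

```python
# Illustrative sketch of the day/night judgment: colors sampled at grid
# intersection points are binned into coarse types, counted, and the
# dominant type decides the shooting time. Thresholds are assumptions.
from collections import Counter

def classify_color(rgb):
    """Map an (R, G, B) triple to a coarse color type (assumed thresholds)."""
    r, g, b = rgb
    if r < 50 and g < 50 and b < 50:
        return "black"
    if g > r and g > b:
        return "green"
    return "other"

def shooting_time(grid_colors):
    """Return 'daytime' or 'nighttime' from colors sampled at grid points."""
    counts = Counter(classify_color(c) for c in grid_colors)
    dominant, _ = counts.most_common(1)[0]
    if dominant == "green":
        return "daytime"
    if dominant == "black":
        return "nighttime"
    return "unknown"
```

A sample set would then only be forwarded once images classified as both daytime and nighttime are present.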
The foregoing is merely exemplary embodiments of the present invention, and specific structures and features that are well known in the art are not described in detail herein. It should be noted that modifications and improvements can be made by those skilled in the art without departing from the structure of the present invention, and these should also be considered as the scope of the present invention, which does not affect the effect of the implementation of the present invention and the utility of the patent. The protection scope of the present application shall be subject to the content of the claims, and the description of the specific embodiments and the like in the specification can be used for explaining the content of the claims.

Claims (7)

1. An image smoke and fire detection method of a deep neural network, characterized by comprising the following steps:
step S1, acquiring a plurality of sample images with pyrotechnic objects, forming an image sample set from the sample images containing the pyrotechnic objects, drawing out boundary points of pyrotechnic areas on the sample images in the image sample set, generating corresponding mask images, and multiplying the mask images with the sample images to obtain an area image only containing the pyrotechnic areas;
step S2, constructing a pyrotechnic target simulation framework of a conditional deep convolutional generative adversarial network, sending the area image containing only the pyrotechnic area into the conditional deep convolutional generative adversarial network for training, and fitting to generate new pyrotechnic samples;
step S3, acquiring an initial image of a forest fire monitoring system which does not contain a firework target, and fusing the initial image with the firework sample in the step S2 to construct a training data set;
step S4, scaling the fused images in the training data set according to the actual scaling parameters of the camera;
step S5, performing a photometric distortion operation on the scaled fused images to generate brightness samples under different illumination;
step S6, performing a geometric distortion operation on the brightness samples subjected to the photometric distortion to obtain processed samples, wherein the geometric distortion operation includes stretching, rotation and translation operations;
step S7, randomly extracting two processed samples to be fused according to a preset proportionality coefficient, and repeating the fusion operation for a plurality of times;
step S8, transmitting the training data set processed in steps S4-S7 into a preset depth network for smoke and fire detection training to obtain a complete weight model.
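As an illustration of the augmentation chain in steps S4-S7, the following library-free sketch operates on images represented as 2-D lists of gray values in [0, 1]. The function names, nearest-neighbour interpolation, and zero-padded translation are assumptions, since the claim leaves the concrete operations to the camera parameters and a preset proportionality coefficient:

```python
# Illustrative sketch of steps S4-S7 on gray-value images (2-D lists).

def scale_nearest(img, factor):
    """S4: nearest-neighbour zoom by the camera's scaling parameter."""
    h, w = len(img), len(img[0])
    nh, nw = max(1, int(h * factor)), max(1, int(w * factor))
    return [[img[int(y / factor)][int(x / factor)] for x in range(nw)]
            for y in range(nh)]

def photometric_distort(img, gain):
    """S5: brightness change simulating different illumination (clipped at 1)."""
    return [[min(1.0, p * gain) for p in row] for row in img]

def translate(img, dx):
    """S6 (one geometric distortion): shift pixels right by dx, zero-pad."""
    return [([0.0] * dx + row)[:len(row)] for row in img]

def mix(a, b, coeff):
    """S7: fuse two processed samples with a preset proportionality coefficient."""
    return [[coeff * pa + (1 - coeff) * pb for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]
```

In a practical pipeline these operations would be applied with randomly drawn parameters, and the S7 fusion repeated a plurality of times as the claim states.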
2. The method for image smoke detection of a deep neural network of claim 1, wherein: said step S2 comprises the sub-steps of:
s2.1: constructing a generating network in the generative adversarial network by adopting a deconvolutional neural network, which takes random noise as input and outputs simulation samples;
s2.2: training a discrimination network in the generative adversarial network with the sample images and the generated simulation samples, the discrimination network outputting the probability that a simulation sample belongs to the smoke and fire type;
s2.3: updating the parameters of the generating network through a back propagation algorithm, and generating a group of new simulation samples for training of the discrimination network;
s2.4: repeating S2.1, S2.2 and S2.3 to train the generating network and the discrimination network until the data distribution of the generated simulation samples is 90% identical to that of the sample images and the discrimination network cannot distinguish the simulation samples from the sample images, so that simulation images of the pyrotechnic target can be generated at random.
3. The method for image smoke detection of deep neural network according to claim 2, wherein: said step S3 comprises the sub-steps of:
s3.1: scaling sample images in the image sample set sequentially according to actual scaling parameters of the camera;
s3.2: overlapping the zoomed sample image with an initial image which is recorded by a monitoring system and does not contain the pyrotechnic target at a randomly selected position, and recording the regional information of the pyrotechnic target so as to complete the image fusion operation;
s3.3: and after the fusion operation of all the images in sequence, the construction of the training data set is completed.
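Sub-step S3.2 (overlapping a scaled pyrotechnic image with an initial image at a randomly selected position while recording the region information) can be sketched as follows; representing images as 2-D gray-value lists and recording the region as a bounding box are assumptions for illustration:

```python
# Illustrative sketch of S3.2: paste a pyrotechnic patch into a background
# image at a random position and record the target's region information.
import random

def fuse(background, patch, rng=random):
    """Overlay `patch` on `background`; return fused image and (x, y, w, h)."""
    bh, bw = len(background), len(background[0])
    ph, pw = len(patch), len(patch[0])
    x = rng.randrange(bw - pw + 1)  # random top-left corner, fully inside
    y = rng.randrange(bh - ph + 1)
    fused = [row[:] for row in background]  # copy; keep the original intact
    for j in range(ph):
        for i in range(pw):
            fused[y + j][x + i] = patch[j][i]
    return fused, (x, y, pw, ph)
```

The recorded (x, y, w, h) tuple would serve as the ground-truth annotation when the fused image enters the training data set.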
4. The method for image smoke detection of a deep neural network of claim 1, wherein the method further comprises the following steps:
step S9, compressing the 16-bit floating point precision of the complete weight model in step S8 to 8-bit integer data precision;
step S10, reconstructing and optimizing the preset depth network structure.
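Step S9's precision compression can be illustrated with symmetric linear quantisation to 8-bit integers; the per-tensor scale and round-to-nearest scheme are assumptions, as the claim names only the source and target precisions:

```python
# Illustrative sketch of step S9: symmetric linear quantisation of a weight
# tensor to 8-bit integers, such that w ~= q * scale.

def quantize_int8(weights):
    """Return (int8 values, scale); scale maps the largest |w| to 127."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floating-point weights from the int8 values."""
    return [v * scale for v in q]
```

A deployment toolchain would typically also calibrate activation ranges, which this per-tensor sketch omits.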
5. The method for image smoke detection of a deep neural network of claim 4, wherein: the step S10 comprises the following sub-steps:
s10.1: by analyzing the preset depth network model, useless output layers in the preset depth network are eliminated;
s10.2: merging a convolution layer, a batch normalization layer and a rectification linear unit in a preset depth network into one layer, and vertically integrating a network structure;
s10.3: and merging layers which are input into the same tensor and execute the same operation in the preset depth network, and horizontally combining the network structure.
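The vertical integration of s10.2 rests on the fact that a batch-normalisation layer following a convolution can be folded into the convolution's weights and bias, so the two layers execute as one. The sketch below shows this for a 1x1 convolution (a per-channel linear map), an assumed simplification; the same algebra applies channel-wise to larger kernels:

```python
# Illustrative sketch of s10.2: fold BN(y) = gamma*(y-mean)/sqrt(var+eps)+beta
# into the preceding convolution w*x + b, yielding a single equivalent layer.
import math

def fold_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Return (w', b') such that w'*x + b' == BN(w*x + b) for all x."""
    s = gamma / math.sqrt(var + eps)
    return w * s, beta + (b - mean) * s

def conv_bn(x, w, b, gamma, beta, mean, var, eps=1e-5):
    """Reference: the unfused convolution followed by batch normalisation."""
    y = w * x + b
    return gamma * (y - mean) / math.sqrt(var + eps) + beta
```

The rectified linear unit can then be fused in as well, since applying it after the folded layer is equivalent to applying it after the original pair.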
6. An image smoke and fire detection system of a deep neural network, characterized by comprising a camera module, an image set module, an image operation module, a processing module, a neural network construction module and a data operation module;
the camera module is used for shooting an initial image of a forest which does not contain a pyrotechnic target; the image set module is used for acquiring a plurality of sample images with fireworks; the image operation module is used for drawing out boundary points of the pyrotechnic region on the sample images and generating a mask image; the processing module multiplies the mask image with the sample image to obtain a region image containing only the pyrotechnic region; the neural network construction module is used for constructing a pyrotechnic target simulation framework of a conditional deep convolutional generative adversarial network, and sends the region image containing only the pyrotechnic region into the network for training, fitting to generate new pyrotechnic samples; the processing module acquires the initial image and fuses it with the pyrotechnic samples to form a training data set; the data operation module scales the fused images in the training data set according to the actual scaling parameters of the camera, performs a photometric distortion operation on the scaled fused images to generate brightness samples under different illumination, performs a geometric distortion operation on the brightness samples subjected to the photometric distortion to obtain processed samples, and randomly extracts two processed samples to be fused according to a preset proportionality coefficient, repeating the fusion operation a plurality of times; and the processed training data set is sent into a preset depth network for smoke and fire detection training to obtain a complete weight model.
7. The deep neural network image smoke detection system of claim 6, wherein: the processing module compresses the 16-bit floating point precision of the complete weight model to 8-bit integer data precision, and reconstructs and optimizes the structure of the preset depth network.
CN202110441498.5A 2021-04-23 2021-04-23 Image smoke and fire detection method and system for deep neural network Active CN113128422B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110441498.5A CN113128422B (en) 2021-04-23 2021-04-23 Image smoke and fire detection method and system for deep neural network


Publications (2)

Publication Number Publication Date
CN113128422A CN113128422A (en) 2021-07-16
CN113128422B true CN113128422B (en) 2024-03-29

Family

ID=76779275

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110441498.5A Active CN113128422B (en) 2021-04-23 2021-04-23 Image smoke and fire detection method and system for deep neural network

Country Status (1)

Country Link
CN (1) CN113128422B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114664047A (en) * 2022-05-26 2022-06-24 长沙海信智能***研究院有限公司 Expressway fire identification method and device and electronic equipment
CN116468974B (en) * 2023-06-14 2023-10-13 华南理工大学 Smoke detection method, device and storage medium based on image generation

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109118467A (en) * 2018-08-31 2019-01-01 武汉大学 Based on the infrared and visible light image fusion method for generating confrontation network
CN109409256A (en) * 2018-10-10 2019-03-01 东南大学 A kind of forest rocket detection method based on 3D convolutional neural networks
CN109460708A (en) * 2018-10-09 2019-03-12 东南大学 A kind of Forest fire image sample generating method based on generation confrontation network
CN109886227A (en) * 2019-02-27 2019-06-14 哈尔滨工业大学 Inside fire video frequency identifying method based on multichannel convolutive neural network
CN110991242A (en) * 2019-11-01 2020-04-10 武汉纺织大学 Deep learning smoke identification method for negative sample excavation
IT201800009442A1 (en) * 2018-10-15 2020-04-15 Laser Navigation Srl Control and management system of a process within an environment through artificial intelligence techniques and related method
CN111145275A (en) * 2019-12-30 2020-05-12 重庆市海普软件产业有限公司 Intelligent automatic control forest fire prevention monitoring system and method
EP3671261A1 (en) * 2018-12-21 2020-06-24 Leica Geosystems AG 3d surveillance system comprising lidar and multispectral imaging for object classification
CN111754446A (en) * 2020-06-22 2020-10-09 怀光智能科技(武汉)有限公司 Image fusion method, system and storage medium based on generation countermeasure network
CN111882514A (en) * 2020-07-27 2020-11-03 中北大学 Multi-modal medical image fusion method based on double-residual ultra-dense network
CN112270207A (en) * 2020-09-27 2021-01-26 青岛邃智信息科技有限公司 Smoke and fire detection method in community monitoring scene
CN112507865A (en) * 2020-12-04 2021-03-16 国网山东省电力公司电力科学研究院 Smoke identification method and device
CN112633103A (en) * 2020-12-15 2021-04-09 中国人民解放军海军工程大学 Image processing method and device and electronic equipment


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Smoke recognition algorithm based on a lightweight convolutional neural network; Yuan Fei et al.; Journal of Southwest Jiaotong University; Vol. 55 (No. 05); 1111-1116 *

Also Published As

Publication number Publication date
CN113128422A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
US20210319561A1 (en) Image segmentation method and system for pavement disease based on deep learning
CN113128422B (en) Image smoke and fire detection method and system for deep neural network
CN111797890A (en) Method and system for detecting defects of power transmission line equipment
CN108319926A (en) A kind of the safety cap wearing detecting system and detection method of building-site
CN111985365A (en) Straw burning monitoring method and system based on target detection technology
CN111832398B (en) Unmanned aerial vehicle image distribution line pole tower ground wire broken strand image detection method
CN109815904B (en) Fire identification method based on convolutional neural network
KR102346676B1 (en) Method for creating damage figure using the deep learning-based damage image classification of facility
CN112215182B (en) Smoke identification method suitable for forest fire
CN111695493B (en) Method and system for detecting hidden danger of power transmission line
CN113469050A (en) Flame detection method based on image subdivision classification
CN111145222A (en) Fire detection method combining smoke movement trend and textural features
CN112184773A (en) Helmet wearing detection method and system based on deep learning
CN111539325A (en) Forest fire detection method based on deep learning
CN114648714A (en) YOLO-based workshop normative behavior monitoring method
CN111723656B (en) Smog detection method and device based on YOLO v3 and self-optimization
CN116503318A (en) Aerial insulator multi-defect detection method, system and equipment integrating CAT-BiFPN and attention mechanism
CN114399734A (en) Forest fire early warning method based on visual information
CN111723767B (en) Image processing method, device and computer storage medium
CN115083229B (en) Intelligent recognition and warning system of flight training equipment based on AI visual recognition
CN115100546A (en) Mobile-based small target defect identification method and system for power equipment
CN112990350B (en) Target detection network training method and target detection network-based coal and gangue identification method
CN113516082A (en) Detection method and device of safety helmet, computer equipment and storage medium
CN113516120A (en) Raise dust detection method, image processing method, device, equipment and system
CN113033289A (en) Safety helmet wearing inspection method, device and system based on DSSD algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant