CN107749067A - Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks - Google Patents

Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks Download PDF

Info

Publication number
CN107749067A
CN107749067A (application CN201710824117.5A)
Authority
CN
China
Prior art keywords
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710824117.5A
Other languages
Chinese (zh)
Inventor
骆炎民
柳培忠
赵亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quanzhou City Hongye Mdt Infotech Ltd In Imitation
Huaqiao University
Original Assignee
Quanzhou City Hongye Mdt Infotech Ltd In Imitation
Huaqiao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quanzhou City Hongye Mdt Infotech Ltd In Imitation, Huaqiao University filed Critical Quanzhou City Hongye Mdt Infotech Ltd In Imitation
Priority to CN201710824117.5A priority Critical patent/CN107749067A/en
Publication of CN107749067A publication Critical patent/CN107749067A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/155Segmentation; Edge detection involving morphological operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Neurology (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a fire smoke detection method based on motion characteristics and convolutional neural networks. A video file is read, the first frame is saved as the original frame image, and smoke detection is performed on every frame of the video: first, the original frame is added to the background update as a reference to build a background model; a foreground image is then extracted by frame differencing; the foreground image is filtered with a dark-channel threshold image to obtain candidate smoke regions; finally, a trained deep convolutional neural network model is loaded to automatically extract high-level features of the candidate smoke regions, and the extracted feature vector is used to judge whether each candidate region belongs to a smoke region. By incorporating dark channel prior knowledge into motion foreground detection, the invention effectively filters out common interference and improves the environmental adaptability of the detection method; using a convolutional neural network for feature extraction from smoke images greatly improves detection accuracy.

Description

Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks
Technical field
The invention belongs to the technical field of fire monitoring, and in particular relates to a fire smoke detection method based on motion characteristics and convolutional neural networks.
Background art
Tens of thousands of fires occur worldwide every day, causing hundreds of casualties and damaging large areas of forest cover. Fire seriously threatens human life, property and the natural ecological environment. Fires are often sudden, wide-ranging and difficult to handle, so real-time monitoring and timely early warning of fires are particularly important. Discovering a fire as early as possible is the key to reducing losses, because once a fire spreads it is difficult to control. In the early stage of a fire the flame is usually small, but the smoke is already obvious; detecting fire smoke is therefore an important basis for judging in time whether a fire has occurred.
Traditional fire smoke detection systems rely on sensors and can only work when smoke reaches the sensor, so their use in open spaces is restricted; at the same time, such sensors are easily disturbed by dust, air flow and human factors, and their false alarm rate is generally high. With the rapid development of computer vision and pattern recognition technology, video-based fire smoke detection methods can judge fires using rich video image information and have obvious advantages in large-scale monitoring.
Video-based fire detection methods can generally be divided into flame detection and smoke detection according to the object detected. When a fire occurs, smoke usually appears earlier than flame, so detecting fire smoke is an important basis for judging in time whether a fire has occurred. Toreyin et al. (Pattern Recognition Letters, 2006, 27:49-58) perform smoke detection based on wavelet analysis; the principle is that when smoke appears the background edges become blurred, the energy of the high-frequency components decreases, the chrominance components of the scene are attenuated and the brightness drops. Because the background contour of the scene must be analyzed as a whole, the applicability of the algorithm is limited. Piccinini et al. (15th IEEE International Conference on Image Processing, ICIP 2008, California, October 12-15, 2008) model the ratio of foreground energy to background energy in order to segment smoke regions; this works well for smoke at close range, but its computational complexity is high and real-time performance is generally hard to guarantee. Fujiwara et al. (International Symposium on Communication and Information Technologies, SUPDET 2007, Orlando, Florida, March 5-8, 2007) extract smoke regions from images based on self-similar fractal theory; for low-contrast, blurred smoke images the extracted fractal features are not stable enough. China Patent Publication No. CN101441771A describes a video fire smoke detection method based on color saturation and motion pattern, which combines color saturation, average accumulation and active-motion ratio to reduce the system false alarm rate; however, for distant scenes it is difficult to obtain lasting statistics of these characteristics, which limits its range of use.
Summary of the invention
The object of the present invention is to address the deficiencies of the prior art by providing a fire smoke detection method based on motion characteristics and convolutional neural networks. By incorporating dark channel prior knowledge into motion foreground detection, common interference is effectively filtered out and the environmental adaptability of the detection method is improved; at the same time, a convolutional neural network is used for feature extraction from smoke images, which greatly improves detection accuracy. The method is suitable for large-scale forest and mountain scenes.
To achieve the above object, the technical solution adopted by the present invention is:
A fire smoke detection method based on motion characteristics and convolutional neural networks: a video file is read, the first frame is saved as the original frame image, and smoke detection is performed on each frame of the video. First, the original frame is added to the background update as a reference to build a background model; a foreground image is then extracted by frame differencing; the foreground image is filtered with a dark-channel threshold image to obtain candidate smoke regions; finally, a trained deep convolutional neural network model is loaded to automatically extract high-level features of the candidate smoke regions, and the extracted feature vector is used to judge whether each candidate region belongs to a smoke region.
The fire smoke detection method specifically includes the following steps:
Step 1: read the video sequence and save the first frame of the video as the original frame image, defined as B_1(x, y);
Step 2: extract foreground pixels
First, establish the background model. The background update considers not only the next frame and the current frame, but also adds the original frame as a reference for the update. The background estimate is expressed as:

$$B_{n+1}(x,y)=\begin{cases}\alpha B_n(x,y)+\beta F_{n+1}(x,y)+(1-\alpha-\beta)B_1(x,y), & \text{if } |F_{n+1}(x,y)-F_n(x,y)|>0\\ B_n(x,y), & \text{otherwise}\end{cases}$$

where n is the current frame number, n+1 is the next frame number, B_n(x, y) is the gray value of the current background image at (x, y), B_{n+1}(x, y) is the gray value of the estimated background image at (x, y), F_{n+1}(x, y) is the gray value of the next frame at (x, y), B_1(x, y) is the gray value of the original frame at (x, y), and α, β are weight coefficients satisfying α + β < 1;
Next, extract the foreground image G. Differencing the currently read video image against the modeled background yields the motion foreground pixels:

$$G_{n+1}(x,y)=\begin{cases}255, & \text{if } |F_{n+1}(x,y)-B_{n+1}(x,y)|>T\\ 0, & \text{otherwise}\end{cases}$$

where T is the set threshold and G_{n+1}(x, y) is the gray value of the foreground image G at (x, y);
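A minimal NumPy sketch of this background update and differencing step follows; the weight coefficients alpha and beta and the threshold T are illustrative values that the text does not fix.

```python
import numpy as np

def update_background(B_prev, B_orig, F_prev, F_next, alpha=0.4, beta=0.4):
    """B_{n+1}: blend current background, next frame and original frame where the
    scene changed (|F_{n+1} - F_n| > 0); keep the current background elsewhere."""
    assert alpha + beta < 1
    changed = np.abs(F_next.astype(np.int16) - F_prev.astype(np.int16)) > 0
    blended = alpha * B_prev + beta * F_next + (1 - alpha - beta) * B_orig
    return np.where(changed, blended, B_prev).astype(np.uint8)

def extract_foreground(F_next, B_next, T=25):
    """G_{n+1}: 255 where |F_{n+1} - B_{n+1}| > T, else 0."""
    diff = np.abs(F_next.astype(np.int16) - B_next.astype(np.int16))
    return np.where(diff > T, 255, 0).astype(np.uint8)
```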
Step 3: filter the foreground image G with the dark-channel threshold image to obtain candidate smoke regions;
Generate the dark channel image corresponding to the currently read image, defined as:

$$J^{dark}(x)=\min_{c\in\{r,g,b\}}\left(\min_{y\in\Omega(x)} J^{c}(y)\right)$$

where J^c is the gray value of one of the color channels and Ω(x) is a window centered at x;
After the dark channel image of the current frame is obtained, an appropriate threshold is chosen according to the dark-channel characteristic of smoke, and thresholding yields the dark-channel threshold image M_dark;
Comparing the dark-channel threshold image M_dark with the foreground image G gives the filtered foreground image S, expressed as S = G ∩ M_dark;
Morphological transformations are applied to the foreground image S, and the candidate smoke regions in S are then obtained through minimum enclosing rectangles;
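A sketch of step 3 with OpenCV (assuming OpenCV 4 and NumPy): the window size and dark-channel threshold are illustrative values, and cv2.boundingRect is used as an axis-aligned stand-in for the minimum enclosing rectangle.

```python
import cv2
import numpy as np

def dark_channel(bgr, window=15):
    """J_dark: per-pixel minimum over the three color channels, followed by a
    minimum filter (erosion) over a window centered at each pixel."""
    min_rgb = np.min(bgr, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (window, window))
    return cv2.erode(min_rgb, kernel)

def candidate_smoke_regions(frame_bgr, foreground_G, dark_thresh=170):
    """S = G ∩ M_dark, cleaned up with morphology; returns bounding rectangles
    of the candidate smoke regions."""
    M_dark = np.where(dark_channel(frame_bgr) > dark_thresh, 255, 0).astype(np.uint8)
    S = cv2.bitwise_and(foreground_G, M_dark)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    S = cv2.morphologyEx(S, cv2.MORPH_CLOSE, kernel)   # fill small holes
    S = cv2.dilate(S, kernel)                          # compensate eroded smoke edges
    contours, _ = cv2.findContours(S, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```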
Step 4: train the deep convolutional neural network model offline
Excluding the input layer, the network has 8 layers: 5 convolutional layers and 3 fully connected layers. A pooling operation is performed by a pooling layer after the first, third and fifth convolutional layers, and the last fully connected layer performs classification with a Softmax function. Specifically:
(1) Input layer: the input image size is fixed at 227*227 pixels;
(2) Convolutional layers: feature extraction is realized by convolving convolution kernels with the input image or feature maps. The feature map size N after convolution is:

$$N_x^{l}=\frac{N_x^{l-1}-K_x^{l}+2P_x^{l}}{S_x^{l}},\qquad N_y^{l}=\frac{N_y^{l-1}-K_y^{l}+2P_y^{l}}{S_y^{l}}$$

where l is the current layer index, K is the convolution kernel size, P is the number of padding pixels and S is the stride;
A nonlinear transformation is then applied with the Relu activation function, so the computation of a node of the convolutional layer can be expressed as:

$$x_j^{l}=\mathrm{Relu}\left(\sum_{i\in M_j} x^{l-1} w_{ij}^{l}+b_j^{l}\right)$$

where M is the convolution kernel size, w is the connection weight and b is the bias term;
(3) Pooling layer: max pooling is chosen for the pooling operation, expressed as y = max(x_i), x_i ∈ x, where x is a region of the feature map and x_i is the output value of a neuron in that region;
(4) Fully connected layer: each neuron is connected to all neurons of the previous layer; the output has 4096 neurons, and a 4096-dimensional feature vector is obtained through the Relu activation function;
(5) Classification layer: the last fully connected layer is set to 2 neurons, each connected to every neuron of the second fully connected layer, i.e. two-class classification is performed on the 4096-dimensional vector;
The deep convolutional neural network model is trained with stochastic gradient descent (SGD), and the corresponding weight update is:

$$v_{t+1}=0.9\,v_t-0.01\,\varepsilon\,\nabla L(W_t),\qquad W_{t+1}=W_t+v_{t+1}$$

where W is the weight, t is the iteration number, v is the weight update value, ε is the learning-rate update factor and ∇L(W_t) is the gradient of the objective function with respect to the weight W;
During training of the deep convolutional neural network model, data enhancement is used, and dropout is applied after the first and second fully connected layers, to prevent over-fitting of the network. Dropout means that during training the weights of some network nodes are randomly set to 0. Data enhancement means that when an image enters the input layer it is first scaled to 256*256 pixels and then randomly cropped to 227*227 pixels;
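A PyTorch sketch of a network with this layout (5 convolutional layers, pooling after the first, third and fifth, then two 4096-neuron fully connected layers and a 2-way classification layer) is given below; the kernel sizes, strides and channel counts are AlexNet-style assumptions, since the text does not specify them.

```python
import torch
import torch.nn as nn

class SmokeNet(nn.Module):
    """8 layers after the input layer: conv1-conv5 with max pooling after
    conv1, conv3 and conv5, followed by FC6, FC7 (4096-d) and a 2-way FC8."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),    # conv1
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),  # conv2
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True), # conv3
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True), # conv4
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True), # conv5
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),  # FC6
            nn.Dropout(p=0.5),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),         # FC7
            nn.Linear(4096, 2),                                   # FC8: smoke / non-smoke
        )

    def forward(self, x):              # x: (N, 3, 227, 227)
        x = self.features(x)           # -> (N, 256, 6, 6)
        x = torch.flatten(x, 1)
        return self.classifier(x)      # logits for the two classes
```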
Step 5: the candidate smoke regions obtained in step 3 are uniformly scaled, and the trained deep convolutional neural network model is then loaded to automatically extract the feature vector F of each candidate region. A small candidate region is implicitly expanded around its center before scaling, while a candidate region that is already large enough is scaled directly.
Step 6: Softmax regression computes the probability that the feature vector F of a candidate smoke region belongs to the smoke class and the probability that it belongs to the non-smoke class, and the class with the larger probability is selected as the class of the candidate region:

$$C=\arg\max_i (p_i),\quad i=0,1$$

where p_0 is the probability that the candidate region is non-smoke and p_1 is the probability that it is smoke;
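A minimal sketch of this decision step, computing the two-class softmax over the FC8 outputs (the logits array is a hypothetical input holding the two raw network outputs):

```python
import numpy as np

def classify_candidate(logits):
    """Softmax over the two FC8 outputs; returns (p0, p1) and the chosen class,
    where class 1 means the candidate region is judged to contain smoke."""
    z = logits - np.max(logits)            # subtract max for numerical stability
    p = np.exp(z) / np.sum(np.exp(z))      # p[0]: non-smoke, p[1]: smoke
    return p, int(np.argmax(p))
```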
Step 7: if a candidate region is judged to contain smoke, the region is marked, an alarm is started, and monitoring continues with the next frame of the video to achieve continuous early warning; if the candidate region is judged to be non-smoke, the next frame of the video is read.
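The per-frame flow of steps 1-7 could be driven by a loop like the following sketch; it reuses the helper functions sketched above, and classify_region is a hypothetical wrapper around the loaded network that returns 1 for smoke.

```python
import cv2

def detect_smoke(video_path, classify_region):
    """Read a video, keep the first frame as the original frame, and run the
    motion + dark-channel + CNN pipeline on every subsequent frame."""
    cap = cv2.VideoCapture(video_path)
    ok, first = cap.read()
    if not ok:
        return
    B_orig = cv2.cvtColor(first, cv2.COLOR_BGR2GRAY)
    B, F_prev = B_orig.copy(), B_orig.copy()
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        F_next = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        B = update_background(B, B_orig, F_prev, F_next)
        G = extract_foreground(F_next, B)
        for (x, y, w, h) in candidate_smoke_regions(frame, G):
            patch = cv2.resize(frame[y:y + h, x:x + w], (227, 227))
            if classify_region(patch) == 1:           # class 1: smoke
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
                print("smoke alarm")                  # placeholder for the alarm action
        F_prev = F_next
    cap.release()
```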
The video fire smoke detection method based on motion characteristics and convolutional neural networks provided by the invention is suitable for large-scale forest and mountain scenes. By incorporating dark channel prior knowledge into motion foreground detection, common interference is effectively filtered out and the environmental adaptability of the detection method is improved; at the same time, a convolutional neural network is used for feature extraction from smoke images, which greatly improves detection accuracy.
Brief description of the drawings
Fig. 1 is a frame of the video;
Fig. 2 is the motion foreground image corresponding to Fig. 1;
Fig. 3 is the foreground image after dark channel prior filtering;
Fig. 4 is the implementation framework of the fire smoke detection method.
Embodiment
The present invention is described in detail below with reference to the drawings and specific embodiments.
Referring to Fig. 1 to Fig. 4, the present invention discloses a fire smoke detection method based on motion characteristics and convolutional neural networks, which specifically includes the following steps:
Step 1: read the video sequence and save the first frame of the video as the original frame image, defined as B_1(x, y);
Step 2: extract foreground pixels
First, establish the background model. The present invention establishes the background model by background estimation, which is a dynamically updated model. Smoke spreads by diffusion, so the gray-value change of the smoke region between consecutive frames is very small, which easily produces hollow regions with conventional methods. For smoke detection, therefore, the background update considers not only the next frame and the current frame, but also adds the original frame as a reference for the update, so that a complete smoke region can be obtained. The background estimate is expressed as:

$$B_{n+1}(x,y)=\begin{cases}\alpha B_n(x,y)+\beta F_{n+1}(x,y)+(1-\alpha-\beta)B_1(x,y), & \text{if } |F_{n+1}(x,y)-F_n(x,y)|>0\\ B_n(x,y), & \text{otherwise}\end{cases}$$

where n is the current frame number, n+1 is the next frame number, B_n(x, y) is the gray value of the current background image at (x, y), B_{n+1}(x, y) is the gray value of the estimated background image at (x, y), F_{n+1}(x, y) is the gray value of the next frame at (x, y), B_1(x, y) is the gray value of the original frame at (x, y), and α, β are weight coefficients satisfying α + β < 1.
Because smoke diffuses slowly, the gray-value difference between frames of a smoke region produced early on decreases over time, and the extracted foreground easily develops holes. Adding the original frame to the background model provides a reference over a longer time span, so that a complete smoke region can be obtained.
Next, extract the foreground image G. Differencing the currently read video image against the modeled background yields the motion foreground pixels:

$$G_{n+1}(x,y)=\begin{cases}255, & \text{if } |F_{n+1}(x,y)-B_{n+1}(x,y)|>T\\ 0, & \text{otherwise}\end{cases}$$

where T is the set threshold and G_{n+1}(x, y) is the gray value of the foreground image G at (x, y).
Step 3: filter the foreground image with the dark-channel threshold image to obtain candidate smoke regions;
Dark channel prior knowledge is used to filter the foreground image, reducing the false detections that interfering objects might cause, so as to better determine candidate smoke regions.
Generate the dark channel image corresponding to the currently read image, defined as:

$$J^{dark}(x)=\min_{c\in\{r,g,b\}}\left(\min_{y\in\Omega(x)} J^{c}(y)\right)$$

where J^c is the gray value of one of the color channels and Ω(x) is a window centered at x.
After the dark channel image of the current frame is obtained, an appropriate threshold is chosen according to the dark-channel characteristic of smoke, and thresholding yields the dark-channel threshold image M_dark; comparing it with the foreground image G then eliminates the foreground pixels caused by some interfering objects, expressed as:
S = G ∩ M_dark (4)
When the gray value at a position in the motion foreground image and the gray value at the corresponding position in the dark-channel threshold image are both 255, the pixel is treated as a candidate-region pixel; otherwise it is regarded as a pixel caused by an interfering object and is filtered out. This yields the filtered foreground image S.
Because the dark channel image uses a structuring element of a certain size for minimum filtering, this process erodes some pixels at the edges of real smoke, making the smoke region smaller than the actual area. To recover the complete smoke region, a series of morphological transformations is applied, and the suspected candidate smoke regions are finally determined by minimum enclosing rectangles.
Step 4: train the deep convolutional neural network model offline
Without strictly distinguishing convolutional layers from pooling layers, the network structure used has 8 layers excluding the input layer, comprising 5 convolutional layers and 3 fully connected layers; the last fully connected layer performs classification with a Softmax function. The network is described in detail as follows:
(1) Input layer. To allow multi-layer operations on the input image, the network structure requires the input image size to be fixed at 227*227 pixels, so images must be scaled. Since an RGB color image usually contains 3 color channels, the input size is 227*227*3.
(2) Convolutional layers. These layers realize feature extraction by convolving convolution kernels with the input image or feature maps; the kernel size determines the size of the output feature map. The feature map size N after convolution is:

$$N_x^{l}=\frac{N_x^{l-1}-K_x^{l}+2P_x^{l}}{S_x^{l}},\qquad N_y^{l}=\frac{N_y^{l-1}-K_y^{l}+2P_y^{l}}{S_y^{l}}$$

where l is the current layer index, K is the convolution kernel size, P is the number of padding pixels and S is the stride.
After the convolution operation a nonlinear transformation is applied through an activation function; the Relu activation function is used here, with expression Relu(x) = max(0, x). When x is greater than 0 the derivative is constantly equal to 1, so error back-propagation works well and the training time is greatly reduced.
The computation of a node of the convolutional layer can therefore be expressed as:

$$x_j^{l}=\mathrm{Relu}\left(\sum_{i\in M_j} x^{l-1} w_{ij}^{l}+b_j^{l}\right)$$

where M is the convolution kernel size, w is the connection weight and b is the bias term.
(3) Pooling layer. The purpose of pooling is to reduce the number of neurons while keeping the features invariant to scale changes. The network performs pooling after the 1st, 3rd and 5th convolutional layers. Pooling divides the input feature map into several rectangular regions and operates on each region; pooling can be max pooling or average pooling, and max pooling is chosen here, expressed as y = max(x_i), x_i ∈ x, where x is a region of the feature map and x_i is the output value of a neuron in that region.
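A naive NumPy illustration of max pooling over windows of the feature map (the window size and stride below are illustrative; the network itself would typically use overlapping 3x3 windows with stride 2):

```python
import numpy as np

def max_pool2d(x, k=2, s=2):
    """Split the feature map into k x k windows taken with stride s and keep the
    maximum of each window, i.e. y = max(x_i) over the region."""
    H, W = x.shape
    out_h, out_w = (H - k) // s + 1, (W - k) // s + 1
    out = np.zeros((out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.max(x[i * s:i * s + k, j * s:j * s + k])
    return out
```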
(4) Fully connected layer. Each neuron is connected to all neurons of the previous layer; the output has 4096 neurons, and a 4096-dimensional feature vector is obtained through the Relu activation function.
(5) Classification layer. Smoke detection is really a two-class problem, so the last fully connected layer FC8 is set to 2 neurons, each connected to every neuron of FC7, which is equivalent to performing two-class classification on the 4096-dimensional vector.
Training a deep convolutional neural network requires solving a very large number of parameters, so training needs enough samples; a shortage of training data may lead to insufficient learning and over-fitting. Because smoke images are hard to obtain and existing smoke datasets are small, randomly initializing the network parameters could make training converge slowly or learn insufficiently.
The present invention uses a deep learning framework based on Caffe. For the problem of a small dataset, the deep convolutional neural network model is trained by fine-tuning: the parameters obtained by training the network on the large-scale ImageNet dataset are used to initialize our model. Concretely, the output number of the last fully connected layer in the network structure is changed to 2 to suit the two-class problem of smoke detection. Thus, apart from the last layer, whose parameters need random initialization, the parameters of the other layers are initialized from the corresponding pre-trained model parameters, so the network has a certain feature extraction capability before training and the convergence of training is accelerated.
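The patent itself fine-tunes a Caffe model; an analogous setup in PyTorch/torchvision (assuming a recent torchvision with the weights API) would load ImageNet-pretrained AlexNet parameters and randomly initialize only the replaced 2-way output layer:

```python
import torch.nn as nn
from torchvision import models

# Initialize all layers from the ImageNet pre-trained parameters, then replace the
# last fully connected layer with a 2-way layer; only this layer starts from
# random initialization.
net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
net.classifier[6] = nn.Linear(4096, 2)   # FC8: smoke / non-smoke
```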
The deep convolutional neural network is trained with stochastic gradient descent (SGD); the corresponding weight update is:

$$v_{t+1}=0.9\,v_t-0.01\,\varepsilon\,\nabla L(W_t),\qquad W_{t+1}=W_t+v_{t+1}$$

where W is the weight, t is the iteration number, v is the weight update value, ε is the learning-rate update factor and ∇L(W_t) is the gradient of the objective function with respect to the weight W.
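This update rule reduces to a few lines; a minimal sketch, where grad stands for the gradient of the objective with respect to the current weights:

```python
def sgd_momentum_step(W, v, grad, eps=1.0):
    """One update: v_{t+1} = 0.9*v_t - 0.01*eps*grad, W_{t+1} = W_t + v_{t+1}."""
    v_next = 0.9 * v - 0.01 * eps * grad
    return W + v_next, v_next
```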
To prevent over-fitting of the network, two strategies are used. The first applies dropout after the fully connected layers FC6 and FC7: during training of the deep convolutional neural network, the weights of some network nodes are randomly disabled with 50% probability, i.e. set to 0, which reduces the dependence between the fully connected layers. The second strategy is data enhancement: the network's input image size is 227*227, but instead of scaling an image directly to that size, it is first scaled to 256*256 pixels and then randomly cropped, which ensures that different crops of the same image are used while training the deep convolutional neural network model with SGD.
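Expressed with torchvision transforms and a dropout module (the crop pipeline below mirrors the described 256-to-227 scheme; the dropout probability of 0.5 comes from the text):

```python
import torch.nn as nn
from torchvision import transforms

# Data enhancement: scale to 256x256, then take a random 227x227 crop, so repeated
# passes over the same training image see different crops.
train_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomCrop(227),
    transforms.ToTensor(),
])

# Dropout after FC6 and FC7: during training each output is zeroed with probability 0.5.
fc_dropout = nn.Dropout(p=0.5)
```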
Step 5: the candidate smoke regions obtained in step 3 are uniformly scaled, and the trained deep convolutional neural network model is then loaded to automatically extract the feature vector F of each candidate region;
At the initial stage of smoke generation, or for other particularly small moving objects, the extracted candidate region is usually very small. Directly scaling such a candidate region easily causes false detections because too many interpolated pixels are inserted. The present invention therefore implicitly expands the candidate region around its center and then scales it; when the candidate region is already large enough, the implicit expansion is unnecessary and the region can be scaled directly.
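A sketch of this cropping policy with OpenCV; the minimum size below is an illustrative threshold, not a value given in the text.

```python
import cv2

def crop_candidate(frame_bgr, rect, min_size=96, out_size=227):
    """Crop a candidate region (x, y, w, h); if it is smaller than min_size,
    implicitly expand the crop window around its center before resizing, so that
    fewer interpolated pixels are inserted."""
    x, y, w, h = rect
    H, W = frame_bgr.shape[:2]
    if max(w, h) < min_size:
        cx, cy = x + w // 2, y + h // 2
        half = min_size // 2
        x0, y0 = max(cx - half, 0), max(cy - half, 0)
        x1, y1 = min(cx + half, W), min(cy + half, H)
    else:
        x0, y0, x1, y1 = x, y, x + w, y + h
    patch = frame_bgr[y0:y1, x0:x1]
    return cv2.resize(patch, (out_size, out_size))
```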
Step 6: Softmax regression computes the probabilities p_0 and p_1 that the feature vector F belongs to each class, where p_0 is the probability that the candidate region is non-smoke and p_1 is the probability that it is smoke; the class with the larger probability is selected as the class of the candidate smoke region:

$$C=\arg\max_i (p_i),\quad i=0,1$$
Step 7: if a candidate region is judged to contain smoke, the region is marked, an alarm is started, and monitoring continues with the next frame of the video to achieve continuous early warning; if the candidate region is judged to be non-smoke, the next frame of the video is read.
In the video fire smoke detection method based on motion characteristics and convolutional neural networks provided by the invention, a video file is read, the first frame is saved as the original frame image, and smoke detection is performed on each frame of the video: first, the original frame is added to the background update as a reference to build a background model; a foreground image is then extracted by frame differencing and filtered with the dark-channel threshold image to obtain candidate smoke regions; finally, the trained deep convolutional neural network model is loaded to automatically extract high-level features of the candidate smoke regions, and the extracted feature vector is used to judge whether each candidate region belongs to a smoke region. The fire smoke detection method is suitable for large-scale forest and mountain scenes. By incorporating dark channel prior knowledge into motion foreground detection, common interference is effectively filtered out and the environmental adaptability of the detection method is improved; using a convolutional neural network for feature extraction from smoke images greatly improves detection accuracy.
The above is only an embodiment of the present invention and is not intended to limit the scope of the present invention. Any subtle modifications, equivalent variations and improvements made to the above embodiment according to the technical spirit of the present invention still fall within the scope of the technical solution of the present invention.

Claims (2)

  1. A fire smoke detection method based on motion characteristics and convolutional neural networks, characterized in that: a video file is read, the first frame is saved as the original frame image, and smoke detection is performed on each frame of the video: first, the original frame is added to the background update as a reference to build a background model; a foreground image is then extracted by frame differencing and filtered with a dark-channel threshold image to obtain candidate smoke regions; finally, a trained deep convolutional neural network model is loaded to automatically extract high-level features of the candidate smoke regions, and the extracted feature vector is used to judge whether each candidate region belongs to a smoke region.
  2. The fire smoke detection method based on motion characteristics and convolutional neural networks according to claim 1, characterized in that the fire smoke detection method specifically includes the following steps:
    Step 1: read the video sequence and save the first frame of the video as the original frame image, defined as B_1(x, y);
    Step 2: extract foreground pixels
    First, establish the background model. The background update considers not only the next frame and the current frame, but also adds the original frame as a reference for the update. The background estimate is expressed as:

    $$B_{n+1}(x,y)=\begin{cases}\alpha B_n(x,y)+\beta F_{n+1}(x,y)+(1-\alpha-\beta)B_1(x,y), & \text{if } |F_{n+1}(x,y)-F_n(x,y)|>0\\ B_n(x,y), & \text{else}\end{cases}$$

    where n is the current frame number, n+1 is the next frame number, B_n(x, y) is the gray value of the current background image at (x, y), B_{n+1}(x, y) is the gray value of the estimated background image at (x, y), F_{n+1}(x, y) is the gray value of the next frame at (x, y), B_1(x, y) is the gray value of the original frame at (x, y), and α, β are weight coefficients satisfying α + β < 1;
    Next, extract the foreground image G. Differencing the currently read video image against the modeled background yields the motion foreground pixels:

    $$G_{n+1}(x,y)=\begin{cases}255, & \text{if } |F_{n+1}(x,y)-B_{n+1}(x,y)|>T\\ 0, & \text{else}\end{cases}$$

    where T is the set threshold and G_{n+1}(x, y) is the gray value of the foreground image G at (x, y);
    Step 3: filter the foreground image G with the dark-channel threshold image to obtain candidate smoke regions;
    Generate the dark channel image corresponding to the currently read image, defined as:

    $$J^{dark}(x)=\min_{c\in\{r,g,b\}}\left(\min_{y\in\Omega(x)} J^{c}(y)\right)$$

    where J^c is the gray value of one of the color channels and Ω(x) is a window centered at x;
    After the dark channel image of the current frame is obtained, an appropriate threshold is chosen according to the dark-channel characteristic of smoke, and thresholding yields the dark-channel threshold image M_dark;
    Comparing the dark-channel threshold image M_dark with the foreground image G gives the filtered foreground image S, expressed as S = G ∩ M_dark;
    Morphological transformations are applied to the foreground image S, and the candidate smoke regions in S are then obtained through minimum enclosing rectangles;
    Step 4: train the deep convolutional neural network model offline
    Excluding the input layer, the network has 8 layers: 5 convolutional layers and 3 fully connected layers. A pooling operation is performed by a pooling layer after the first, third and fifth convolutional layers, and the last fully connected layer performs classification with a Softmax function. Specifically:
    (1) Input layer: the input image size is fixed at 227*227 pixels;
    (2) Convolutional layers: feature extraction is realized by convolving convolution kernels with the input image or feature maps. The feature map size N after convolution is:

    $$N_x^{l}=\frac{N_x^{l-1}-K_x^{l}+2P_x^{l}}{S_x^{l}},\qquad N_y^{l}=\frac{N_y^{l-1}-K_y^{l}+2P_y^{l}}{S_y^{l}}$$

    where l is the current layer index, K is the convolution kernel size, P is the number of padding pixels and S is the stride;
    A nonlinear transformation is then applied with the Relu activation function, so the computation of a node of the convolutional layer can be expressed as:

    $$x_j^{l}=\mathrm{Relu}\left(\sum_{i\in M_j} x^{l-1} w_{ij}^{l}+b_j^{l}\right)$$

    where M is the convolution kernel size, w is the connection weight and b is the bias term;
    (3) Pooling layer: max pooling is chosen for the pooling operation, expressed as y = max(x_i), x_i ∈ x, where x is a region of the feature map and x_i is the output value of a neuron in that region;
    (4) Fully connected layer: each neuron is connected to all neurons of the previous layer; the output has 4096 neurons, and a 4096-dimensional feature vector is obtained through the Relu activation function;
    (5) Classification layer: the last fully connected layer is set to 2 neurons, each connected to every neuron of the second fully connected layer, i.e. two-class classification is performed on the 4096-dimensional vector;
    The deep convolutional neural network model is trained with stochastic gradient descent (SGD), and the corresponding weight update is:

    $$v_{t+1}=0.9\,v_t-0.01\,\varepsilon\,\nabla L(W_t),\qquad W_{t+1}=W_t+v_{t+1}$$

    where W is the weight, t is the iteration number, v is the weight update value, ε is the learning-rate update factor and ∇L(W_t) is the gradient of the objective function with respect to the weight W;
    During training of the deep convolutional neural network model, data enhancement is used, and dropout is applied after the first and second fully connected layers, to prevent over-fitting of the network; dropout means that during training the weights of some network nodes are randomly set to 0, and data enhancement means that when an image enters the input layer it is first scaled to 256*256 pixels and then randomly cropped to 227*227 pixels;
    Step 5: the candidate smoke regions obtained in step 3 are uniformly scaled, and the trained deep convolutional neural network model is then loaded to automatically extract the feature vector F of each candidate region; a small candidate region is implicitly expanded around its center before scaling, while a candidate region that is already large enough is scaled directly;
    Step 6: Softmax regression computes the probability that the feature vector F of a candidate smoke region belongs to the smoke class and the probability that it belongs to the non-smoke class, and the class with the larger probability is selected as the class of the candidate region:

    $$C=\arg\max_i (p_i),\quad i=0,1$$

    where p_0 is the probability that the candidate region is non-smoke and p_1 is the probability that it is smoke;
    Step 7: if a candidate region is judged to contain smoke, the region is marked, an alarm is started, and monitoring continues with the next frame of the video to achieve continuous early warning; if the candidate region is judged to be non-smoke, the next frame of the video is read.
CN201710824117.5A 2017-09-13 2017-09-13 Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks Pending CN107749067A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710824117.5A CN107749067A (en) 2017-09-13 2017-09-13 Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710824117.5A CN107749067A (en) 2017-09-13 2017-09-13 Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks

Publications (1)

Publication Number Publication Date
CN107749067A true CN107749067A (en) 2018-03-02

Family

ID=61254542

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710824117.5A Pending CN107749067A (en) 2017-09-13 2017-09-13 Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks

Country Status (1)

Country Link
CN (1) CN107749067A (en)

Cited By (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108428324A (en) * 2018-04-28 2018-08-21 温州大学激光与光电智能制造研究院 The detection device of smog in a kind of fire scenario based on convolutional network
CN108597172A (en) * 2018-04-16 2018-09-28 河南理工大学 A kind of forest fire recognition methods, device, electronic equipment and storage medium
CN108648409A (en) * 2018-04-28 2018-10-12 北京环境特性研究所 A kind of smog detection method and device
CN108647601A (en) * 2018-04-28 2018-10-12 温州大学激光与光电智能制造研究院 The detection method of smog in a kind of fire scenario based on convolutional network
CN108664906A (en) * 2018-04-27 2018-10-16 温州大学激光与光电智能制造研究院 The detection method of content in a kind of fire scenario based on convolutional network
CN108710942A (en) * 2018-04-27 2018-10-26 温州大学激光与光电智能制造研究院 The detection device of content in a kind of fire scenario based on convolutional network
CN108764142A (en) * 2018-05-25 2018-11-06 北京工业大学 Unmanned plane image forest Smoke Detection based on 3DCNN and sorting technique
CN108830157A (en) * 2018-05-15 2018-11-16 华北电力大学(保定) Human bodys' response method based on attention mechanism and 3D convolutional neural networks
CN108921039A (en) * 2018-06-07 2018-11-30 南京启德电子科技有限公司 The forest fire detection method of depth convolution model based on more size convolution kernels
CN109035666A (en) * 2018-08-29 2018-12-18 深圳市中电数通智慧安全科技股份有限公司 A kind of fire-smoke detection method, apparatus and terminal device
CN109087337A (en) * 2018-11-07 2018-12-25 山东大学 Long-time method for tracking target and system based on layering convolution feature
CN109086647A (en) * 2018-05-24 2018-12-25 北京飞搜科技有限公司 Smog detection method and equipment
CN109165575A (en) * 2018-08-06 2019-01-08 天津艾思科尔科技有限公司 A kind of pyrotechnics recognizer based on SSD frame
CN109271906A (en) * 2018-09-03 2019-01-25 五邑大学 A kind of smog detection method and its device based on depth convolutional neural networks
CN109409256A (en) * 2018-10-10 2019-03-01 东南大学 A kind of forest rocket detection method based on 3D convolutional neural networks
CN109409224A (en) * 2018-09-21 2019-03-01 河海大学 A kind of method of natural scene fire defector
CN109522819A (en) * 2018-10-29 2019-03-26 西安交通大学 A kind of fire image recognition methods based on deep learning
CN109598700A (en) * 2018-10-16 2019-04-09 天津大学 Using the incipient fire detection method of convolutional neural networks
CN109598891A (en) * 2018-12-24 2019-04-09 中南民族大学 A kind of method and system for realizing Smoke Detection using deep learning disaggregated model
CN109740673A (en) * 2019-01-02 2019-05-10 天津工业大学 A kind of neural network smog image classification method merging dark
CN109815863A (en) * 2019-01-11 2019-05-28 北京邮电大学 Firework detecting method and system based on deep learning and image recognition
CN110009658A (en) * 2019-06-06 2019-07-12 南京邮电大学 A kind of smog detection method based on ingredient separation
CN110059723A (en) * 2019-03-19 2019-07-26 北京工业大学 A kind of robust smog detection method based on integrated depth convolutional neural networks
CN110070106A (en) * 2019-03-26 2019-07-30 罗克佳华科技集团股份有限公司 Smog detection method, device and electronic equipment
CN110096942A (en) * 2018-12-20 2019-08-06 北京以萨技术股份有限公司 A kind of Smoke Detection algorithm based on video analysis
CN110135269A (en) * 2019-04-18 2019-08-16 杭州电子科技大学 A kind of fire image detection method based on blend color model and neural network
CN110309765A (en) * 2019-06-27 2019-10-08 浙江工业大学 A kind of video frequency motion target efficient detection method
CN110322659A (en) * 2019-06-21 2019-10-11 江西洪都航空工业集团有限责任公司 A kind of smog detection method
CN110415260A (en) * 2019-08-01 2019-11-05 西安科技大学 Smog image segmentation and recognition methods based on dictionary and BP neural network
CN110490043A (en) * 2019-06-10 2019-11-22 东南大学 A kind of forest rocket detection method based on region division and feature extraction
CN110852174A (en) * 2019-10-16 2020-02-28 天津大学 Early smoke detection method based on video monitoring
CN110927171A (en) * 2019-12-09 2020-03-27 中国科学院沈阳自动化研究所 Bearing roller chamfer surface defect detection method based on machine vision
CN110956611A (en) * 2019-11-01 2020-04-03 武汉纺织大学 Smoke detection method integrated with convolutional neural network
CN111046827A (en) * 2019-12-20 2020-04-21 哈尔滨理工大学 Video smoke detection method based on convolutional neural network
CN111091586A (en) * 2019-12-17 2020-05-01 上海工程技术大学 Rapid smoke dynamic shielding area detection and positioning method and application thereof
CN111126293A (en) * 2019-12-25 2020-05-08 国网智能科技股份有限公司 Flame and smoke abnormal condition detection method and system
CN111145222A (en) * 2019-12-30 2020-05-12 浙江中创天成科技有限公司 Fire detection method combining smoke movement trend and textural features
CN111191575A (en) * 2019-12-27 2020-05-22 国网江苏省电力有限公司电力科学研究院 Naked flame detection method and system based on flame jumping modeling
CN111488772A (en) * 2019-01-29 2020-08-04 杭州海康威视数字技术股份有限公司 Method and apparatus for smoke detection
CN111928304A (en) * 2019-05-13 2020-11-13 青岛海尔智能技术研发有限公司 Oil smoke concentration identification method and device and range hood
CN111951250A (en) * 2020-08-14 2020-11-17 西安科技大学 Image-based fire detection method
CN112052744A (en) * 2020-08-12 2020-12-08 成都佳华物链云科技有限公司 Environment detection model training method, environment detection method and device
CN112215122A (en) * 2020-09-30 2021-01-12 中国科学院深圳先进技术研究院 Fire detection method, system, terminal and storage medium based on video image target detection
CN112861737A (en) * 2021-02-11 2021-05-28 西北工业大学 Forest fire smoke detection method based on image dark channel and YoLov3
CN113743605A (en) * 2021-06-16 2021-12-03 温州大学 Method for searching smoke and fire detection network architecture based on evolution method
CN113792793A (en) * 2021-09-15 2021-12-14 河北雄安京德高速公路有限公司 Road video monitoring and improving method in adverse meteorological environment
US11210916B2 (en) 2018-12-21 2021-12-28 Fujitsu Limited Smoke detection method and apparatus
CN114022672A (en) * 2022-01-10 2022-02-08 深圳金三立视频科技股份有限公司 Flame data generation method and terminal
CN114049585A (en) * 2021-10-12 2022-02-15 北京控制与电子技术研究所 Mobile phone action detection method based on motion foreground extraction
CN114187734A (en) * 2021-12-01 2022-03-15 深圳市中西视通科技有限公司 Image identification method and system for smoke alarm
CN115830516A (en) * 2023-02-13 2023-03-21 新乡职业技术学院 Computer neural network image processing method for battery detonation detection
CN116797987A (en) * 2022-12-22 2023-09-22 中建新疆建工集团第三建设工程有限公司 Method and system for controlling blasting dust of building
CN116977634A (en) * 2023-07-17 2023-10-31 应急管理部沈阳消防研究所 Fire smoke detection method based on laser radar point cloud background subtraction

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101625789A (en) * 2008-07-07 2010-01-13 北京东方泰坦科技有限公司 Method for monitoring forest fire in real time based on intelligent identification of smoke and fire
CN102163358A (en) * 2011-04-11 2011-08-24 杭州电子科技大学 Smoke/flame detection method based on video image analysis
CN104794486A (en) * 2015-04-10 2015-07-22 电子科技大学 Video smoke detecting method based on multi-feature fusion

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101625789A (en) * 2008-07-07 2010-01-13 北京东方泰坦科技有限公司 Method for monitoring forest fire in real time based on intelligent identification of smoke and fire
CN102163358A (en) * 2011-04-11 2011-08-24 杭州电子科技大学 Smoke/flame detection method based on video image analysis
CN104794486A (en) * 2015-04-10 2015-07-22 电子科技大学 Video smoke detecting method based on multi-feature fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANMIN LUO ET AL.: "Fire smoke detection algorithm based on motion characteristic and convolutional neural networks", 《MULTIMEDIA TOOLS AND APPLICATION》 *
ZHAO LIANG ET AL.: "Fire smoke detection algorithm based on dynamic background updating and dark channel prior" (基于背景动态更新与暗通道先验的火灾烟雾检测算法), 《计算机应用研究》 (APPLICATION RESEARCH OF COMPUTERS) *

Cited By (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108597172A (en) * 2018-04-16 2018-09-28 河南理工大学 A kind of forest fire recognition methods, device, electronic equipment and storage medium
CN108597172B (en) * 2018-04-16 2020-11-06 河南理工大学 Forest fire recognition method and device, electronic equipment and storage medium
CN108664906A (en) * 2018-04-27 2018-10-16 温州大学激光与光电智能制造研究院 The detection method of content in a kind of fire scenario based on convolutional network
CN108710942A (en) * 2018-04-27 2018-10-26 温州大学激光与光电智能制造研究院 The detection device of content in a kind of fire scenario based on convolutional network
CN108428324A (en) * 2018-04-28 2018-08-21 温州大学激光与光电智能制造研究院 The detection device of smog in a kind of fire scenario based on convolutional network
CN108648409A (en) * 2018-04-28 2018-10-12 北京环境特性研究所 A kind of smog detection method and device
CN108647601A (en) * 2018-04-28 2018-10-12 温州大学激光与光电智能制造研究院 The detection method of smog in a kind of fire scenario based on convolutional network
CN108648409B (en) * 2018-04-28 2020-07-24 北京环境特性研究所 Smoke detection method and device
CN108830157A (en) * 2018-05-15 2018-11-16 华北电力大学(保定) Human bodys' response method based on attention mechanism and 3D convolutional neural networks
CN109086647B (en) * 2018-05-24 2022-01-07 苏州飞搜科技有限公司 Smoke detection method and device
CN109086647A (en) * 2018-05-24 2018-12-25 北京飞搜科技有限公司 Smog detection method and equipment
CN108764142A (en) * 2018-05-25 2018-11-06 北京工业大学 Unmanned plane image forest Smoke Detection based on 3DCNN and sorting technique
CN108921039A (en) * 2018-06-07 2018-11-30 南京启德电子科技有限公司 The forest fire detection method of depth convolution model based on more size convolution kernels
CN109165575B (en) * 2018-08-06 2024-02-20 天津艾思科尔科技有限公司 Pyrotechnic recognition algorithm based on SSD frame
CN109165575A (en) * 2018-08-06 2019-01-08 天津艾思科尔科技有限公司 A kind of pyrotechnics recognizer based on SSD frame
CN109035666B (en) * 2018-08-29 2020-05-19 深圳市中电数通智慧安全科技股份有限公司 Fire and smoke detection method and device and terminal equipment
CN109035666A (en) * 2018-08-29 2018-12-18 深圳市中电数通智慧安全科技股份有限公司 A kind of fire-smoke detection method, apparatus and terminal device
CN109271906A (en) * 2018-09-03 2019-01-25 五邑大学 A kind of smog detection method and its device based on depth convolutional neural networks
CN109409224B (en) * 2018-09-21 2023-09-05 河海大学 Method for detecting flame in natural scene
CN109409224A (en) * 2018-09-21 2019-03-01 河海大学 A kind of method of natural scene fire defector
CN109409256A (en) * 2018-10-10 2019-03-01 东南大学 A kind of forest rocket detection method based on 3D convolutional neural networks
CN109598700A (en) * 2018-10-16 2019-04-09 天津大学 Using the incipient fire detection method of convolutional neural networks
CN109522819A (en) * 2018-10-29 2019-03-26 西安交通大学 A kind of fire image recognition methods based on deep learning
CN109087337B (en) * 2018-11-07 2020-07-14 山东大学 Long-time target tracking method and system based on hierarchical convolution characteristics
CN109087337A (en) * 2018-11-07 2018-12-25 山东大学 Long-time method for tracking target and system based on layering convolution feature
CN110096942A (en) * 2018-12-20 2019-08-06 北京以萨技术股份有限公司 A kind of Smoke Detection algorithm based on video analysis
US11210916B2 (en) 2018-12-21 2021-12-28 Fujitsu Limited Smoke detection method and apparatus
CN109598891A (en) * 2018-12-24 2019-04-09 中南民族大学 Method and system for smoke detection using a deep learning classification model
CN109740673A (en) * 2019-01-02 2019-05-10 天津工业大学 Neural network smoke image classification method fusing the dark channel
CN109815863A (en) * 2019-01-11 2019-05-28 北京邮电大学 Smoke and fire detection method and system based on deep learning and image recognition
CN111488772A (en) * 2019-01-29 2020-08-04 杭州海康威视数字技术股份有限公司 Method and apparatus for smoke detection
CN111488772B (en) * 2019-01-29 2023-09-22 杭州海康威视数字技术股份有限公司 Method and device for detecting smoke
CN110059723B (en) * 2019-03-19 2021-01-05 北京工业大学 Robust smoke detection method based on integrated deep convolutional neural network
CN110059723A (en) * 2019-03-19 2019-07-26 北京工业大学 Robust smoke detection method based on integrated deep convolutional neural networks
CN110070106A (en) * 2019-03-26 2019-07-30 罗克佳华科技集团股份有限公司 Smoke detection method, device and electronic equipment
CN110135269A (en) * 2019-04-18 2019-08-16 杭州电子科技大学 Fire image detection method based on mixed color model and neural network
CN111928304A (en) * 2019-05-13 2020-11-13 青岛海尔智能技术研发有限公司 Oil smoke concentration identification method and device and range hood
CN111928304B (en) * 2019-05-13 2022-03-29 青岛海尔智能技术研发有限公司 Oil smoke concentration identification method and device and range hood
CN110009658A (en) * 2019-06-06 2019-07-12 南京邮电大学 Smoke detection method based on component separation
CN110009658B (en) * 2019-06-06 2022-08-23 南京邮电大学 Smoke detection method based on component separation
CN110490043A (en) * 2019-06-10 2019-11-22 东南大学 Forest smoke and fire detection method based on region division and feature extraction
CN110322659A (en) * 2019-06-21 2019-10-11 江西洪都航空工业集团有限责任公司 Smoke detection method
CN110309765A (en) * 2019-06-27 2019-10-08 浙江工业大学 Efficient detection method for moving targets in video
CN110415260A (en) * 2019-08-01 2019-11-05 西安科技大学 Smoke image segmentation and recognition method based on dictionary and BP neural network
CN110415260B (en) * 2019-08-01 2022-02-15 西安科技大学 Smoke image segmentation and identification method based on dictionary and BP neural network
CN110852174A (en) * 2019-10-16 2020-02-28 天津大学 Early smoke detection method based on video monitoring
CN110956611A (en) * 2019-11-01 2020-04-03 武汉纺织大学 Smoke detection method integrated with convolutional neural network
CN110927171A (en) * 2019-12-09 2020-03-27 中国科学院沈阳自动化研究所 Bearing roller chamfer surface defect detection method based on machine vision
CN111091586A (en) * 2019-12-17 2020-05-01 上海工程技术大学 Rapid detection and positioning method for dynamic smoke occlusion areas and application thereof
CN111046827A (en) * 2019-12-20 2020-04-21 哈尔滨理工大学 Video smoke detection method based on convolutional neural network
CN111126293A (en) * 2019-12-25 2020-05-08 国网智能科技股份有限公司 Flame and smoke abnormal condition detection method and system
CN111191575A (en) * 2019-12-27 2020-05-22 国网江苏省电力有限公司电力科学研究院 Naked flame detection method and system based on flame jumping modeling
CN111191575B (en) * 2019-12-27 2022-09-23 国网江苏省电力有限公司电力科学研究院 Naked flame detection method and system based on flame jumping modeling
CN111145222A (en) * 2019-12-30 2020-05-12 浙江中创天成科技有限公司 Fire detection method combining smoke movement trend and textural features
CN112052744A (en) * 2020-08-12 2020-12-08 成都佳华物链云科技有限公司 Environment detection model training method, environment detection method and device
CN112052744B (en) * 2020-08-12 2024-02-09 成都佳华物链云科技有限公司 Environment detection model training method, environment detection method and environment detection device
CN111951250A (en) * 2020-08-14 2020-11-17 西安科技大学 Image-based fire detection method
CN111951250B (en) * 2020-08-14 2024-02-06 西安科技大学 Fire detection method based on image
CN112215122A (en) * 2020-09-30 2021-01-12 中国科学院深圳先进技术研究院 Fire detection method, system, terminal and storage medium based on video image target detection
CN112215122B (en) * 2020-09-30 2023-10-24 中国科学院深圳先进技术研究院 Fire detection method, system, terminal and storage medium based on video image target detection
CN112861737A (en) * 2021-02-11 2021-05-28 西北工业大学 Forest fire smoke detection method based on image dark channel and YOLOv3
CN113743605A (en) * 2021-06-16 2021-12-03 温州大学 Method for searching smoke and fire detection network architectures based on an evolutionary method
CN113792793A (en) * 2021-09-15 2021-12-14 河北雄安京德高速公路有限公司 Road video monitoring and enhancement method in adverse weather environments
CN113792793B (en) * 2021-09-15 2024-01-23 河北雄安京德高速公路有限公司 Road video monitoring and enhancement method in adverse weather environments
CN114049585A (en) * 2021-10-12 2022-02-15 北京控制与电子技术研究所 Mobile phone action detection method based on motion foreground extraction
CN114049585B (en) * 2021-10-12 2024-04-02 北京控制与电子技术研究所 Mobile phone operation detection method based on motion foreground extraction
CN114187734A (en) * 2021-12-01 2022-03-15 深圳市中西视通科技有限公司 Image identification method and system for smoke alarm
CN114022672A (en) * 2022-01-10 2022-02-08 深圳金三立视频科技股份有限公司 Flame data generation method and terminal
CN116797987A (en) * 2022-12-22 2023-09-22 中建新疆建工集团第三建设工程有限公司 Method and system for controlling blasting dust of building
CN116797987B (en) * 2022-12-22 2024-05-31 中建新疆建工集团第三建设工程有限公司 Method and system for controlling blasting dust of building
CN115830516B (en) * 2023-02-13 2023-05-12 新乡职业技术学院 Computer neural network image processing method for battery deflagration detection
CN115830516A (en) * 2023-02-13 2023-03-21 新乡职业技术学院 Computer neural network image processing method for battery detonation detection
CN116977634B (en) * 2023-07-17 2024-01-23 应急管理部沈阳消防研究所 Fire smoke detection method based on laser radar point cloud background subtraction
CN116977634A (en) * 2023-07-17 2023-10-31 应急管理部沈阳消防研究所 Fire smoke detection method based on laser radar point cloud background subtraction

Similar Documents

Publication Publication Date Title
CN107749067A (en) Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks
CN110348376B (en) Pedestrian real-time detection method based on neural network
CN110135269B (en) Fire image detection method based on mixed color model and neural network
CN104809443B (en) License plate detection method and system based on convolutional neural networks
CN104050471B (en) Natural scene character detection method and system
CN106407903A (en) Real-time human abnormal behavior recognition method based on multi-scale convolutional neural networks
CN110956094A (en) RGB-D multi-modal fusion person detection method based on an asymmetric two-stream network
CN110458077B (en) Vehicle color identification method and system
CN108229338A (en) Video behavior recognition method based on deep convolutional features
CN107341452A (en) Human behavior recognition method based on quaternion spatio-temporal convolutional neural networks
CN109508710A (en) Night environment perception method for unmanned vehicles based on an improved YOLOv3 network
CN106845408A (en) Street refuse recognition method in complex environments
CN104599290B (en) Video sensing node-oriented target detection method
CN113536972B (en) Self-supervised cross-domain crowd counting method based on target-domain pseudo labels
CN111653023A (en) Intelligent factory supervision method
CN111738054B (en) Behavior anomaly detection method based on space-time autoencoder network and space-time CNN
CN114648714A (en) YOLO-based workshop normative behavior monitoring method
CN113850242A (en) Storage abnormal target detection method and system based on deep learning algorithm
CN104463869A (en) Video flame image composite recognition method
CN107958219A (en) Image scene classification method based on multiple models and multi-scale feature analysis
CN114580541A (en) Fire video smoke identification method based on dual time-space domain channels
CN113487576A (en) Insect pest image detection method based on channel attention mechanism
CN104463242A (en) Multi-feature motion recognition method based on feature transformation and dictionary learning
CN112560624A (en) High-resolution remote sensing image semantic segmentation method based on model depth integration
CN114463837A (en) Human behavior recognition method and system based on adaptive space-time convolutional network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20180302)