CN112069975A - Comprehensive flame detection method based on ultraviolet, infrared and vision - Google Patents

Comprehensive flame detection method based on ultraviolet, infrared and vision Download PDF

Info

Publication number
CN112069975A
Authority
CN
China
Prior art keywords
flame
detector
image
ultraviolet
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010907770.XA
Other languages
Chinese (zh)
Other versions
CN112069975B (en)
Inventor
Wang Siwei (王思维)
Liu Wei (刘伟)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Zhima Technology Co ltd
Original Assignee
Chengdu Zhima Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Zhima Technology Co ltd filed Critical Chengdu Zhima Technology Co ltd
Priority to CN202010907770.XA priority Critical patent/CN112069975B/en
Publication of CN112069975A publication Critical patent/CN112069975A/en
Application granted granted Critical
Publication of CN112069975B publication Critical patent/CN112069975B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Photometry And Measurement Of Optical Pulse Characteristics (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

The invention provides a comprehensive flame detection method based on ultraviolet, infrared and vision. The data output end of an ultraviolet detector is connected with the first data input end of a controller, the data output end of an infrared detector is connected with the second data input end of the controller, the data output end of a visual detector is connected with the third data input end of the controller, the trigger data end of the visual detector is connected with the trigger data end of the controller, and the communication end of the controller is connected with the communication end of a communication module; the image data acquired by the visual detector is analyzed and processed at the server end to judge the fire condition. The invention combines image-based fire detection technology with ultraviolet and infrared detection, and largely meets the demand for fire detection that is highly sensitive, highly reliable and visible.

Description

Comprehensive flame detection method based on ultraviolet, infrared and vision
Technical Field
The invention relates to the field of image analysis, in particular to a comprehensive flame detection method based on ultraviolet, infrared and vision.
Background
Fire detection technology monitors the current environment with sensors, measures the parameters generated when a fire breaks out, and judges from those parameters whether a fire is occurring. Conventional fire detection uses light, temperature and smoke as the main characteristic variables. Smoke detectors and temperature sensors are at present the most widely used and mature fire detection technologies, but they still have serious shortcomings in large-space environments such as warehouses and indoor stadiums: they are generally insensitive to early-stage fires and cannot be linked with the fire-fighting facilities of large-space buildings to extinguish a fire, so by the time a fire is discovered its intensity is often already difficult to control, easily causing huge losses. Multi-sensor-fusion fire detection technology has therefore attracted much attention.
The comprehensive flame detection method based on ultraviolet, infrared and vision follows the idea of multi-sensor fusion. A burning object radiates ultraviolet, visible and infrared spectra, and within the infrared spectrum of flame radiation the 4.3-4.4 μm band has the greatest radiation intensity, so this characteristic can reasonably be exploited to detect flame. The ultraviolet detector, however, is susceptible to electric arcs and the like, and the infrared detector is susceptible to high-temperature objects, people, sunlight and the like. The ultraviolet and infrared detectors therefore each monitor the current environment, and their trigger signals are fed through an AND gate to a counter; because flame persists over time, the alarm threshold of the counter can be customized to the sensitivity requirement of the application scenario. When the camera acquires the current image, the image is analyzed to judge whether a fire exists in the current environment; if so, the flame area is determined and the brightness of the surrounding environment is used to judge the current fire level.
The comprehensive flame detection method based on ultraviolet, infrared and vision overcomes the defects of traditional temperature-sensing and smoke-sensing sensors: it responds quickly, reduces false alarms through multiple stages, improves detection precision, and can preliminarily judge the current fire level.
Disclosure of Invention
The invention aims at least to solve the technical problems existing in the prior art, and in particular creatively provides a comprehensive flame detection method based on ultraviolet, infrared and vision.
In order to achieve the above purpose, the invention provides a comprehensive flame detection system based on ultraviolet, infrared and vision, which comprises an ultraviolet detector, an infrared detector, a visual detector, a controller, a communication module and a server end;
the data output end of the ultraviolet detector is connected with the first data input end of the controller, the data output end of the infrared detector is connected with the second data input end of the controller, the data output end of the visual detector is connected with the third data input end of the controller, the trigger data end of the visual detector is connected with the trigger data end of the controller, and the communication end of the controller is connected with the communication end of the communication module;
the controller triggers the visual detector to collect image data according to the data collected by the ultraviolet detector and/or the infrared detector, transmits the image data collected by the visual detector to the server end through the communication module, analyzes and processes the image data at the server end, and judges the fire condition.
In a preferred embodiment of the present invention, the controller comprises a first counter, a second counter, a third counter, an AND gate and a processor, wherein the data output end of the ultraviolet detector is connected with the first data input end of the processor, and the first counting data output end of the processor is connected with the input end of the first counter; the data output end of the infrared detector is connected with the second data input end of the processor, and the second counting data output end of the processor is connected with the input end of the second counter; the data output end of the first counter is connected with the first data input end of the AND gate, the data output end of the second counter is connected with the second data input end of the AND gate, the data output end of the AND gate is connected with the input end of the third counter, the data output end of the third counter is connected with the counting input end of the processor, the trigger data end of the processor is connected with the trigger data end of the visual detector, and the data output end of the visual detector is connected with the third data input end of the processor.
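For illustration, the counting-and-fusion behavior of this controller can be sketched in software as below; the concrete threshold values and the trigger count are placeholders (the embodiment section uses a third-counter count of 5), and a real controller would reset the counters over a time window:

```python
# Minimal software sketch of the two-counter / AND-gate / third-counter
# fusion logic. Thresholds A, B and the trigger count are illustrative;
# the patent leaves them scene-dependent. Counters are never reset here,
# whereas a real controller would window them.

class FusionController:
    def __init__(self, threshold_a=10, threshold_b=10, trigger_count=5):
        self.threshold_a = threshold_a      # first-counter threshold A (UV)
        self.threshold_b = threshold_b      # second-counter threshold B (IR)
        self.trigger_count = trigger_count  # third-counter alarm threshold
        self.m = 0  # first counter: ultraviolet detections
        self.n = 0  # second counter: infrared detections
        self.k = 0  # third counter: fused first-order alarms

    def step(self, uv_hit: bool, ir_hit: bool) -> bool:
        """Process one detector sample; return True when the second-order
        alarm should trigger the visual detector (camera)."""
        self.m += uv_hit
        self.n += ir_hit
        uv_alarm = self.m > self.threshold_a   # UV first-order alarm
        ir_alarm = self.n > self.threshold_b   # IR first-order alarm
        if uv_alarm and ir_alarm:              # AND-gate fusion
            self.k += 1
        return self.k >= self.trigger_count
```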
The invention also discloses a comprehensive flame detection method based on ultraviolet, infrared and vision, which comprises the following steps:
S1, the monitored environment is detected in real time through the ultraviolet, infrared and visual composite flame detector. The system comprises an ultraviolet detector, an infrared detector, a visual detector and a controller: the ultraviolet detector detects the ultraviolet radiation emitted by flame, the infrared detector detects the 4.3-4.4 μm band radiated by flame, and the visual detector acquires the flicker-frequency characteristic of the flame and a visual identification image of the flame. When flame occurs, the ultraviolet detector and the infrared detector each detect and identify the flame target and transmit a result signal to the controller. If the visual detector is triggered, it captures, identifies, records and confirms images and videos. A flame detection result with high reliability and high sensitivity, visible to the human eye, is obtained by fusing the identification results of the three sensors.
In a preferred embodiment of the present invention, the S1 includes:
S11, outputting a first-order alarm signal by the ultraviolet detector: counting is performed by the first counter to obtain a statistical value m, a first counter threshold A is set according to the sensitivity requirements of different scenes, and when m is greater than A the controller judges this as a first-order alarm signal of the ultraviolet detector.
S12, outputting a first-order alarm signal by the infrared detector: counting is performed by the second counter to obtain a statistical value n, a second counter threshold B is set according to the sensitivity requirements of different scenes, and when n is greater than B the controller judges this as a first-order alarm signal of the infrared detector.
S13, the first-order alarm signals of the ultraviolet detector and the infrared detector are fused through the AND gate: each time the controller receives first-order alarm signals from both the ultraviolet detector and the infrared detector, the third counter in the controller counts once; when the third counter reaches a preset counting threshold, the controller outputs a second-order alarm signal and transmits it to the visual detector.
S14, after receiving the second-order alarm signal, the visual detector starts to capture images and videos on site until the alarm signal ends or is confirmed manually.
S15, analyzing the obtained environment image and video data: motion detection is used to segment the foreground image, the color, texture and contour features of the foreground image are extracted, the feature quantities are fused, and a support vector machine (SVM) is trained to identify flame.
S16, drawing a brightness change curve within a certain time by monitoring in real time the area proportion of the flame in the image and the brightness change in the current space, eliminating brightness mutations caused by illumination through the trend of the curve, and judging the current fire condition through the flame area in the space and the weighted brightness value.
In a preferred embodiment of the present invention, the S15 includes:
S15-1, performing motion detection by using background difference operation to segment a foreground image;
S15-2, extracting HSI color features of the foreground image;
S15-3, describing texture features by using a gray level co-occurrence matrix;
S15-4, extracting contour information by using a Fourier descriptor (FD);
S15-5, constructing the feature information into a feature vector of a certain dimension;
S15-6, training and recognizing flame by adopting a support vector machine (SVM);
S15-7, extracting the flame stroboscopic feature to confirm the flame.
In a preferred embodiment of the present invention, the motion detection in S15-1 includes:
The motion detection adopts the background difference method to segment the foreground image. When the background is modeled, a Gaussian modeling method is adopted: the Gaussian model considers that the pixel values of the detection area, i.e. its color information and brightness information, satisfy a Gaussian distribution within a certain time. When the pixel value of any point (x, y) in the image I satisfies
|I(x, y) - u(x, y)| ≤ T·σ,
the pixel point (x, y) is a background point; otherwise it is a foreground point. Here u(x, y) is the mean of the pixel value over the previous time period, σ is the standard deviation, and T is a threshold. The Gaussian model needs to be updated in real time to adapt to the slow change of the background, and the updating mode is as follows:
u(t+1,x,y)=α·u(t,x,y)+(1-α)·I(x,y),
after a background image is established, performing differential operation on a current frame and the background image, and segmenting a motion area of the image to obtain a segmented image; the algorithm is as follows:
I_k(i, j) = b'_k(i, j) + m_k(i, j) + n_k(i, j),
d_k(i, j) = I_k(i, j) - b_k(i, j),
wherein I_k(i, j) is the pixel information of the current picture at position (i, j) in the two-dimensional coordinate plane, b'_k(i, j) is the pixel information of the current background image at position (i, j), m_k(i, j) represents the pixel information of the moving object at position (i, j), n_k(i, j) represents noise information in the image, d_k(i, j) is the pixel information of the foreground image, and b_k(i, j) is the background information.
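The background test and update above can be sketched as follows; grayscale float frames and the concrete values of the threshold T and update rate α are assumptions:

```python
import numpy as np

# Sketch of the single-Gaussian background model described above.
# The threshold t and update rate alpha are illustrative values.

def update_background(mean, frame, alpha=0.95):
    """u(t+1,x,y) = alpha * u(t,x,y) + (1 - alpha) * I(x,y)."""
    return alpha * mean + (1.0 - alpha) * frame

def foreground_mask(frame, mean, sigma, t=2.5):
    """A pixel is background when |I - u| <= T * sigma, else foreground."""
    return np.abs(frame - mean) > t * sigma

# usage on grayscale float frames:
# mask = foreground_mask(frame, mean, sigma)
# mean = update_background(mean, frame)
```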
In a preferred embodiment of the present invention, the feature extraction in S15-5 includes:
The HSI value of the flame image is extracted as the color feature of the flame. When describing the spatial relationship of image texture, the statistics-based gray level co-occurrence matrix is a common method, so the texture feature of the flame is extracted with the gray level co-occurrence matrix. When the edge features of the flame are extracted, a Fourier descriptor (FD) is adopted: the Fourier descriptor forms a discrete complex sequence from the contour edge of the segmented foreground image, a one-dimensional discrete Fourier transform is then applied to the sequence, Fourier coefficients describing the contour information of the foreground image are obtained through normalization, and the first K low-frequency Fourier coefficients are retained as Fourier descriptors to describe the contour.
The S15-6 step, in which the feature information is constructed into a feature vector of a certain dimension and a support vector machine (SVM) is trained to identify flame, comprises the following steps:
The SVM is a binary classification model: it first learns from a large number of data samples and then classifies new data according to the learned results. Its basic model is defined as the linear classifier with the largest margin in feature space. The central idea of the SVM is to find the decision surface that maximizes the classification margin, which transforms a complex classification problem into a convex quadratic programming problem in the final solution. During training, the color, texture and edge features of the positive and negative samples are extracted and combined into multidimensional feature vectors, and through training and learning on the positive and negative sample sets, the hyperplane with the largest separating margin in the feature space is solved to classify the samples.
S15-7, extracting the flame stroboscopic feature and confirming the flame comprises the following steps: in the flame combustion process, when the influence of wind action in the environment is small, the flame is relatively stable, and the stroboscopic characteristic of the flame can be described by the change condition of the flame combustion area.
First, the rate of change p_i of the flame area S_{i+1} of the (i+1)-th frame relative to the flame area S_i of the i-th frame is obtained over the t+1 frame images:
p_i = (S_{i+1} - S_i) / S_i,
The change rates p_i are taken as a discrete signal, the discrete-time Fourier transform is calculated to obtain the frequency-domain characteristics of the signal, and the frequency is calculated:
P(k) = Σ_{i=0}^{N-1} p_i·e^(-j2πki/N), k = 0, 1, ..., N-1,
The time-domain signal is thus converted into a frequency-domain signal, and the flicker frequency of the flame is calculated.
The S16 includes:
based on the ultraviolet, infrared and visual (video) composite flame detector, the current combustion grade is calculated as follows: the foreground image is segmented through motion detection, and whether flame exists in the foreground image is judged.
S16-1, calculating the flame weighted average brightness of the current frame: when flame information exists, the weighted average I_ave of the brightness of the whole image is calculated, together with the brightness weighted average of the previous M frames of images within a short time; if this value shows no mutation, it can serve as a data source for evaluating the flame burning level;
S16-2, calculating the flame area of the current frame: at the same time, the proportion of the flame area in the whole image is calculated, and this data is taken as another judgment basis.
S16-3, calculating the brightness value of the current image: the luminance value I at point (x, y) should be:
Figure BDA0002662109360000063
S16-4, calculating the weighted brightness value of the current image: since the brightness of the flame is high overall, in order to reduce the weight of the flame in the brightness mean, the brightness value of a flame pixel in the image is denoted I_fi and the brightness value of a non-flame pixel I_pj; then I_ave is:
I_ave = (β·Σ_{i=1}^{m} I_fi + (1 - β)·Σ_{j=1}^{n} I_pj) / (m + n),
where m and n are the total numbers of flame and non-flame pixels and β is a weight parameter.
S16-5, judging the grade of the current flame combustion process;
after the overall weighted average brightness of the image is obtained, the proportion of the flame pixels of the current image in the whole image is calculated: recording the area of the flame zone as S_f and the cross-sectional area of the current environmental space region as S_a, the percentage T_k is calculated; the flame burning rating λ is:
T_k = S_f / S_a,
λ = 100γT_k + (1 - γ)·I_ave, 0.35 < γ < 0.45,
finally, the λ obtained from parameters such as brightness and flame area is compared with the set thresholds, and the flame combustion grade is judged.
In conclusion, with the above technical scheme, the ultraviolet and infrared composite sensor can monitor whether flame exists in a space. The ultraviolet and infrared sensors respond quickly and are sensitive to the ultraviolet and infrared spectra generated by object combustion, so they can rapidly monitor changes in the space; however, the ultraviolet sensor is also sensitive to noise such as lightning and electric arcs, and the infrared sensor to human bodies and other high-temperature objects, so either used alone easily produces false alarms, and combining the two effectively reduces false alarms and similar situations. When flame exists in the space, the signals trigger the camera through the AND gate to obtain a real-time image of the current space, and image processing is used to further judge whether flame exists in the space.
Image-based fire detection is the trend of fire-monitoring development: compared with traditional detectors it has advantages such as faster response and longer detection distance, making it suitable for large-space buildings. In addition, the proportion of flame pixels in the image and the brightness of the whole space are calculated, the weighted average of the overall brightness is obtained and weighted with the flame area, and the fire is graded in this way, yielding further fire information.
The invention combines image-based fire detection technology with ultraviolet and infrared detection, and largely meets the demand for fire detection that is highly sensitive, highly reliable and visible.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow chart of the method of the present invention.
Fig. 2 is a flow chart of motion detection determination according to the present invention.
FIG. 3 is a flow chart of SVM training recognition according to the present invention.
FIG. 4 is a schematic diagram of a flame original frame in accordance with an embodiment of the present invention.
Fig. 5 is a schematic diagram of a foreground image by the background subtraction method of the present invention.
FIG. 6 is a schematic diagram of an edge detection extracted flame profile image according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
As shown in fig. 1, the ultraviolet, infrared and video composite detector monitors the current environment, transmits the detected data back to the server for analysis and processing, and a warning is sent out when a fire is identified.
As shown in fig. 2, the system monitors the current space through the ultraviolet and infrared sensors. When the ultraviolet and infrared sensors (detectors) detect information conforming to the flame spectrum, they send out signals, which are connected through the AND gate to the third counter; when the count of the third counter reaches 5, the camera (visual detector) is triggered. The camera obtains image information of the current space, the foreground image is extracted through motion detection, a multi-dimensional feature vector is formed from the color, texture and edge feature information of the foreground image, detection is performed with a support vector machine (SVM), and the presence of flame is confirmed again. When flame does exist in the space, the flame area and the weighted average of the space brightness are counted, and their weighted combination is used as the basis for judging the flame grade.
Method for identifying flame in image
Motion detection
As shown in fig. 3, the steps of the motion detection by background subtraction are:
modeling and updating a background image by using a Gaussian model;
carrying out difference operation on the current frame image and a background image generated by the Gaussian model;
setting a threshold value, and comparing each pixel of the image after the difference operation with the threshold value;
The motion detection adopts the background difference method to segment the foreground image. When the background is modeled, a Gaussian modeling method is adopted: the Gaussian model considers that the pixel values of the detection area, i.e. its color information and brightness information, satisfy a Gaussian distribution within a certain time. When the pixel value of any point (x, y) in the image I satisfies
|I(x, y) - u(x, y)| ≤ T·σ,
the pixel point (x, y) is a background point; otherwise it is a foreground point. Here u(x, y) is the mean of the pixel value over the previous time period, σ is the standard deviation, and T is a threshold. The Gaussian model needs to be updated in real time to adapt to the slow change of the background, and the updating mode is as follows:
u(t+1,x,y)=α·u(t,x,y)+(1-α)·I(x,y),
after a background image is established, performing differential operation on a current frame and the background image, and segmenting a motion area of the image to obtain a segmented image; the algorithm is as follows:
I_k(i, j) = b'_k(i, j) + m_k(i, j) + n_k(i, j),
d_k(i, j) = I_k(i, j) - b_k(i, j),
wherein I_k(i, j) is the pixel information of the current picture at position (i, j) in the two-dimensional coordinate plane, b'_k(i, j) is the pixel information of the current background image at position (i, j), m_k(i, j) represents the pixel information of the moving object at position (i, j), n_k(i, j) represents noise information in the image, d_k(i, j) is the pixel information of the foreground image, and b_k(i, j) is the background information.
Extracting color features, texture features and edge features of the foreground image:
when the color features of the foreground image are extracted, the HSI value of each pixel point of the image is mainly collected, the average value of the HSI values is finally calculated, and the HSI value of the positive sample set and the HSI value of the negative sample set are collected in the training stage and serve as sample data sets.
Texture features differ from color features: a color feature is essentially a per-pixel feature, whereas texture is a regional property that cannot be represented by a single pixel point. Texture can be described through the gray level co-occurrence matrix by the spatial correlation of the gray-level features of the picture.
Let the positions of two pixel points in the image be (i, j) and (i', j'), with gray values I_1 and I_2. The probability that, starting from one point and moving a distance d along a direction angle θ, the other point is reached can be represented by the gray level co-occurrence matrix P(I_1, I_2). Denoting the set of all such connected pixel pairs in the target area as S, the spatial gray level co-occurrence matrix can be written as:
P(I_1, I_2) = #{((i, j), (i', j')) ∈ S | I(i, j) = I_1, I(i', j') = I_2} / #S,
where I(i, j) = I_1 means the gray value at pixel point (i, j) in the image is I_1, I(i', j') = I_2 means the gray value at pixel point (i', j') is I_2, and #S is the number of pixel pairs in S.
Texture features are extracted with entropy as the second-order feature statistic, where P(I_1, I_2) is the gray level co-occurrence matrix:
Ent = -Σ_{I_1} Σ_{I_2} P(I_1, I_2)·log2 P(I_1, I_2),
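A minimal sketch of the co-occurrence-matrix entropy; the 16-level gray quantization and the pixel offset (dx, dy) standing in for the distance d and direction angle θ are assumptions:

```python
import numpy as np

def glcm_entropy(gray, dx=1, dy=0, levels=16):
    """Entropy (in bits) of the gray level co-occurrence matrix P(I1, I2)
    for the pixel-pair offset (dx, dy). gray: 2-D uint8 image; the
    `levels`-bin quantization is an illustrative assumption."""
    q = (gray.astype(np.int64) * levels) // (int(gray.max()) + 1)
    h, w = q.shape
    src = q[0:h - dy, 0:w - dx]   # gray level I1 at point (i, j)
    dst = q[dy:h, dx:w]           # gray level I2 at point (i+dy, j+dx)
    p = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(p, (src.ravel(), dst.ravel()), 1.0)
    p /= p.sum()                  # normalize counts to probabilities
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum())
```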
the flame image can be formed into a closed curve by N pixel points, and P is assumedkFor one point on the closed curve, the coordinate plane is taken as a complex plane, and after the point is taken as a starting point and moves clockwise for one circle, the N coordinates of the curve can obtain a complex discrete sequence:
Cn=xn+jyn(n=1,2,3,...,n-1),
(xn,yn) Representing coordinate points on a complex plane;
j represents a complex plane;
subjecting the sequence to a one-dimensional discrete fourier transform:
Figure BDA0002662109360000111
n represents N pixel points of the curve;
c (n) complex plane coordinates representing pixel points;
The data obtained after the discrete Fourier transform are normalized to obtain a series of Fourier coefficients describing the flame contour, and the first K low-frequency Fourier coefficients are retained as the flame shape feature.
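A minimal sketch of the Fourier-descriptor extraction; dropping Z(0) (which only encodes position) and normalizing by the first harmonic are common conventions that the patent does not spell out:

```python
import numpy as np

def fourier_descriptor(contour, k=10):
    """First k low-frequency Fourier coefficients of a closed contour.
    contour: (N, 2) array of (x, y) boundary points in traversal order."""
    c = contour[:, 0] + 1j * contour[:, 1]   # C_n = x_n + j*y_n
    z = np.fft.fft(c)                        # 1-D discrete Fourier transform
    mag = np.abs(z)
    # Normalize by |Z(1)| for scale invariance and skip Z(0),
    # which only carries the contour's position.
    return mag[1:k + 1] / (mag[1] + 1e-12)
```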
Support Vector Machine (SVM) training recognition
The SVM is a binary classification model: it first learns from a large number of data samples and then classifies new data according to the learned results. Its basic model is defined as the linear classifier with the largest margin in feature space. The central idea of the SVM is to find the decision surface that maximizes the classification margin, which transforms a complex classification problem into a convex quadratic programming problem in the final solution. During training, the color, texture and edge features of the positive and negative samples are extracted and combined into multidimensional feature vectors, and through training and learning on the positive and negative sample sets, the hyperplane with the largest separating margin in the feature space is solved to classify the samples.
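A minimal sketch of the SVM stage, using scikit-learn; the linear kernel and the C value are assumptions, since the patent only specifies a maximum-margin SVM over the fused color, texture and edge feature vectors:

```python
import numpy as np
from sklearn.svm import SVC

def train_flame_svm(pos_features, neg_features):
    """pos_features / neg_features: (n_samples, n_dims) arrays of the fused
    color + texture + edge feature vectors for positive/negative samples."""
    x = np.vstack([pos_features, neg_features])
    y = np.hstack([np.ones(len(pos_features)), np.zeros(len(neg_features))])
    clf = SVC(kernel="linear", C=1.0)   # linear max-margin classifier
    clf.fit(x, y)
    return clf

# usage:
# clf = train_flame_svm(pos, neg)
# is_flame = clf.predict(feature_vector.reshape(1, -1))[0] == 1
```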
Stroboscopic character of flame
In the flame combustion process, when the influence of wind action in the environment is small, the flame is relatively stable, and the stroboscopic characteristic of the flame can be described by the change condition of the flame combustion area.
First, the rate of change p_i of the flame area S_{i+1} of the (i+1)-th frame relative to the flame area S_i of the i-th frame is obtained over the t+1 frame images:
p_i = (S_{i+1} - S_i) / S_i,
The change rates p_i are taken as a discrete signal, the discrete-time Fourier transform is calculated to obtain the frequency-domain characteristics of the signal, and the frequency is calculated:
P(k) = Σ_{i=0}^{N-1} p_i·e^(-j2πki/N), k = 0, 1, ..., N-1,
wherein k represents the discrete frequency index;
The time-domain signal is thus converted into a frequency-domain signal, and the flicker frequency of the flame is calculated.
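A minimal sketch of the flicker-frequency computation; the frame rate and the use of a real FFT as the discrete Fourier transform are assumptions:

```python
import numpy as np

def flicker_frequency(areas, fps=25.0):
    """Dominant flicker frequency (Hz) from per-frame flame areas.
    areas: sequence of flame areas S_1 .. S_{t+1}; fps is an assumed rate."""
    s = np.asarray(areas, dtype=np.float64)
    p = (s[1:] - s[:-1]) / np.maximum(s[:-1], 1e-12)  # change rate p_i
    spectrum = np.abs(np.fft.rfft(p - p.mean()))      # DFT of the rate series
    freqs = np.fft.rfftfreq(len(p), d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]         # skip the DC bin
```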
Flame grade division method based on flame area and ambient brightness
① Calculating the flame weighted average brightness of the current frame;
Based on the ultraviolet, infrared and video flame detectors, the current combustion grade is calculated as follows: the foreground image is segmented through motion detection, and whether flame exists in the foreground image is judged. When flame information exists, the weighted average I_ave of the brightness of the whole image is calculated, together with the brightness weighted average of the previous M frames of images within a short time; if this value shows no mutation, it can serve as a data source for evaluating the flame burning level. Meanwhile, the proportion of the flame area in the whole image is calculated, and this data is taken as another judgment basis. Assuming the current image is the n-th frame, the brightness value I at point (x, y) is:
I(x, y) = (R(x, y) + G(x, y) + B(x, y)) / 3,
Since the brightness of the flame is high overall, in order to reduce the weight of the flame in the brightness mean, the brightness value of a flame pixel in the image is denoted I_fi and the brightness value of a non-flame pixel I_pj; then I_ave is:
I_ave = (β·Σ_{i=1}^{m} I_fi + (1 - β)·Σ_{j=1}^{n} I_pj) / (m + n),
wherein the i in I_fi denotes the i-th flame pixel, the j in I_pj denotes the j-th non-flame pixel, m and n are the total numbers of flame and non-flame pixels, and β represents a first weight parameter;
② Grade judgment of the current flame combustion process
After the integral weighted average brightness of the image is obtained, calculating the proportion of the flame pixels of the current image in the integral image: recording the area S of the flame zonefThe cross-sectional area of the current environmental space region is SaCalculating the percentage as Tk(ii) a The flame burning rating λ is:
Figure BDA0002662109360000131
λ=100γTk+(1-γ)Iave0.35 < gamma < 0.45, gamma representing a second weight parameter;
finally, the λ obtained from parameters such as brightness and flame area is compared with the set thresholds, and the flame combustion grade is judged.
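A minimal sketch of the grade computation; the concrete β and γ values are assumptions (the patent only constrains 0.35 < γ < 0.45 and leaves β open):

```python
import numpy as np

def flame_grade(intensity, flame_mask, s_a, beta=0.5, gamma=0.4):
    """lambda = 100 * gamma * T_k + (1 - gamma) * I_ave.
    intensity: (H, W) luminance image I = (R + G + B) / 3;
    flame_mask: boolean flame-pixel mask; s_a: cross-sectional area of the
    monitored space expressed in pixels (an assumption for this sketch)."""
    i_f = intensity[flame_mask]        # flame pixel luminances I_fi
    i_p = intensity[~flame_mask]       # non-flame pixel luminances I_pj
    i_ave = (beta * i_f.sum() + (1 - beta) * i_p.sum()) / intensity.size
    t_k = flame_mask.sum() / s_a       # area ratio T_k = S_f / S_a
    return 100.0 * gamma * t_k + (1.0 - gamma) * i_ave
```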
Examples
Environmental monitoring
An ultraviolet fire detector (ultraviolet detector) detects the 180-240 nm ultraviolet band in the current environment, and the first counter counts the detections; if the statistical value m is greater than the first counter threshold A, the controller judges this as a first-order alarm signal of the ultraviolet detector. At the same time, an infrared flame detector (infrared detector) detects the 4.4 μm infrared band in the current environment, and the second counter counts the detections; if the statistical value n is greater than the second counter threshold B, the controller judges this as a first-order alarm signal of the infrared detector. A signal is then transmitted through the AND gate to the third counter, which triggers the camera (visual detector) when its count reaches 5.
Motion detection
The picture is analyzed; first, motion detection is performed on the acquired picture. Background subtraction is one of the simplest and most efficient motion-recognition methods at present; to obtain an accurate foreground image, an important step is establishing an accurate background image. The background modeling method adopts a Gaussian model to model and update the background image. Fig. 5 shows the motion region obtained by the background subtraction method, where FIG. 4 is the flame original frame, FIG. 5 is the background-difference foreground image, and FIG. 6 is the flame contour image extracted by edge detection.
Extracting color features, texture features and edge features of foreground images
When the color features of the foreground image are extracted, the HSI value of each pixel point of the image is collected and the average of the HSI values is calculated; in the training stage, the HSI values of the positive sample set and the negative sample set are collected as sample data sets. The flame HSI value ranges obtained from statistics of the HSI values of the positive-sample flame data set are shown in Table 1.
TABLE 1 Flame color HSI value ranges

Color model    H component    S component    I component
HSI            0~60           20~100         100~255
The texture features of the image are extracted: the gray level co-occurrence matrix is used to analyze the correlation of the gray-level image of the foreground image, first-order statistical features of the foreground image are obtained from the gray level co-occurrence matrix, and on this basis texture features belonging to second-order feature statistics are extracted. Entropy is used as the second-order feature statistic to extract the texture features; the texture features of flame are counted and compared with those of three interference sources (a mobile light source, an incandescent lamp and a running vehicle lamp), as shown in Table 2.
TABLE 2 Entropy feature statistics of flame and interference-source images

Image number                 1        2        3        4        5        Mean
Flame (bit)                  0.3439   0.3687   0.2877   0.3056   0.3186   0.3249
Mobile light source (bit)    0.1457   0.1148   0.1336   0.1312   0.1377   0.1326
Incandescent lamp (bit)      0.0355   0.0349   0.1347   0.0344   0.0342   0.05452
Running vehicle lamp (bit)   0.0475   0.0429   0.0412   0.0427   0.0419   0.04324
When the edge features of the flame are extracted, a Fourier descriptor (FD) is adopted: the Fourier descriptor forms a discrete complex sequence from the contour edge of the segmented foreground image, a one-dimensional discrete Fourier transform is then applied to the sequence, Fourier coefficients describing the contour information of the foreground image are obtained through normalization, and the first 10 low-frequency Fourier coefficients are retained as Fourier descriptors to describe the contour, as shown in Table 3.
TABLE 3 Statistics of the first 10 Fourier coefficients of the flame edge feature
[The coefficient values are given as an image in the original document.]
Stroboscopic feature
During flame combustion, when the influence of wind in the environment is small and the fire is relatively stable, the flame flicker frequency is distributed between 3 Hz and 25 Hz, with the dominant frequency between 7 Hz and 12 Hz. The stroboscopic character of the flame can be described by the variation of the flame burning area.
First, the rate of change p_i of the flame area S_{i+1} of the (i+1)-th frame relative to the flame area S_i of the i-th frame is obtained over the t+1 frame images:
p_i = (S_{i+1} - S_i) / S_i,
The change rates p_i are taken as a discrete signal, the discrete-time Fourier transform is calculated to obtain the frequency-domain characteristics of the signal, and the frequency is calculated:
P(k) = Σ_{i=0}^{N-1} p_i·e^(-j2πki/N), k = 0, 1, ..., N-1,
The time-domain signal is thus converted into a frequency-domain signal, and the flicker frequency of the flame is calculated.
Flame grade determination based on actual area of flame
① Calculating the flame weighted average brightness of the current frame;
Based on the ultraviolet, infrared and video flame detectors, the current combustion grade is calculated as follows: the foreground image is segmented through motion detection, and whether flame exists in the foreground image is judged. When flame information exists, the weighted average I_ave of the brightness of the whole image is calculated, together with the brightness weighted average of the previous M frames of images within a short time; if this value shows no mutation, it can serve as a data source for evaluating the flame burning level. Meanwhile, the proportion of the flame area in the whole image is calculated, and this data is taken as another judgment basis. Assuming the current image is the n-th frame, the brightness value I at point (x, y) is:
I(x, y) = (R(x, y) + G(x, y) + B(x, y)) / 3,
Since the flame brightness is high overall, in order to reduce the weight of the flame in the brightness mean, the brightness value of a flame pixel in the image is denoted I_fi and the brightness value of a non-flame pixel I_pj; then I_ave is:
I_ave = (β·Σ_{i=1}^{m} I_fi + (1 - β)·Σ_{j=1}^{n} I_pj) / (m + n),
② Grade judgment of the current flame combustion process
After the integral weighted average brightness of the image is obtained, calculating the proportion of the flame pixels of the current image in the integral image: recording the area S of the flame zonefThe cross-sectional area of the current environmental space region is SaCalculating the percentage as Tk(ii) a The flame burning rating λ is:
Figure BDA0002662109360000161
λ=100γTk+(1-γ)Iave,0.35<γ<0.45,
finally, the flame combustion grade λ obtained from parameters such as brightness and flame area is compared with the set thresholds to judge the flame combustion grade: if λ is less than or equal to a preset first flame threshold, the alarm device gives a third-level alarm;
if λ is greater than the preset first flame threshold and less than or equal to a preset second flame threshold, the alarm device gives a second-level alarm; the preset second flame threshold is greater than the preset first flame threshold;
if λ is greater than the preset second flame threshold and less than or equal to a preset third flame threshold, the alarm device gives a first-level alarm; the preset third flame threshold is greater than the preset second flame threshold. The third-level alarm is a light alarm, the second-level alarm is a sound alarm, and the first-level alarm is a sound-and-light alarm.
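A minimal sketch of this three-level mapping; the threshold values are placeholders, since the patent presets the flame thresholds by configuration:

```python
def alarm_level(lam, t1=20.0, t2=50.0, t3=90.0):
    """Map the burning grade lambda to an alarm level (3 = lowest).
    t1 < t2 < t3 are the preset first/second/third flame thresholds;
    the values here are illustrative placeholders."""
    if lam <= t1:
        return 3   # third-level: light alarm
    if lam <= t2:
        return 2   # second-level: sound alarm
    return 1       # first-level: sound-and-light alarm
```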
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (8)

1. A comprehensive flame detection system based on ultraviolet, infrared and vision is characterized by comprising an ultraviolet detector, an infrared detector, a vision detector, a controller, a communication module and a server end;
the data output end of the ultraviolet detector is connected with the first data input end of the controller, the data output end of the infrared detector is connected with the second data input end of the controller, the data output end of the visual detector is connected with the third data input end of the controller, the trigger data end of the visual detector is connected with the trigger data end of the controller, and the communication end of the controller is connected with the communication end of the communication module;
the controller triggers the visual detector to collect image data according to the data collected by the ultraviolet detector and/or the infrared detector, transmits the image data collected by the visual detector to the server end through the communication module, analyzes and processes the image data at the server end, and judges the fire condition.
2. A comprehensive flame detection method based on ultraviolet, infrared and vision is characterized by comprising the following steps:
S1, detecting the monitored environment in real time through an ultraviolet, infrared and visual composite flame detector;
when flame occurs, the ultraviolet detector and the infrared detector respectively detect and identify a flame target, and transmit a result signal to the controller;
and if the visual detector is triggered, the visual detector captures, identifies, records and confirms images and videos.
3. The ultraviolet, infrared and visual based integrated flame detection method of claim 2, wherein the S1 comprises:
S11, outputting a first-order alarm signal by the ultraviolet detector: counting by a first counter to obtain a statistical value m, setting a threshold value A of the first counter according to the requirements of different scenes on sensitivity, and outputting a first-order alarm signal by the ultraviolet detector when m is greater than A;
S12, outputting a first-order alarm signal by the infrared detector: counting by a second counter to obtain a statistical value n, setting a threshold value B of the second counter according to the requirements of different scenes on sensitivity, and outputting a first-order alarm signal by the infrared detector when n is greater than B;
S13, fusing the first-order alarm signals of the ultraviolet detector and the infrared detector through an AND gate, outputting a second-order alarm signal, and transmitting the second-order alarm signal to the visual detector;
S14, after receiving the second-order alarm signal, the visual detector starts to capture images and videos on site until the alarm signal ends or is confirmed manually;
S15, analyzing the obtained environment image and video data, segmenting the foreground image by using motion detection, extracting the color, texture and contour features of the foreground image, fusing the feature quantities, training a support vector machine to identify flame, and finally extracting the flicker features of the flame to further confirm the flame;
S16, drawing a brightness change curve within a certain time by monitoring in real time the area proportion of the flame in the image and the brightness change in the current space, eliminating brightness mutations caused by illumination through the trend of the curve, and judging the current fire condition through the flame area in the space and the weighted brightness value.
4. The ultraviolet, infrared and visual based integrated flame detection method of claim 3, wherein the S15 comprises:
S15-1, performing motion detection by using background difference operation to segment a foreground image;
S15-2, extracting HSI color features of the foreground image;
S15-3, describing texture features by using a gray level co-occurrence matrix;
S15-4, extracting contour information by using a Fourier descriptor;
S15-5, constructing the feature information into a feature vector of a certain dimension;
S15-6, training and identifying flames by adopting a support vector machine;
S15-7, extracting the flame stroboscopic features to confirm the flame.
5. The ultraviolet, infrared and visual based integrated flame detection method of claim 4, wherein the motion detection in S15-1 comprises:
the motion detection adopts the background difference method to segment the foreground image; when the background is modeled, a Gaussian modeling method is adopted, wherein the Gaussian model considers that the pixel values of the detection area, i.e. its color information and brightness information, satisfy a Gaussian distribution within a certain time; when the pixel value of any point (x, y) in the image I satisfies:
|I(x, y) - u(x, y)| ≤ T·σ,
wherein I(x, y) represents the pixel value at point (x, y) in the given image;
the pixel point (x, y) is a background point, otherwise it is a foreground point; u(x, y) is the mean of the pixel value over the previous time period, σ is the standard deviation, and T is a threshold; the Gaussian model needs to be updated in real time to adapt to the slow change of the background, and the updating mode is as follows:
u(t+1,x,y)=α·u(t,x,y)+(1-α)·I(x,y);
wherein u (t +1, x, y) represents the mean value of the pixel values at the moment of t + 1;
α represents an update parameter value;
u (t, x, y) represents the mean value of the pixel values at time t.
6. The ultraviolet, infrared and visual based comprehensive flame detection method of claim 4, wherein the multi-dimensional vector features in S15-5 comprise:
extracting color features, texture features and edge features of flames, wherein the edge features are extracted by a Fourier descriptor, the Fourier descriptor forms a discrete complex sequence on the contour edge of the segmented foreground image, then one-dimensional discrete Fourier transform is carried out on the sequence, Fourier coefficients for describing contour information of the foreground image are obtained through normalization, and the first K low-frequency Fourier coefficients are reserved as Fourier descriptors to describe the contour.
7. The ultraviolet, infrared and visual based integrated flame detection method of claim 4, wherein the flame strobe feature of S15-7 comprises:
in the flame combustion process, when the influence of wind action in the environment is small and the fire is relatively stable, the stroboscopic characteristic of the flame can be described by the change condition of the flame combustion area;
first, the rate of change p_i of the flame area S_{i+1} of the (i+1)-th frame relative to the flame area S_i of the i-th frame is obtained over the t+1 frame images:
p_i = (S_{i+1} - S_i) / S_i,
the change rates p_i are taken as a discrete signal, the discrete-time Fourier transform is calculated to obtain the frequency-domain characteristics of the signal, and the frequency is calculated:
P(k) = Σ_{i=0}^{N-1} p_i·e^(-j2πki/N), k = 0, 1, ..., N-1,
wherein N represents the total number of frames and j represents the imaginary unit;
and converting the time-domain signal into a frequency-domain signal to calculate the flicker frequency of the flame.
8. The ultraviolet, infrared and visual based integrated flame detection method of claim 3, wherein the S16 comprises:
based on the ultraviolet, infrared and visual composite flame detector, the current combustion grade is calculated as follows: segmenting the foreground image through motion detection, and judging whether the foreground image has flames or not;
S16-1, calculating the flame weighted average brightness of the current frame: when flame information exists, calculating the weighted average I_ave of the brightness of the whole image, and calculating the brightness weighted average of the previous M frames of images within a short time, wherein if the value shows no mutation it can be used as a data source for evaluating the flame burning level;
S16-2, calculating the flame area of the current frame: meanwhile, calculating the proportion of the flame area in the whole image, and taking the data as another judgment basis;
S16-3, calculating the brightness value of the current image: the luminance value I at point (x, y) should be:
I(x, y) = (R(x, y) + G(x, y) + B(x, y)) / 3,
wherein R represents the red component, B represents the blue component, and G represents the green component;
S16-4, calculating the weighted brightness value of the current image: since the brightness of the flame is high overall, in order to reduce the weight of the flame in the brightness mean, the brightness value of a flame pixel in the image is denoted I_fi and the brightness value of a non-flame pixel I_pj; then I_ave is:
I_ave = (β·Σ_{i=1}^{m} I_fi + (1 - β)·Σ_{j=1}^{n} I_pj) / (m + n),
wherein m represents the total number of flame pixel points;
n represents the total number of non-flame pixel points;
S16-5, judging the grade of the current flame combustion process;
after the overall weighted average brightness of the image is obtained, calculating the proportion of the flame pixels of the current image in the whole image: recording the area of the flame zone as S_f and the cross-sectional area of the current environmental space region as S_a, and calculating the percentage T_k; the flame burning rating λ is:
T_k = S_f / S_a,
λ = 100γT_k + (1 - γ)·I_ave, 0.35 < γ < 0.45,
and the λ obtained from the calculated brightness and flame area is finally compared with a set threshold value to judge the flame combustion grade.
CN202010907770.XA 2020-09-02 2020-09-02 Comprehensive flame detection method based on ultraviolet, infrared and vision Active CN112069975B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010907770.XA CN112069975B (en) 2020-09-02 2020-09-02 Comprehensive flame detection method based on ultraviolet, infrared and vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010907770.XA CN112069975B (en) 2020-09-02 2020-09-02 Comprehensive flame detection method based on ultraviolet, infrared and vision

Publications (2)

Publication Number Publication Date
CN112069975A true CN112069975A (en) 2020-12-11
CN112069975B CN112069975B (en) 2024-06-04

Family

ID=73665770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010907770.XA Active CN112069975B (en) 2020-09-02 2020-09-02 Comprehensive flame detection method based on ultraviolet, infrared and vision

Country Status (1)

Country Link
CN (1) CN112069975B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409642A (en) * 2021-06-28 2021-09-17 山东科技大学 Small-size fire smoke flow simulation experiment and numerical simulation combined system
CN113409535A (en) * 2021-04-29 2021-09-17 辛米尔视觉科技(上海)有限公司 Tunnel flame detection system based on computer vision and artificial intelligence algorithm
CN113436406A (en) * 2021-08-25 2021-09-24 广州乐盈信息科技股份有限公司 Sound-light alarm system
CN113609769A (en) * 2021-08-03 2021-11-05 无锡格林通安全装备有限公司 Design method of multiband red ultraviolet hydrogen flame detector
CN113743328A (en) * 2021-09-08 2021-12-03 无锡格林通安全装备有限公司 Flame detection method and device based on long-term and short-term memory model
CN113762385A (en) * 2021-09-08 2021-12-07 无锡格林通安全装备有限公司 Flame detection method and device based on Gaussian mixture model
CN113781736A (en) * 2021-09-23 2021-12-10 深圳市保国特卫安保技术服务有限公司 Building fire-fighting early warning method, system, equipment and storage medium
CN113804305A (en) * 2021-09-14 2021-12-17 新疆有色金属工业(集团)有限责任公司 Electric arc furnace flame temperature measurement method and system based on visual perception
CN113916381A (en) * 2021-10-11 2022-01-11 无锡格林通安全装备有限公司 Flame detection method and device based on FFT, BP and rectangular filter bank
CN114283548A (en) * 2021-12-27 2022-04-05 北京科技大学天津学院 Fire continuous monitoring method and system for unmanned aerial vehicle
CN115424404A (en) * 2022-09-06 2022-12-02 湖南省永神科技有限公司 Lighting system with fire alarm function
CN115494193A (en) * 2022-11-16 2022-12-20 常州市建筑科学研究院集团股份有限公司 Machine vision-based flame transverse propagation detection method and system for single body combustion test
CN117253231A (en) * 2023-11-15 2023-12-19 四川弘和数智集团有限公司 Oil-gas station image processing method and device, electronic equipment and storage medium
CN117636565A (en) * 2024-01-24 2024-03-01 贵州道坦坦科技股份有限公司 Multispectral flame detection system based on spectral feature data fusion

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120235042A1 (en) * 2011-03-16 2012-09-20 Honeywell International Inc. Mwir sensor for flame detection
CN107123226A (en) * 2017-06-16 2017-09-01 招商局重庆交通科研设计院有限公司 One kind is used for highway tunnels fire detection alignment system
CN107590941A (en) * 2017-09-19 2018-01-16 重庆英卡电子有限公司 Photo taking type mixed flame detector and its detection method
CN109029736A (en) * 2018-08-22 2018-12-18 王永福 A kind of compound flame detector
CN111179279A (en) * 2019-12-20 2020-05-19 成都指码科技有限公司 Comprehensive flame detection method based on ultraviolet and binocular vision

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120235042A1 (en) * 2011-03-16 2012-09-20 Honeywell International Inc. Mwir sensor for flame detection
CN107123226A (en) * 2017-06-16 2017-09-01 招商局重庆交通科研设计院有限公司 One kind is used for highway tunnels fire detection alignment system
CN107590941A (en) * 2017-09-19 2018-01-16 重庆英卡电子有限公司 Photo taking type mixed flame detector and its detection method
CN109029736A (en) * 2018-08-22 2018-12-18 王永福 A kind of compound flame detector
CN111179279A (en) * 2019-12-20 2020-05-19 成都指码科技有限公司 Comprehensive flame detection method based on ultraviolet and binocular vision

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113409535A (en) * 2021-04-29 2021-09-17 辛米尔视觉科技(上海)有限公司 Tunnel flame detection system based on computer vision and artificial intelligence algorithm
CN113409642A (en) * 2021-06-28 2021-09-17 山东科技大学 Small-size fire smoke flow simulation experiment and numerical simulation combined system
CN113409642B (en) * 2021-06-28 2023-02-24 山东科技大学 Small-size fire smoke flow simulation experiment and numerical simulation combined system
CN113609769A (en) * 2021-08-03 2021-11-05 无锡格林通安全装备有限公司 Design method of multiband red ultraviolet hydrogen flame detector
CN113436406A (en) * 2021-08-25 2021-09-24 广州乐盈信息科技股份有限公司 Sound-light alarm system
CN113743328A (en) * 2021-09-08 2021-12-03 无锡格林通安全装备有限公司 Flame detection method and device based on long-term and short-term memory model
CN113762385A (en) * 2021-09-08 2021-12-07 无锡格林通安全装备有限公司 Flame detection method and device based on Gaussian mixture model
CN113804305B (en) * 2021-09-14 2024-04-09 新疆有色金属工业(集团)有限责任公司 Arc furnace flame temperature measurement method and system based on visual perception
CN113804305A (en) * 2021-09-14 2021-12-17 新疆有色金属工业(集团)有限责任公司 Electric arc furnace flame temperature measurement method and system based on visual perception
CN113781736B (en) * 2021-09-23 2022-12-27 深圳市保国特卫安保技术服务有限公司 Building fire-fighting early warning method, system, equipment and storage medium
CN113781736A (en) * 2021-09-23 2021-12-10 深圳市保国特卫安保技术服务有限公司 Building fire-fighting early warning method, system, equipment and storage medium
CN113916381A (en) * 2021-10-11 2022-01-11 无锡格林通安全装备有限公司 Flame detection method and device based on FFT, BP and rectangular filter bank
CN114283548A (en) * 2021-12-27 2022-04-05 北京科技大学天津学院 Fire continuous monitoring method and system for unmanned aerial vehicle
CN115424404A (en) * 2022-09-06 2022-12-02 湖南省永神科技有限公司 Lighting system with fire alarm function
CN115494193A (en) * 2022-11-16 2022-12-20 常州市建筑科学研究院集团股份有限公司 Machine vision-based flame transverse propagation detection method and system for single body combustion test
CN117253231A (en) * 2023-11-15 2023-12-19 四川弘和数智集团有限公司 Oil-gas station image processing method and device, electronic equipment and storage medium
CN117253231B (en) * 2023-11-15 2024-01-26 四川弘和数智集团有限公司 Oil-gas station image processing method and device, electronic equipment and storage medium
CN117636565A (en) * 2024-01-24 2024-03-01 贵州道坦坦科技股份有限公司 Multispectral flame detection system based on spectral feature data fusion
CN117636565B (en) * 2024-01-24 2024-04-26 贵州道坦坦科技股份有限公司 Multispectral flame detection system based on spectral feature data fusion

Also Published As

Publication number Publication date
CN112069975B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN112069975B (en) Comprehensive flame detection method based on ultraviolet, infrared and vision
CN110516609B (en) Fire disaster video detection and early warning method based on image multi-feature fusion
CN106845443B (en) Video flame detection method based on multi-feature fusion
CN107622258B (en) Rapid pedestrian detection method combining static underlying characteristics and motion information
CN115691026B (en) Intelligent early warning monitoring management method for forest fire prevention
KR101822924B1 (en) Image based system, method, and program for detecting fire
CN108389359B (en) Deep learning-based urban fire alarm method
CN109637068A (en) Intelligent pyrotechnics identifying system
CN107944359A (en) Flame detecting method based on video
CN111126136A (en) Smoke concentration quantification method based on image recognition
CN111126293A (en) Flame and smoke abnormal condition detection method and system
CN101711393A (en) System and method based on the fire detection of video
CN109377713B (en) Fire early warning method and system
CN109034038B (en) Fire identification device based on multi-feature fusion
CN111951250A (en) Image-based fire detection method
CN113192038B (en) Method for recognizing and monitoring abnormal smoke and fire in existing flame environment based on deep learning
Wang et al. A new fire detection method using a multi-expert system based on color dispersion, similarity and centroid motion in indoor environment
CN113963301A (en) Space-time feature fused video fire and smoke detection method and system
CN107688793A (en) A kind of outside transformer substation fire automatic monitoring method for early warning
KR101196678B1 (en) Real-time fire detection device and method
Chen et al. Fire detection using spatial-temporal analysis
CN107704818A (en) A kind of fire detection system based on video image
CN107729811B (en) Night flame detection method based on scene modeling
CN112364884B (en) Method for detecting moving object
CN113657250A (en) Flame detection method and system based on monitoring video

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 109, 1st Floor, Building 2, No. 11, Tianying Road, High tech Zone, Chengdu, Sichuan 611700

Applicant after: Chengdu Shidao Information Technology Co.,Ltd.

Address before: 611731 floor 2, No. 4, Xinhang Road, West Park, high tech Zone (West Zone), Chengdu, Sichuan

Applicant before: CHENGDU ZHIMA TECHNOLOGY CO.,LTD.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant