CN113537204A - Small flame detection method based on infrared features and machine learning and computer device - Google Patents

Small flame detection method based on infrared features and machine learning and computer device

Info

Publication number
CN113537204A
Authority
CN
China
Prior art keywords
frame
image
visible light
infrared
light image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010313706.9A
Other languages
Chinese (zh)
Inventor
周严伟
占兆武
谢恺
罗为
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Longhua New Generation Communication And Intelligent Computing Research Institute
Fuhuake Precision Industry Shenzhen Co ltd
Original Assignee
Shenzhen Longhua New Generation Communication And Intelligent Computing Research Institute
Fuhuake Precision Industry Shenzhen Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Longhua New Generation Communication And Intelligent Computing Research Institute, Fuhuake Precision Industry Shenzhen Co ltd filed Critical Shenzhen Longhua New Generation Communication And Intelligent Computing Research Institute
Priority to CN202010313706.9A
Publication of CN113537204A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a small flame detection method based on infrared features and machine learning, and to a computer device. The method obtains the coordinate correspondence between each frame of infrared image and each frame of visible light image from camera parameters and a scale factor, derives the intersection region between the two frames, eliminates static blocks in the intersection region with a frame difference method to obtain a candidate region, and sends the candidate region to a trained machine learning model for detection. In this way the position of a small flame in a wide-range environment can be located, fire information can be reported earlier, the probability of a small fire developing into a large fire is reduced, and good information support is provided for eliminating potential safety hazards in time.

Description

Small flame detection method based on infrared features and machine learning and computer device
Technical Field
The invention relates to the technical field of image recognition, in particular to a small flame detection method and a computer device based on visible light image characteristics, infrared image characteristics and machine learning.
Background
Fire threatens the safety of human life and property, so detecting a fire early and issuing early-warning information is an important research subject. Conventional fire detection methods mainly monitor by-products generated during a fire, or other environmental variables, with photoelectric sensing devices or particle sensors, for example temperature monitoring, flame color monitoring, smoke particle concentration monitoring and environmental humidity monitoring. However, a conventional fire detector generally discriminates on the basis of a single measurement; it is only suitable for small indoor spaces, short distances and low interference, and therefore has great limitations. With the rapid development of science and technology, vision-based fire detection has become a main research direction for fire prevention: it uses surveillance video to obtain richer information, has stronger anti-interference capability and is applicable to a wider range of scenes. However, current research on vision-based fire detection mainly analyzes flames that are already burning, and research on detecting small flames at an early stage remains limited.
At present, fire detection research mainly focuses on outdoor fires such as forest fires and building fires, where the flame area is relatively large; for this type of fire, color-model detection (e.g. RGB, YCbCr) or machine learning methods can achieve a certain detection effect. In the initial stage of a fire, however, the flame area is small and occupies only a small part of the picture, so detection with RGB or YCbCr color models may be unsatisfactory or fail entirely: loose filtering conditions lead to false detections, while strict conditions miss the small flame. If a machine learning method is used instead, existing models are generally trained on large-flame data; small-flame data sets are scarce and difficult to collect, and a model trained on large flames has difficulty detecting small flames.
Disclosure of Invention
In view of the above, there is a need for a small flame detection method and a computer device based on infrared features and machine learning to locate the position of a small flame in a wider environment, report fire information earlier, and reduce the probability of the small flame developing into a large flame.
A first aspect of the present application provides a method for detecting a small flame based on infrared features and machine learning, the method comprising:
obtaining each frame of infrared image and each frame of visible light image from a video stream;
converting each frame of infrared image into a gray image, and carrying out contour searching and screening on the gray image according to contour screening conditions to obtain a first key area of each frame of infrared image;
inputting each frame of visible light image into a color model, and analyzing a second key area of the visible light image from each frame of visible light image according to a color filtering condition of the color model and the contour screening conditions;
obtaining a corresponding coordinate relation between each frame of infrared image and each frame of visible light image according to camera parameters and scale factors, and obtaining an intersection area between the first key area and the second key area according to the corresponding coordinate relation;
eliminating the static block in the intersection area by using a frame difference method to obtain a candidate area;
sending the candidate region into a trained machine learning model for detection to obtain a detection result; and
and displaying the result of each frame of visible light image or each frame of infrared image according to the detection result.
Preferably, the converting the each frame of infrared image into a grayscale image includes:
and taking the R component brightness, the G component brightness and the B component brightness in each frame of infrared image as gray values of three gray images, and selecting one gray value from the gray values of the three gray images as the gray value of each frame of infrared image.
Preferably, the color model is an RGB color model, and the analyzing the second key region of each frame of visible light image from each frame of visible light image according to the color filtering condition and the contour filtering condition of the color model includes:
carrying out RGB color filtering on each frame of visible light image according to color filtering conditions to obtain a template of each frame of visible light image;
expanding the template of each frame of visible light image; and
and carrying out contour searching and filtering on the expanded template according to the contour screening conditions to obtain a second key area of each frame of visible light image.
Preferably, the contour screening conditions include polygon fitting accuracy, contour aspect ratio, upper and lower limits of contour area, number of vertices of contour polygon fitting, and/or white area ratio.
Preferably, the obtaining of the corresponding coordinate relationship between each frame of infrared image and each frame of visible light image according to the camera parameters and the scale factors includes:
and respectively carrying out camera calibration on each frame of infrared image and each frame of visible light image by adopting a calibration method to obtain the camera parameters and the scale factors, obtaining the coordinate offset of each frame of infrared image relative to each frame of visible light image according to the camera parameters and the scale factors, and obtaining the corresponding coordinate relation between each frame of infrared image and each frame of visible light image according to the coordinate offset.
Preferably, the scale factor is obtained according to the following method:
providing a calibration plate, wherein the calibration plate comprises a plurality of circle centers;
calculating a first pixel difference of two circle centers in the calibration plate corresponding to each frame of infrared image;
calculating a second pixel difference of two circle centers in the calibration plate corresponding to each frame of visible light image; and
and calculating the scale factor according to the ratio of the first pixel difference to the second pixel difference.
Preferably, the obtaining of the coordinate offset of each frame of infrared image relative to each frame of visible light image according to the camera parameters and the scale factors includes:
scaling the infrared images and the visible light images to a uniform size according to the scale factor; and
and calculating the coordinate offset according to the pixel coordinates of the circle center of the calibration plate in each scaled frame of infrared image and the pixel coordinates of the circle center of the calibration plate in each scaled frame of visible light image.
Preferably, the calculating the coordinate offset according to the pixel coordinates of the circle center of the calibration plate in each scaled frame of infrared image and in each scaled frame of visible light image includes:
according to formula Xdiff=Xn-X'nAnd Ydiff=Yn-Y'nCalculating the coordinate offset, wherein XnThe n-th circle center of the calibration plate is the pixel coordinate, X 'of the zoomed infrared image of each frame relative to the X axis of the pixel coordinate system'nThe n-th circle center of the calibration plate is the pixel coordinate, Y, of the X axis relative to the pixel coordinate system in each frame of the zoomed visible light imagenThe n-th circle center of the calibration plate is the pixel coordinate, Y ', of the zoomed infrared image of each frame relative to the Y axis of the pixel coordinate system'nAnd the n-th circle center of the calibration plate is the pixel coordinate of the Y axis relative to the pixel coordinate system in each frame of the zoomed visible light image.
Preferably, the displaying the result of each frame of visible light image or each frame of infrared image according to the detection result includes:
recording the coordinate position of the area where the detection result in each frame of visible light image or each frame of infrared image is a fire disaster; and
and displaying each frame of visible light image or each frame of infrared image whose detection result is a fire, together with the coordinate position of the fire region in that frame.
A second aspect of the application provides a computer device comprising a processor and a memory, wherein the processor implements the small flame detection method based on infrared features and machine learning when executing a computer program stored in the memory.
According to the above scheme, the corresponding coordinate relationship between each frame of infrared image and each frame of visible light image is obtained according to camera parameters and a scale factor, the intersection region between the two frames is obtained, the static blocks in the intersection region are eliminated by a frame difference method to obtain a candidate region, and the candidate region is sent to a trained machine learning model for detection. In this way the position of a small flame in a wide-range environment can be located, fire information can be reported earlier, the probability of a small fire developing into a large fire is reduced, and good information support is provided for eliminating potential safety hazards in time.
Drawings
Fig. 1 is a flowchart of a method for detecting a small flame based on infrared features and machine learning according to an embodiment of the present invention.
Fig. 2 is a schematic application environment diagram of a small flame detection method based on infrared features and machine learning in an embodiment of the present invention.
Fig. 3 is a schematic diagram of eliminating a static block in a key region of each frame of image by using a frame difference method to obtain a candidate region according to an embodiment of the present invention.
Fig. 4 is a schematic view of a small flame detection device according to an embodiment of the invention.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a detailed description of the present invention will be given below with reference to the accompanying drawings and specific embodiments. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention; the described embodiments are merely some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on the embodiments of the present invention without inventive effort fall within the scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
Preferably, the small flame detection method based on infrared features and machine learning is applied to one or more computer devices. The computer device is a device capable of automatically performing numerical calculation and/or information processing according to a preset or stored instruction, and the hardware thereof includes, but is not limited to, a processor, an external storage medium, a memory, and the like.
The computer device may be, but is not limited to, a desktop computer, a notebook computer, a cloud server, a smart phone, and the like. The computer device can be in man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad, gesture recognition equipment, voice control equipment and the like.
Example one
Fig. 1 is a flowchart of a method for detecting a small flame based on infrared features and machine learning according to an embodiment of the present invention. The order of the steps in the flow chart may be changed and some steps may be omitted according to different needs. For convenience of explanation, only portions related to the embodiments of the present invention are shown.
As shown in fig. 1, the method for detecting small flames based on infrared features and machine learning specifically includes the following steps.
And step S11, obtaining each frame of infrared image and each frame of visible light image from the video stream.
Referring to fig. 2, a schematic diagram of an application environment of the method for detecting a small flame in an embodiment of the invention is shown. The method is applied in a computer device 10. The computer device 10 includes a camera 11, and the computer device 10 acquires a video stream through the camera 11 and obtains each frame of infrared image and each frame of visible light image from the video stream. In another embodiment, the computer device 10 may obtain a video stream from an internal storage device, and obtain each frame of infrared image and each frame of visible light image from the video stream. In other embodiments, the computer device 10 may also obtain a video stream from an electronic device to which it is communicatively coupled. The electronic device may be, but is not limited to, a server cluster, a mobile phone, a tablet computer, and the like. The computer device 10 may be communicatively coupled to the electronic device via a network and obtain a video stream from the electronic device. In one embodiment, the network for supporting the computer device 10 to communicate with the electronic device may be a wired network or a Wireless network, such as radio, Wireless Fidelity (WIFI), cellular, satellite, broadcast, etc.
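As a minimal illustration of this step, the two streams could be read frame by frame with OpenCV; the sketch below is an assumption for illustration only, and the stream sources are placeholders rather than anything specified in the patent.

```python
import cv2

# Placeholder sources; in practice these would be the infrared and
# visible-light channels of camera 11, a stored file, or a network stream.
ir_cap = cv2.VideoCapture("ir_stream.mp4")
vis_cap = cv2.VideoCapture("visible_stream.mp4")

while True:
    ok_ir, ir_frame = ir_cap.read()      # one frame of infrared image
    ok_vis, vis_frame = vis_cap.read()   # one frame of visible light image
    if not (ok_ir and ok_vis):
        break
    # ... steps S12 to S17 would operate on ir_frame and vis_frame ...

ir_cap.release()
vis_cap.release()
```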
And step S12, converting each frame of infrared image into a gray image, and carrying out contour searching and screening on the gray image according to contour screening conditions to obtain a first key area of each frame of infrared image.
In this embodiment, each frame of infrared image is converted into a grayscale image by a component method. Specifically, the R component brightness, the G component brightness and the B component brightness in each frame of infrared image are taken as the gray values of three grayscale images, and any one of these gray values is taken as the gray value of each frame of infrared image. In another embodiment, each frame of infrared image may be converted into a grayscale image by a maximum value method. Specifically, the maximum of the R component brightness, the G component brightness and the B component brightness in each frame of infrared image is taken as the gray value of each frame of infrared image. In other embodiments, each frame of infrared image may be converted into a grayscale image by an averaging method or a weighted averaging method. In this embodiment, after the grayscale image is obtained, contour search and screening are performed on the grayscale image according to the contour screening conditions to obtain the first key region of each frame of infrared image, where a contour is a curve formed by connecting continuous points having the same gray level.
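A minimal sketch of these conversion options follows, assuming BGR-ordered frames as returned by OpenCV; the function name and defaults are illustrative and not part of the patent.

```python
import cv2
import numpy as np

def ir_to_gray(frame_bgr, method="component", component="R"):
    """Convert one frame of infrared image to a grayscale image using the
    component, maximum or average method described above."""
    b, g, r = cv2.split(frame_bgr)
    if method == "component":
        # Component method: take one of the R/G/B component brightnesses as the gray value
        return {"R": r, "G": g, "B": b}[component]
    if method == "max":
        # Maximum value method: take the largest of the three component brightnesses
        return np.maximum(np.maximum(r, g), b)
    # Average method: mean of the three component brightnesses
    return ((r.astype(np.uint16) + g + b) // 3).astype(np.uint8)
```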
And step S13, inputting each frame of visible light image into a color model, and analyzing each frame of visible light image according to the color filtering condition and the contour screening condition of the color model to obtain a second key area of the visible light image.
In this embodiment, the color model is an RGB color model, and the analyzing the second key region of each frame of visible light image from each frame of visible light image according to the color filtering condition and the contour filtering condition of the color model includes: carrying out RGB color filtering on each frame of visible light image according to color filtering conditions to obtain a template of each frame of visible light image; expanding the template of each frame of visible light image; and carrying out contour searching and filtering on the expanded template according to the contour screening conditions to obtain a second key area of each frame of visible light image. In this embodiment, the color filtering conditions are:
R(x,y)>G(x,y)>B(x,y)
0.25≤G(x,y)/(R(x,y)+1)≤0.65
0.05≤B(x,y)/(R(x,y)+1)≤0.45
0.20≤B(x,y)/(G(x,y)+1)≤0.60
wherein, (x, y) is the pixel position of the image, and R (x, y), G (x, y), B (x, y) are respectively the R component value, G component value, B component value corresponding to the pixel position (x, y).
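A sketch of color filtering and dilation under the conditions listed above follows. Only the four conditions reproduced in the text are implemented (one further condition appears only as an image in the original publication and is omitted here), and the dilation kernel size is an illustrative assumption rather than a value from the patent.

```python
import cv2
import numpy as np

def flame_color_template(frame_bgr, dilate_kernel=(5, 5)):
    """Apply the listed RGB color filtering conditions to one frame of
    visible light image and dilate the resulting template."""
    b, g, r = [c.astype(np.float32) for c in cv2.split(frame_bgr)]
    mask = (
        (r > g) & (g > b)
        & (g / (r + 1) >= 0.25) & (g / (r + 1) <= 0.65)
        & (b / (r + 1) >= 0.05) & (b / (r + 1) <= 0.45)
        & (b / (g + 1) >= 0.20) & (b / (g + 1) <= 0.60)
    )
    template = mask.astype(np.uint8) * 255
    # Dilation before contour search, as described for step S13
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, dilate_kernel)
    return cv2.dilate(template, kernel)
```

The dilation merges nearby flame-colored pixels so that a small flame forms a single connected region for the subsequent contour search.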
In the present embodiment, the contour screening conditions include, but are not limited to, polygon fitting accuracy, contour aspect ratio, upper and lower limits of contour area, number of vertices of the contour polygon fit, and/or white area ratio. The polygon fitting accuracy refers to the maximum allowed distance between the fitted polygon and the original contour, set when performing polygon fitting on a contour of the image; the smaller the polygon fitting accuracy, the more vertices the fitted polygon has, but the more time is consumed. In particular embodiments, the polygon fitting accuracy may be set dynamically according to the contour perimeter. Because the polygon fitted to the contour of a fire region in each frame of image has a relatively large number of vertices, in this embodiment, when the contours of the image are fitted with polygons according to the polygon fitting accuracy, the contours are further screened according to the number of vertices of the polygon fit so as to remove relatively regular detection regions in the image. When the contours of each frame of visible light image or each frame of infrared image are detected, detection regions whose aspect ratio exceeds the contour aspect ratio limit are also removed; for example, when a long interfering color such as a yellow railing appears in a frame of visible light image or infrared image, it can be removed in this way. Detection regions whose area is too small or too large in each frame of visible light image or infrared image are likewise removed according to the upper and lower limits of contour area. The white area ratio refers to the ratio of the white area in each frame of visible light image or infrared image to a completely white image of the same size. Referring to Table 1, the contour parameters and their setting values according to an embodiment of the invention are shown.
TABLE 1 (contour screening parameters and their setting values; reproduced as an image in the original publication)
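A sketch of contour search and screening along the lines of the conditions above, applied to a binary template; all numeric thresholds below are illustrative placeholders rather than the values of Table 1.

```python
import cv2

def screen_contours(binary, eps_ratio=0.01, min_vertices=6,
                    min_area=20, max_area=5000, max_aspect=4.0):
    """Keep only contours that satisfy the screening conditions: area limits,
    aspect ratio and vertex count of the polygon fit."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    kept = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (min_area < area < max_area):
            continue                      # upper and lower limits of contour area
        x, y, w, h = cv2.boundingRect(c)
        if max(w, h) / max(min(w, h), 1) > max_aspect:
            continue                      # contour aspect ratio
        # Polygon fitting accuracy set dynamically from the contour perimeter
        eps = eps_ratio * cv2.arcLength(c, True)
        poly = cv2.approxPolyDP(c, eps, True)
        if len(poly) < min_vertices:
            continue                      # fire contours fit polygons with many vertices
        kept.append((x, y, w, h))
    return kept
```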
And step S14, obtaining a corresponding coordinate relation between each frame of infrared image and each frame of visible light image according to camera parameters and scale factors, and obtaining an intersection area between the first key area and the second key area according to the corresponding coordinate relation.
In this embodiment, the obtaining of the corresponding coordinate relationship between each frame of infrared image and each frame of visible light image according to the calibrated camera parameters and the scale factor includes: performing camera calibration on each frame of infrared image and each frame of visible light image respectively by a calibration method to obtain the camera parameters and the scale factor, obtaining the coordinate offset of each frame of infrared image relative to each frame of visible light image according to the camera parameters and the scale factor, and obtaining the corresponding coordinate relationship between each frame of infrared image and each frame of visible light image according to the coordinate offset. In this embodiment, the calibration method is the calibration method in OpenCV, where OpenCV is a cross-platform computer vision library. In this embodiment, a calibration plate is provided, and the calibration plate includes a plurality of circle centers. A first pixel difference between two circle centers of the calibration plate in each frame of infrared image is calculated, a second pixel difference between the same two circle centers of the calibration plate in each frame of visible light image is calculated, and the scale factor is calculated from the ratio of the first pixel difference to the second pixel difference. In this embodiment, the scale factor is calculated according to the formula S = T1/T2, where T1 = Tn - Tn-1 and T2 = T'n - T'n-1; Tn and Tn-1 are the pixel coordinate values of the n-th and (n-1)-th circle centers of the calibration plate in each frame of infrared image, T'n and T'n-1 are the pixel coordinate values of the n-th and (n-1)-th circle centers of the calibration plate in each frame of visible light image, and n is a positive integer.
In this embodiment, the obtaining of the coordinate offset of each frame of infrared image relative to each frame of visible light image according to the camera parameters and the scale factor includes: scaling each frame of infrared image and each frame of visible light image to a uniform size according to the scale factor, and calculating the coordinate offset according to the pixel coordinates of the circle centers of the calibration plate in each scaled frame of infrared image and in each scaled frame of visible light image. In this embodiment, the coordinate offset is calculated according to the formulas Xdiff = Xn - X'n and Ydiff = Yn - Y'n, where Xn is the pixel coordinate on the X axis of the pixel coordinate system of the n-th circle center of the calibration plate in each scaled frame of infrared image, X'n is the pixel coordinate on the X axis of the pixel coordinate system of the n-th circle center of the calibration plate in each scaled frame of visible light image, Yn is the pixel coordinate on the Y axis of the pixel coordinate system of the n-th circle center of the calibration plate in each scaled frame of infrared image, and Y'n is the pixel coordinate on the Y axis of the pixel coordinate system of the n-th circle center of the calibration plate in each scaled frame of visible light image.
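A minimal sketch of how the scale factor and offsets could be computed from detected calibration-plate circle centers, under the assumption (not stated explicitly in the patent) that the visible-light coordinates are scaled by S to the infrared image's size before the offsets are taken; all names are illustrative.

```python
def scale_factor(ir_centers, vis_centers, n=1):
    """S = T1 / T2, where T1 and T2 are the pixel differences between the n-th
    and (n-1)-th calibration-plate circle centers in the infrared image and in
    the visible light image respectively (centers are (x, y) tuples)."""
    t1 = ir_centers[n][0] - ir_centers[n - 1][0]
    t2 = vis_centers[n][0] - vis_centers[n - 1][0]
    return t1 / t2

def coordinate_offset(ir_center, vis_center, s):
    """Xdiff = Xn - X'n and Ydiff = Yn - Y'n for the n-th circle center, with the
    visible-light coordinates scaled by S so both images share a uniform size."""
    return ir_center[0] - vis_center[0] * s, ir_center[1] - vis_center[1] * s

def visible_to_infrared(x, y, s, x_diff, y_diff):
    """Map a visible-light pixel into infrared coordinates so the first and
    second key areas can be intersected."""
    return x * s + x_diff, y * s + y_diff
```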
And step S15, eliminating the static blocks in the intersection area by using a frame difference method to obtain a candidate area.
In this embodiment, the eliminating of the stationary blocks in the intersection region by a frame difference method to obtain a candidate region includes: performing a differential operation between the current frame image containing the intersection region and the immediately preceding frame image, subtracting the corresponding pixel points of the two frame images to obtain a differential image; performing binarization on the differential image pixel by pixel to obtain a binarized image; and determining the stationary blocks in the current frame image from the binarized image and removing them to obtain the candidate region. In this embodiment, the current frame image containing the intersection region may be a visible light image or an infrared image. Referring to fig. 3, a schematic diagram of eliminating the stationary blocks in the key region of each frame of image by a frame difference method to obtain candidate regions according to an embodiment of the present invention is shown. In this embodiment, the current frame image containing the intersection region is denoted as the n-th frame, the preceding frame image is denoted as the (n-1)-th frame, and the two frame images are denoted as Fn and Fn-1 respectively; the gray values of their corresponding pixel points are recorded as fn(x, y) and fn-1(x, y). The gray values of the corresponding pixel points of the two frames are subtracted and the absolute value is taken, giving a differential image Dn, where Dn(x, y) = |fn(x, y) - fn-1(x, y)|. A threshold T is set, and the differential image is binarized by setting a pixel to 255 if Dn(x, y) > T and to 0 otherwise, obtaining the binarized image. The set of points with gray value 0 in the binarized image is taken as the stationary blocks in the current frame image, the set of points with gray value 255 is taken as the moving target region, and the stationary blocks are removed to obtain the candidate region. In this embodiment, when the binarized image is obtained, it is further determined whether the white area ratio in the binarized image exceeds a preset threshold value, and a region whose white area ratio does not exceed the preset threshold value is treated as a stationary block, which prevents flames from being falsely detected due to small changes in the image.
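A sketch of the frame-difference elimination of stationary blocks; the threshold T and the white-area-ratio limit below are illustrative values, not taken from the patent, and the ratio check is applied here to the whole mask for simplicity, whereas the description above applies it per region.

```python
import cv2
import numpy as np

def motion_mask(prev_frame_gray, curr_frame_gray, T=25, min_white_ratio=0.001):
    """Dn(x, y) = |fn(x, y) - fn-1(x, y)| thresholded at T: 255 marks the
    moving target region, 0 marks stationary blocks."""
    diff = cv2.absdiff(curr_frame_gray, prev_frame_gray)
    _, binary = cv2.threshold(diff, T, 255, cv2.THRESH_BINARY)
    # If the white area ratio stays below the preset value, treat the change as
    # too small to be a flame and return an all-stationary mask.
    if np.count_nonzero(binary) / binary.size < min_white_ratio:
        return np.zeros_like(binary)
    return binary
```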
And step S16, sending the candidate region to a trained machine learning model for detection to obtain a detection result.
In this embodiment, the machine learning model is a convolutional neural network. The convolutional neural network comprises a plurality of convolutional layers, a maximum pool sampling layer and a full connection layer. The convolution layer and the maximum pool sampling layer are alternately connected and used for carrying out feature extraction on the frame image containing the candidate area. The plurality of fully-connected layers are interconnected. The last maximum pool sampling layer in the maximum pool sampling layers is connected with a first full connection layer of the full connection layers and used for inputting a feature image acquired through feature extraction to the first full connection layer, the last full connection layer in the full connection layers is a classifier, and the classifier is used for classifying the feature image to obtain a detection result. In the present embodiment, the classification result is a fire or a non-fire. In this embodiment, the convolutional neural network is an Alexnet network, and the Alexnet network includes five convolutional layers, three maximum pool sampling layers, and three full connection layers.
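A sketch of an AlexNet-style classifier matching the layer counts described above (five convolutional layers, three max-pool sampling layers, three fully connected layers, two classes). The patent does not name a framework or layer widths; PyTorch, the standard AlexNet channel sizes and a 224x224 input are assumptions here.

```python
import torch
import torch.nn as nn

class FlameAlexNet(nn.Module):
    """AlexNet-style network: alternating convolution / max-pooling layers for
    feature extraction, followed by fully connected layers ending in a
    two-class (fire / non-fire) classifier."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        self.classifier = nn.Sequential(
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, 4096), nn.ReLU(inplace=True),
            nn.Linear(4096, num_classes),
        )

    def forward(self, x):
        x = self.features(x)          # alternating conv / max-pool feature extraction
        x = torch.flatten(x, 1)
        return self.classifier(x)     # last fully connected layer acts as the classifier
```

In use, each candidate region would be cropped from the frame, resized to the assumed input size and passed through the network, and the class with the larger output score taken as fire or non-fire.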
In this embodiment, before step S16, the method further includes: and reversely adjusting the weight parameters in the convolutional neural network layer by adopting a gradient descent method to minimize a loss function so as to train the convolutional neural network.
In this embodiment, before step S16, the method further includes: and expanding the candidate region.
And step S17, displaying the result of each frame of visible light image or each frame of infrared image according to the detection result.
In this embodiment, the displaying of the result of each frame of visible light image or each frame of infrared image according to the detection result includes: recording the coordinate position of the region whose detection result is a fire in each frame of visible light image or each frame of infrared image; and displaying each frame of visible light image or each frame of infrared image whose detection result is a fire, together with the coordinate position of the fire region in that frame.
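A sketch of recording and displaying the detected fire regions; the drawing color, label text and window name are arbitrary choices for illustration.

```python
import cv2

def show_fire_regions(frame, fire_boxes):
    """Draw the recorded coordinate positions of regions detected as fire
    on the frame and display it."""
    for (x, y, w, h) in fire_boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.putText(frame, "fire", (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    cv2.imshow("small flame detection", frame)
    cv2.waitKey(1)
```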
According to the above scheme, the corresponding coordinate relationship between each frame of infrared image and each frame of visible light image is obtained according to camera parameters and a scale factor, the intersection region between the two frames is obtained, the static blocks in the intersection region are eliminated by a frame difference method to obtain a candidate region, and the candidate region is sent to a trained machine learning model for detection. In this way the position of a small flame in a wide-range environment can be located, fire information can be reported earlier, the probability of a small fire developing into a large fire is reduced, and good information support is provided for eliminating potential safety hazards in time.
Example two
Fig. 4 is a structural diagram of a small flame detection device according to a second embodiment of the present invention, and only the parts related to the second embodiment of the present invention are shown for convenience of description, and the details are as follows.
Referring to fig. 4, the small flame detection device 100 may be divided into a plurality of functional modules according to functions performed by the small flame detection device, and the functional modules are used for performing various steps in the corresponding embodiment of fig. 1 to realize the small flame detection function. In an embodiment of the present invention, the functional modules of the small flame detection apparatus 100 may include an image acquisition module 101, a first filtering module 102, a second filtering module 103, an intersection region determination module 104, a static block elimination module 105, a detection module 106, and a display module 107. The functions of the respective functional blocks will be described in detail in the following embodiments.
The image acquisition module 101 obtains each frame of infrared image and each frame of visible light image from the video stream.
The first screening module 102 converts each frame of infrared image into a gray image, and performs contour search and screening on the gray image according to contour screening conditions to obtain a first key area of each frame of infrared image.
In this embodiment, the first filtering module 102 converts each frame of infrared image into a grayscale image by a component method. Specifically, the first filtering module 102 takes the R component brightness, the G component brightness and the B component brightness in each frame of infrared image as the gray values of three grayscale images, and selects one of these gray values as the gray value of each frame of infrared image. In another embodiment, the first filtering module 102 converts each frame of infrared image into a grayscale image by a maximum value method. Specifically, the first filtering module 102 takes the maximum of the R component brightness, the G component brightness and the B component brightness in each frame of infrared image as the gray value of each frame of infrared image. In other embodiments, the first filtering module 102 may convert each frame of infrared image into a grayscale image by an averaging method or a weighted averaging method. In this embodiment, after the grayscale image is obtained, the first filtering module 102 performs contour search and screening on the grayscale image according to the contour screening conditions to obtain the first key region of each frame of infrared image, where a contour is a curve formed by connecting continuous points having the same gray level.
The second filtering module 103 inputs each frame of visible light image into a color model, and analyzes a second key region of the visible light image from each frame of visible light image according to a color filtering condition and a contour filtering condition of the color model.
In this embodiment, the color model is an RGB color model, and the second filtering module 103 performs RGB color filtering on each frame of visible light image according to a color filtering condition to obtain a template of each frame of visible light image; expanding the template of each frame of visible light image; and carrying out contour searching and filtering on the expanded template according to the contour screening conditions to obtain a second key area of each frame of visible light image. In this embodiment, the color filtering conditions are:
R(x,y)>G(x,y)>B(x,y)
0.25≤G(x,y)/(R(x,y)+1)≤0.65
0.05≤B(x,y)/(R(x,y)+1)≤0.45
0.20≤B(x,y)/(G(x,y)+1)≤0.60
wherein, (x, y) is the pixel position of the image, and R (x, y), G (x, y), B (x, y) are respectively the R component value, G component value, B component value corresponding to the pixel position (x, y).
In the present embodiment, the contour screening conditions include, but are not limited to, polygon fitting accuracy, contour aspect ratio, upper and lower limits of contour area, number of vertices of the contour polygon fit, and/or white area ratio. The polygon fitting accuracy refers to the maximum allowed distance between the fitted polygon and the original contour, set when performing polygon fitting on a contour of the image; the smaller the polygon fitting accuracy, the more vertices the fitted polygon has, but the more time is consumed. In particular embodiments, the polygon fitting accuracy may be set dynamically according to the contour perimeter. Because the polygon fitted to the contour of a fire region in each frame of image has a relatively large number of vertices, in this embodiment, when the contours of the image are fitted with polygons according to the polygon fitting accuracy, the contours are further screened according to the number of vertices of the polygon fit so as to remove relatively regular detection regions in the image. When the contours of each frame of visible light image or each frame of infrared image are detected, detection regions whose aspect ratio exceeds the contour aspect ratio limit are also removed; for example, when a long interfering color such as a yellow railing appears in a frame of visible light image or infrared image, it can be removed in this way. Detection regions whose area is too small or too large in each frame of visible light image or infrared image are likewise removed according to the upper and lower limits of contour area. The white area ratio refers to the ratio of the white area in each frame of visible light image or infrared image to a completely white image of the same size.
The intersection region determining module 104 obtains a corresponding coordinate relationship between each frame of infrared image and each frame of visible light image according to the camera parameters and the scale factors, and obtains an intersection region between the first key region and the second key region according to the corresponding coordinate relationship.
In this embodiment, the intersection region determination module 104 performs camera calibration on each frame of infrared image and each frame of visible light image respectively by a calibration method to obtain the camera parameters and the scale factor, obtains the coordinate offset of each frame of infrared image relative to each frame of visible light image according to the camera parameters and the scale factor, and obtains the corresponding coordinate relationship between each frame of infrared image and each frame of visible light image according to the coordinate offset. In this embodiment, the calibration method is the calibration method in OpenCV, where OpenCV is a cross-platform computer vision library. In this embodiment, a calibration plate is provided, and the calibration plate includes a plurality of circle centers. A first pixel difference between two circle centers of the calibration plate in each frame of infrared image is calculated, a second pixel difference between the same two circle centers of the calibration plate in each frame of visible light image is calculated, and the scale factor is calculated from the ratio of the first pixel difference to the second pixel difference. In this embodiment, the scale factor is calculated according to the formula S = T1/T2, where T1 = Tn - Tn-1 and T2 = T'n - T'n-1; Tn and Tn-1 are the pixel coordinate values of the n-th and (n-1)-th circle centers of the calibration plate in each frame of infrared image, T'n and T'n-1 are the pixel coordinate values of the n-th and (n-1)-th circle centers of the calibration plate in each frame of visible light image, and n is a positive integer.
In this embodiment, the intersection region determination module 104 scales each frame of infrared image and each frame of visible light image to a uniform size according to the scale factor, and calculates the coordinate offset according to the pixel coordinates of the circle centers of the calibration plate in each scaled frame of infrared image and in each scaled frame of visible light image. In this embodiment, the intersection region determination module 104 calculates the coordinate offset according to the formulas Xdiff = Xn - X'n and Ydiff = Yn - Y'n, where Xn is the pixel coordinate on the X axis of the pixel coordinate system of the n-th circle center of the calibration plate in each scaled frame of infrared image, X'n is the pixel coordinate on the X axis of the pixel coordinate system of the n-th circle center of the calibration plate in each scaled frame of visible light image, Yn is the pixel coordinate on the Y axis of the pixel coordinate system of the n-th circle center of the calibration plate in each scaled frame of infrared image, and Y'n is the pixel coordinate on the Y axis of the pixel coordinate system of the n-th circle center of the calibration plate in each scaled frame of visible light image.
The static block elimination module 105 eliminates the static block in the intersection region by using a frame difference method to obtain a candidate region.
In this embodiment, the static block elimination module 105 performs a differential operation between the current frame image containing the intersection region and the immediately preceding frame image by a frame difference method, subtracting the corresponding pixel points of the two frame images to obtain a differential image; performs binarization on the differential image pixel by pixel to obtain a binarized image; and determines the stationary blocks in the current frame image from the binarized image and removes them to obtain the candidate region. In this embodiment, the current frame image containing the intersection region may be a visible light image or an infrared image. In this embodiment, the current frame image containing the intersection region is denoted as the n-th frame, the preceding frame image is denoted as the (n-1)-th frame, and the two frame images are denoted as Fn and Fn-1 respectively; the gray values of their corresponding pixel points are recorded as fn(x, y) and fn-1(x, y). The gray values of the corresponding pixel points of the two frames are subtracted and the absolute value is taken, giving a differential image Dn, where Dn(x, y) = |fn(x, y) - fn-1(x, y)|. A threshold T is set, and the differential image is binarized by setting a pixel to 255 if Dn(x, y) > T and to 0 otherwise, obtaining the binarized image. The set of points with gray value 0 in the binarized image is taken as the stationary blocks in the current frame image, the set of points with gray value 255 is taken as the moving target region, and the stationary blocks are removed to obtain the candidate region. In this embodiment, when the binarized image is obtained, it is further determined whether the white area ratio in the binarized image exceeds a preset threshold value, and a region whose white area ratio does not exceed the preset threshold value is treated as a stationary block, which prevents flames from being falsely detected due to small changes in the image.
The detection module 106 sends the candidate region to a trained machine learning model for detection to obtain a detection result.
In this embodiment, the machine learning model is a convolutional neural network. The convolutional neural network comprises a plurality of convolutional layers, a maximum pool sampling layer and a full connection layer. The convolution layer and the maximum pool sampling layer are alternately connected and used for carrying out feature extraction on the frame image containing the candidate area. The plurality of fully-connected layers are interconnected. The last maximum pool sampling layer in the maximum pool sampling layers is connected with a first full connection layer of the full connection layers and used for inputting a feature image acquired through feature extraction to the first full connection layer, the last full connection layer in the full connection layers is a classifier, and the classifier is used for classifying the feature image to obtain a detection result. In the present embodiment, the classification result is a fire or a non-fire. In this embodiment, the convolutional neural network is an Alexnet network, and the Alexnet network includes five convolutional layers, three maximum pool sampling layers, and three full connection layers.
In this embodiment, the detection module 106 is further configured to train the convolutional neural network by performing a layer-by-layer back adjustment on the weight parameters in the convolutional neural network by using a gradient descent method to minimize a loss function.
In this embodiment, the detection module 106 is further configured to expand the candidate region.
The display module 107 displays the result of each frame of visible light image or each frame of infrared image according to the detection result.
In this embodiment, the display module 107 records the coordinate position of the region whose detection result is a fire in each frame of visible light image or each frame of infrared image, and displays each frame of visible light image or each frame of infrared image whose detection result is a fire, together with the coordinate position of the fire region in that frame.
According to the above scheme, the corresponding coordinate relationship between each frame of infrared image and each frame of visible light image is obtained according to camera parameters and a scale factor, the intersection region between the two frames is obtained, the static blocks in the intersection region are eliminated by a frame difference method to obtain a candidate region, and the candidate region is sent to a trained machine learning model for detection. In this way the position of a small flame in a wide-range environment can be located, fire information can be reported earlier, the probability of a small fire developing into a large fire is reduced, and good information support is provided for eliminating potential safety hazards in time.
EXAMPLE III
Referring to fig. 2, the computer device 10 includes a memory 20, a processor 30, and a computer program 40 stored in the memory 20 and executable on the processor 30. The processor 30, when executing the computer program 40, implements the steps in the above-described embodiment of the small flame detection method, such as steps S11 to S17 shown in fig. 1. Alternatively, the processor 30, when executing the computer program 40, implements the functions of the modules/units in the above-described embodiment of the small flame detection device, such as modules 101 to 107 in fig. 4.
Illustratively, the computer program 40 may be partitioned into one or more modules/units that are stored in the memory 20 and executed by the processor 30 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 40 in the computer device 10. For example, the computer program 40 may be divided into the image acquisition module 101, the first filtering module 102, the second filtering module 103, the intersection region determination module 104, the static block elimination module 105, the detection module 106, and the display module 107 in fig. 4, where the specific functions of each module are described in the second embodiment.
In this embodiment, the computer device 10 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. It will be understood by those skilled in the art that fig. 2 is merely an example of the computer device 10 and does not constitute a limitation of the computer device 10, which may include more or fewer components than those shown, or combine some components, or use different components; for example, the computer device 10 may further include input and output devices, a network access device, a bus, etc.
The Processor 30 may be a Central Processing Unit (CPU), and may include other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field-Programmable Gate arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, and the like. The general purpose processor may be a microprocessor or the processor may be any conventional processor or the like, and the processor 30 is the control center of the computer device 10 and connects the various parts of the entire computer device 10 using various interfaces and lines.
The memory 20 may be used for storing the computer program 40 and/or the module/unit, and the processor 30 implements various functions of the computer device 10 by running or executing the computer program and/or the module/unit stored in the memory 20 and calling data stored in the memory 20. The memory 20 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the stored data area may store data (such as audio data, a phonebook, etc.) created according to the use of the computer device 10, and the like. The storage 20 may include an external storage medium, and may also include a memory. In addition, the memory 20 may include high speed random access memory, and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other volatile solid state storage device.
The modules/units integrated with the computer apparatus 10 may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow of the method of implementing the above embodiments may also be implemented by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the above method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, etc. It should be noted that the computer readable medium may contain content that is subject to appropriate increase or decrease as required by legislation and patent practice in jurisdictions, for example, in some jurisdictions, computer readable media may not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
In the embodiments provided in the present invention, it should be understood that the disclosed computer apparatus and method can be implemented in other ways. For example, the above-described embodiments of the computer apparatus are merely illustrative, and for example, the division of the units is only one logical functional division, and there may be other divisions when the actual implementation is performed.
In addition, functional units in the embodiments of the present invention may be integrated into the same processing unit, or each unit may exist alone physically, or two or more units are integrated into the same unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be clear to a person skilled in the art that the present invention is not limited to the details of the exemplary embodiments presented above, and that it can be implemented in other specific forms without departing from the spirit or essential characteristics of the invention. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. The units or computer apparatus recited in the claims may also be implemented by the same unit or computer apparatus through software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A small flame detection method based on infrared features and machine learning is characterized by comprising the following steps:
obtaining each frame of infrared image and each frame of visible light image from a video stream;
converting each frame of infrared image into a grayscale image, and carrying out contour searching and screening on the grayscale image according to contour screening conditions to obtain a first key area of each frame of infrared image;
inputting each frame of visible light image into a color model, and analyzing each frame of visible light image according to a color filtering condition of the color model and the contour screening conditions to obtain a second key area of each frame of visible light image;
obtaining a corresponding coordinate relation between each frame of infrared image and each frame of visible light image according to camera parameters and scale factors, and obtaining an intersection area between the first key area and the second key area according to the corresponding coordinate relation;
eliminating static blocks in the intersection area by using a frame difference method to obtain a candidate area;
sending the candidate area into a trained machine learning model for detection to obtain a detection result; and
displaying the result of each frame of visible light image or each frame of infrared image according to the detection result.
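(Illustrative note, not part of the claims.) A minimal Python/OpenCV sketch of how the frame-difference step recited in claim 1 might be realized: blocks inside the intersection area whose change between consecutive frames is small are treated as static and discarded. The block format, threshold value, and function name are assumptions made here for illustration, not the patented implementation.

```python
import cv2

def remove_static_blocks(prev_gray, curr_gray, blocks, motion_thresh=8.0):
    # Absolute per-pixel difference between two consecutive grayscale frames.
    diff = cv2.absdiff(curr_gray, prev_gray)
    candidates = []
    for (x, y, w, h) in blocks:  # blocks given as (x, y, width, height) in pixels
        # A block is kept as a candidate area only if its mean inter-frame
        # change exceeds the motion threshold; otherwise it is static.
        if diff[y:y + h, x:x + w].mean() > motion_thresh:
            candidates.append((x, y, w, h))
    return candidates
```

The surviving candidate areas would then be cropped from the frame and passed to the trained machine learning model for the final detection step.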
2. The infrared-feature-based, machine-learning-based small flame detection method of claim 1, wherein the converting each frame of infrared image into a grayscale image comprises:
taking the R component brightness, the G component brightness and the B component brightness in each frame of infrared image as the gray values of three grayscale images, and selecting one of the three gray values as the gray value of each frame of infrared image.
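(Illustrative note, not part of the claims.) A short sketch of the channel-selection reading of claim 2, assuming the infrared frame is delivered as a three-channel BGR array; which channel is selected is an assumption.

```python
import cv2

def ir_frame_to_gray(frame_bgr, channel="R"):
    # Treat each of the R, G, B component brightnesses as its own gray image
    # and pick one of them as the grayscale image of the infrared frame.
    b, g, r = cv2.split(frame_bgr)
    return {"B": b, "G": g, "R": r}[channel]
```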
3. The infrared-feature-based machine-learning-based small flame detection method of claim 1, wherein the color model is an RGB color model, and the analyzing each frame of visible light image according to the color filtering condition of the color model and the contour screening conditions to obtain the second key area comprises:
carrying out RGB color filtering on each frame of visible light image according to the color filtering condition to obtain a template of each frame of visible light image;
dilating the template of each frame of visible light image; and
carrying out contour searching and screening on the dilated template according to the contour screening conditions to obtain the second key area of each frame of visible light image.
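(Illustrative note, not part of the claims.) A minimal sketch of the claim-3 pipeline: RGB color filtering to obtain a template (mask), dilating the template, then contour search and screening. The color bounds, kernel size, and area limits are placeholders, not the patented values.

```python
import cv2
import numpy as np

def visible_key_areas(frame_bgr, lower=(0, 80, 180), upper=(120, 220, 255),
                      min_area=20, max_area=5000):
    # Color filtering: keep pixels inside the (B, G, R) bounds as the template.
    mask = cv2.inRange(frame_bgr,
                       np.array(lower, dtype=np.uint8),
                       np.array(upper, dtype=np.uint8))
    # Dilate the template to close small gaps in the filtered regions.
    mask = cv2.dilate(mask, np.ones((5, 5), dtype=np.uint8), iterations=1)
    # Contour search, then screening by area to obtain the second key areas.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if min_area <= cv2.contourArea(c) <= max_area]
```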
4. The infrared-feature-based machine-learning-based small flame detection method of claim 2, wherein the contour screening conditions comprise a polygon fitting accuracy, a contour aspect ratio, upper and lower limits of contour area, a number of vertices of the polygon fitted to the contour, and/or a white area ratio.
5. The infrared-feature-based machine-learning-based small flame detection method of claim 1, wherein the obtaining of the corresponding coordinate relationship between each frame of infrared image and each frame of visible light image according to camera parameters and scale factors comprises:
performing camera calibration on each frame of infrared image and each frame of visible light image respectively by using a calibration method to obtain the camera parameters and the scale factors; obtaining a coordinate offset of each frame of infrared image relative to each frame of visible light image according to the camera parameters and the scale factors; and obtaining the corresponding coordinate relation between each frame of infrared image and each frame of visible light image according to the coordinate offset.
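(Illustrative note, not part of the claims.) One possible way to compose the corresponding coordinate relation of claim 5 once the scale factor and the coordinate offsets are known. Which image is rescaled and the sign convention of the offsets are assumptions, chosen here to be consistent with claims 7 and 8 below.

```python
def ir_to_visible(x_ir, y_ir, ir_to_uniform, x_diff, y_diff):
    # ir_to_uniform: factor that maps infrared pixel coordinates onto the
    # uniform (visible-light) pixel scale of claim 7.
    # (x_diff, y_diff): the offsets Xdiff, Ydiff of claim 8, measured between
    # the scaled infrared image and the scaled visible light image.
    x_s, y_s = x_ir * ir_to_uniform, y_ir * ir_to_uniform
    return x_s - x_diff, y_s - y_diff
```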
6. The infrared-feature-based, machine-learning-based small flame detection method of claim 5, wherein the scale factor is obtained according to the following method:
providing a calibration plate, wherein the calibration plate comprises a plurality of circle centers;
calculating a first pixel difference of two circle centers in the calibration plate corresponding to each frame of infrared image;
calculating a second pixel difference of two circle centers in the calibration plate corresponding to each frame of visible light image; and
calculating the scale factor according to the ratio of the first pixel difference to the second pixel difference.
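(Illustrative note, not part of the claims.) A sketch of the claim-6 scale factor: the ratio of the pixel distance between two calibration-plate circle centers in the infrared image (first pixel difference) to the same distance in the visible light image (second pixel difference). Using the Euclidean distance between the two centers as the "pixel difference" is an assumption.

```python
import numpy as np

def scale_factor(ir_center_1, ir_center_2, vis_center_1, vis_center_2):
    # First pixel difference: distance between two circle centers in the infrared image.
    d_ir = np.linalg.norm(np.subtract(ir_center_1, ir_center_2))
    # Second pixel difference: distance between the same centers in the visible light image.
    d_vis = np.linalg.norm(np.subtract(vis_center_1, vis_center_2))
    # Scale factor as the ratio of the first pixel difference to the second.
    return d_ir / d_vis
```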
7. The infrared-feature-based machine-learning-based small flame detection method of claim 6, wherein the obtaining the coordinate offset of each frame of infrared image relative to each frame of visible light image according to the camera parameters and the scale factors comprises:
scaling each frame of infrared image and each frame of visible light image to a uniform size according to the scale factor; and
calculating the coordinate offset according to the pixel coordinates of the circle centers of the calibration plate in each frame of scaled infrared image and the pixel coordinates of the circle centers of the calibration plate in each frame of scaled visible light image.
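(Illustrative note, not part of the claims.) A sketch of the claim-7 scaling step, assuming the infrared frame is the one resized onto the visible-light pixel scale; resizing the visible-light frame instead would work symmetrically.

```python
import cv2

def scale_ir_to_visible(ir_frame, scale):
    # scale is the claim-6 ratio (infrared pixel difference / visible pixel difference);
    # dividing the infrared dimensions by it brings both images to a uniform size.
    h, w = ir_frame.shape[:2]
    new_size = (int(round(w / scale)), int(round(h / scale)))  # (width, height)
    return cv2.resize(ir_frame, new_size)
```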
8. The infrared-feature-based machine-learning-based small flame detection method of claim 7, wherein the calculating the coordinate offset according to the pixel coordinates of the circle centers of the calibration plate in each frame of scaled infrared image and the pixel coordinates of the circle centers of the calibration plate in each frame of scaled visible light image comprises:
calculating the coordinate offset according to the formulas Xdiff = Xn − X'n and Ydiff = Yn − Y'n, wherein Xn is the pixel coordinate on the X axis of the pixel coordinate system of the n-th circle center of the calibration plate in each frame of scaled infrared image, X'n is the pixel coordinate on the X axis of the pixel coordinate system of the n-th circle center of the calibration plate in each frame of scaled visible light image, Yn is the pixel coordinate on the Y axis of the pixel coordinate system of the n-th circle center of the calibration plate in each frame of scaled infrared image, and Y'n is the pixel coordinate on the Y axis of the pixel coordinate system of the n-th circle center of the calibration plate in each frame of scaled visible light image.
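(Illustrative note, not part of the claims.) A numeric sketch of the claim-8 formulas Xdiff = Xn − X'n and Ydiff = Yn − Y'n; averaging the per-center offsets into a single translation is an assumption made here for illustration.

```python
import numpy as np

def coordinate_offset(ir_centers, vis_centers):
    # ir_centers[n]  = (Xn,  Yn ): n-th circle center in the scaled infrared image
    # vis_centers[n] = (X'n, Y'n): the same center in the scaled visible light image
    diffs = np.asarray(ir_centers, dtype=float) - np.asarray(vis_centers, dtype=float)
    x_diff, y_diff = diffs.mean(axis=0)
    return x_diff, y_diff
```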
9. The infrared-feature-based machine-learning-based small flame detection method of claim 1, wherein the displaying the result of each frame of visible light image or each frame of infrared image according to the detection result comprises:
recording the coordinate position of the area, in each frame of visible light image or each frame of infrared image, whose detection result is a fire; and
displaying each frame of visible light image or each frame of infrared image whose detection result is a fire, together with the coordinate position of the fire area.
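(Illustrative note, not part of the claims.) A minimal OpenCV sketch of the claim-9 display step: the recorded fire-area coordinates are drawn on the frame before it is shown. Box format, colors, label text, and window name are placeholders.

```python
import cv2

def show_fire_result(frame, fire_boxes):
    # Draw each recorded fire area (x, y, width, height) on the frame.
    for (x, y, w, h) in fire_boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.putText(frame, "fire", (x, max(y - 5, 0)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    cv2.imshow("detection result", frame)
    cv2.waitKey(1)
```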
10. A computer device, characterized in that the computer device comprises a processor and a memory, wherein the processor is configured to implement the small flame detection method based on infrared features and machine learning according to any one of claims 1 to 9 when executing a computer program stored in the memory.
CN202010313706.9A 2020-04-20 2020-04-20 Small flame detection method based on infrared features and machine learning and computer device Pending CN113537204A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010313706.9A CN113537204A (en) 2020-04-20 2020-04-20 Small flame detection method based on infrared features and machine learning and computer device

Publications (1)

Publication Number Publication Date
CN113537204A true CN113537204A (en) 2021-10-22

Family

ID=78123657

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010313706.9A Pending CN113537204A (en) 2020-04-20 2020-04-20 Small flame detection method based on infrared features and machine learning and computer device

Country Status (1)

Country Link
CN (1) CN113537204A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115494193A (en) * 2022-11-16 2022-12-20 常州市建筑科学研究院集团股份有限公司 Machine vision-based flame transverse propagation detection method and system for single body combustion test

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11224389A (en) * 1998-02-09 1999-08-17 Hitachi Ltd Detecting method for flame, method and device for detecting fire
CN103413395A (en) * 2013-08-15 2013-11-27 北京声迅电子股份有限公司 Intelligent smoke detecting and early warning method and device
CN105488941A (en) * 2016-01-15 2016-04-13 中林信达(北京)科技信息有限责任公司 Double-spectrum forest fire disaster monitoring method and double-spectrum forest fire disaster monitoring device based on infrared-visible light image
CN105512667A (en) * 2014-09-22 2016-04-20 中国石油化工股份有限公司 Method for fire identification through infrared and visible-light video image fusion
CN106226239A (en) * 2016-10-14 2016-12-14 深圳全景威视科技有限公司 Big angle of visual field intelligent image type fire detector and Intelligent Fire Detection method thereof
CN108876856A (en) * 2018-06-29 2018-11-23 北京航空航天大学 A kind of heavy construction fire fire source recognition positioning method and system
CN110047241A (en) * 2019-04-27 2019-07-23 刘秀萍 A kind of forest fire unmanned plane cruise monitoring system
CN110135269A (en) * 2019-04-18 2019-08-16 杭州电子科技大学 A kind of fire image detection method based on blend color model and neural network
CN110570454A (en) * 2019-07-19 2019-12-13 华瑞新智科技(北京)有限公司 Method and device for detecting foreign matter invasion
CN110841220A (en) * 2019-12-09 2020-02-28 国网智能科技股份有限公司 Intelligent fire-fighting system and method for transformer substation
CN110969669A (en) * 2019-11-22 2020-04-07 大连理工大学 Visible light and infrared camera combined calibration method based on mutual information registration
CN111027541A (en) * 2019-11-15 2020-04-17 国网安徽省电力有限公司检修分公司 Flame detection method and system based on visible light and thermal imaging and storage medium

Similar Documents

Publication Publication Date Title
CN110660066B (en) Training method of network, image processing method, network, terminal equipment and medium
US8750573B2 (en) Hand gesture detection
US8792722B2 (en) Hand gesture detection
CN106878668B (en) Movement detection of an object
CN109409238B (en) Obstacle detection method and device and terminal equipment
CN113450301A (en) Small flame detection method and computer device
CN109948566B (en) Double-flow face anti-fraud detection method based on weight fusion and feature selection
CN109064613B (en) Face recognition method and device
CN112001886A (en) Temperature detection method, device, terminal and readable storage medium
US11402926B2 (en) Ambient light sensing device and method, and interactive device using same
CN112750162A (en) Target identification positioning method and device
CN112464797A (en) Smoking behavior detection method and device, storage medium and electronic equipment
CN110505397B (en) Camera selection method, device and computer storage medium
CN113537204A (en) Small flame detection method based on infrared features and machine learning and computer device
CN112333537B (en) Video integration method, device and computer readable storage medium
WO2018210039A1 (en) Data processing method, data processing device, and storage medium
CN112131957A (en) Document type picture identification method and device and storage medium
CN112037478A (en) Monitoring method and monitoring system suitable for power equipment
CN109074714B (en) Detection apparatus, method and storage medium for detecting event
Jun et al. Fire detection using multi-channel information and gray level co-occurrence matrix image features
CN112949526A (en) Face detection method and device
US9910828B2 (en) Spectrometer for personal context
CN112232282A (en) Gesture recognition method and device, storage medium and electronic equipment
JP7004786B1 (en) Detection device and detection method
CN111899615A (en) Scoring method, device, equipment and storage medium for experiment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20211022