CN114399882A - Fire source detection, identification and early warning method for fire-fighting robot - Google Patents
- Publication number
- CN114399882A (application CN202210067506.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- fire
- target
- fire source
- early warning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000001514 detection method Methods 0.000 title claims abstract description 55
- 238000000034 method Methods 0.000 title claims abstract description 46
- 230000004927 fusion Effects 0.000 claims abstract description 18
- 238000007781 pre-processing Methods 0.000 claims abstract description 10
- 238000012544 monitoring process Methods 0.000 claims abstract description 9
- 230000006870 function Effects 0.000 claims description 27
- 238000004364 calculation method Methods 0.000 claims description 16
- 230000009466 transformation Effects 0.000 claims description 15
- 238000001914 filtration Methods 0.000 claims description 13
- 238000003384 imaging method Methods 0.000 claims description 12
- 230000008569 process Effects 0.000 claims description 10
- 230000005855 radiation Effects 0.000 claims description 10
- 239000013598 vector Substances 0.000 claims description 10
- 230000008859 change Effects 0.000 claims description 8
- 230000003287 optical effect Effects 0.000 claims description 8
- 238000012549 training Methods 0.000 claims description 8
- 230000000694 effects Effects 0.000 claims description 7
- 238000000605 extraction Methods 0.000 claims description 7
- 230000009467 reduction Effects 0.000 claims description 6
- 238000010276 construction Methods 0.000 claims description 5
- 239000011159 matrix material Substances 0.000 claims description 4
- 238000006243 chemical reaction Methods 0.000 claims description 3
- 239000003086 colorant Substances 0.000 claims description 3
- 238000007499 fusion processing Methods 0.000 claims description 3
- 238000002372 labelling Methods 0.000 claims description 3
- 238000012545 processing Methods 0.000 claims description 3
- 239000000126 substance Substances 0.000 claims description 3
- 238000002834 transmittance Methods 0.000 claims description 3
- 230000002708 enhancing effect Effects 0.000 claims description 2
- 238000010586 diagram Methods 0.000 description 5
- 230000015572 biosynthetic process Effects 0.000 description 4
- 239000000779 smoke Substances 0.000 description 4
- 230000004913 activation Effects 0.000 description 2
- 238000010606 normalization Methods 0.000 description 2
- 238000001931 thermography Methods 0.000 description 2
- 238000002485 combustion reaction Methods 0.000 description 1
- 238000013527 convolutional neural network Methods 0.000 description 1
- 238000012937 correction Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000007613 environmental effect Effects 0.000 description 1
- 238000007689 inspection Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 230000008447 perception Effects 0.000 description 1
- 238000011176 pooling Methods 0.000 description 1
- 230000002265 prevention Effects 0.000 description 1
- 230000008054 signal transmission Effects 0.000 description 1
- 238000013519 translation Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B17/00—Fire alarms; Alarms responsive to explosion
- G08B17/12—Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B17/00—Fire alarms; Alarms responsive to explosion
- G08B17/12—Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
- G08B17/125—Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions by using a video camera to detect fire or smoke
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B29/00—Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
- G08B29/18—Prevention or correction of operating errors
- G08B29/185—Signal analysis techniques for reducing or preventing false alarms or for enhancing the reliability of the system
- G08B29/188—Data fusion; cooperative systems, e.g. voting among different detectors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Emergency Management (AREA)
- Business, Economics & Management (AREA)
- Health & Medical Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Probability & Statistics with Applications (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Multimedia (AREA)
- Computer Security & Cryptography (AREA)
- Fire-Detection Mechanisms (AREA)
Abstract
The invention discloses a fire source detection, identification and early warning method for a fire-fighting robot, comprising the following steps. S1, the fire-fighting robot monitors a target environment, acquires a visible light image and an infrared thermal image, and performs image preprocessing on both. S2, whether a suspected target exists in the preprocessed visible light image is detected, and an early warning is sent out once a suspected target is found; the temperature in the preprocessed infrared image is measured to judge whether the suspected target exceeds a set temperature threshold value; if so, it is judged to be a suspected fire source target and an early warning is sent out, otherwise monitoring continues. S3, after the suspected fire source target is judged, image registration and image fusion are performed on the visible light image and the infrared image in a target fusion mode to determine whether it is a real fire source; the position of the real fire source is located through binocular parallax, and the fire source position is reported. The fire-fighting robot can thus detect, identify and give early warning of the fire source, reducing the possibility of fire.
Description
Technical Field
The invention relates to the technical field of fire-fighting early warning, in particular to a fire source detection, identification and early warning method for a fire-fighting robot.
Background
Fire is the most frequent man-made disaster in modern society and seriously threatens the safety of human life and property. Existing fire-fighting robots have functions such as fire source detection and active fire extinguishing, can operate continuously in places with high fire risk, and reduce the occurrence of fires. A small fire in the early stage of combustion is not easy to discover, extinguishing becomes much harder once the fire enters its growth period, and a fire in a warehouse or factory can cause serious casualties and property loss; fire detection and early warning in key fire prevention and control places is therefore particularly important.
Disclosure of Invention
The invention aims to provide a fire source detection, identification and early warning method for a fire-fighting robot, enabling the robot to detect, identify and give early warning of a fire source: timely and effective early warning is achieved, the position of the fire source is accurately identified and reported in time, and the possibility of fire occurrence is reduced.
The technical scheme of the invention is as follows: a fire source detection, identification and early warning method for a fire-fighting robot comprises the following steps:
s1, monitoring a target environment by a visible light camera and an infrared thermal imager on the fire-fighting robot, acquiring a visible light image and an infrared thermal image of the target environment during monitoring, and performing image preprocessing on the visible light image and the infrared thermal image;
s2, detecting whether a suspected target exists in the preprocessed visible light image, sending out an early warning after the suspected target is found, measuring the temperature of the preprocessed infrared image, judging whether the suspected target exceeds a set temperature threshold value, if so, judging that the suspected target is a suspected fire source target and sending out the early warning, otherwise, continuing to monitor;
and S3, after the suspected fire source target is judged, performing image registration and image fusion on the visible light image and the infrared image in a target fusion mode so as to determine whether the suspected fire source target belongs to a real fire source, positioning the position of the real fire source through binocular parallax, and reporting the fire alarm early warning and the position of the fire source.
In the fire source detection, identification and early warning method for the fire-fighting robot, the visible light image preprocessing is to preprocess an input picture by adopting a linear change enhancement method, and the formula is as follows:
O(r,c)=a×I(r,c)+b,0≤r≤H,0≤c≤W;
in the formula: i (r, c) is an original picture pixel point; o (r, c) is an enhanced picture pixel point; H. w is the height and width of the graph respectively; the parameter a is a constant and is used for adjusting the contrast of the image; the parameter b is a constant for adjusting the brightness of the image.
According to the fire source detection, identification and early warning method for the fire-fighting robot, whether a suspected target exists in the preprocessed visible light image is detected through a Yolo v4 target detection model;
the input image size of the Yolo v4 target detection model is 608 pixels × 608 pixels, feature extraction is performed through a CSPDarknet53 main feature extraction network, feature layers with the sizes of 76 × 76, 38 × 38 and 19 × 19 are output and are respectively responsible for detecting three kinds of small, medium and large targets, the feature layers with the sizes of 76 × 76 and 38 × 38 are subjected to tensor splicing with a network of a PANET module through a 19 × 19 feature layer of an SSP module, and the detection performance of the network on objects with different sizes is enhanced; outputting detection results of the targets with three sizes by a Prediction module; meanwhile, a CSP 4 residual block is added between two CSP 2 and CSP 8 residual blocks of the CSP park net53, and the small target is trained by using the 4-time downsampling feature fusion target detection layer, so that the accuracy of small target detection is improved.
In the fire source detection, identification and early warning method for the fire-fighting robot, during construction of the Yolo v4 target detection model, the data set comprises a fire flame data set and an initial-stage fire flame data set, formed by combining images from the open flame data sets ImageNet and BoWFire, video frames captured from general flame and fire data sets, and interference images of flame-like targets;
In the training of the Yolo v4 model, the data set is labeled with LabelImg; during labeling, the flame area on each image is labeled completely, avoiding occlusions and interferents of similar colors, so as to obtain the best training effect.
According to the fire source detection, identification and early warning method for the fire-fighting robot, the preprocessing of the infrared image is to perform mean filtering and noise reduction on the infrared image, improve the definition of a target in the image, and then perform histogram equalization on the infrared image, so that the image details are clearer, and the specific process is as follows:
(1) Mean filtering and noise reduction: given a discrete image f(x, y), a mean filtering operator traverses the image; taking each pixel point as a center, the mean of all pixel points in its neighborhood is computed and used as the pixel value of the filtered image at g(x, y):
g(x, y) = (1/M) Σ_{f(x,y)∈S} f(x, y);
where S is the neighborhood determined by the pixel point and M is the total number of pixel points in the neighborhood S. Mean filtering can also be implemented by convolution, with the output image g(x, y) expressed as:
g(x, y) = f(x, y) * h(x, y), where h(r, s) = 1/(m × n);
the template size is determined by the selected neighborhood, m and n being the neighborhood dimensions, and h(r, s) is the convolution kernel;
(2) Histogram equalization: the gray-level histogram of the image is expressed as:
p(r_k) = n_k / N, k = 0, 1, …, L − 1;
where n_k is the number of occurrences of the k-th gray level, N is the total number of pixels, r_k is the k-th gray level of the image, L is the total number of gray levels, and p(r_k) is the gray-level probability;
Let r and s denote the normalized original image and the histogram-equalized image respectively:
0 ≤ r ≤ 1, 0 ≤ s ≤ 1;
each pixel value of the original image generates a pixel value of the new image through the transformation s = T(r), the histogram-equalization transformation function. Denoting the gray probability density function of the original image by p_r(r) and that of the transformed image by p_s(s), probability theory gives:
p_s(s) = p_r(r) · (dr/ds);
For a digital image, the gray-level histogram of the image is taken as its probability density function, so the occurrence probability of gray level r_k is:
p(r_k) = n_k / N;
and the transformation function for histogram equalization of the original image is:
s_k = T(r_k) = Σ_{j=0}^{k} n_j / N, k = 0, 1, …, L − 1;
where N is the total number of pixels in the image and L is the total number of gray levels.
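A compact sketch of both infrared preprocessing steps, assuming 8-bit gray images and edge-replicated borders for the filter (neither detail is fixed by the method):

```python
import numpy as np

def mean_filter(f, size=3):
    """Mean filtering: each output pixel is the average of its size x size
    neighborhood (borders handled by edge replication, an assumption)."""
    pad = size // 2
    fp = np.pad(f.astype(np.float64), pad, mode='edge')
    g = np.empty(f.shape, dtype=np.float64)
    H, W = f.shape
    for x in range(H):
        for y in range(W):
            g[x, y] = fp[x:x + size, y:y + size].mean()
    return g

def hist_equalize(img, L=256):
    """Histogram equalization via the cumulative transform
    s_k = (L-1) * sum_{j<=k} n_j / N, applied as a lookup table."""
    n, _ = np.histogram(img, bins=L, range=(0, L))
    cdf = n.cumsum() / img.size
    lut = np.round((L - 1) * cdf).astype(np.uint8)
    return lut[img]

img = np.full((4, 4), 100, dtype=np.uint8)
eq = hist_equalize(img)
```

A constant image maps to the top of the gray range under equalization, since its cumulative distribution reaches 1 at its single gray level.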
According to the fire source detection, identification and early warning method for the fire-fighting robot, the temperature of the preprocessed infrared image is measured through the infrared thermal imager:
f(T′0)=τa[εf(T0)+(1-ε)f(Tu)]+(1-τa)f(Ta);
where T_0 is the surface temperature of the measured object; T_u is the ambient temperature; T_a is the atmospheric temperature; T'_0 is the radiation temperature indicated by the thermal infrared imager; τ_a is the spectral transmittance of the atmosphere; ε is the surface emissivity of the measured object.
With the approximation f(T) = T^n obtained from the Planck radiation law, the real temperature of the surface of the measured object is calculated as:
T_0 = {(1/ε)·[(T'_0^n − (1 − τ_a)·T_a^n)/τ_a − (1 − ε)·T_u^n]}^(1/n);
when thermal infrared imagers of different wavebands are used, the value of n differs.
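The temperature inversion can be illustrated by treating f(T) as T^n and solving the radiometric equation for T_0; the numeric value of n used below is an assumed example for a long-wave imager, since the text only notes that n varies with waveband:

```python
def true_surface_temp(T_ind, T_u, T_a, eps, tau, n=4.09):
    """Invert f(T'_0) = tau*[eps*f(T0) + (1-eps)*f(Tu)] + (1-tau)*f(Ta)
    with f(T) = T**n to recover the true surface temperature T0.

    T_ind: radiation temperature indicated by the imager (K)
    T_u, T_a: ambient and atmospheric temperatures (K)
    eps: surface emissivity; tau: atmospheric spectral transmittance
    n: waveband-dependent exponent (n = 4.09 is an assumed value here).
    """
    fT0 = ((T_ind**n - (1 - tau) * T_a**n) / tau - (1 - eps) * T_u**n) / eps
    return fT0 ** (1.0 / n)
```

A round trip through the forward model recovers the original surface temperature, which is a quick way to check the algebra of the inversion.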
According to the fire source detection, identification and early warning method for the fire-fighting robot, the image registration specifically comprises the following steps:
firstly, detecting feature points by constructing a scale space:
L(x, y, σ) = G(x, y, σ) * I(x, y), with G(x, y, σ) = (1/(2πσ²)) · exp(−[(x − p/2)² + (y − q/2)²]/(2σ²));
in the formula: I(x, y) is the input original image; G(x, y, σ) is the variable Gaussian kernel function; σ is the Gaussian blur parameter of the scale space; (x, y) is the pixel location in the image; p and q are the dimensions of the Gaussian template;
The difference operator in the Gaussian difference scale space is:
D(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] * I(x, y) = L(x, y, kσ) − L(x, y, σ);
constructing the Gaussian difference image and extracting feature points: each candidate point is compared with its 8 neighbors at the same scale and the 9 × 2 corresponding points at the adjacent scales above and below, ensuring that extreme points are detected in both scale space and two-dimensional image space; local extremum points are taken as feature points;
then the extracted feature points are described: dividing the neighborhood of the pixel point into 16 square sub-regions covers most features of the image; gradients and modulus values in 8 directions are computed in the sub-regions, forming a 128-dimensional feature vector that describes the feature point; the modulus and gradient direction are calculated as:
m(x, y) = √[(L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²];
θ(x, y) = arctan[(L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))];
and then matching the feature points using Euclidean distances: the ratio of the nearest to the second-nearest Euclidean distance is calculated; if the ratio is within the threshold range the matching succeeds, otherwise it fails; the Euclidean distance is calculated as:
dis = √[Σ_{i=1}^{P} (D_m(i) − D_n(i))²];
where dis is the Euclidean distance between the two feature vectors; P is the descriptor dimension; m and n are feature points in the two images; D_m and D_n are the feature descriptors of m and n respectively;
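The nearest/second-nearest ratio test described above can be sketched as follows; the 0.8 threshold is an assumed value, since the text does not give the threshold range:

```python
import numpy as np

def match_features(desc1, desc2, ratio=0.8):
    """Ratio-test matching: accept a match when the nearest Euclidean
    distance is below `ratio` times the second-nearest distance.
    Requires at least two descriptors in desc2."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.sqrt(((desc2 - d) ** 2).sum(axis=1))
        order = np.argsort(dists)
        if dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

d1 = np.array([[0.0, 0.0]])
d2 = np.array([[0.1, 0.0], [5.0, 5.0], [9.0, 9.0]])
matches = match_features(d1, d2)
```

The ratio test rejects ambiguous matches where two candidate descriptors are almost equally close, which is the failure mode a plain nearest-neighbor rule misses.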
and then carrying out spatial transformation using an affine transformation model:
x' = a_1·x + a_2·y + t_x; y' = a_3·x + a_4·y + t_y;
where (x, y) and (x', y') are the pixel coordinates of corresponding feature points in the visible light image and the thermal infrared image respectively, and the 6 parameters (a_1, a_2, a_3, a_4, t_x, t_y) represent the conversion relation between the coordinates of the two images;
and finally, image fusion is carried out: adopting a weighted average image fusion algorithm, setting a visible light image as A, an infrared image as B and a fused image as F, wherein the weighted average fusion process is shown as the following formula:
F(i,j)=w1A(i,j)+w2B(i,j);
where (i, j) are the position coordinates of a pixel point in the image, and w_1 and w_2 are image weighting coefficients with w_1 + w_2 = 1.
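A minimal sketch of the weighted-average fusion F(i, j) = w1·A(i, j) + w2·B(i, j) with w1 + w2 = 1; equal weights here are only an example, not a value prescribed by the method:

```python
import numpy as np

def weighted_fuse(vis, ir, w1=0.5):
    """Weighted-average image fusion: F = w1*A + w2*B with w2 = 1 - w1.
    vis and ir must already be registered to the same size."""
    w2 = 1.0 - w1
    return w1 * vis.astype(np.float64) + w2 * ir.astype(np.float64)

vis = np.full((2, 2), 100, dtype=np.uint8)
ir = np.full((2, 2), 200, dtype=np.uint8)
fused = weighted_fuse(vis, ir)
```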
In the foregoing fire source detection, identification and early warning method for a fire-fighting robot, the calculation process for locating the position of a real fire source by using binocular parallax comprises the following steps:
setting point P to be a certain point on the real fire source, ORAnd OTThe optical centers of the two cameras are respectively, the imaging points of the point P on the two camera sensors are respectively P 'and P', f is the focal length of the cameras, B is the center distance of the two cameras, and X is the distance between the two camerasR、XTThe distance between two imaging points P 'and P' on the left and right imaging surfaces and the left edge of the image, and Z is depth information;
assuming that the distance from the point P 'to P' is Δ x, then: Δ X ═ B- (X)R-XT);
According to the similar-triangle principle:
[B − (X_R − X_T)] / B = (Z − f) / Z;
solving gives:
Z = f·B / (X_R − X_T);
in the formula, the focal length f of the camera and the center distance B of the cameras are obtained by calibration, and X_R − X_T is the parallax d, from which the depth information Z = f·B/d is obtained.
Assuming that the coordinates of the target point in the left view are (x, y), the disparity formed between the left and right views is d, and the coordinates of the target point in the world coordinate system with the left camera optical center as the origin are (X, Y, Z), there is a transformation matrix Q with Q·[x, y, d, 1]^T = [X, Y, Z, W]^T, of the standard reprojection form:
Q = | 1  0  0    −c_x |
    | 0  1  0    −c_y |
    | 0  0  0     f   |
    | 0  0  1/B   0   |
where c_x and c_y are the offsets of the left image plane coordinate system from the origin of the camera coordinate system, obtained through stereo calibration.
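The depth calculation reduces to Z = f·B/d; a sketch with illustrative numbers (the focal length in pixels and baseline in meters are assumed example values):

```python
def depth_from_disparity(f, B, x_r, x_t):
    """Binocular depth from similar triangles: Z = f * B / d,
    where d = x_r - x_t is the parallax between the two imaging points.
    f: focal length in pixels; B: camera center distance (baseline)."""
    d = x_r - x_t
    if d <= 0:
        raise ValueError("parallax must be positive")
    return f * B / d

# assumed example: f = 700 px, B = 0.12 m, 20 px of parallax
z = depth_from_disparity(f=700.0, B=0.12, x_r=400.0, x_t=380.0)
```

Because depth is inversely proportional to parallax, distant fire sources produce small disparities, which is why f and B must come from careful calibration.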
Compared with the prior art, the invention combines a visible light camera and an infrared camera. In the early stage of fire formation there is no significant flame signature; the three typical physical phenomena present at this time are smoldering, fire plumes and smoke. Flames occurring early in the formation of a fire have low temperature and radiate little visible light, so an ordinary visible light camera can hardly detect them, but the infrared radiation they emit can be captured by the infrared camera. Although the visible light camera captures image details better than the infrared camera, and color features, dynamic features, texture features and the like are commonly used as detection bases, the misjudgment rate is very high in an environment full of smoke; combining the visible light camera and the thermal infrared imager therefore effectively reduces the misjudgment rate of fire while allowing the fire source target to be identified and positioned. In the process of detecting and identifying the fire source, the invention provides multi-step early warning, including an early warning after a suspected target is found, an early warning after a suspected fire source target is judged, and a report after the suspected fire source target is confirmed as a real fire source, so that reminders and early warning information are given comprehensively at multiple levels, people can respond quickly, and the possibility of fire occurrence is reduced. In addition, the method adopts the Yolo v4 target detection model to detect suspected targets in the image, which gives good robustness and training effect in the construction process and improves the identification accuracy of suspected targets.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a network structure diagram of the YOLOv4 algorithm;
FIG. 3 is a schematic diagram of the CSPDarknet53 module structure;
FIG. 4 is a diagram of the basic components of the CSPDarknet53 module;
FIG. 5 is a schematic diagram of the SPP module structure;
fig. 6 is a schematic diagram of the structure of a PANET module;
FIG. 7 is a flow chart of an image registration algorithm;
FIG. 8 is a schematic view of the principle of binocular ranging;
fig. 9 is a binocular imaging stereoscopic view.
Detailed Description
The invention is further illustrated by the following figures and examples, which are not to be construed as limiting the invention.
Embodiment: a fire source detection, identification and early warning method for a fire-fighting robot. The fire-fighting robot is provided with a Velodyne VLP-16 laser radar sensor, a visible light monocular camera, a thermal infrared imager, a wheel encoder and an inspection module; autonomous positioning, mapping and navigation of the fire-fighting robot are realized through the Velodyne VLP-16 laser radar sensor and the wheel encoder, using SLAM technology based on the ROS system. As shown in fig. 1, the method comprises the following steps:
s1, monitoring a target environment by a visible light camera and an infrared thermal imager on the fire-fighting robot, acquiring a visible light image and an infrared thermal image of the target environment during monitoring, and performing image preprocessing on the visible light image and the infrared thermal image;
In the monitoring state, images captured in environments such as indoor or night scenes suffer from poor lighting and low resolution, so the contrast and brightness of the pictures are enhanced to improve the recognition effect. The fire source is the ignition point of a fire, and the brightness at the center of the flame is generally higher than at the periphery. Therefore, in this embodiment, the visible light image preprocessing preprocesses the input picture using a linear change enhancement method, with the formula:
O(r,c)=a×I(r,c)+b,0≤r≤H,0≤c≤W;
in the formula: i is(r,c)Pixel points of the original picture are obtained; o is(r,c)Enhancing the picture pixel points; H. w is the height and width of the graph respectively; the parameter a is a constant and is used for adjusting the contrast of the image; the parameter b is a constant for adjusting the brightness of the image.
The thermal infrared imager picks up noise generated during imaging by components such as the detector and the signal transmission channel, so mean filtering is applied to the infrared image to reduce noise and improve the definition of targets in the image, and histogram equalization is then applied to make the image details clearer. In this embodiment, the specific process of infrared image preprocessing is as follows:
(1) Mean filtering and noise reduction: given a discrete image f(x, y), a mean filtering operator traverses the image; taking each pixel point as a center, the mean of all pixel points in its neighborhood is computed and used as the pixel value of the filtered image at g(x, y):
g(x, y) = (1/M) Σ_{f(x,y)∈S} f(x, y);
where S is the neighborhood determined by the pixel point and M is the total number of pixel points in the neighborhood S. Mean filtering can also be implemented by convolution, with the output image g(x, y) expressed as:
g(x, y) = f(x, y) * h(x, y), where h(r, s) = 1/(m × n);
the template size is determined by the selected neighborhood, m and n being the neighborhood dimensions, and h(r, s) is the convolution kernel;
(2) Histogram equalization: the gray-level histogram of the image is expressed as:
p(r_k) = n_k / N, k = 0, 1, …, L − 1;
where n_k is the number of occurrences of the k-th gray level, N is the total number of pixels, r_k is the k-th gray level of the image, L is the total number of gray levels, and p(r_k) is the gray-level probability;
Let r and s denote the normalized original image and the histogram-equalized image respectively:
0 ≤ r ≤ 1, 0 ≤ s ≤ 1;
each pixel value of the original image generates a pixel value of the new image through the transformation s = T(r), the histogram-equalization transformation function. Denoting the gray probability density function of the original image by p_r(r) and that of the transformed image by p_s(s), probability theory gives:
p_s(s) = p_r(r) · (dr/ds);
For a digital image, the gray-level histogram of the image is taken as its probability density function, so the occurrence probability of gray level r_k is:
p(r_k) = n_k / N;
and the transformation function for histogram equalization of the original image is:
s_k = T(r_k) = Σ_{j=0}^{k} n_j / N, k = 0, 1, …, L − 1;
where N is the total number of pixels in the image and L is the total number of gray levels.
S2, detecting whether a suspected target exists in the preprocessed visible light image, sending out an early warning after the suspected target is found, measuring the temperature of the preprocessed infrared image, judging whether the suspected target exceeds a set temperature threshold value, if so, judging that the suspected target is a suspected fire source target and sending out the early warning, otherwise, continuing to monitor;
in this embodiment, a Yolo v4 target detection model is used to detect whether a suspected target exists in the preprocessed visible light image;
as shown in fig. 2, the input image size of the Yolo v4 target detection model is 608 pixels × 608 pixels. Feature extraction is performed by the CSPDarknet53 backbone feature extraction network, which outputs feature layers of sizes 76 × 76, 38 × 38 and 19 × 19, detecting small, medium and large targets respectively; the 19 × 19 feature layer passes through the SPP module and is tensor-spliced with the 76 × 76 and 38 × 38 feature layers by the PANet module network, enhancing the detection performance of the network on objects of different sizes; the Prediction module outputs detection results for the targets of the three sizes.
Since most fire targets in image data are small targets at the initial stage of a fire, the YOLO algorithm is improved, as shown in fig. 3: the backbone feature extraction network structure is modified by adding a CSP × 4 residual block between the two CSP × 2 and CSP × 8 residual blocks of CSPDarknet53, and the small targets are trained using the 4-times-downsampling feature fusion target detection layer, improving the accuracy of small target detection. Meanwhile, the network draws on the CSPNet (Cross Stage Partial Network) structure to enhance the learning capability of the convolutional neural network, so the network structure reduces the amount of calculation while maintaining detection precision, and lowers the calculation bottleneck and memory cost.
The basic components in fig. 3 are shown in fig. 4. The two smallest components in the algorithm are CBM and CBL: the CBM component consists of Conv (convolution), BN (batch normalization) and Mish (activation function), and the CBL component consists of Conv (convolution), BN (batch normalization) and Leaky_Relu (activation function). The Res Unit component follows the residual structure of the ResNet network: the input features are convolved by two CBM components and then added to the input feature tensor for output, allowing a deeper feature network to be constructed. CSP × n follows the CSPNet network structure: the shallow feature map is divided into two parts, one part undergoes two CBM convolutions and n residual-component calculations, the other part is tensor-spliced directly with the output of the residual module after one CBM convolution, and the result is output after one further CBM convolution.
As shown in fig. 5, the SPP module in fig. 2 applies maximum pooling with kernels k = {1 × 1, 5 × 5, 9 × 9, 13 × 13} and then merges the feature maps of different scales, thereby effectively increasing the receptive range of the backbone features (enlarging the perceptual field of the network) and significantly separating the most important context features.
The structure of the PANet module in fig. 2 is shown in fig. 6; it outputs prediction boxes at 3 different scales, each prediction box containing five basic parameters (x, y, w, h, confidence), where x, y are the coordinates of the center point of the prediction box, w, h are the width and height of the prediction box, and confidence is the confidence of the predicted target (the higher the confidence, the higher the accuracy of target detection). The confidence is calculated according to the loss function, which is as follows:
the loss function L is composed of four parts: the bounding box localization loss L_xy, the bounding box size loss L_wh, the confidence loss L_conf and the class loss L_cls. The formula is as follows:
L = L_xy + L_wh + L_conf + L_cls;
wherein the bounding box localization loss L_xy uses the mean square error loss function:
L_xy = Σ_{i=0}^{S²} Σ_{j=0}^{B} I_{ij}^{obj} [(x_i − x̂_i)² + (y_i − ŷ_i)²];
in the formula, S² indicates that the input image is divided into S × S grids; B is the number of boxes predicted by a single grid, taken as 3; I_{ij}^{obj} takes the value 1 when the j-th bounding box predicted by the i-th grid detects a target, and 0 otherwise; x_i and y_i are the abscissa and ordinate of the center point of the predicted bounding box; x̂_i and ŷ_i are the abscissa and ordinate of the center point of the actual bounding box.
The bounding box size loss L_wh uses the mean square error loss function:
L_wh = Σ_{i=0}^{S²} Σ_{j=0}^{B} I_{ij}^{obj} [(w_i − ŵ_i)² + (h_i − ĥ_i)²];
in the formula, w_i and h_i are the width and height of the predicted bounding box, and ŵ_i and ĥ_i are the width and height of the actual bounding box;
The confidence loss L_conf uses the cross entropy loss function:
L_conf = −λ_obj Σ_{i=0}^{S²} Σ_{j=0}^{B} I_{ij}^{obj} [Ĉ_i ln C_i + (1 − Ĉ_i) ln(1 − C_i)] − λ_noobj Σ_{i=0}^{S²} Σ_{j=0}^{B} I_{ij}^{noobj} [Ĉ_i ln C_i + (1 − Ĉ_i) ln(1 − C_i)];
in the formula, λ_obj is a weight coefficient taken as 1; λ_noobj is a weight coefficient taken as 100, so that a bounding box that does not contain the target generates a larger loss value, indicating a larger model error at that moment; I_{ij}^{noobj} takes the value 0 when the j-th bounding box predicted by the i-th grid detects a target, and 1 otherwise; C_i is the confidence of the predicted target; Ĉ_i is the confidence of the actual target.
The class loss L_cls uses the cross entropy loss function:
L_cls = −Σ_{i=0}^{S²} I_i^{obj} Σ_{c∈classes} [p̂_i(c) ln p_i(c) + (1 − p̂_i(c)) ln(1 − p_i(c))];
in the formula, c is the category to which the detected object belongs; p̂_i(c) is the actual probability that the object detected by the i-th grid belongs to category c; p_i(c) is the predicted probability that the object detected by the i-th grid belongs to category c.
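As a concrete illustration, the four loss terms above can be sketched in a few lines of numpy. This is a simplified sketch under stated assumptions (predictions flattened over all grid cells and boxes, natural logarithm for the cross entropy terms); the function name and array layout are illustrative, not from the patent.

```python
import numpy as np

def yolo_loss_terms(obj_mask, xy, xy_hat, wh, wh_hat, conf, conf_hat,
                    cls_p, cls_hat, lambda_obj=1.0, lambda_noobj=100.0):
    """Sketch of L = L_xy + L_wh + L_conf + L_cls.

    obj_mask: 1 where box j of grid cell i is responsible for a target, else 0.
    xy, wh arrays have shape (N, 2) flattened over all S*S*B predictions;
    cls_p / cls_hat hold class probabilities for cells containing an object.
    """
    eps = 1e-9                      # guard the logarithms
    noobj_mask = 1.0 - obj_mask
    # Localization and size losses (mean square error terms)
    l_xy = np.sum(obj_mask * np.sum((xy - xy_hat) ** 2, axis=-1))
    l_wh = np.sum(obj_mask * np.sum((wh - wh_hat) ** 2, axis=-1))
    # Confidence loss (cross entropy); no-object term weighted by lambda_noobj
    bce = -(conf_hat * np.log(conf + eps) + (1 - conf_hat) * np.log(1 - conf + eps))
    l_conf = lambda_obj * np.sum(obj_mask * bce) + lambda_noobj * np.sum(noobj_mask * bce)
    # Class loss (cross entropy) over cells that contain an object
    cls_bce = -(cls_hat * np.log(cls_p + eps) + (1 - cls_hat) * np.log(1 - cls_p + eps))
    l_cls = np.sum(cls_bce)
    return l_xy + l_wh + l_conf + l_cls
```

A perfect prediction drives every term toward zero, while any coordinate mismatch contributes quadratically through the MSE terms.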
In the construction of the Yolo v4 target detection model, the data set comprises a fire flame data set and an initial-fire flame data set; the data set is formed by combining images from the open flame data sets ImageNet and BoWFire, video frames captured from a general flame fire data set, and interference images of flame-like targets; arranging the interference images enhances the robustness of the model.
In the training of the Yolo v4 model, the data set is labeled with LabelImg; during labeling, the flame area on the image is labeled completely while avoiding occluders and interferents of similar color, so as to obtain the best training effect.
The Yolo v4 target detection model is trained on an industrial control computer with the following hardware environment: an Intel Core i7-6700T processor, 16 GB of RAM, a 512 GB hard disk and a GeForce GTX 2080Ti GPU. The software environment is the Ubuntu 18.04 operating system. The input image size is 608 × 608, the maximum number of iterations is 20000, the weight decay rate is 0.0005, the initial learning rate is 0.001, and the momentum is set to 0.9.
In this embodiment, the temperature of the preprocessed infrared image is measured by an infrared thermal imager:
f(T′_0) = τ_a[ε·f(T_0) + (1 − ε)·f(T_u)] + (1 − τ_a)·f(T_a);
wherein T_0 is the surface temperature of the measured object; T_u is the ambient temperature; T_a is the atmospheric temperature; T′_0 is the radiation temperature indicated by the thermal infrared imager; τ_a is the spectral transmittance of the atmosphere; ε is the surface emissivity of the measured object.
The real temperature of the surface of the measured object is obtained from the Planck radiation law; with the common power-law approximation f(T) = T^n, the calculation formula is:
T_0 = {(1/ε)·[T′_0^n/τ_a − (1 − ε)·T_u^n − ((1 − τ_a)/τ_a)·T_a^n]}^(1/n);
when thermal infrared imaging cameras of different wavebands are used, the value of n is different.
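The inversion above can be sketched in Python under the common power-law approximation f(T) = T^n (the exponent n depends on the imager's waveband); the function name and argument layout are illustrative, not from the patent:

```python
def true_temperature(T_rad, T_u, T_a, emissivity, tau, n):
    """Invert f(T0') = tau*[eps*f(T0) + (1-eps)*f(Tu)] + (1-tau)*f(Ta)
    for the object surface temperature T0, using the power-law
    approximation f(T) = T**n (n depends on the imager's waveband)."""
    f_T0 = (T_rad ** n / tau
            - (1 - emissivity) * T_u ** n
            - (1 - tau) / tau * T_a ** n) / emissivity
    return f_T0 ** (1.0 / n)
```

As a sanity check, with ε = 1 and τ_a = 1 the indicated radiation temperature equals the true surface temperature.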
The temperature threshold setting in this embodiment can be set according to actual conditions, and is generally set to the temperature of a common flame.
And S3, after the suspected fire source target is judged, image registration and image fusion are performed on the visible light image and the infrared image in a target fusion mode. The purpose of image registration is to find the common target features in the heterogeneous images and the spatial geometric transformation model of maximum similarity, so that one image is registered to the other through spatial coordinate transformation. After registration is completed, image fusion is performed to determine the real fire source; the real fire source is then located by binocular parallax and the position of the fire source is reported. As shown in fig. 7, the image registration specifically includes the following steps:
firstly, feature point detection: the scale space is constructed as:
L(x, y, σ) = G(x, y, σ) ⊗ I(x, y), with G(x, y, σ) = 1/(2πσ²)·exp(−[(x − p/2)² + (y − q/2)²]/(2σ²));
in the formula, I(x, y) is the input original image; G(x, y, σ) is the variable-scale Gaussian kernel function; σ is the Gaussian blur parameter of the scale space; (x, y) is the pixel position in the image; p and q are the dimensions of the Gaussian template;
the difference operator in the Gaussian difference scale space is:
D(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] ⊗ I(x, y) = L(x, y, kσ) − L(x, y, σ), where k is the scale ratio between adjacent layers;
a Gaussian difference image is constructed and feature points are extracted from it: each candidate detection point is compared with its 8 adjacent points at the same scale and the 9 × 2 corresponding points at the adjacent scales above and below, ensuring that extreme points are detected in both the scale space and the two-dimensional image space; the local extremum points are taken as feature points;
then, the extracted feature points are described: the neighborhood of the pixel point is divided into 16 square subregions, which covers most features of the image; gradients and modulus values in 8 directions are computed in the subregions, forming a 128-dimensional feature vector that describes the feature point. The modulus value and gradient direction are calculated as:
m(x, y) = √{[L(x+1, y) − L(x−1, y)]² + [L(x, y+1) − L(x, y−1)]²};
θ(x, y) = arctan{[L(x, y+1) − L(x, y−1)]/[L(x+1, y) − L(x−1, y)]};
and then the feature points are matched: matching uses the Euclidean distance; the ratio of the nearest Euclidean distance to the second-nearest Euclidean distance is calculated, and if the ratio is within the threshold range the match succeeds, otherwise it fails. The Euclidean distance is calculated as:
dis = √(Σ_{k=1}^{P} [D_m(k) − D_n(k)]²);
wherein dis is the Euclidean distance between the two feature vectors; P is the descriptor dimension; m and n are feature points in the two images; D_m and D_n are the feature descriptors of m and n, respectively;
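The nearest/second-nearest ratio test described above can be sketched in numpy; the function name and the ratio threshold of 0.8 are illustrative assumptions, not values fixed by the patent:

```python
import numpy as np

def match_features(desc_a, desc_b, ratio=0.8):
    """Match descriptors by Euclidean distance with the nearest /
    second-nearest ratio test; returns a list of index pairs (i, j)."""
    desc_a = np.asarray(desc_a, dtype=float)
    desc_b = np.asarray(desc_b, dtype=float)
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.sqrt(((desc_b - d) ** 2).sum(axis=1))  # dis to each descriptor in B
        order = np.argsort(dist)
        nearest, second = order[0], order[1]
        if dist[nearest] < ratio * dist[second]:          # accept unambiguous matches only
            matches.append((i, int(nearest)))
    return matches
```

Ambiguous points (two almost equally near candidates) are rejected, which is what keeps the false-match rate low.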
and then spatial transformation is carried out using an affine transformation model:
x′ = a_1·x + a_2·y + t_x, y′ = a_3·x + a_4·y + t_y;
wherein (x, y) and (x′, y′) are the pixel coordinates of corresponding feature points in the visible light image and the thermal infrared image respectively, and the 6-parameter vector (a_1, a_2, a_3, a_4, t_x, t_y) expresses the conversion relation between the coordinates of the two images. At least 3 matched point pairs are needed to solve the affine parameters; when the number of known matched point pairs exceeds the number of parameters to be solved, the optimal registration parameters can be found by curve fitting.
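Solving the 6 affine parameters from matched point pairs is a linear least-squares problem, which corresponds to the curve-fitting remark above when more than 3 pairs are known. A minimal numpy sketch (function name and array layout are illustrative):

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares estimate of the 6 affine parameters mapping
    (x, y) -> (x', y'); requires at least 3 matched point pairs."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)                    # [x'0, y'0, x'1, y'1, ...]
    A[0::2, 0:2] = src; A[0::2, 2] = 1.0   # row for x' = a1*x + a2*y + tx
    A[1::2, 3:5] = src; A[1::2, 5] = 1.0   # row for y' = a3*x + a4*y + ty
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)            # [[a1, a2, tx], [a3, a4, ty]]
```

With exactly 3 non-collinear pairs the system is determined; with more pairs the least-squares solution averages out matching noise.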
And finally, image fusion is carried out: adopting a weighted average image fusion algorithm, setting a visible light image as A, an infrared image as B and a fused image as F, wherein the weighted average fusion process is shown as the following formula:
F(i, j) = w_1·A(i, j) + w_2·B(i, j);
wherein (i, j) is the position coordinate of a pixel in the image, and w_1 and w_2 are image weighting coefficients with w_1 + w_2 = 1.
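The weighted-average fusion step is a one-line pixel operation; a minimal numpy sketch follows (assuming registered, equally sized 8-bit images; function and parameter names are illustrative):

```python
import numpy as np

def fuse_weighted(vis, ir, w1=0.5):
    """Weighted-average fusion F = w1*A + w2*B with w1 + w2 = 1.
    Assumes the two images are already registered and the same size."""
    w2 = 1.0 - w1
    fused = w1 * vis.astype(np.float64) + w2 * ir.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)
```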
Binocular vision is a method by which a computer passively perceives distance by simulating the principle of human vision: an object is observed from two viewpoints, the offset between pixels is calculated from the matching relation of pixels between the images at the different viewing angles, and the three-dimensional information of the object is obtained by the triangulation principle. In this embodiment, a calibration reference object (a checkerboard) is placed in front of the cameras; the cameras acquire images of the object, and the internal and external parameters of the cameras are calculated accordingly. The position of each feature point on the calibration reference object relative to the world coordinate system (which may be chosen as the object coordinate system of the reference object) should be accurately determined at the time of manufacture. With the projection positions of these known points on the image, the internal and external parameters of the camera are calculated.
Due to the characteristics of the optical lens, the camera has radial distortion, determined by three parameters k_1, k_2 and k_3; due to assembly error, the sensor and the optical lens are not perfectly parallel, so the imaging has tangential distortion, determined by two parameters p_1 and p_2. The calibration of a single camera mainly computes the internal parameters of the camera (the focal length f and the imaging origin c_x, c_y), the five distortion parameters (generally only k_1, k_2, p_1 and p_2 need to be calculated; k_3 is needed only when radial distortion is particularly large, as with a fisheye lens), and the external parameters (the world coordinates of the calibration object). The calibration of the binocular camera must not only obtain the internal parameters of each camera, but also measure the relative position between the two cameras (namely the rotation matrix R and translation vector t of the right camera relative to the left camera).
To calculate the disparity of a target point between the left and right views, the two corresponding image points of the target point in the two views must first be matched. The epipolar constraint reduces the matching of corresponding points from a two-dimensional search to a one-dimensional search, shrinking the matching search range. The effect of binocular rectification is that the two undistorted images correspond strictly: their epipolar lines lie on the same horizontal line, so any point in one image and its corresponding point in the other image have the same row number, and the corresponding point can be matched by a one-dimensional search along that row.
Based on the above explanation, as shown in fig. 8 and 9, the calculation process for locating the real fire source by binocular parallax is as follows:
let point P be a point on the real fire source, and O_R and O_T the optical centers of the two cameras; the imaging points of P on the two camera sensors are P′ and P″ respectively, f is the focal length of the cameras, B is the center distance between the two cameras, X_R and X_T are the distances from the imaging points P′ and P″ on the left and right imaging planes to the left edge of the image, and Z is the depth information;
assuming the distance from point P′ to P″ is Δx, then: Δx = B − (X_R − X_T);
according to the similar triangle principle:
[B − (X_R − X_T)]/B = (Z − f)/Z;
which gives:
Z = f·B/(X_R − X_T);
in the formula, the camera focal length f and the camera center distance B are obtained by calibration, and X_R − X_T is the parallax d, so the depth information Z = f·B/d is obtained.
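The depth formula Z = f·B/d can be sketched directly (units are the caller's responsibility: with f in pixels and B in meters, Z comes out in meters; the function name is illustrative):

```python
def depth_from_disparity(x_r, x_t, f, B):
    """Z = f*B/d with disparity d = XR - XT.
    f: focal length in pixels; B: camera center distance (baseline)."""
    d = x_r - x_t
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f * B / d
```

Note how depth resolution degrades with distance: the same one-pixel disparity error corresponds to a much larger depth error when d is small.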
Assuming the coordinates of the target point in the left view are (x, y), the disparity formed between the left and right views is d, and the coordinates of the target point in the world coordinate system with the left camera optical center as origin are (X, Y, Z), there is a transformation matrix Q such that [X, Y, Z, W]^T = Q·[x, y, d, 1]^T (the world coordinates are obtained after dividing by W); for rectified cameras this takes the standard form
Q = [1, 0, 0, −c_x; 0, 1, 0, −c_y; 0, 0, 0, f; 0, 0, 1/B, 0];
wherein c_x and c_y are the offsets of the left image plane coordinate system relative to the origin of the camera coordinate system, both obtained by stereo calibration.
As described above, the present invention combines a visible light camera with an infrared camera. At the early stage of a fire there is no significant flame signature; the three typical physical phenomena present at this time are smoldering, fire plumes and smoke. Flames occurring early in the formation of a fire have a low temperature and radiate little visible light, so an ordinary visible light camera can hardly find them, but the infrared radiation they emit can be captured by an infrared camera. Although the visible light camera captures image details better than the infrared camera, and color features, dynamic features, texture features and the like are commonly used as detection bases, the misjudgment rate is very high in an environment full of smoke. By combining the visible light camera and the infrared thermal imager, the invention can effectively reduce the misjudgment rate of fire and can identify and locate the fire source target. In the process of detecting and identifying the fire source, the invention provides multi-step early warning, including the early warning issued after a suspected target is found, the early warning issued when a suspected fire source target is judged, and the report issued after the suspected fire source target is confirmed as a real fire source; the invention can thus give comprehensive reminders at multiple levels and issue early warning information, so that people can respond quickly, reducing the possibility of fire. In addition, the method adopts the Yolo v4 target detection model to detect suspicious targets in the image, giving good robustness and training effect during construction and improving the identification accuracy of suspicious targets.
Claims (8)
1. A fire source detection, identification and early warning method for a fire-fighting robot comprises the fire-fighting robot and is characterized in that: the method comprises the following steps:
s1: monitoring a target environment by a visible light camera and an infrared thermal imager on the fire-fighting robot, acquiring a visible light image and an infrared thermal image of the target environment during monitoring, and performing image preprocessing on the visible light image and the infrared thermal image;
s2: detecting whether a suspected target exists in the preprocessed visible light image, sending an early warning after the suspected target is found, measuring the temperature of the preprocessed infrared image, judging whether the suspected target exceeds a set temperature threshold value, if so, judging that the suspected target is a suspected fire source target and sending the early warning, otherwise, continuously monitoring;
s3: after the suspected fire source target is judged, image registration and image fusion are carried out on the visible light image and the infrared image in a target fusion mode, so that whether the suspected fire source target belongs to a real fire source or not is determined, then the position of the real fire source is located through binocular parallax, and then fire alarm early warning and the position of the fire source are reported.
2. The fire source detection, identification and early warning method for a fire-fighting robot according to claim 1, characterized in that: the visible light image preprocessing is to adopt a linear change enhancement method to preprocess an input picture, and the formula is as follows:
O(r,c)=a×I(r,c)+b,0≤r≤H,0≤c≤W;
in the formula: i is(r,c)Pixel points of the original picture are obtained; o is(r,c)Enhancing the picture pixel points; H. w is the height and width of the graph respectively; the parameter a is a constant and is used for adjusting the contrast of the image; the parameter b is a constant for adjusting the brightness of the image.
3. The fire source detection, identification and early warning method for a fire-fighting robot as recited in claim 2, wherein: detecting whether a suspected target exists in the preprocessed visible light image through a Yolo v4 target detection model;
the input image size of the Yolo v4 target detection model is 608 pixels × 608 pixels; feature extraction is performed by the CSPDarknet53 backbone feature extraction network, which outputs feature layers of sizes 76 × 76, 38 × 38 and 19 × 19, respectively responsible for detecting small, medium and large targets; the 19 × 19 feature layer passes through the SPP module and is tensor-spliced with the 76 × 76 and 38 × 38 feature layers in the network of the PANet module, enhancing the detection performance of the network on objects of different sizes; the Prediction module outputs the detection results for the three target sizes; meanwhile, a CSP × 4 residual block is added between the CSP × 2 and CSP × 8 residual blocks of CSPDarknet53, and small targets are trained with the 4× downsampling feature fusion target detection layer, improving the accuracy of small target detection.
4. The fire source detection, identification and early warning method for a fire-fighting robot as recited in claim 3, wherein: in the construction of a Yolo v4 target detection model, a data set comprises a fire flame data set and a fire initial flame data set; the data set is formed by combining images in an open flame data set Image Net and a Bo W-Fire, video images intercepted in a general flame Fire data set and interference images of similar flame targets;
in the training of the Yolo v4 model, the data set is labeled with LabelImg; during labeling, the flame area on the image is labeled completely while avoiding occluders and interferents of similar color, so as to obtain the best training effect.
5. The fire source detection, identification and early warning method for a fire-fighting robot according to claim 1, characterized in that: the preprocessing of the infrared image is to carry out mean value filtering and noise reduction on the infrared image, improve the definition of a target in the image, and then carry out histogram equalization on the infrared image, so that the image details are clearer, and the specific process is as follows:
(1) mean filtering and noise reduction: given a discrete image f(x, y), a mean filtering operator traverses the image; taking any pixel point of the image as the center, the mean value of all pixel points in its neighborhood is calculated and taken as the pixel value of the filtered image at g(x, y):
g(x, y) = (1/M)·Σ_{(r,s)∈S} f(r, s);
in the formula, S is the neighborhood determined by the pixel point, and M is the total number of pixel points in the neighborhood S; the mean filtering is implemented by a convolution operation, and its output image g(x, y) is expressed as:
g(x, y) = Σ_r Σ_s h(r, s)·f(x − r, y − s);
wherein h(r, s) = 1/(m × n) is the convolution template, whose size is determined by the size of the selected neighborhood; m and n are the neighborhood dimensions;
(2) histogram equalization: the gray histogram of the image is expressed as:
p(r_k) = n_k/N, k = 0, 1, …, L − 1;
in the formula, n_k is the number of occurrences of the k-th gray level, N is the total number of pixels, r_k is the k-th gray level of the image, L is the total number of gray levels, and p(r_k) is the gray level probability;
the normalized original image and the histogram equalization transformed image are respectively represented by r and s:
0≤r≤1,0≤s≤1;
for any pixel point in the original image, a pixel value of the new image is generated through the transformation s = T(r), where T(r) is the transformation function of histogram equalization; the gray probability density function of the original image is P_r(r), and that of the transformed image is P_s(s); from probability theory, the transformed P_s(s) is:
P_s(s) = P_r(r)·|dr/ds|;
for a digital image, the gray level histogram of the image is taken as its probability density function; the occurrence probability of pixels with gray level r_k is then:
p(r_k) = n_k/N;
the transformation function for histogram equalization of the original image is:
s_k = T(r_k) = (L − 1)·Σ_{j=0}^{k} n_j/N;
wherein N is the total number of pixels in the image, and L is the total number of gray levels.
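The equalization transformation of claim 5 can be sketched in numpy as a lookup table built from the cumulative histogram; the function name and the 256-level assumption are illustrative:

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Map gray level r_k through s_k = (L-1) * sum_{j<=k} n_j / N."""
    hist = np.bincount(img.ravel(), minlength=levels)   # n_k per gray level
    cdf = np.cumsum(hist) / img.size                    # cumulative n_j / N
    lut = np.round((levels - 1) * cdf).astype(np.uint8) # s_k lookup table
    return lut[img]
```

Levels that occur often are spread apart in the output, which is what makes faint infrared details more visible.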
6. The fire source detection, identification and early warning method for a fire-fighting robot as recited in claim 5, wherein: measuring the temperature of the preprocessed infrared image through an infrared thermal imager:
f(T′_0) = τ_a[ε·f(T_0) + (1 − ε)·f(T_u)] + (1 − τ_a)·f(T_a);
wherein T_0 is the surface temperature of the measured object; T_u is the ambient temperature; T_a is the atmospheric temperature; T′_0 is the radiation temperature indicated by the infrared thermal imager; τ_a is the spectral transmittance of the atmosphere; ε is the surface emissivity of the measured object.
The real temperature of the surface of the measured object is obtained from the Planck radiation law; with the common power-law approximation f(T) = T^n, the calculation formula is:
T_0 = {(1/ε)·[T′_0^n/τ_a − (1 − ε)·T_u^n − ((1 − τ_a)/τ_a)·T_a^n]}^(1/n);
when infrared thermal imagers of different wavebands are used, the value of n is different.
7. The fire source detection, identification and early warning method for a fire-fighting robot according to claim 1, characterized in that: the image registration comprises the following specific steps:
firstly, feature point detection: the scale space is constructed as:
L(x, y, σ) = G(x, y, σ) ⊗ I(x, y), with G(x, y, σ) = 1/(2πσ²)·exp(−[(x − p/2)² + (y − q/2)²]/(2σ²));
in the formula, I(x, y) is the input original image; G(x, y, σ) is the variable-scale Gaussian kernel function; σ is the Gaussian blur parameter of the scale space; (x, y) is the pixel position in the image; p and q are the dimensions of the Gaussian template;
the difference operator in the Gaussian difference scale space is:
D(x, y, σ) = [G(x, y, kσ) − G(x, y, σ)] ⊗ I(x, y) = L(x, y, kσ) − L(x, y, σ), where k is the scale ratio between adjacent layers;
a Gaussian difference image is constructed and feature points are extracted from it: each candidate detection point is compared with its 8 adjacent points at the same scale and the 9 × 2 corresponding points at the adjacent scales above and below, ensuring that extreme points are detected in both the scale space and the two-dimensional image space; the local extremum points are taken as feature points;
then, the extracted feature points are described: the neighborhood of the pixel point is divided into 16 square subregions, which covers most features of the image; gradients and modulus values in 8 directions are computed in the subregions, forming a 128-dimensional feature vector that describes the feature point. The modulus value and gradient direction are calculated as:
m(x, y) = √{[L(x+1, y) − L(x−1, y)]² + [L(x, y+1) − L(x, y−1)]²};
θ(x, y) = arctan{[L(x, y+1) − L(x, y−1)]/[L(x+1, y) − L(x−1, y)]};
and then the feature points are matched: matching uses the Euclidean distance; the ratio of the nearest Euclidean distance to the second-nearest Euclidean distance is calculated, and if the ratio is within the threshold range the match succeeds, otherwise it fails. The Euclidean distance is calculated as:
dis = √(Σ_{k=1}^{P} [D_m(k) − D_n(k)]²);
wherein dis is the Euclidean distance between the two feature vectors; P is the descriptor dimension; m and n are feature points in the two images; D_m and D_n are the feature descriptors of m and n, respectively;
and then spatial transformation is carried out using an affine transformation model:
x′ = a_1·x + a_2·y + t_x, y′ = a_3·x + a_4·y + t_y;
wherein (x, y) and (x′, y′) are the pixel coordinates of corresponding feature points in the visible light image and the thermal infrared image respectively, and the 6-parameter vector (a_1, a_2, a_3, a_4, t_x, t_y) expresses the conversion relation between the coordinates of the two images;
and finally, image fusion is carried out: adopting a weighted average image fusion algorithm, setting a visible light image as A, an infrared image as B and a fused image as F, wherein the weighted average fusion process is shown as the following formula:
F(i, j) = w_1·A(i, j) + w_2·B(i, j);
wherein (i, j) is the position coordinate of a pixel in the image, and w_1 and w_2 are image weighting coefficients with w_1 + w_2 = 1.
8. The fire source detection, identification and early warning method for a fire-fighting robot according to claim 1, characterized in that: the calculation process of the binocular parallax positioning real fire source comprises the following steps:
let point P be a point on the real fire source, and O_R and O_T the optical centers of the two cameras; the imaging points of P on the two camera sensors are P′ and P″ respectively, f is the focal length of the cameras, B is the center distance between the two cameras, X_R and X_T are the distances from the imaging points P′ and P″ on the left and right imaging planes to the left edge of the image, and Z is the depth information;
assuming the distance from point P′ to P″ is Δx, then: Δx = B − (X_R − X_T);
according to the similar triangle principle:
[B − (X_R − X_T)]/B = (Z − f)/Z;
which gives:
Z = f·B/(X_R − X_T);
in the formula, the camera focal length f and the camera center distance B are obtained by calibration, and X_R − X_T is the parallax d, so the depth information Z = f·B/d is obtained.
Assuming the coordinates of the target point in the left view are (x, y), the disparity formed between the left and right views is d, and the coordinates of the target point in the world coordinate system with the left camera optical center as origin are (X, Y, Z), there is a transformation matrix Q such that [X, Y, Z, W]^T = Q·[x, y, d, 1]^T;
wherein c_x and c_y are the offsets of the left image plane coordinate system relative to the origin of the camera coordinate system, both obtained by stereo calibration.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210067506.9A CN114399882A (en) | 2022-01-20 | 2022-01-20 | Fire source detection, identification and early warning method for fire-fighting robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114399882A true CN114399882A (en) | 2022-04-26 |
CN112598738B (en) | Character positioning method based on deep learning | |
CN115937325A (en) | Vehicle-end camera calibration method combined with millimeter wave radar information | |
CN114067267A (en) | Fighting behavior detection method based on geographic video | |
CN111489384B (en) | Method, device, system and medium for evaluating shielding based on mutual viewing angle | |
CN110501709A (en) | Object detection system, autonomous vehicle and its object detection method | |
Su | Vanishing points in road recognition: A review | |
Hajebi et al. | Sparse disparity map from uncalibrated infrared stereo images | |
RU2315357C2 (en) | Object detection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||