CN114386493A - Fire detection method, system, device and medium based on flame vision virtualization - Google Patents


Info

Publication number
CN114386493A
CN114386493A (application CN202111613224.6A)
Authority
CN
China
Prior art keywords
flame
fire detection
suspected
vision
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111613224.6A
Other languages
Chinese (zh)
Inventor
徐兵荣
陆音
郁建峰
陈子阳
许旻昱
蔡奕杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi IoT Technology Co Ltd
Original Assignee
Tianyi IoT Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi IoT Technology Co Ltd filed Critical Tianyi IoT Technology Co Ltd
Priority to CN202111613224.6A
Publication of CN114386493A
Legal status: Pending


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/25 — Fusion techniques
    • G06F 18/253 — Fusion techniques of extracted features
    • G — PHYSICS
    • G08 — SIGNALLING
    • G08B — SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 17/00 — Fire alarms; Alarms responsive to explosion
    • G08B 17/12 — Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
    • G08B 17/125 — Actuation by using a video camera to detect fire or smoke

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

The invention discloses a fire detection method, system, device and medium based on flame vision virtualization. The method comprises the following steps: acquiring a first video image of an area to be detected, and performing color segmentation on the first video image to obtain a suspected flame area; determining the dispersion, similarity and centroid-motion characteristics of the suspected flame area; extracting the visual blurring characteristic of the flame in the suspected flame area through binocular ranging and laser ranging; and constructing an MES multi-expert decision system, which performs feature fusion and decision analysis on the dispersion, similarity, centroid-motion and visual blurring characteristics to obtain a fire detection result. The invention can accurately eliminate interference from light bulbs, light reflections, red fire extinguishers and the like, reducing the misjudgment rate of fire detection, and can detect flames produced by various fuels, reducing the missed-detection rate and improving the accuracy of fire detection. The invention can be widely applied in the technical field of fire detection.

Description

Fire detection method, system, device and medium based on flame vision virtualization
Technical Field
The invention relates to the technical field of fire detection, in particular to a fire detection method, a fire detection system, a fire detection device and a fire detection medium based on flame vision virtualization.
Background
The frequency and pervasiveness of fire threaten public safety and social development; fire is considered one of the most serious threats in daily life, spreading quickly and causing great loss of property. Preventing and avoiding fires has therefore become an indispensable task in protecting lives and property. A low-cost, reliable and wide-coverage fire alarm system enables people to discover a fire as early as possible and escape safely and quickly, and a fire detection and alarm system can also minimize the harm a fire causes.
Across industry, agriculture and daily life, traditional fire detection systems cannot meet the actual needs of fire alarming in complex environments. Most automatic fire alarm systems rely on a single passive sensor, which brings unavoidable problems. For example, devices using photosensitive detectors may be affected by sunlight and artificial light, and smoke detectors can be influenced by various gases. In multi-sensor flame detection, the false alarm rate of simple fusion algorithms rises sharply as the number of installed sensors grows, so the alarm system repeatedly misses alarms or raises false ones.
Video fire detection is a new technology that has only been applied to fire detection in recent decades. It is a non-contact method and, compared with traditional methods such as smoke and temperature detection, it is faster, more intelligent and more reliable. Typically, an ordinary color camera captures video of the scene, and distinctive fire characteristics, such as the color and shape of the flame, are extracted as input to a fire detection and recognition algorithm. However, image-based flame detection alone suffers from low accuracy, and flames exhibit complex characteristics in complex environments, so research into flame detection techniques beyond sensor-based and image-based approaches is very important.
Interpretation of terms:
Visual blurring of flame: the glowing, gasified part of a flame can be observed by the human eye and captured by a visible-light camera, but special light sources such as lasers pass through the flame without being reflected, so the flame as a whole presents a "virtual" state. Common interferents, by contrast, are basically solid and exhibit no visual blurring.
Multi-feature fusion: also referred to as multi-sensor correlation or multi-sensor fusion, it means extracting information about different aspects of the same object and having a computer comprehensively analyze each feature according to certain criteria, so as to judge the object accurately and comprehensively.
Disclosure of Invention
The present invention aims to solve at least to some extent one of the technical problems existing in the prior art.
Therefore, an object of an embodiment of the present invention is to provide a fire detection method based on flame visual virtuality, which extracts a suspected flame region by using an RGB-HIS color segmentation model, captures the visual virtuality characteristic of flame by binocular ranging and laser ranging, and performs feature fusion and decision analysis by combining the dispersion characteristic, similarity characteristic and centroid motion characteristic of flame through an MES multi-expert decision system to obtain a fire detection result, thereby reducing the false judgment rate and the missed detection rate of fire detection and improving the accuracy of fire detection.
It is another object of embodiments of the present invention to provide a fire detection system based on visual flame virtualization.
In order to achieve the technical purpose, the technical scheme adopted by the embodiment of the invention comprises the following steps:
in a first aspect, an embodiment of the present invention provides a fire detection method based on flame vision virtualization, including the following steps:
acquiring a first video image of a to-be-detected area, and performing color segmentation on the first video image to obtain a suspected flame area;
determining dispersion characteristics, similarity characteristics and centroid movement characteristics of the suspected flame area;
extracting the visual blurring characteristics of the flame in the suspected flame area through binocular ranging and laser ranging;
and constructing an MES multi-expert decision-making system, and performing feature fusion and decision analysis on the dispersion feature, the similarity feature, the centroid motion feature and the flame vision blurring feature through the MES multi-expert decision-making system to obtain a fire detection result.
Further, in an embodiment of the present invention, the step of performing color segmentation on the first video image to obtain a suspected flame area specifically includes:
determining a red component threshold and a saturation threshold;
constructing an RGB-HIS color segmentation model according to the red component threshold and the saturation threshold;
and carrying out color segmentation on the first video image through the RGB-HIS color segmentation model to obtain a suspected flame area.
Further, in an embodiment of the present invention, the RGB-HIS color segmentation model is:
R > G > B
R > R_T
S > (255 − R) × S_T / R_T
wherein R, G and B represent the red, green and blue components of the target pixel, R_T represents the red component threshold, S represents the saturation of the target pixel, and S_T represents the saturation threshold.
Further, in an embodiment of the present invention, the step of determining the dispersion characteristic, the similarity characteristic, and the centroid movement characteristic of the suspected flame area specifically includes:
carrying out image analysis on the suspected flame area, and extracting dispersion characteristics of each part of the suspected flame area;
comparing the suspected flame areas of the continuous frames of the first video image to obtain similarity characteristics of the suspected flame areas;
and determining the centroid position of the suspected flame area, and determining the centroid movement characteristic of the first video image according to the centroid position.
Further, in an embodiment of the present invention, the step of extracting the visual blurring characteristic of the flame of the suspected flame area through binocular ranging and laser ranging specifically includes:
carrying out binocular distance measurement on the suspected flame area through a binocular camera to obtain first flame depth information of the suspected flame area;
performing laser ranging on the suspected flame area through a laser measuring system to obtain second flame depth information of the suspected flame area;
and determining the visual blurring characteristics of the flame of the suspected flame area according to the difference value of the first flame depth information and the second flame depth information.
Further, in an embodiment of the present invention, the step of constructing the MES multi-expert decision system specifically includes:
obtaining a dispersion classifier, a similarity classifier, a centroid motion classifier and a flame vision virtualization classifier which are trained in advance;
determining a first weight of the dispersion classifier, a second weight of the similarity classifier, a third weight of the centroid motion classifier, and a fourth weight of the flame vision virtuality classifier;
and constructing an MES multi-expert decision system according to the dispersion classifier, the similarity classifier, the centroid motion classifier, the flame vision virtualization classifier, the first weight, the second weight, the third weight and the fourth weight.
Further, in an embodiment of the present invention, the step of performing feature fusion and decision analysis on the dispersion feature, the similarity feature, the centroid motion feature and the flame vision blurring feature through the MES multi-expert decision making system to obtain a fire detection result specifically includes:
classifying the dispersion feature, the similarity feature, the centroid motion feature and the flame vision blurring feature according to the dispersion classifier, the similarity classifier, the centroid motion classifier and the flame vision blurring classifier to obtain a plurality of flame classification labels;
and performing feature fusion according to the first weight, the second weight, the third weight, the fourth weight and the flame classification label to obtain a first weighted sum, and further determining whether the suspected flame area has a fire according to the first weighted sum and a preset threshold value.
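The weighted fusion and thresholding just described can be sketched as follows. This is a minimal illustration: the binary classifier labels, the weight values and the alarm threshold are assumptions for the example, not values specified by the embodiment.

```python
# Hedged sketch of the MES weighted fusion step: the labels, weights and
# threshold below are illustrative assumptions, not values from the patent.

def mes_decision(labels, weights, threshold=0.5):
    """Weighted-sum fusion of per-feature classifier labels (1 = flame-like).

    Returns (fire_detected, weighted_score)."""
    if len(labels) != len(weights):
        raise ValueError("one weight per classifier label is required")
    score = sum(l * w for l, w in zip(labels, weights))
    return score >= threshold, score

# dispersion, similarity, centroid-motion, visual-blurring classifier outputs
labels = [1, 1, 0, 1]
weights = [0.2, 0.2, 0.2, 0.4]  # visual blurring weighted highest (assumption)
fired, score = mes_decision(labels, weights)
```

With these assumed weights, three agreeing classifiers including the visual-blurring expert push the weighted sum past the threshold and an alarm is raised.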
In a second aspect, an embodiment of the present invention provides a fire detection system based on flame vision virtualization, including:
the color segmentation module is used for acquiring a first video image of a region to be detected and performing color segmentation on the first video image to obtain a suspected flame region;
the first characteristic determination module is used for determining dispersion characteristics, similarity characteristics and centroid movement characteristics of the suspected flame area;
the second characteristic determination module is used for extracting the flame vision blurring characteristic of the suspected flame area through binocular ranging and laser ranging;
and the characteristic fusion module is used for constructing an MES multi-expert decision-making system, and performing characteristic fusion and decision analysis on the dispersion characteristic, the similarity characteristic, the centroid motion characteristic and the flame vision blurring characteristic through the MES multi-expert decision-making system to obtain a fire detection result.
In a third aspect, an embodiment of the present invention provides a fire detection apparatus based on flame vision virtualization, including:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a fire detection method based on flame vision virtualization as described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium in which a processor-executable program is stored; the processor-executable program, when executed by a processor, performs a fire detection method based on flame vision virtualization as described above.
Advantages and benefits of the present invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention:
according to the embodiment of the invention, the suspected flame area is extracted by using the RGB-HIS color segmentation model, the visual blurring characteristic of the flame is captured by binocular ranging and laser ranging, and the characteristic fusion and decision analysis are carried out by combining the dispersion characteristic, the similarity characteristic and the mass center motion characteristic of the flame through the MES multi-expert decision system to obtain the fire detection result, so that on one hand, the interferences of a bulb, a reflected light, a red fire extinguisher and the like can be accurately eliminated, the misjudgment rate of fire detection is reduced, on the other hand, the flame generated by various fuels can be detected, the missed detection rate of fire detection is reduced, and the precision of fire detection is improved.
Drawings
In order to illustrate the technical solution in the embodiments of the present invention more clearly, the drawings required by the embodiments are described below. It should be understood that the following drawings depict only some embodiments of the present invention for convenience and clarity of description, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart illustrating the steps of a fire detection method based on visual flame virtualization according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the principle of measuring flame depth information by binocular distance measurement according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a laser ranging measurement of flame depth information according to an embodiment of the present invention;
FIG. 4 is a schematic diagram illustrating a comparison of flame depth information obtained by binocular ranging and laser ranging provided in an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a MES multi-expert decision-making system according to an embodiment of the present invention;
FIG. 6 is a block diagram of a fire detection system based on visual flame virtualization according to an embodiment of the present invention;
fig. 7 is a block diagram of a fire detection device based on flame vision virtualization according to an embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
In the description of the present invention, "a plurality" means two or more. Where "first" and "second" are used to distinguish technical features, they are not to be understood as indicating or implying relative importance, the number of the indicated features, or their precedence. Furthermore, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art.
Early fire detection relied on conventional smoke, light and temperature sensors, but such sensors raise no alarm until smoke particles or heat have diffused to a certain extent, causing a large delay in detection, and they are not applicable to large spaces and open places. In recent years, with the maturity of computer image processing, video-based flame detection has attracted great attention. Image-based flame detection overcomes the single structure, poor real-time performance and low accuracy of sensor-based fire detection systems; it detects flames by extracting the dynamic and static characteristics of flame and fusing multiple features, and represents an important breakthrough in flame detection technology.
At present, the flame detection technology of video images mainly analyzes two aspects of flame static characteristics and dynamic characteristics.
Static characteristics: the most representative are flame color-space characteristics; the color spaces used mainly include RGB, HIS and YCbCr. Static characteristics also include the spectral information, region structure and geometric characteristics of the flame. Spectral information comprises color and saturation characteristics; the flame region structure comprises texture and the barycenter height coefficient; geometric characteristics include circularity, rectangularity and flame sharp angles. Color information is among the most widely used features in flame detection: it effectively distinguishes objects of different colors, and since flame saturation is high it can separate flame from ordinary red objects. However, static characteristics cannot effectively eliminate the interference of noise and fire-like objects such as bulbs, red walls and fire extinguishers.
Dynamic characteristics: fire detection using a single feature suffers from low accuracy and a high false alarm rate, so much research further analyzes the dynamic characteristics of flame and integrates static and dynamic characteristics to reduce the false alarm rate. Dynamic features include overall motion and random motion: overall motion covers area change, overall movement and similarity, while random motion covers flame flicker and shape change. Flame motion characteristics can distinguish objects such as light sources and reflections. However, methods based on flame dynamics cannot effectively eliminate moving light sources such as vehicle tail lights or refracted sunlight, and they cannot detect flames produced by special fuels such as magnesium, phosphorus and copper.
Multi-feature fusion: detecting fire from a single flame feature cannot meet practical requirements; although single-feature algorithms are simple, their accuracy is low, and in a complex fire environment flames exhibit many characteristics, so multiple features must be extracted and fused. Processing high-dimensional feature vectors is a difficult problem in current research: in machine learning and pattern recognition, using a high-dimensional feature vector means that substantially more training data is required for the classifier to avoid overfitting and obtain reliable results. Furthermore, the variability of fire and the large amount of noise in data acquired in fire environments hinder the recognition rate of the system.
To address flame misjudgment, missed detection and related problems in visual-image fire systems, the embodiment of the invention discloses a fire detection method based on flame visual blurring. The visual blurring characteristic of flame is extracted by combining binocular ranging with laser ranging; on the one hand this accurately eliminates interference from bulbs, light reflections, red fire extinguishers and the like, reducing the misjudgment rate, and on the other hand it detects flames produced by various fuels, reducing the missed-detection rate. Finally, the visual blurring characteristic is fused with the video-image flame features through an MES (Multi-Expert System), so that flames are detected accurately while the system remains applicable in complex indoor and outdoor environments.
Referring to fig. 1, an embodiment of the present invention provides a fire detection method based on flame vision virtualization, which specifically includes the following steps:
s101, obtaining a first video image of a to-be-detected area, and performing color segmentation on the first video image to obtain a suspected flame area.
As a further optional implementation manner, the step of performing color segmentation on the first video image to obtain a suspected flame area specifically includes:
a1, determining a red component threshold value and a saturation threshold value;
a2, constructing an RGB-HIS color segmentation model according to the red component threshold and the saturation threshold;
and A3, performing color segmentation on the first video image through an RGB-HIS color segmentation model to obtain a suspected flame area.
As a further optional implementation, the RGB-HIS color segmentation model is:
R > G > B
R > R_T
S > (255 − R) × S_T / R_T
wherein R represents the red component of the target pixel, G the green component, B the blue component, R_T the red component threshold, S the saturation of the target pixel, and S_T the saturation threshold.
Specifically, in order to segment the suspected flame area, flame color characteristics are used as judgment conditions. A burning flame ranges from red to yellow, which in RGB color space satisfies R > G > B. Since the red component is distinctive when a flame burns, setting a red component threshold R_T excludes objects of other colors; and since flame saturation is high, setting a flame saturation threshold S_T avoids the influence of low-saturation background objects such as red pedestrians and red walls.
According to the flame characteristics, firstly, an RGB-HIS color model is adopted to segment a suspected flame area, namely:
R > G > B
R > R_T
S > (255 − R) × S_T / R_T
wherein R, G and B represent the red, green and blue components of the target pixel, R_T represents the red component threshold, S represents the saturation of the target pixel, and S_T represents the saturation threshold. It will be appreciated that the saturation decreases as the R component increases: as R approaches its maximum value of 255, the saturation S decreases to zero. R_T and S_T range over 55–65 and 115–135 respectively; in the present embodiment, R_T = 55 and S_T = 125.
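A minimal sketch of this color-segmentation rule in Python/NumPy follows. The thresholds R_T = 55 and S_T = 125 are taken from the embodiment; the HSI saturation formula scaled to the 0–255 range is an assumed, commonly used convention, since the text does not spell out how S is computed.

```python
import numpy as np

# Sketch of the RGB-HIS segmentation rules above. r_t and s_t follow the
# embodiment; the HSI saturation formula scaled to 0-255 is an assumption.

def suspected_flame_mask(rgb, r_t=55.0, s_t=125.0):
    """Boolean mask of pixels satisfying R>G>B, R>R_T and S>(255-R)*S_T/R_T."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    # HSI saturation, scaled to 0..255: S = (1 - 3*min(R,G,B)/(R+G+B)) * 255
    s = (1.0 - 3.0 * np.minimum(np.minimum(r, g), b)
         / np.maximum(r + g + b, 1e-9)) * 255.0
    return (r > g) & (g > b) & (r > r_t) & (s > (255.0 - r) * s_t / r_t)

# flame-colored pixel (200,100,20) passes; gray pixel (100,100,100) does not
mask = suspected_flame_mask(
    np.array([[[200, 100, 20], [100, 100, 100]]], dtype=np.uint8))
```

The mask would then be cleaned up (e.g. by connected-component analysis) to yield candidate flame regions for the subsequent feature extraction.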
S102, determining dispersion characteristics, similarity characteristics and centroid movement characteristics of the suspected flame area.
Further as an optional implementation manner, step S102 specifically includes the following steps:
s1021, carrying out image analysis on the suspected flame area, and extracting dispersion characteristics of each part of the suspected flame area;
s1022, comparing suspected flame areas of continuous frames of the first video image to obtain similarity characteristics of the suspected flame areas;
and S1023, determining the centroid position of the suspected flame area, and determining the centroid motion characteristic of the first video image according to the centroid position.
Specifically, different parts of a flame burn to different degrees and at different temperatures, so the flame shows a certain dispersion, defined here by the standard deviation of a color component. The blue component of flame is produced by combustion with oxygen and differs greatly between flame parts, so its standard deviation is large, whereas the blue component of a non-flame object is usually determined by illumination, shows no dispersion over a small range, and has a very small standard deviation. A flame flickers as it burns: it appears irregular in a single frame yet shows a certain similarity across consecutive frames, which distinguishes it from fast-moving light sources or interferents with flame-like color, so similarity can serve as a flame criterion. Finally, because a flame flickers continuously over time, its centroid moves back and forth, so the ratio of the net centroid displacement to the total centroid path length in the flame area is smaller than a threshold; a common interference source such as a moving flashlight moves its centroid uniformly over a short time, giving a larger ratio.
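The three video features just described can be sketched as follows. The blue-channel standard deviation and the displacement/path-length ratio follow the text directly; the intersection-over-union similarity measure is an assumption, since the text fixes no similarity formula.

```python
import numpy as np

# Sketches of the dispersion, similarity and centroid-motion features.
# IoU as the inter-frame similarity measure is an assumption.

def blue_dispersion(region_blue):
    """Dispersion: standard deviation of the blue component in the region."""
    return float(np.std(region_blue))

def region_similarity(mask_a, mask_b):
    """Similarity of suspected regions in consecutive frames (assumed IoU)."""
    union = np.logical_or(mask_a, mask_b).sum()
    inter = np.logical_and(mask_a, mask_b).sum()
    return float(inter / union) if union else 0.0

def centroid_motion_ratio(centroids):
    """Net centroid displacement over total path length: small for flickering
    flames, close to 1 for uniformly moving interferers like a flashlight."""
    c = np.asarray(centroids, dtype=float)
    total = float(np.sum(np.linalg.norm(np.diff(c, axis=0), axis=1)))
    return float(np.linalg.norm(c[-1] - c[0])) / total if total else 0.0

sim = region_similarity(np.array([[True, True]]), np.array([[True, False]]))
flicker = centroid_motion_ratio([(0, 0), (1, 0), (0, 0)])  # back and forth
steady = centroid_motion_ratio([(0, 0), (1, 0), (2, 0)])   # uniform motion
```

A flickering centroid path yields a ratio near 0, while a uniformly moving light source yields a ratio near 1, matching the distinction drawn above.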
S103, extracting the visual blurring characteristics of the flame in the suspected flame area through binocular ranging and laser ranging.
Specifically, under the interference of complex environments and different combustion materials, using only video-image characteristics of flame, such as color and motion, as the basis for detection still leaves a high false detection rate. To improve accuracy, the embodiment of the invention adds a new flame characteristic, visual blurring, detected by combining binocular ranging and laser ranging. Step S103 specifically includes the following steps:
s1031, carrying out binocular distance measurement on the suspected flame area through a binocular camera to obtain first flame depth information of the suspected flame area;
s1032, performing laser ranging on the suspected flame area through a laser measuring system to obtain second flame depth information of the suspected flame area;
and S1033, determining the visual blurring characteristic of the flame of the suspected flame area according to the difference value of the first flame depth information and the second flame depth information.
Specifically, as shown in FIG. 2, the first flame depth information is measured using a pre-calibrated variable-baseline binocular camera (HNY-CV-002), according to the binocular measurement principle: the two cameras observe the flame simultaneously from two different positions, and the positional deviation between image pixels, i.e. the parallax d, is computed using triangle geometry to obtain the three-dimensional information of the detected target, i.e. the three-dimensional coordinates of point P. P(X0, Y0, Z0) is the target flame pixel point to be measured, O_l and O_r are the optical centers of the left and right cameras respectively, and p_l(x_l, y_l) and p_r(x_r, y_r) are the projections of point P on the left and right camera imaging planes respectively.
According to the triangle similarity principle, the first flame depth information measured by binocular ranging can be expressed as:

depth_1 = f × b / d

where f is the focal length, b is the baseline between the two optical centers O_l and O_r, and d = x_l − x_r is the parallax.
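A numerical sketch of this depth-from-parallax relation follows; the focal length, baseline and pixel coordinates are hypothetical values for illustration, not parameters of the HNY-CV-002 camera.

```python
# Depth from parallax for a rectified stereo pair: Z = f * b / d.
# All numeric values below are hypothetical.

def binocular_depth(focal_px, baseline_m, x_left, x_right):
    """Return depth in metres from focal length (pixels), baseline (metres)
    and the left/right image x-coordinates of the same point."""
    d = x_left - x_right  # parallax in pixels
    if d <= 0:
        raise ValueError("parallax must be positive")
    return focal_px * baseline_m / d

z = binocular_depth(700.0, 0.12, 350.0, 322.0)  # 700 * 0.12 / 28 = 3.0 m
```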
The principle of laser ranging is shown in FIG. 3. A distance observation L is obtained by emitting a pulse and measuring the time of flight or phase difference to the measured object; any measured point P' is located from the horizontal and vertical scanning-angle observations α and β, generating a three-dimensional point cloud. The X axis lies in the transverse scanning plane, the Y axis is perpendicular to the X axis within that plane, and the Z axis is perpendicular to the transverse scanning plane. When the laser measures a flame, it penetrates the flame pixel point P and strikes some point P' on a background object such as a wall or the ground.
The second flame depth information measured by the laser measurement system is therefore depth2 = L.
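The conversion from one range observation to a point in the cloud can be sketched as follows. The axis convention (X and Y in the transverse scanning plane, Z normal to it) follows the description above, but the exact sign and angle conventions of the scanner are assumptions for illustration.

```python
import math

# One laser observation (range L, horizontal angle alpha, vertical angle
# beta) converted to a Cartesian point; accumulating such points over a
# scan yields the three-dimensional point cloud described in the text.

def laser_point(L: float, alpha_rad: float, beta_rad: float):
    """Return (x, y, z) for a single (range, horizontal, vertical) reading."""
    x = L * math.cos(beta_rad) * math.cos(alpha_rad)
    y = L * math.cos(beta_rad) * math.sin(alpha_rad)
    z = L * math.sin(beta_rad)
    return x, y, z

# A 10 m return straight ahead (alpha = beta = 0) maps to (10, 0, 0).
print(laser_point(10.0, 0.0, 0.0))
```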
In order to analyse and verify the visual blurring of flame, the embodiment of the present invention constructs the comparison of flame depth information obtained by binocular ranging and laser ranging shown in FIG. 4 and performs the following analysis:
as shown in fig. 4(a), a schematic diagram of comparing flame depth information obtained by binocular ranging and laser ranging when the ranging system and the flame are located at the same horizontal position, an included angle α between a connecting line of the ranging system and the flame and a horizontal line is 0 °, and when no background obstacle exists, the laser penetrates through the flame and emits to infinity, and a difference τ → ∞ between flame depth information measured by the ranging system and the flame.
As shown in FIG. 4(b), when the ranging system and the flame are located at different horizontal positions, the measurement angle satisfies α ∈ (0°, 90°); in a practical scene the system is suspended at a fixed height h and the flame is at horizontal distance s from the system.
From the triangle principle, the distance measured by binocular ranging is

depth1 = s / cos α.

From the trapezoid principle, the distance measured by laser ranging is

depth2 = s / cos α + h′ / sin α,

where h′ = h − s·tan α; therefore

depth2 = h / sin α.

Denoting the difference between the two measured distances by τ,

τ = depth2 − depth1 = h / sin α − s / cos α = (h − s·tan α) / sin α.
The value of τ varies with the angle α: the larger α is, the smaller τ becomes.
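The geometry above can be checked numerically. This sketch evaluates the two measured distances and their difference τ for assumed values of h, s and α (which are illustrative, not from the patent), and confirms that τ decreases as α grows.

```python
import math

# A rig suspended at height h looks down at angle alpha; the flame stands
# at horizontal distance s. Binocular ranging returns the distance to the
# flame surface, the laser passes through it and hits the ground behind,
# and tau is the difference between the two readings.

def depth_difference(h: float, s: float, alpha_rad: float) -> float:
    binocular = s / math.cos(alpha_rad)   # depth1: distance to the flame pixel
    laser = h / math.sin(alpha_rad)       # depth2: distance to ground point P'
    return laser - binocular              # equals (h - s*tan(alpha)) / sin(alpha)

# tau shrinks as alpha grows, matching the analysis above.
a30 = depth_difference(h=3.0, s=2.0, alpha_rad=math.radians(30))
a45 = depth_difference(h=3.0, s=2.0, alpha_rad=math.radians(45))
print(a30 > a45 > 0)
```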
As shown in FIG. 4(c), which compares the depth information of an interfering object obtained by binocular ranging and laser ranging: when the ranging system measures an interfering object such as a bulb or a pedestrian from any angle and position, the laser does not penetrate the object, so the distances measured by the two systems are theoretically identical apart from device-level error, and τ = 0.
In a practical environment the camera is typically suspended at an angle of 30–45°. Since both binocular ranging and laser ranging can reconstruct the three-dimensional surface of an object, the depth of each characteristic pixel in the suspected flame region formed by the characteristic pixel points is measured by both methods. Pixel regions with τ ≈ 0 correspond to interfering objects, whereas within a flame region τ is a finite value or infinity; the difference between the flame depths measured by binocular ranging and laser ranging can therefore be computed and used to characterise the flame vision blurring feature.
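The discrimination rule just stated can be sketched as follows. The tolerance eps and the majority criterion are assumed calibration choices for illustration; the patent itself only states that τ ≈ 0 marks an interferer while a finite or infinite τ marks flame.

```python
import math

# Classify a suspected region from the per-pixel depth differences tau:
# depths that agree (tau ~ 0) indicate a solid interferer that the laser
# could not penetrate; finite or infinite positive tau indicates the
# visually blurred flame the laser passed through.

def classify_region(tau_values, eps: float = 0.05):
    finite = [t for t in tau_values if not math.isinf(t)]
    near_zero = sum(1 for t in finite if abs(t) <= eps)
    if finite and near_zero / len(finite) > 0.5:
        return "interference"     # binocular and laser depths agree
    return "possible flame"       # laser penetrated the region

print(classify_region([0.01, -0.02, 0.03, 0.0]))    # solid object, tau ~ 0
print(classify_region([1.4, 2.1, math.inf, 1.8]))   # blurred flame
```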
And S104, constructing an MES multi-expert decision system, and performing feature fusion and decision analysis on the dispersion feature, the similarity feature, the centroid motion feature and the flame vision virtualization feature through the MES multi-expert decision system to obtain a fire detection result.
Further as an optional implementation manner, the step of constructing the MES multi-expert decision making system specifically includes:
b1, acquiring a dispersion classifier, a similarity classifier, a centroid motion classifier and a flame vision virtualization classifier which are trained in advance;
b2, determining a first weight of a dispersion classifier, a second weight of a similarity classifier, a third weight of a centroid motion classifier and a fourth weight of a flame vision virtualization classifier;
and B3, constructing an MES multi-expert decision system according to the dispersion classifier, the similarity classifier, the centroid motion classifier, the flame vision virtualization classifier, the first weight, the second weight, the third weight and the fourth weight.
Further as an optional implementation manner, the step of obtaining a fire detection result by performing feature fusion and decision analysis on the dispersion feature, the similarity feature, the centroid motion feature and the flame vision virtualization feature through an MES multi-expert decision system specifically includes:
c1, classifying the dispersion characteristics, the similarity characteristics, the centroid motion characteristics and the flame vision virtualization characteristics according to the dispersion classifier, the similarity classifier, the centroid motion classifier and the flame vision virtualization classifier respectively to obtain a plurality of flame classification labels;
and C2, performing feature fusion according to the first weight, the second weight, the third weight, the fourth weight and the flame classification label to obtain a first weighted sum, and further determining whether the suspected flame area is in fire according to the first weighted sum and a preset threshold.
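Steps C1 and C2 can be sketched in a few lines: each pre-trained classifier emits a binary flame label for its feature, the labels are fused as a weighted sum, and the sum is compared against the preset threshold. The weights and threshold below are illustrative values, not taken from the patent.

```python
# Weighted-sum fusion of the four classifier labels (step C2).

def fire_decision(labels, weights, threshold: float = 0.5) -> bool:
    """labels  : per-classifier outputs, 1 = flame, 0 = not flame
    weights : weights of the dispersion, similarity, centroid-motion
              and vision-blurring classifiers (assumed to sum to 1)."""
    weighted_sum = sum(w * y for w, y in zip(weights, labels))
    return weighted_sum >= threshold

weights = [0.2, 0.2, 0.2, 0.4]   # vision blurring weighted highest (assumption)
print(fire_decision([1, 0, 1, 1], weights))   # 0.8 >= 0.5 -> fire
print(fire_decision([0, 1, 0, 0], weights))   # 0.2 <  0.5 -> no fire
```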
Specifically, the MES (Multi-Expert System) decision mechanism is widely used in the field of image recognition. Although many multi-feature fusion methods exist, weighted classification is among the most effective. The MES partitions the feature set by segmenting the feature vectors and assigning a set of classifiers, each tailored to one feature subset and trained as an expert in that feature space. By combining the results of the individual classifiers into a joint decision, the MES performs better than the single best classifier in most cases.
Therefore, the embodiment of the invention adopts the MES multi-expert decision mechanism for multi-feature fusion and establishes the MES based on the flame vision blurring feature shown in FIG. 5, where DE denotes the colour (dispersion) expert, SE the similarity expert, VE the centroid-motion expert and IE the vision-blurring expert; the combination and decision rules of these experts determine the performance of the MES. For the k-th expert, c_k(b) is the class label assigned to the input blob b, where F denotes fire and F̄ denotes non-fire. The vote of expert k for class i is denoted σ_ik(b): it is 1 if the expert outputs that class and 0 otherwise. The weight ω_k(i) is evaluated dynamically by a Bayesian formula so as to obtain the highest recognition rate of the MES:

ω_k(i) = P(b ∈ class i | c_k(b) = i).

The final decision identifies a class by maximising the reliability of the entire MES; the reliability that blob b belongs to class i is computed from the weighted votes:

R_i(b) = Σ_k ω_k(i)·σ_ik(b).

The decision c is finally derived by maximising the reliability over the classes:

c(b) = argmax_i R_i(b).
each expert in the MES is evaluated to classify an incoming blob as a flame if a given threshold or interval condition is met, and a non-flame otherwise. Wherein, WDE(F)、WSE(F)、WVE(F) And WIE(F) Respectively the recognition precision of the flame sample based on the color, the similarity, the mass center motion and the visual virtualization of the flame of the training data,
Figure BDA0003436172720000111
and
Figure BDA0003436172720000112
the accuracy of the expert's recognition of the interference samples is determined.
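The combination rule described above can be sketched as weighted voting: each expert casts one class vote, the votes are weighted by that expert's per-class recognition accuracy, and the class with the highest total reliability wins. The accuracy numbers below are placeholders, not the trained values from the patent.

```python
# MES decision by reliability-weighted voting over the four experts.

def mes_decide(expert_votes, expert_weights):
    """expert_votes   : class label per expert, e.g. "F" (fire) or "notF"
    expert_weights : per-expert dict {class: recognition accuracy}"""
    classes = {c for w in expert_weights for c in w}
    reliability = {
        c: sum(w[c] for vote, w in zip(expert_votes, expert_weights) if vote == c)
        for c in classes
    }
    return max(reliability, key=reliability.get)

# Dispersion (DE), similarity (SE), centroid-motion (VE) and
# vision-blurring (IE) experts with assumed accuracies.
weights = [
    {"F": 0.90, "notF": 0.85},   # DE
    {"F": 0.88, "notF": 0.80},   # SE
    {"F": 0.86, "notF": 0.82},   # VE
    {"F": 0.95, "notF": 0.93},   # IE
]
print(mes_decide(["F", "notF", "F", "F"], weights))   # three weighted votes for fire
```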
Based on the concept of accuracy, the embodiment of the invention defines the true positive rate, true negative rate, false positive rate and false negative rate as follows:
(1) True positive rate = number of true positives / (number of true positives + number of false positives); a true positive means a real fire detected as fire.
(2) True negative rate = number of true negatives / (number of true negatives + number of false negatives); a true negative means an interfering object detected as non-flame.
(3) False positive rate = 1 − true positive rate; a false positive means an interfering object misjudged as flame.
(4) False negative rate = 1 − true negative rate; a false negative means a flame misjudged as an interfering object.
the multi-expert system was trained using random video at 20% of the data set, with the training results shown in table 1 below. For sample calculation, the average accuracy is defined as accuracycacy ═ (true positive rate + true negative rate)/(true positive rate + true negative rate + false positive rate + false negative rate).
TABLE 1 (training results of the individual experts; the tabular data is an image in the source and is not reproduced here)
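The rate definitions above can be checked numerically. They follow the document's own formulas (true positive rate = TP / (TP + FP), etc.); since FPR = 1 − TPR and FNR = 1 − TNR, the average accuracy reduces to (TPR + TNR) / 2. The counts below are made-up sample numbers, not data from Table 1.

```python
# Rates and average accuracy exactly as defined in the text above.

def rates(tp: int, fp: int, tn: int, fn: int):
    tpr = tp / (tp + fp)   # real fire detected as fire
    tnr = tn / (tn + fn)   # interferer detected as non-flame
    fpr = 1 - tpr
    fnr = 1 - tnr
    accuracy = (tpr + tnr) / (tpr + tnr + fpr + fnr)
    return tpr, tnr, fpr, fnr, accuracy

tpr, tnr, fpr, fnr, acc = rates(tp=96, fp=4, tn=92, fn=8)
print(round(acc, 3))   # (0.96 + 0.92) / 2 = 0.94
```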
The results of comparing the accuracy, the false positive rate and the false negative rate detected by the embodiment of the invention and other methods are shown in the following table 2.
TABLE 2 (comparison of detection results; the tabular data is an image in the source and is not reproduced here)
As shown in Table 2, the detection method of the embodiment of the invention has a false negative rate of 0.46%, and has the best average accuracy (95.86%) and false positive rate (7.82%).
Compared with the traditional method based on the combination of characteristics such as color, motion and shape, the embodiment of the invention has the following improvements:
First, regarding the false positive rate, i.e. the misjudgement rate: under outdoor conditions and strong illumination interference, a multi-feature fusion algorithm based only on video-image flame features can misjudge strong light sources and flickering vehicle lights as flames, since oncoming headlights on a road flicker and form bright regions resembling flames; the multi-feature fusion algorithm based on flame vision blurring, by contrast, accurately detects and eliminates such solid interfering objects. Second, regarding the false negative rate, i.e. the missed-detection rate: although the latest visual-image flame detection achieves a low false negative rate, methods based only on a colour model and dispersion components can miss the white flame of burning magnesium or phosphorus and the green flame of burning copper. The visual blurring of flame is unaffected by colour and motion conditions, so blurred flames produced by various fuels can be detected, further reducing the false negative rate and greatly improving fire detection accuracy.
The method steps of the embodiments of the present invention are described above. It can be seen that the embodiment extracts the suspected flame region with the RGB-HIS colour segmentation model, captures the flame vision blurring feature through binocular ranging and laser ranging, and combines it with the dispersion, similarity and centroid-motion features of the flame in the MES multi-expert decision system for feature fusion and decision analysis to obtain the fire detection result. On the one hand, interference from bulbs, reflective objects, red fire extinguishers and the like can be accurately eliminated, reducing the misjudgement rate of fire detection; on the other hand, flames produced by various fuels can be detected, reducing the missed-detection rate and improving the accuracy of fire detection.
Referring to fig. 6, an embodiment of the present invention provides a fire detection system based on flame vision virtualization, including:
the color segmentation module is used for acquiring a first video image of the area to be detected and performing color segmentation on the first video image to obtain a suspected flame area;
the first characteristic determination module is used for determining dispersion characteristics, similarity characteristics and centroid movement characteristics of a suspected flame area;
the second characteristic determination module is used for extracting the flame vision blurring characteristic of the suspected flame area through binocular ranging and laser ranging;
and the characteristic fusion module is used for constructing an MES multi-expert decision system, and performing characteristic fusion and decision analysis on the dispersion characteristic, the similarity characteristic, the centroid motion characteristic and the flame vision virtualization characteristic through the MES multi-expert decision system to obtain a fire detection result.
The contents in the above method embodiments are all applicable to the present system embodiment, the functions specifically implemented by the present system embodiment are the same as those in the above method embodiment, and the beneficial effects achieved by the present system embodiment are also the same as those achieved by the above method embodiment.
Referring to fig. 7, an embodiment of the present invention provides a fire detection apparatus based on flame vision virtualization, including:
at least one processor;
at least one memory for storing at least one program;
when executed by the at least one processor, the at least one program causes the at least one processor to implement a method for fire detection based on visual flame virtualisation as described above.
The contents in the above method embodiments are all applicable to the present apparatus embodiment, the functions specifically implemented by the present apparatus embodiment are the same as those in the above method embodiments, and the advantageous effects achieved by the present apparatus embodiment are also the same as those achieved by the above method embodiments.
Embodiments of the present invention also provide a computer-readable storage medium, in which a processor-executable program is stored, and the processor-executable program is configured to execute the above-mentioned fire detection method based on flame vision virtuality when executed by a processor.
The computer-readable storage medium of the embodiment of the invention can execute the fire detection method based on flame vision virtualization provided by the embodiment of the method of the invention, can execute any combination of the implementation steps of the embodiment of the method, and has corresponding functions and beneficial effects of the method.
The embodiment of the invention also discloses a computer program product or a computer program, which comprises computer instructions, and the computer instructions are stored in a computer readable storage medium. The computer instructions may be read by a processor of a computer device from a computer-readable storage medium, and executed by the processor to cause the computer device to perform the method illustrated in fig. 1.
In alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flow charts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed and in which sub-operations described as part of larger operations are performed independently.
Furthermore, although the present invention is described in the context of functional modules, it should be understood that, unless otherwise stated to the contrary, one or more of the above-described functions and/or features may be integrated in a single physical device and/or software module, or one or more of the functions and/or features may be implemented in a separate physical device or software module. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary for an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be understood within the ordinary skill of an engineer, given the nature, function, and internal relationship of the modules. Accordingly, those skilled in the art can, using ordinary skill, practice the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative of and not intended to limit the scope of the invention, which is defined by the appended claims and their full scope of equivalents.
The above functions, if implemented in the form of software functional units and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Further, the computer readable medium could even be paper or another suitable medium upon which the above described program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the foregoing description of the specification, reference to the description of "one embodiment/example," "another embodiment/example," or "certain embodiments/examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A fire detection method based on flame vision virtualization is characterized by comprising the following steps:
acquiring a first video image of a to-be-detected area, and performing color segmentation on the first video image to obtain a suspected flame area;
determining dispersion characteristics, similarity characteristics and centroid movement characteristics of the suspected flame area;
extracting the visual blurring characteristics of the flame in the suspected flame area through binocular ranging and laser ranging;
and constructing an MES multi-expert decision-making system, and performing feature fusion and decision analysis on the dispersion feature, the similarity feature, the centroid motion feature and the flame vision blurring feature through the MES multi-expert decision-making system to obtain a fire detection result.
2. The method according to claim 1, wherein the step of performing color segmentation on the first video image to obtain a suspected flame area includes:
determining a red component threshold and a saturation threshold;
constructing an RGB-HIS color segmentation model according to the red component threshold and the saturation threshold;
and carrying out color segmentation on the first video image through the RGB-HIS color segmentation model to obtain a suspected flame area.
3. The fire detection method based on flame vision virtuality according to claim 2, wherein the RGB-HIS color segmentation model is as follows:
R>G>B
R>RT
S>(255-R)×ST/RT
wherein R represents the red component of the target pixel point, G represents the green component, B represents the blue component, R_T represents the red component threshold, S represents the saturation of the target pixel point, and S_T represents the saturation threshold.
4. The method of claim 1, wherein the step of determining the dispersion characteristic, the similarity characteristic and the centroid motion characteristic of the suspected flame area comprises:
carrying out image analysis on the suspected flame area, and extracting dispersion characteristics of each part of the suspected flame area;
comparing the suspected flame areas of the continuous frames of the first video image to obtain similarity characteristics of the suspected flame areas;
and determining the centroid position of the suspected flame area, and determining the centroid movement characteristic of the first video image according to the centroid position.
5. The method of claim 1, wherein the step of extracting the flame vision blurring characteristics of the suspected flame area by binocular ranging and laser ranging specifically comprises:
carrying out binocular distance measurement on the suspected flame area through a binocular camera to obtain first flame depth information of the suspected flame area;
performing laser ranging on the suspected flame area through a laser measuring system to obtain second flame depth information of the suspected flame area;
and determining the visual blurring characteristics of the flame of the suspected flame area according to the difference value of the first flame depth information and the second flame depth information.
6. A fire detection method based on flame vision virtuality according to any one of claims 1 to 5, wherein the step of constructing an MES multi-expert decision-making system specifically comprises:
obtaining a dispersion classifier, a similarity classifier, a centroid motion classifier and a flame vision virtualization classifier which are trained in advance;
determining a first weight of the dispersion classifier, a second weight of the similarity classifier, a third weight of the centroid motion classifier, and a fourth weight of the flame vision virtuality classifier;
and constructing an MES multi-expert decision system according to the dispersion classifier, the similarity classifier, the centroid motion classifier, the flame vision virtualization classifier, the first weight, the second weight, the third weight and the fourth weight.
7. The fire detection method based on flame vision virtuality according to claim 6, wherein the step of performing feature fusion and decision analysis on the dispersion feature, the similarity feature, the centroid motion feature and the flame vision virtuality feature through the MES multi-expert decision system to obtain a fire detection result specifically comprises:
classifying the dispersion feature, the similarity feature, the centroid motion feature and the flame vision blurring feature according to the dispersion classifier, the similarity classifier, the centroid motion classifier and the flame vision blurring classifier to obtain a plurality of flame classification labels;
and performing feature fusion according to the first weight, the second weight, the third weight, the fourth weight and the flame classification label to obtain a first weighted sum, and further determining whether the suspected flame area has a fire according to the first weighted sum and a preset threshold value.
8. A fire detection system based on flame vision virtuality, comprising:
the color segmentation module is used for acquiring a first video image of a region to be detected and performing color segmentation on the first video image to obtain a suspected flame region;
the first characteristic determination module is used for determining dispersion characteristics, similarity characteristics and centroid movement characteristics of the suspected flame area;
the second characteristic determination module is used for extracting the flame vision blurring characteristic of the suspected flame area through binocular ranging and laser ranging;
and the characteristic fusion module is used for constructing an MES multi-expert decision-making system, and performing characteristic fusion and decision analysis on the dispersion characteristic, the similarity characteristic, the centroid motion characteristic and the flame vision blurring characteristic through the MES multi-expert decision-making system to obtain a fire detection result.
9. A fire detection device based on flame vision virtuality, comprising:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the fire detection method based on flame vision virtualisation according to any one of claims 1 to 7.
10. A computer readable storage medium having stored therein a processor executable program, wherein the processor executable program when executed by a processor is for performing a fire detection method based on flame vision virtualisation according to any of claims 1 to 7.
CN202111613224.6A 2021-12-27 2021-12-27 Fire detection method, system, device and medium based on flame vision virtualization Pending CN114386493A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111613224.6A CN114386493A (en) 2021-12-27 2021-12-27 Fire detection method, system, device and medium based on flame vision virtualization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111613224.6A CN114386493A (en) 2021-12-27 2021-12-27 Fire detection method, system, device and medium based on flame vision virtualization

Publications (1)

Publication Number Publication Date
CN114386493A true CN114386493A (en) 2022-04-22

Family

ID=81198528

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111613224.6A Pending CN114386493A (en) 2021-12-27 2021-12-27 Fire detection method, system, device and medium based on flame vision virtualization

Country Status (1)

Country Link
CN (1) CN114386493A (en)


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115359247A (en) * 2022-08-30 2022-11-18 新创碳谷控股有限公司 Flame detection method and device based on dynamic characteristics and storage medium
CN116977634A (en) * 2023-07-17 2023-10-31 应急管理部沈阳消防研究所 Fire smoke detection method based on laser radar point cloud background subtraction
CN116977634B (en) * 2023-07-17 2024-01-23 应急管理部沈阳消防研究所 Fire smoke detection method based on laser radar point cloud background subtraction
CN117152474A (en) * 2023-07-25 2023-12-01 华能核能技术研究院有限公司 High-temperature gas cooled reactor flame identification method based on K-means clustering algorithm
CN117593588A (en) * 2023-12-14 2024-02-23 小黄蜂智能科技(广东)有限公司 Intelligent identification method and device for flame image

Similar Documents

Publication Publication Date Title
CN114386493A (en) Fire detection method, system, device and medium based on flame vision virtualization
Gaur et al. Video flame and smoke based fire detection algorithms: A literature review
CN106845443B (en) Video flame detection method based on multi-feature fusion
WO2022105609A1 (en) High-altitude parabolic object detection method and apparatus, computer device, and storage medium
KR101237089B1 (en) Forest smoke detection method using random forest classifier method
Premal et al. Image processing based forest fire detection using YCbCr colour model
CN106296721B (en) Object aggregation detection method and device based on stereoscopic vision
Wang et al. A new fire detection method using a multi-expert system based on color dispersion, similarity and centroid motion in indoor environment
CN102881106B (en) Dual-detection forest fire identification system through thermal imaging video and identification method thereof
TWI775777B (en) Optical articles and systems interacting with the same
CN109711322A (en) A kind of people's vehicle separation method based on RFCN
CN110544271B (en) Parabolic motion detection method and related device
CN109074713B (en) Smoke detection device, method for detecting smoke of fire, and storage medium
CN109741565B (en) Coal mine fire disaster recognition system and method
CN108363992B (en) Fire early warning method for monitoring video image smoke based on machine learning
Chen et al. Fire detection using spatial-temporal analysis
Van den Broek et al. Detection and classification of infrared decoys and small targets in a sea background
US11823550B2 (en) Monitoring device and method for monitoring a man-overboard in a ship section
CN106815567B (en) Flame detection method and device based on video
Jakovčević et al. Visual spatial-context based wildfire smoke sensor
CN108648409B (en) Smoke detection method and device
CN112613483A (en) Outdoor fire early warning method based on semantic segmentation and recognition
Lei et al. Early fire detection in coalmine based on video processing
JP4568836B2 (en) Real-time pupil position detection system
Steffens et al. A texture driven approach for visible spectrum fire detection on mobile robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination