CN113537092A - Smoke and fire detection method, device, equipment and storage medium - Google Patents
- Publication number: CN113537092A
- Application number: CN202110823728.4A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Classifications
- G06T7/246 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06T2207/10016 — Image acquisition modality: video; image sequence
- G06T2207/10024 — Image acquisition modality: color image
- G06T2207/20036 — Morphological image processing
- G06T2207/30232 — Subject of image: surveillance
Abstract
An embodiment of the invention discloses a smoke and fire detection method, device, equipment and storage medium. The method comprises the following steps: acquiring consecutive frames of video images; determining a candidate smoke and fire region in each frame of video image according to the color distribution of that frame; determining a pixel motion region in each frame of video image according to the changes across the consecutive frames; and determining a target smoke and fire region in each frame of video image according to the candidate smoke and fire region and the pixel motion region. By fully considering the correlation between frames in the video, the technical scheme avoids false alarms caused by individual anomalous frames and improves the accuracy of real-time smoke and fire detection based on video surveillance.
Description
Technical Field
Embodiments of the invention relate to the technical field of image processing, and in particular to a smoke and fire detection method, device, equipment and storage medium.
Background
In modern society, fire prevention is a critical aspect of safety protection. Surveillance cameras are widely used in industrial production areas, public spaces and many natural environments for smoke and fire monitoring to ensure safety.
At present, machine learning or deep learning methods are mainly used to process and analyse the collected video images, extracting smoke and fire features from each frame to realise smoke and fire detection. However, when smoke and fire detection relies on per-frame image features alone, individual anomalous frames can generate false alarms, reducing the accuracy of the detection results.
Disclosure of Invention
The embodiments of the invention provide a smoke and fire detection method, device, equipment and storage medium, aiming to improve the accuracy of smoke and fire detection.
In a first aspect, an embodiment of the present invention provides a smoke and fire detection method, including:
acquiring consecutive frames of video images;
determining a candidate smoke and fire region in each frame of video image according to the color distribution of that frame;
determining a pixel motion region in each frame of video image according to the changes across the consecutive frames; and
determining a target smoke and fire region in each frame of video image according to the candidate smoke and fire region and the pixel motion region.
Optionally, after determining the target smoke and fire region in each frame of video image, the method further includes: performing a morphological operation on the target smoke and fire region and then extracting the region boundary of the target smoke and fire region; and adding the region boundary to the matching frame of video image.
In this embodiment, the morphological operation suppresses noise in the determined target smoke and fire region, making the region more accurate; adding the region boundary at the corresponding position in the corresponding video image makes the detection result explicit and facilitates accurate localisation of the smoke and fire.
Optionally, determining the candidate smoke and fire region in each frame of video image according to its color distribution includes: determining flame pixels and/or smoke pixels in each frame of video image according to the color distribution of that frame; and determining the candidate smoke and fire region in each frame from those flame pixels and/or smoke pixels.
In this embodiment, for each frame of video image, flame pixels and/or smoke pixels are first identified from the color distribution, and the candidate smoke and fire region is then derived from them, which improves the accuracy of the candidate region.
Optionally, determining the flame pixels in each frame of video image according to its color distribution includes: converting each frame from the red-green-blue (RGB) space to the hue-saturation-value (HSV) space; and selecting as flame pixels those pixels that satisfy both a preset HSV constraint and a first RGB constraint.
In this embodiment, the video image is converted from the RGB space to the HSV space and the pixels are screened against the HSV constraint and the first RGB constraint, which yields more accurate flame pixels.
Optionally, determining the smoke pixels in each frame of video image according to its color distribution includes: converting each frame from the RGB space to the hue-saturation-intensity (HSI) space; and selecting as smoke pixels those pixels that satisfy both a preset second RGB constraint and an HSI constraint.
In this embodiment, the video image is converted from the RGB space to the HSI space and the pixels are screened against the second RGB constraint and the HSI constraint, which yields more accurate smoke pixels.
Optionally, determining the pixel motion region in each frame of video image according to the changes across the consecutive frames includes: determining target pixels undergoing irregular motion in each frame according to the consecutive frames; and determining the pixel motion region in each frame from those target pixels.
In this embodiment, for each frame of video image, pixels undergoing irregular motion are identified from the consecutive frames, and the pixel motion region is then derived from them; because the correlation between frames is taken into account, the accuracy of the pixel motion region improves.
Optionally, determining the target pixels undergoing irregular motion in each frame according to the consecutive frames includes: computing the cumulative variation of each pixel across the consecutive frames; and selecting as target pixels those pixels whose cumulative variation exceeds a preset threshold.
In this embodiment, screening pixels by cumulative variation yields the pixels undergoing irregular motion as the target pixels, which improves the accuracy of irregular-motion detection.
In a second aspect, an embodiment of the present invention further provides a smoke and fire detection device, including:
an image acquisition module, configured to acquire consecutive frames of video images;
a candidate region module, configured to determine a candidate smoke and fire region in each frame of video image according to the color distribution of that frame;
a motion region module, configured to determine a pixel motion region in each frame of video image according to the changes across the consecutive frames; and
a target region module, configured to determine a target smoke and fire region in each frame of video image according to the candidate smoke and fire region and the pixel motion region.
In a third aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method according to any embodiment of the invention.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the method according to any embodiment of the present invention.
According to the technical scheme, when smoke and fire detection is performed, color analysis is applied to each frame of the acquired consecutive video images to determine the candidate smoke and fire region in each frame; the pixel motion region in each frame is determined from the changes across the consecutive frames; and the target smoke and fire region in each frame is then determined from its candidate smoke and fire region and pixel motion region. Because the correlation between frames in the video is fully considered, false alarms caused by individual anomalous frames are avoided, and the accuracy of real-time smoke and fire detection based on video surveillance is improved.
Drawings
FIG. 1 is a flow chart of a smoke and fire detection method according to a first embodiment of the present invention;
FIG. 2 is a flow chart of a smoke and fire detection method according to a second embodiment of the present invention;
FIG. 3 is a flow chart of a smoke and fire detection method in a third embodiment of the present invention;
FIG. 4 is a flow chart of a smoke and fire detection method in a fourth embodiment of the present invention;
FIG. 5 is a graph showing an example of the result of flame detection in the fourth embodiment of the present invention;
FIG. 6 is a schematic structural view of a smoke and fire detection device according to a fifth embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computer device in the sixth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a smoke and fire detection method according to an embodiment of the present invention. The method may be performed by a smoke and fire detection device, which can be implemented in hardware and/or software and is typically integrated in a computer device.
As shown in fig. 1, the method for detecting smoke and fire provided by the present embodiment includes the following steps:
and S110, acquiring continuous multi-frame video images.
The video image refers to an image obtained by decoding a video stream. The video stream can be collected by a camera device installed at any position and is used for fire safety monitoring.
After the video stream is received, it is decoded to obtain each frame of video image; each frame may be stored using a vector data structure. Meanwhile, each frame may be preprocessed to remove irrelevant information, recover useful information, enhance the detectability of relevant information and simplify the data as far as possible. For example, each frame may undergo smoothing, filtering, edge detection and similar preprocessing, which this embodiment does not specifically limit.
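As a loose illustration of the smoothing step (not part of the original disclosure), a mean filter over each pixel's neighbourhood can be sketched with numpy; the function name, kernel size and edge handling are all assumptions:

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Smooth a 2-D image with a k x k mean filter.

    Border pixels use an edge-replicated neighbourhood.
    """
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    # Accumulate each shifted copy of the image, then divide by window size.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

frame = np.array([[0, 0, 0],
                  [0, 9, 0],
                  [0, 0, 0]], dtype=np.uint8)
smoothed = box_blur(frame)  # the isolated bright pixel is spread over its window
```

In practice a library routine such as OpenCV's Gaussian blur would be used instead of this hand-rolled filter.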
S120, determining the candidate smoke and fire region in each frame of video image according to the color distribution of each frame.
The candidate smoke and fire region is a region of the video image suspected of containing smoke and/or flame and requiring further confirmation. In this embodiment, candidate-region identification is performed for every frame; for a given frame it may equally be determined, from its color distribution, that no candidate region exists.
The color distribution is determined by the pixel value of each pixel in the video image. Each frame is analysed with a pixel-color detection method to determine whether a candidate smoke and fire region exists. Optionally, the color features of smoke and fire regions are determined first, and each frame is then analysed against those features to decide whether such a region exists in it.
As an optional implementation, candidate pixels are identified by pixel-color detection: the color features of smoke and fire pixels are obtained, and each pixel in each frame is checked against those features to find the candidate pixels of that frame. The region formed by the candidate pixels can then be taken as the candidate smoke and fire region.
S130, determining the pixel motion region in each frame of video image according to the changes across the consecutive frames.
The pixel motion region is the pixel region occupied by a moving object in each frame of a video composed of consecutive frames. For example, a burning flame is a moving object under video surveillance: the pixel region it occupies differs from frame to frame, and in each frame that region is the pixel motion region of the corresponding frame.
The pixel motion region in the last frame of the consecutive frames is determined by combining the changes across all of them: motion detection is performed on the consecutive frames according to the correlation between them, from which the pixel motion region in the last frame is obtained.
Since smoke and fire (both smoke and flame) can be regarded as objects undergoing irregular motion, irregular-motion detection can be performed on the consecutive frames according to the correlation between them to determine the pixel motion region corresponding to smoke and fire in the last frame.
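The cumulative-variation screening described for irregular motion can be sketched as follows; the threshold value, function name and test frames are illustrative assumptions, not values from the patent:

```python
import numpy as np

def motion_mask(frames, thresh=30.0):
    """Sum per-pixel absolute changes across consecutive frames and keep
    pixels whose cumulative variation exceeds the threshold."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for prev, cur in zip(frames, frames[1:]):
        acc += np.abs(cur.astype(np.float64) - prev.astype(np.float64))
    return acc > thresh

# A flickering pixel accumulates large changes; a nearly static one does not.
frames = [np.zeros((2, 2)),
          np.array([[40.0, 0.0], [0.0, 0.0]]),
          np.array([[0.0, 0.0], [0.0, 5.0]])]
mask = motion_mask(frames)
```

The mask for the last frame marks pixel (0, 0), whose value swung up and back down, while the pixel that barely changed stays unmarked.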
S140, determining the target smoke and fire region in each frame of video image according to the candidate smoke and fire region and the pixel motion region.
The target smoke and fire region is the region of the video image confirmed to contain smoke or fire, and constitutes the detection result.
For each frame of video image, if both a candidate smoke and fire region and a pixel motion region are identified, their intersection is taken as the target smoke and fire region.
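The intersection step can be shown directly with boolean masks; the two example masks below are made up for illustration:

```python
import numpy as np

# Hypothetical per-frame masks: True marks a flagged pixel.
candidate = np.array([[True, True],
                      [False, True]])  # colour-based candidate smoke/fire region
motion = np.array([[True, False],
                   [False, True]])     # irregular-motion region
target = candidate & motion            # target region: pixels flagged by both
```

Only pixels flagged by both the colour analysis and the motion analysis survive, which is what suppresses single-frame false alarms.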
Further, as an optional implementation, after determining the target smoke and fire region in each frame of video image, the method may further include: performing a morphological operation on the target smoke and fire region and then extracting its region boundary; and adding the region boundary to the matching frame of video image.
Morphological operations are image-processing operations that change the morphology of an image.
In this embodiment, after the target smoke and fire region is determined, morphological operations such as erosion and dilation are applied to it to suppress noise within the region.
As an optional implementation, the erosion and dilation of the target smoke and fire region may use a preset structuring element; for example, the structuring element may be of size 5 × 5. Illustratively, the erosion and dilation of the target smoke and fire region may be applied as:
(S ⊖ M) ⊕ M
where S is the target smoke and fire region, M is the preset structuring element, and ⊖ and ⊕ denote the erosion and dilation operations respectively. The result of this morphological operation with the preset structuring element is the morphologically processed target smoke and fire region.
Furthermore, the region boundary of the morphologically processed target smoke and fire region can be extracted and added at the corresponding position in each matching frame of video image, thereby marking the smoke and fire region in every frame. This embodiment does not specifically limit the boundary-extraction technique.
In this embodiment, the morphological operation suppresses noise in the determined target smoke and fire region, making the region more accurate; adding the region boundary at the corresponding position in the corresponding video image makes the detection result explicit and facilitates accurate localisation of the smoke and fire.
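A minimal numpy sketch of the opening described above (erosion followed by dilation with a small structuring element). In practice a library routine such as OpenCV's `cv2.morphologyEx` would be used; the hand-rolled version below, with a 3 × 3 square element, is illustrative only:

```python
import numpy as np

def erode(mask: np.ndarray, k: int = 3) -> np.ndarray:
    """Binary erosion: a pixel survives only if its whole k x k window is set."""
    pad = k // 2
    p = np.pad(mask, pad, mode="constant", constant_values=False)
    out = np.ones(mask.shape, dtype=bool)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask: np.ndarray, k: int = 3) -> np.ndarray:
    """Binary dilation: a pixel is set if any pixel in its k x k window is set."""
    pad = k // 2
    p = np.pad(mask, pad, mode="constant", constant_values=False)
    out = np.zeros(mask.shape, dtype=bool)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True   # solid 3x3 blob: survives the opening
mask[0, 6] = True       # isolated noise pixel: removed by the opening
opened = dilate(erode(mask))  # opening = erosion then dilation
```

The opening removes the isolated noise pixel while restoring the solid blob, which is exactly the noise-suppression effect described above.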
According to the technical scheme, when smoke and fire detection is performed, color analysis is applied to each frame of the acquired consecutive video images to determine the candidate smoke and fire region in each frame; the pixel motion region in each frame is determined from the changes across the consecutive frames; and the target smoke and fire region in each frame is then determined from its candidate smoke and fire region and pixel motion region. Because the correlation between frames in the video is fully considered, false alarms caused by individual anomalous frames are avoided, and the accuracy of real-time smoke and fire detection based on video surveillance is improved.
Example two
Fig. 2 is a flowchart of a smoke and fire detection method provided in a second embodiment of the present invention, refined on the basis of the foregoing embodiment, in which determining the candidate smoke and fire region in each frame of video image according to its color distribution may specifically be:
determining flame pixels and/or smoke pixels in each frame of video image according to the color distribution of that frame; and determining the candidate smoke and fire region in each frame from those flame pixels and/or smoke pixels.
As shown in fig. 2, the method for detecting smoke and fire provided by the present embodiment includes the following steps:
and S210, acquiring continuous multi-frame video images.
S220, determining flame pixels and/or smoke pixels in each frame of video image according to the color distribution of each frame.
In this embodiment, the candidate pixels are specifically flame pixels and/or smoke pixels. Flame pixels are pixels of suspected flame in the video image; smoke pixels are pixels of suspected smoke.
Illustratively, the color features of flame pixels and of smoke pixels are determined separately; each pixel in each frame can then be analysed against the flame color features to find the flame pixels of that frame, and against the smoke color features to find its smoke pixels.
It should be noted that a video image may contain only flame pixels, only smoke pixels, or both.
As an optional implementation, determining the flame pixels in each frame of video image according to its color distribution may specifically be:
converting each frame from the RGB (red, green, blue) space to the HSV (hue, saturation, value) space; and selecting as flame pixels those pixels that satisfy both a preset HSV constraint and a first RGB constraint.
The RGB color scheme is a color standard in the industry, and various colors are obtained by changing three color channels of red (R), green (G) and blue (B) and superimposing them on each other.
The HSV color Model is a color space created according to the intuitive characteristics of colors, and is also called a hexagonal cone Model (Hexcone Model), and the parameters of the colors in the Model are respectively: hue (H), saturation (S), lightness (V).
The video image obtained by decoding the video stream is an RGB image, when the flame pixel points in the video image are identified, the video image is firstly converted from an RGB space to an HSV space, and the conversion process is as follows:
max=max(R,G,B);
min=min(R,G,B);
V=max(R,G,B);
if(max≠0) S=(max-min)/max;
if(max=0) S=0;
if(R=max) H=(G-B)/(max-min)*60;
if(G=max) H=120+(B-R)/(max-min)*60;
if(B=max) H=240+(R-G)/(max-min)*60;
where max (R, G, B) represents the maximum value in R, G, B, and min (R, G, B) represents the minimum value in R, G, B.
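The per-pixel conversion above translates directly to code. This sketch assumes R, G and B normalised to [0, 1] and returns H in degrees; the handling of the achromatic case (max = min), which the formulas above leave implicit, is an assumption:

```python
def rgb_to_hsv(r: float, g: float, b: float):
    """Convert one RGB pixel (channels in [0, 1]) to HSV, with H in degrees,
    following the per-pixel formulas given in the text."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx
    s = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:
        h = 0.0  # achromatic pixel: hue is undefined, taken as 0 here
    elif mx == r:
        h = (g - b) / (mx - mn) * 60
    elif mx == g:
        h = 120 + (b - r) / (mx - mn) * 60
    else:
        h = 240 + (r - g) / (mx - mn) * 60
    return h % 360, s, v

h, s, v = rgb_to_hsv(1.0, 0.5, 0.0)  # a flame-like orange: H = 30 degrees
```
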
HSV constraints, which refer to constraints related to hue (H), saturation (S), value (V) in HSV space; the first RGB constraint refers to the constraint related to red (R), green (G), and blue (B) in the RGB space.
For example, the HSV constraints are: h is not less than H2 and not more than H1;
s1≤S≤s2;
v1≤V≤v2。
wherein h1 and h2 are angle constants for constraining hue values; s1 and s2 are numerical constants for constraining saturation values; v1 and v2 are numerical constants used to constrain the lightness values.
For another example, the first RGB constraint is: r (x, y)>Rmean;
R(x,y)>G(x,y)>B(x,y);
a1G(x,y)-a2≤R(x,y)≤a3G(x,y)+a4;
a5B(x,y)+a6≤G(x,y)≤a7B(x,y)+a8;
Wherein a1, a2, a3, a4, a5, a6, a7 and a8 are numerical constants used to constrain the relation between R(x, y) and G(x, y) and the relation between G(x, y) and B(x, y); Rmean is the average of R(x, y) over all pixels, i.e. Rmean = (1/K) Σ R(x, y), where K is the total number of pixels, and is used to constrain R(x, y).
As a specific embodiment, the preset HSV constraint and the first RGB constraint may be:
0≤H≤60°
0≤S≤0.2
127≤V≤255
R(x,y)>G(x,y)>B(x,y)
1.1403G(x,y)-0.0759≤R(x,y)≤-0.9889G(x,y)+0.9913
0.8459B(x,y)+0.0482≤G(x,y)≤-0.4608B(x,y)+0.4964
Each pixel in the video image is checked against the preset HSV constraint and the first RGB constraint: a pixel satisfying both is taken as a flame pixel; a pixel that does not is not.
In this embodiment, the video image is converted from the RGB space to the HSV space and the pixels are screened against the HSV constraint and the first RGB constraint, which yields more accurate flame pixels.
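The screening above amounts to a vectorised boolean mask. The thresholds below are copied from the example embodiment; note that the published S and V ranges use different scales (S in [0, 1], V in [0, 255]) while the RGB constants suggest channels in [0, 1], so the normalisation here is an assumption and the thresholds may need tuning in practice:

```python
import numpy as np

def flame_mask(rgb: np.ndarray, hsv: np.ndarray) -> np.ndarray:
    """Candidate flame pixels. Assumes R, G, B in [0, 1]; H in degrees,
    S in [0, 1], V in [0, 255], matching the example constraints in the text."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    hsv_ok = (h >= 0) & (h <= 60) & (s <= 0.2) & (v >= 127) & (v <= 255)
    rgb_ok = ((r > r.mean()) & (r > g) & (g > b)
              & (1.1403 * g - 0.0759 <= r) & (r <= -0.9889 * g + 0.9913)
              & (0.8459 * b + 0.0482 <= g) & (g <= -0.4608 * b + 0.4964))
    return hsv_ok & rgb_ok

# A saturated blue pixel fails both constraint groups and is rejected.
rgb = np.array([[[0.1, 0.2, 0.9]]])
hsv = np.array([[[230.0, 0.9, 230.0]]])
mask = flame_mask(rgb, hsv)
```
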
As an optional implementation, determining the smoke pixels in each frame of video image according to its color distribution may specifically be:
converting each frame from the RGB space to the HSI (hue, saturation, intensity) space; and selecting as smoke pixels those pixels that satisfy both a preset second RGB constraint and an HSI constraint.
HSI is a digital-image model that reflects the way the human visual system perceives color, describing it by three basic quantities: hue (H), saturation (S) and intensity (I). The I component is independent of the color information of the image, while the H and S components are closely related to the way people perceive color.
The video image obtained by decoding the video stream is an RGB image, when the smoke pixel points in the video image are identified, the video image is firstly converted into an HSI space from an RGB space, and the conversion process is as follows:
I=(R+G+B)/3
the second RGB constraint refers to the constraint related to red (R), green (G), and blue (B) in the RGB space. The HSI constraint refers to a constraint related to luminance (I) in the HSI space.
For example, the second RGB constraint is max (R, G, B) -min (R, G, B) < a; wherein a is a numerical constant used for constraining the magnitude relation between max (R, G, B) and min (R, G, B).
As another example, the HSI constraint is: k1 ≤ I ≤ k2; wherein k1 and k2 are numerical constants used to constrain the intensity value.
As a specific embodiment, in the preset second RGB constraint and HSI constraint, a may take a value between 5 and 20, k1 a value between 80 and 150, and k2 a value between 190 and 225.
Each pixel in the video image is checked against the preset second RGB constraint and the HSI constraint: a pixel satisfying both is taken as a smoke pixel; a pixel that does not is not.
In this embodiment, the video image is converted from the RGB space to the HSI space and the pixels are screened against the second RGB constraint and the HSI constraint, which yields more accurate smoke pixels.
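The smoke-pixel test above (a near-grey pixel of medium intensity) can be vectorised in the same way; the specific constants are picked from the ranges given in the embodiment and are otherwise assumptions:

```python
import numpy as np

def smoke_mask(rgb: np.ndarray, a: float = 15, k1: float = 100, k2: float = 220) -> np.ndarray:
    """Candidate smoke pixels, with channel values in 0..255:
    near-grey (max - min channel spread below a) and intensity
    I = (R + G + B) / 3 inside [k1, k2]."""
    rgb = rgb.astype(np.float64)
    spread = rgb.max(axis=-1) - rgb.min(axis=-1)  # second RGB constraint
    intensity = rgb.mean(axis=-1)                 # HSI constraint on I
    return (spread < a) & (intensity >= k1) & (intensity <= k2)

pixels = np.array([[[128, 130, 126]],   # grey, smoke-like: accepted
                   [[255, 40, 40]]])    # saturated red: rejected
mask = smoke_mask(pixels)
```
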
And S230, determining the region to be subjected to smoke and fire in each frame of video image according to the flame pixel points and/or smoke pixel points in each frame of video image.
For each frame of video image: if only flame pixel points are identified, the region to be subjected to smoke and fire in the frame of video image is determined according to the flame pixel points, namely, one or more regions formed by the flame pixel points are taken as the region to be subjected to smoke and fire; if only smoke pixel points are identified, the region to be subjected to smoke and fire is determined according to the smoke pixel points, namely, one or more regions formed by the smoke pixel points are taken as the region to be subjected to smoke and fire; if both flame pixel points and smoke pixel points are identified, the region to be subjected to smoke and fire is determined according to the flame pixel points and the smoke pixel points, namely, one or more regions formed by the flame pixel points and the smoke pixel points are taken as the region to be subjected to smoke and fire.
And S240, determining a pixel motion area in each frame of video image according to the change condition of the continuous multi-frame video image.
And S250, determining a target firework area in each frame of video image according to the to-be-firework area and the pixel motion area.
For those parts of this embodiment that are not explained in detail, reference is made to the aforementioned embodiments, which are not repeated herein.
In the technical scheme, for each frame of video image, flame pixel points and/or smoke pixel points are first identified according to the color distribution, and the region to be subjected to smoke and fire is then determined according to these pixel points, which improves the accuracy of that region. Because the association information between frames in the video is fully considered, the accuracy of the target firework area determined based on the pending firework area and the pixel motion region is improved, the false alarm problem caused by certain special frame images is avoided, and the accuracy of real-time smoke and fire detection based on video monitoring is also improved.
EXAMPLE III
Fig. 3 is a flowchart of a smoke and fire detection method according to a third embodiment of the present invention, which is embodied on the basis of the foregoing embodiments, wherein the determining a pixel motion region in each frame of video image according to a change condition of consecutive frames of video images may specifically be:
determining target pixel points which move irregularly in each frame of video image according to the continuous multi-frame video image; and determining a pixel motion area in each frame of video image according to the target pixel point.
As shown in fig. 3, the method for detecting smoke and fire provided by the present embodiment includes the following steps:
S310, acquiring continuous multi-frame video images.
S320, determining the region to be fireworks in each frame of video image according to the color distribution of each frame of video image.
S330, determining target pixel points which move irregularly in each frame of video image according to the continuous multi-frame video image.
In this embodiment, the flame and smoke are considered as objects in random motion. Considering that the motion detection needs to be determined based on continuous multi-frame video images, aiming at each frame of video image, based on the incidence relation between the continuous multi-frame video images before the frame of video image, the target pixel points which move irregularly are screened. The detection of the irregular motion can be realized based on the variation amplitude of the pixel values of the pixel points in the continuous multi-frame video images.
As an optional implementation manner, determining a target pixel point that moves irregularly in each frame of video image according to consecutive multiple frames of video images may specifically be:
calculating the accumulated variation of each pixel point in each frame of video image according to the continuous multi-frame video image; and screening pixel points with accumulated variation larger than a preset threshold value in each frame of video image as the target pixel points.
The accumulated variation may be an accumulated amount determined according to the variation of a pixel point between adjacent frames over a plurality of consecutive frames, and is used to indicate the change of the pixel point across the consecutive multi-frame video images. Illustratively, the larger the accumulated variation of a pixel point, the more continuously the pixel point changes in the consecutive multi-frame video images; for example, its variation amplitude between every two adjacent frames is large.
After determining the accumulated variation of a certain pixel point in a certain frame of video image, judging whether the accumulated variation exceeds a preset threshold, if so, taking the pixel point as the target pixel point, namely, a motion pixel point, and if not, continuing to judge the next pixel point.
Further, as a specific implementation manner, a function may be designed to describe a change condition of a pixel value of a pixel point in two adjacent frames of video images, and then a function is designed to count an accumulation condition of a pixel value change of the pixel point, that is, the accumulated variation.
For example, assume that P(x, y, k) is the pixel value at position coordinate (x, y) in the k-th frame video image; the change of a pixel point between adjacent frames is determined first. Setting the change threshold to L, the function FD(x, y, k) describing the change of the pixel value of a pixel point between two adjacent frames of video images may be the binary frame difference: FD(x, y, k) = 1 if |P(x, y, k) − P(x, y, k−1)| > L, and FD(x, y, k) = 0 otherwise.
The cumulative function H_T(x, y, k), which counts the accumulation of pixel value changes of a pixel point (i.e., the accumulated variation), is then defined in terms of FD. H_T(x, y, k) can be understood as a panoramic integral function; its independent variables are x, y and k, where (x, y) is the coordinate of a pixel point, k is the frame index of the video image in the video, and b1 and b2 are numerical constants.
Assume that the preset threshold for evaluating the H_T(x, y, k) function value is 20 (i.e., the preset threshold of the accumulated variation is 20). If the H_T(x, y, k) value of a certain pixel point in a certain frame of video image exceeds the preset threshold 20, the pixel point can be used as a target pixel point, namely a motion pixel point; otherwise, the next pixel point is judged.
In the embodiment, the pixel points are screened according to the accumulated variation, and each pixel point performing irregular motion is obtained and used as a target pixel point, so that the accuracy of irregular motion detection is improved.
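The exact form of H_T appears in the patent only as a figure. As a hedged sketch, assume FD is the binary frame difference with threshold L described above, and that H_T is a weighted running sum H_T(k) = b1·H_T(k−1) + b2·FD(k), where b1 and b2 stand in for the patent's unnamed numerical constants:

```python
import numpy as np

def motion_pixels(frames, L=15, b1=1.0, b2=1.0, threshold=20):
    """frames: sequence of 2-D grayscale arrays of equal shape.
    Returns a bool mask of target (irregularly moving) pixel points.

    The recursive form of the accumulator is an assumption; the patent
    only states that H_T accumulates the per-frame changes FD."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    H = np.zeros_like(frames[0])
    for prev, cur in zip(frames, frames[1:]):
        FD = (np.abs(cur - prev) > L).astype(np.float64)  # change between adjacent frames
        H = b1 * H + b2 * FD                              # accumulated variation H_T
    return H > threshold
```

With b1 = b2 = 1 this reduces to counting how many adjacent-frame pairs showed a significant change, which matches the "exceeds a preset threshold of 20" screening described above.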
And S340, determining a pixel motion area in each frame of video image according to the target pixel point.
For each frame of video image, the pixel motion region in the frame of video image is determined according to the target pixel points, namely, one or more regions formed by the target pixel points are taken as the pixel motion region.
And S350, determining a target firework area in each frame of video image according to the to-be-firework area and the pixel motion area.
For those parts of this embodiment that are not explained in detail, reference is made to the aforementioned embodiments, which are not repeated herein.
In the technical scheme, for each frame of video image, the irregularly moving target pixel points are first determined according to the consecutive multi-frame video images, and the pixel motion region is then determined according to the target pixel points. Because the association information between frames in the video is considered, the accuracy of the pixel motion region is improved, which further improves the accuracy of the target firework area determined based on the pending firework area and the pixel motion region, avoids the false alarm problem caused by certain special frame images, and also improves the accuracy of real-time smoke and fire detection based on video monitoring.
Example four
Fig. 4 is a flowchart of a smoke and fire detection method according to a fourth embodiment of the present invention, and this embodiment provides a specific implementation manner based on the foregoing embodiment.
As shown in fig. 4, the method for detecting smoke and fire provided by the present embodiment includes the following steps:
and S410, acquiring continuous multi-frame video images.
And S420, determining firework pixel points in each frame of video image according to the color distribution of each frame of video image.
Wherein, the firework pixel includes flame pixel and/or smog pixel.
After converting the video image from the RGB space to the HSV space, screening flame pixel points meeting the following conditions:
0≤H≤60°
0≤S≤0.2
127≤V≤255
R(x,y)>G(x,y)>B(x,y)
1.1403G(x,y)-0.0759≤R(x,y)≤-0.9889G(x,y)+0.9913
0.8459B(x,y)+0.0482≤G(x,y)≤-0.4608B(x,y)+0.4964
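A sketch of the flame-pixel check for a single pixel, using Python's standard `colorsys` module for the HSV conversion. The linear RGB inequalities are applied to the plain [0, 1] channel values exactly as listed; whether the patent intends plain or chromaticity-normalized values is not stated, so that reading is an assumption:

```python
import colorsys

def is_flame_pixel(r, g, b):
    """r, g, b in [0, 1]. True only if the pixel satisfies every condition above."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)  # h in [0, 1), s and v in [0, 1]
    if not 0 <= h * 360 <= 60:              # 0 <= H <= 60 degrees
        return False
    if not 0 <= s <= 0.2:                   # 0 <= S <= 0.2
        return False
    if not 127 <= v * 255 <= 255:           # V expressed on the 0..255 scale
        return False
    if not r > g > b:                       # R(x,y) > G(x,y) > B(x,y)
        return False
    if not 1.1403 * g - 0.0759 <= r <= -0.9889 * g + 0.9913:
        return False
    if not 0.8459 * b + 0.0482 <= g <= -0.4608 * b + 0.4964:
        return False
    return True
```

The conditions are checked in sequence, so the cheap range tests short-circuit before the linear inequalities are evaluated.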
after converting the video image from the RGB space to the HSI space, screening smoke pixel points meeting the following conditions:
wherein the value of a is between 5 and 20, the value of k1 is between 80 and 150, and the value of k2 is between 190 and 225.
And S430, determining target pixel points which move irregularly in each frame of video image according to the continuous multi-frame video images.
Assume that P(x, y, k) is the pixel value at position coordinate (x, y) in the k-th frame video image; the change of a pixel point between adjacent frames is determined first. Setting the change threshold to L, the function FD(x, y, k) describing the change of the pixel value of a pixel point between two adjacent frames of video images may be the binary frame difference: FD(x, y, k) = 1 if |P(x, y, k) − P(x, y, k−1)| > L, and FD(x, y, k) = 0 otherwise.
The cumulative function H_T(x, y, k), which counts the accumulation of pixel value changes of a pixel point, is then defined in terms of FD. H_T(x, y, k) can be understood as a panoramic integral function; its independent variables are x, y and k, where (x, y) is the coordinate of a pixel point, k is the frame index of the video image in the video, and b1 and b2 are numerical constants.
In this example, the pixel points whose H_T(x, y, k) value exceeds 20 are taken as the target pixel points performing irregular motion.
And S440, in each frame of video image, determining a target firework area according to the firework pixel points and the target pixel points.
When the target firework area is determined in each frame of video image, one or more areas formed by all pixel points that are both firework pixel points and target pixel points can be used as the target firework area.
Alternatively, in each frame of video image, the pixel value of each pixel in the target fire area may be set to 1, and the pixel values of other pixels may be set to 0.
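The combination described above can be sketched as a per-pixel AND, with the resulting mask encoded as 1 inside the target region and 0 elsewhere (the function and argument names are illustrative):

```python
import numpy as np

def target_region_mask(pyro_mask, motion_mask):
    """pyro_mask, motion_mask: H×W bool arrays (firework-colored pixels,
    irregularly moving pixels). A pixel belongs to the target firework
    region only if it appears in both masks; encoded as uint8 1/0."""
    return (pyro_mask & motion_mask).astype(np.uint8)
```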
S450, after morphological operation is conducted on the target firework area, the area boundary of the target firework area is extracted, and the area boundary is correspondingly added to each frame of matched video image.
Alternatively, the erosion and dilation operations on the target pyrotechnic region may be performed using a predetermined structural element; for example, the size of the structural element may be set to 5 × 5. Illustratively, the erosion and dilation operations on the target pyrotechnic region may be achieved based on:
wherein S is the target firework area, M is the preset structural element, and ⊖ and ⊕ respectively represent the erosion operation and the dilation operation. The morphological operation is performed on the target firework area by using the preset structural element, and the obtained result is the morphologically processed target firework area.
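A plain-NumPy sketch of this morphological step with a 5 × 5 structural element: an opening (erosion followed by dilation) to clean the mask, then a simple boundary extraction S − (S ⊖ M). The use of an opening and this particular boundary definition are assumptions; the patent only states that erosion and dilation are applied and that the region boundary is then extracted:

```python
import numpy as np

def erode(mask, size=5):
    """Binary erosion: a pixel survives only if its whole size×size
    neighbourhood is set (zero padding erodes the image borders)."""
    pad = size // 2
    padded = np.pad(mask, pad, mode="constant")
    out = np.ones_like(mask)
    for dy in range(size):
        for dx in range(size):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask, size=5):
    """Binary dilation: a pixel is set if any neighbour in the
    size×size window is set."""
    pad = size // 2
    padded = np.pad(mask, pad, mode="constant")
    out = np.zeros_like(mask)
    for dy in range(size):
        for dx in range(size):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def clean_and_boundary(mask, size=5):
    """Opening removes small spurious detections; the boundary is the
    opened region minus its own (3×3) erosion, i.e. its edge pixels."""
    opened = dilate(erode(mask, size), size)
    boundary = opened & ~erode(opened, 3)
    return opened, boundary
```

An isolated false-positive pixel is removed by the opening, while a solid detection region keeps its shape and yields a one-pixel-wide boundary that can be drawn onto the frame.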
In a specific example, flame pixel points and irregularly moving target pixel points are identified in a frame of video image, the target firework area is determined according to the flame pixel points and the target pixel points, morphological operation is performed on the target firework area, and the region boundary is then extracted and added to the video image; the effect can be shown in fig. 5. Alternatively, the region boundary may be shown in a distinct color, such as red, in the video image (since fig. 5 is a grayscale image, the region boundary is not clearly distinguishable there) to improve readability.
Optionally, when the region boundary is correspondingly added to each matched frame of video image, warning text may also be added at a preset position, such as the "Fire Alarm!" shown in fig. 5, to further enhance the warning effect. Optionally, a sound and light warning may also be triggered when the region boundary is added.
For those parts of this embodiment that are not explained in detail, reference is made to the aforementioned embodiments, which are not repeated herein.
According to the technical scheme, images from the monitoring video are received, the decoded images are analyzed by using a color-based detection method, and analysis is also performed in combination with higher-order temporal information, so that the information of adjacent frames is fully utilized to detect and locate smoke and fire, the accuracy of smoke and fire detection is improved, and the false detection rate and the possibility of false alarms are reduced. Experiments show that, by using the spectral information and the time-sequence information of the images for flame detection, an accuracy rate (true positive rate) of 98% and a detection rate of 40 FPS (Frames Per Second) can be achieved, realizing real-time detection of the video.
EXAMPLE five
Fig. 6 is a schematic structural diagram of a smoke and fire detection device according to a fifth embodiment of the present invention, which is applicable to a situation where smoke and fire detection is performed in real time based on video monitoring, and the device may be implemented in software and/or hardware, and may be generally integrated in a computer device. As shown in fig. 6, the smoke and fire detection device specifically includes: a video image acquisition module 510, a pending fire zone determination module 520, a pixel motion zone determination module 530, and a target fire zone determination module 540. Wherein the content of the first and second substances,
a video image obtaining module 510, configured to obtain consecutive multi-frame video images;
the pending firework area determining module 520 is configured to determine a pending firework area in each frame of video image according to the color distribution of each frame of video image;
a pixel motion region determining module 530, configured to determine a pixel motion region in each frame of video image according to a change condition of consecutive frames of video images;
and a target firework area determining module 540, configured to determine a target firework area in each frame of the video image according to the to-be-determined firework area and the pixel motion area.
According to the technical scheme, when smoke and fire detection is carried out, color analysis is performed on each frame of the obtained consecutive multi-frame video images to determine the undetermined firework area in each frame of video image, the pixel motion region in each frame of video image is determined according to the change condition of the consecutive multi-frame video images, and the target firework area in each frame of video image can then be determined according to the undetermined firework area and the pixel motion region of that frame. When smoke and fire detection is carried out, the technical scheme fully considers the association information between frames in the video, avoids the false alarm problem caused by certain special frame images, and improves the accuracy of real-time smoke and fire detection based on video monitoring.
Optionally, the apparatus further comprises: the firework detection result processing module is used for performing morphological operation on a target firework area and extracting an area boundary of the target firework area after determining the target firework area in each frame of video image; and correspondingly adding the region boundary to each matched frame of video image.
Optionally, the pending firework area determining module 520 includes: a smoke and fire pixel point detection unit and a to-be-smoke and fire area determination unit, wherein,
the smoke and fire pixel point detection unit is used for determining flame pixel points and/or smoke pixel points in each frame of video image according to the color distribution of each frame of video image;
and the smoke and fire region determination unit is used for determining the smoke and fire region to be determined in each frame of video image according to the flame pixel points and/or the smoke pixel points in each frame of video image.
Optionally, the smoke and fire pixel point detection unit is specifically configured to convert each frame of video image from an RGB space to an HSV space; and screening pixel points meeting preset HSV (hue, saturation, value) constraint conditions and first RGB (red, green and blue) constraint conditions in each frame of video image as the flame pixel points.
Optionally, the smoke and fire pixel point detection unit is specifically configured to convert each frame of video image from an RGB space to an HSI space; and screening pixel points meeting preset second RGB constraint conditions and HSI constraint conditions in each frame of video image to serve as the smoke pixel points.
Optionally, the pixel motion region determining module 530 is specifically configured to determine, according to consecutive multiple frames of video images, a target pixel point that moves irregularly in each frame of video image; and determining a pixel motion area in each frame of video image according to the target pixel point.
Optionally, the pixel motion region determining module 530 is specifically configured to calculate, according to consecutive multiple frames of video images, an accumulated variation of each pixel point in each frame of video image; and screening pixel points with accumulated variation larger than a preset threshold value in each frame of video image as the target pixel points.
The smoke and fire detection device provided by the embodiment of the invention can execute the smoke and fire detection method provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE six
Fig. 7 is a schematic structural diagram of a computer apparatus according to a sixth embodiment of the present invention, as shown in fig. 7, the computer apparatus includes a processor 610, a memory 620, an input device 630, and an output device 640; the number of processors 610 in the computer device may be one or more, and one processor 610 is taken as an example in fig. 7; the processor 610, the memory 620, the input device 630 and the output device 640 in the computer apparatus may be connected by a bus or other means, and fig. 7 illustrates an example of connection by a bus.
The memory 620, as a computer-readable storage medium, may be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the smoke detection method in the embodiments of the present invention (e.g., the video image acquisition module 510, the pending smoke region determination module 520, the pixel motion region determination module 530, and the target smoke region determination module 540 in the smoke detection apparatus). The processor 610 executes various functional applications of the computer device and data processing by executing software programs, instructions and modules stored in the memory 620, namely, implements the smoke detection method described above.
The memory 620 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 620 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 620 may further include memory located remotely from the processor 610, which may be connected to a computer device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 630 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the computer apparatus. The output device 640 may include a display device such as a display screen.
EXAMPLE seven
Embodiments of the present invention also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method of smoke detection, the method comprising:
acquiring continuous multi-frame video images;
determining an undetermined firework area in each frame of video image according to the color distribution of each frame of video image;
determining a pixel motion area in each frame of video image according to the change condition of continuous multi-frame video images;
and determining a target firework area in each frame of video image according to the to-be-firework area and the pixel motion area.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the method operations described above, and may also execute the related operations in the smoke detection method provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly, can also be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the smoke and fire detection device, the units and modules included in the embodiment are merely divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be realized; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (10)
1. A method of smoke detection, comprising:
acquiring continuous multi-frame video images;
determining an undetermined firework area in each frame of video image according to the color distribution of each frame of video image;
determining a pixel motion area in each frame of video image according to the change condition of continuous multi-frame video images;
and determining a target firework area in each frame of video image according to the to-be-firework area and the pixel motion area.
2. The method of claim 1, further comprising, after determining the target pyrotechnic region in each frame of the video image:
after morphological operation is carried out on the target firework area, area boundaries of the target firework area are extracted;
and correspondingly adding the region boundary to each matched frame of video image.
3. The method according to claim 1, wherein determining the region of pending fire in each frame of video image according to the color distribution of each frame of video image comprises:
determining flame pixel points and/or smoke pixel points in each frame of video image according to the color distribution of each frame of video image;
and determining the region to be determined in each frame of video image according to the flame pixel points and/or the smoke pixel points in each frame of video image.
4. The method of claim 3, wherein determining the flame pixel points in each frame of video image according to the color distribution of each frame of video image comprises:
converting each frame of video image from a red, green and blue (RGB) space to a Hue Saturation Value (HSV) space;
and screening pixel points meeting preset HSV (hue, saturation, value) constraint conditions and first RGB (red, green and blue) constraint conditions in each frame of video image as the flame pixel points.
5. The method of claim 3, wherein determining the smoke pixels in each frame of the video image based on the color distribution of each frame of the video image comprises:
converting each frame of video image from RGB space to hue saturation brightness HSI space;
and screening pixel points meeting preset second RGB constraint conditions and HSI constraint conditions in each frame of video image to serve as the smoke pixel points.
6. The method of claim 1, wherein determining the pixel motion region in each frame of video image according to the variation of the consecutive frames of video image comprises:
determining target pixel points which move irregularly in each frame of video image according to the continuous multi-frame video image;
and determining a pixel motion area in each frame of video image according to the target pixel point.
7. The method of claim 6, wherein determining a target pixel point which moves irregularly in each frame of video image according to the continuous frames of video image comprises:
calculating the accumulated variation of each pixel point in each frame of video image according to the continuous multi-frame video image;
and screening pixel points with accumulated variation larger than a preset threshold value in each frame of video image as the target pixel points.
8. A smoke and fire detection device, comprising:
the video image acquisition module is used for acquiring continuous multi-frame video images;
the smoke and fire regions to be determined determining module is used for determining smoke and fire regions to be determined in each frame of video image according to the color distribution of each frame of video image;
the pixel motion area determining module is used for determining a pixel motion area in each frame of video image according to the change condition of continuous multi-frame video images;
and the target firework area determining module is used for determining a target firework area in each frame of video image according to the to-be-determined firework area and the pixel motion area.
9. A computer device, characterized in that the computer device comprises:
one or more processors;
a memory for storing one or more programs,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110823728.4A CN113537092A (en) | 2021-07-21 | 2021-07-21 | Smoke and fire detection method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113537092A true CN113537092A (en) | 2021-10-22 |
Family
ID=78100752
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110823728.4A Pending CN113537092A (en) | 2021-07-21 | 2021-07-21 | Smoke and fire detection method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113537092A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060115154A1 (en) * | 2004-11-16 | 2006-06-01 | Chao-Ho Chen | Fire detection and smoke detection method and system based on image processing |
US20100073477A1 (en) * | 2007-01-16 | 2010-03-25 | Utc Fire & Security Corporation | System and method for video detection of smoke and flame |
CN109726620A (en) * | 2017-10-31 | 2019-05-07 | 北京国双科技有限公司 | A kind of video flame detecting method and device |
CN112560657A (en) * | 2020-12-12 | 2021-03-26 | 南方电网调峰调频发电有限公司 | Smoke and fire identification method and device, computer equipment and storage medium |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114827754A (en) * | 2022-02-23 | 2022-07-29 | 阿里巴巴(中国)有限公司 | Method and device for detecting video first frame time |
CN114827754B (en) * | 2022-02-23 | 2023-09-12 | 阿里巴巴(中国)有限公司 | Video first frame time detection method and device |
CN117994711A (en) * | 2024-04-07 | 2024-05-07 | 西安航天动力研究所 | Method, device and computer equipment for identifying flame based on engine plume image |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022121129A1 (en) | Fire recognition method and apparatus, and computer device and storage medium | |
CN110516609B (en) | Fire disaster video detection and early warning method based on image multi-feature fusion | |
CN107944359B (en) | Flame detecting method based on video | |
US10991224B2 (en) | Fire detection system based on artificial intelligence and fire detection method based on artificial intelligence | |
CN108875619B (en) | Video processing method and device, electronic equipment and computer readable storage medium | |
CN102348128A (en) | Surveillance camera system having camera malfunction detection function | |
CN113537092A (en) | Smoke and fire detection method, device, equipment and storage medium | |
Pritam et al. | Detection of fire using image processing techniques with LUV color space | |
JPH07203303A (en) | Method and apparatus for supplying data | |
JP2021119523A (en) | Fire detection device and fire detection method | |
Benjamin et al. | Extraction of fire region from forest fire images using color rules and texture analysis | |
AU2002232008B2 (en) | Method of detecting a significant change of scene | |
US20120154545A1 (en) | Image processing apparatus and method for human computer interaction | |
CN113688820B (en) | Stroboscopic band information identification method and device and electronic equipment | |
Celik et al. | Computer vision based fire detection in color images | |
CN106254723B (en) | A kind of method of real-time monitoring video noise interference | |
AU2002232008A1 (en) | Method of detecting a significant change of scene | |
KR101920740B1 (en) | Real-time image processing system | |
US20230051823A1 (en) | Systems, methods, and computer program products for image analysis | |
JPH0620049A (en) | Intruder identification system | |
Thepade et al. | Fire Detection System Using Color and Flickering Behaviour of Fire with Kekre's LUV Color Space | |
CN112396024A (en) | Forest fire alarm method based on convolutional neural network | |
Hossen et al. | Fire detection from video based on temporal variation, temporal periodicity and spatial variance analysis | |
Jamal et al. | A novel framework for real-time fire detection in CCTV videos using a hybrid approach of motion-flicker detection, colour detection and YOLOv7 | |
KR102624333B1 (en) | Deep Learning-based Fire Monitoring System |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||