CN117011993A - Comprehensive pipe rack fire safety early warning method based on image processing - Google Patents
- Publication number
- CN117011993A CN117011993A CN202311268369.6A CN202311268369A CN117011993A CN 117011993 A CN117011993 A CN 117011993A CN 202311268369 A CN202311268369 A CN 202311268369A CN 117011993 A CN117011993 A CN 117011993A
- Authority
- CN
- China
- Prior art keywords
- fire
- flame
- video frame
- area
- comprehensive pipe
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B17/00—Fire alarms; Alarms responsive to explosion
- G08B17/12—Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions
- G08B17/125—Actuation by presence of radiation or particles, e.g. of infrared radiation or of ions by using a video camera to detect fire or smoke
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/28—Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/62—Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/766—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using regression, e.g. by projecting features on hyperplanes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
Abstract
A comprehensive pipe gallery fire safety early warning method based on image processing relates to the technical field of fire early warning. The method acquires pipe gallery information of the current comprehensive pipe gallery fire management area and sets fire video monitoring points; constructs a comprehensive pipe gallery whole-area monitoring visual view and screens out video frame images containing suspicious areas from the video data of each fire video monitoring point; and judges the number of flame sharp angles through three steps (setting an identification interval for flame sharp angles, comparing sharp-angle average values over different spans within the identification interval, and checking the approximate triangle form of each flame sharp angle), thereby improving the accuracy of flame sharp-angle judgment. Based on the number of flame sharp angles in the suspicious area, the method judges whether a fire has occurred in the region shown in the video frame image and determines the fire type; it then constructs a regression model to obtain a fire change trend prediction result and displays that result visually through the whole-area monitoring visual view, making the fire data presentation clearer and more intuitive.
Description
Technical Field
The application relates to the technical field of fire early warning, in particular to a comprehensive pipe rack fire safety early warning method based on image processing.
Background
The utility tunnel, also called a common ditch, is a modern, scientific and intensive piece of urban infrastructure: a tunnel built beneath the city in which municipal pipelines such as electric power, communication, radio and television, water supply, drainage, heating and fuel gas are laid together. It is designed to optimize urban space, improve the cityscape, and make the installation, maintenance and removal of municipal pipelines convenient, so fire safety early warning for the utility tunnel plays an increasingly important role;
the traditional video image fire detection and identification technology generally adopts the shape of sharp angles in a single frame image to judge the number of flame sharp angles, finally judges whether fire information exists in the single frame image based on the number of flame sharp angles, has poor robustness, is easily affected by interference such as sunlight, lamplight, color patterns and the like in practical application, does not consider the connection coupling relation among multiple frame images, and has low accuracy and high false alarm rate for fire judgment. In order to solve the technical problems, the utility tunnel fire safety pre-warning method based on image processing is provided.
Disclosure of Invention
In order to solve the above technical problems, the application aims to provide a comprehensive pipe rack fire safety early warning method based on image processing, which comprises the following steps:
step S1: acquiring pipe gallery information of a current comprehensive pipe gallery fire disaster management area, and setting fire disaster video monitoring points according to the pipe gallery information;
step S2: constructing a comprehensive pipe rack whole-area monitoring visual view, screening out video frame images containing suspicious areas from the video data of each fire video monitoring point, and judging the number of flame sharp angles in each suspicious area through three screening steps: setting an identification interval for flame sharp angles, comparing sharp-angle average values over different spans within the identification interval, and checking the approximate triangle form of each flame sharp angle;
step S3: judging whether a fire has occurred in the region shown in the video frame image based on the number of flame sharp angles in the suspicious area, and judging the fire type;
step S4: and constructing a regression model through a video frame image of the fire, acquiring a fire change trend prediction result, and visually displaying the fire change trend prediction result through a comprehensive pipe rack whole-area monitoring visual view.
Further, the process of acquiring the pipe gallery information of the current utility tunnel fire management area and setting fire video monitoring points according to the pipe gallery information includes:
obtaining municipal pipeline laying characteristics of a current comprehensive pipe rack fire disaster management area, extracting pipe rack information according to the municipal pipeline laying characteristics, splitting the comprehensive pipe rack fire disaster management area according to the pipe rack information, and dividing the comprehensive pipe rack fire disaster management area into a plurality of comprehensive pipe rack sub-areas;
selecting an evaluation index according to pipe gallery information and position characteristics of each comprehensive pipe gallery subarea, setting an index weight matrix of the evaluation index, and acquiring a membership matrix of each comprehensive pipe gallery subarea for a preset importance level through a fuzzy comprehensive evaluation method;
and acquiring importance levels of all the utility tunnel subareas according to the membership matrix and the index weight matrix, determining the number of fire video monitoring points according to the importance levels of the utility tunnel subareas, and determining the sampling distribution of the fire video monitoring points of the utility tunnel subareas according to the effective coverage area of the fire video monitoring points and the position characteristics of the utility tunnel subareas.
Further, the process for constructing the comprehensive pipe rack full area monitoring visual view comprises the following steps:
obtaining physical entities of municipal laid pipelines in a current comprehensive pipe rack fire management area, forming a comprehensive pipe rack map according to position characteristics and pipe rack connection relations among all the comprehensive pipe rack subareas in the current comprehensive pipe rack fire management area, and mapping the physical entities of the municipal laid pipelines to the comprehensive pipe rack map through three-dimensional modeling treatment to obtain a three-dimensional model;
and acquiring multi-source video data of each fire video monitoring point in each utility tunnel subarea, preprocessing the multi-source video data in a data format to obtain twin data, and matching the twin data with a three-dimensional model according to the assembly connection relation of each fire video monitoring point in a physical space and each utility tunnel subarea to obtain a utility tunnel whole-area monitoring visual view.
Further, the process of screening out video frame images containing suspicious regions from the video data of each fire video monitoring point includes:
acquiring video data of each fire video monitoring point, converting the acquired video data into continuous video frame images, setting reference video frame images of each fire video monitoring point, subtracting pixel values of corresponding positions of the video frame images and the reference video frame images to obtain difference values, converting the difference values into pixel values of binary images, and generating the binary images;
setting two pixel point position traversing pointers, and simultaneously starting traversing from the first pixel point position and the last pixel point position of the binary image area;
marking a pixel point location area with a pixel value larger than a preset pixel threshold value in the binary image as a suspicious area;
and marking a pixel point location area with a pixel value smaller than or equal to a preset pixel threshold value in the binary image as a normal area.
Further, the process of judging the number of flame sharp angles in the suspicious region through the three screening steps (setting an identification interval for flame sharp angles, comparing sharp-angle average values over different spans within the identification interval, and checking the approximate triangle form of each flame sharp angle) includes:
selecting a pixel point in the center of a suspicious region in a binary image as a center pixel point, establishing a two-dimensional coordinate system by taking the center pixel point as an origin, mapping all the pixel points of the suspicious region into the two-dimensional coordinate system, and obtaining Euclidean distances between each boundary pixel point and the center pixel point of the edge of the suspicious region;
constructing a unidirectional linked list, randomly selecting one boundary pixel point on the edge of the suspicious region in the binary image, and, starting from that point and proceeding clockwise, sequentially entering the two-dimensional coordinates and Euclidean distances of all boundary pixel points of the suspicious region edge into the unidirectional linked list; determining the identification interval of the flame sharp angle as n boundary pixel points according to the total number of boundary pixel points; and marking a boundary pixel point as a possible flame sharp angle when its Euclidean distance is larger than that of each of the n/2 boundary pixel points in its left and right neighborhoods in the unidirectional linked list;
respectively acquiring the sums S1, S2 and S3 of the Euclidean distances of the n/8, n/4 and n/2 boundary pixel points in the left and right neighborhoods of each possible flame sharp angle, and converting a possible flame sharp angle into a suspected flame sharp angle when S1 < S2 < S3;
and constructing an approximate triangle by taking the two-dimensional coordinates of the suspected flame sharp angles as vertexes and taking the two-dimensional coordinates of n boundary pixel points in the flame sharp angle identification interval as bases, setting the tangent value of the standard vertex angle half angle of the approximate triangle of the flame sharp angles, and converting the suspected flame sharp angles into the flame sharp angles when the tangent value of the vertex angle half angle of the approximate triangle of the suspected flame sharp angles is smaller than or equal to the tangent value of the standard vertex angle half angle.
Further, the process of judging whether a fire has occurred in the region shown in the video frame image based on the number of flame sharp angles in the suspicious area, and of judging the fire type, includes:
selecting the flame sharp-angle count of the video frame image at second t, and comparing it with the preset flame sharp-angle count;
when the flame sharp-angle count is larger than the preset count, marking the video frame image as a fire source image, and marking the suspicious region in the video frame image as a flame region;
when the flame sharp-angle count is smaller than or equal to the preset count, marking the video frame image as an image of another interference source;
if the video frame image at second t is a fire source image, respectively taking the differences between its flame sharp-angle count and the counts of the video frame images at seconds t-1, t-2 and t-3, and averaging these differences to obtain the sharp-angle change average value of the image at second t over the three consecutive preceding frames;
setting a sharp angle change average threshold value, and comparing the sharp angle change average value of the t second video frame image with the sharp angle change average threshold value;
if the sharp angle change average value is larger than the sharp angle change average threshold value, marking the fire source in the video frame image as an out-of-control fire source;
and if the sharp angle change average value is smaller than or equal to the sharp angle change average threshold value, marking the fire source in the video frame image as a stable fire source.
Further, the process of constructing a regression model from the video frame images in which a fire occurs and obtaining the fire change trend prediction result includes:
sampling continuous video frame images of which the fire source is an uncontrolled fire source in the fire source image at equal intervals to obtain k sample video frame images;
generating flame areas according to the number of the pixel points contained in the flame areas in the video frame images, and generating flame area centroids according to the pixel values of the pixel points contained in the flame areas in the video frame images;
performing curve fitting on the flame areas and flame area centroids of k sample video frame images by using a regression model to respectively obtain a flame area growth curve and a flame area centroid movement curve;
and acquiring the flame area change trend and the flame spreading direction trend in the next frame of video frame image according to the flame area increase curve and the flame area centroid movement curve.
Further, the process of visually displaying the predicted result of the fire variation trend through the comprehensive pipe rack whole-area monitoring visual view comprises the following steps:
acquiring the analysis results of the video frame images of all fire video monitoring points; if the analysis result of a video frame image is a fire source image and the fire source is an out-of-control fire source, the comprehensive pipe gallery whole-area monitoring visual view displays a red alarm area on the comprehensive pipe gallery subarea where the out-of-control fire source is located;
if the analysis result of a video frame image is a fire source image and the fire source is a stable fire source, the comprehensive pipe gallery whole-area monitoring visual view displays an orange alarm area on the comprehensive pipe gallery subarea where the stable fire source is located;
the comprehensive pipe rack whole-area monitoring visual view obtains the fire spreading arrival time and the fire spreading range of the non-alarm area according to the flame area change trend and the flame spreading direction trend of the red alarm area, and sets the non-alarm area in the fire spreading range as a time-division early-warning area according to the fire spreading arrival time.
Compared with the prior art, the application has the following beneficial effects: whereas traditional video-image fire detection and identification methods generally judge the number of flame sharp angles from the shape of sharp angles in a single frame image, the method provided by the application judges the number of flame sharp angles through three screening steps (setting an identification interval for flame sharp angles, comparing sharp-angle average values over different spans within the identification interval, and checking the approximate triangle form of each flame sharp angle), which greatly reduces interference from sunlight, lamplight, colored patterns and the like and improves the accuracy of flame sharp-angle counting;
on the other hand, the fire type is obtained by comparing the flame sharp-angle counts across multiple frames, and the fire change trend prediction result is displayed visually through the comprehensive pipe gallery whole-area monitoring visual view, so that one view serves multiple purposes.
Drawings
FIG. 1 is a schematic diagram of the comprehensive pipe rack fire safety early warning method based on image processing according to an embodiment of the application.
Description of the embodiments
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
As shown in FIG. 1, the utility tunnel fire safety early warning method based on image processing comprises the following steps:
step S1: acquiring pipe gallery information of a current comprehensive pipe gallery fire disaster management area, and setting fire disaster video monitoring points according to the pipe gallery information;
step S2: constructing a comprehensive pipe rack whole-area monitoring visual view, screening out video frame images containing suspicious areas from the video data of each fire video monitoring point, and judging the number of flame sharp angles in each suspicious area through three screening steps: setting an identification interval for flame sharp angles, comparing sharp-angle average values over different spans within the identification interval, and checking the approximate triangle form of each flame sharp angle;
step S3: judging whether a fire has occurred in the region shown in the video frame image based on the number of flame sharp angles in the suspicious area, and judging the fire type;
step S4: and constructing a regression model through a video frame image of the fire, acquiring a fire change trend prediction result, and visually displaying the fire change trend prediction result through a comprehensive pipe rack whole-area monitoring visual view.
It should be further described that, in the implementation process, the process of obtaining the pipe gallery information of the current comprehensive pipe gallery fire disaster management area and setting the fire disaster video monitoring point position according to the pipe gallery information includes:
obtaining municipal pipeline laying characteristics of a current comprehensive pipe rack fire disaster management area, extracting pipe rack information according to the municipal pipeline laying characteristics, splitting the comprehensive pipe rack fire disaster management area according to the pipe rack information, and dividing the comprehensive pipe rack fire disaster management area into a plurality of comprehensive pipe rack sub-areas; the utility tunnel sub-areas comprise a trunk utility tunnel, a branch utility tunnel, a secondary branch utility tunnel and a cable utility tunnel; the pipe gallery information comprises the laying number of various municipal pipelines, the position characteristics of the comprehensive pipe gallery and the size of the inner space of the comprehensive pipe gallery;
selecting an evaluation index according to pipe gallery information and position characteristics of each comprehensive pipe gallery subarea, setting an index weight matrix of the evaluation index, and acquiring a membership matrix of each comprehensive pipe gallery subarea for a preset importance level through a fuzzy comprehensive evaluation method;
and acquiring importance levels of all the utility tunnel subareas according to the membership matrix and the index weight matrix, determining the number of fire video monitoring points according to the importance levels of the utility tunnel subareas, and determining the sampling distribution of the fire video monitoring points of the utility tunnel subareas according to the effective coverage area of the fire video monitoring points and the position characteristics of the utility tunnel subareas.
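As a minimal sketch of the fuzzy comprehensive evaluation described above, the importance level of one sub-area can be obtained by multiplying the index weight vector by the membership matrix and applying the maximum-membership principle. The concrete indices, weights, membership grades and level names below are illustrative assumptions, not values fixed by the application:

```python
# Illustrative index weight vector W and membership matrix R for one
# utility-tunnel sub-area. Rows of R correspond to evaluation indices
# (e.g. pipeline count, position, inner-space size); columns correspond
# to the preset importance levels.
W = [0.5, 0.3, 0.2]
R = [[0.1, 0.3, 0.6],   # membership grades of index 1
     [0.2, 0.5, 0.3],   # index 2
     [0.3, 0.4, 0.3]]   # index 3

# Fuzzy comprehensive evaluation with the weighted-sum operator: B = W . R
B = [sum(W[i] * R[i][j] for i in range(len(W))) for j in range(len(R[0]))]

levels = ["low", "medium", "high"]
importance = levels[B.index(max(B))]   # maximum-membership principle
print(B, importance)
```

A sub-area judged more important would then receive more fire video monitoring points in the subsequent sampling-distribution step.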
It should be further noted that, in the implementation process, the process of constructing the comprehensive pipe rack full area monitoring visual view includes:
obtaining physical entities of municipal laid pipelines in a current comprehensive pipe rack fire management area, forming a comprehensive pipe rack map according to position characteristics and pipe rack connection relations among all the comprehensive pipe rack subareas in the current comprehensive pipe rack fire management area, and mapping the physical entities of the municipal laid pipelines to the comprehensive pipe rack map through three-dimensional modeling treatment to obtain a three-dimensional model;
and acquiring multi-source video data of each fire video monitoring point in each utility tunnel subarea, preprocessing the multi-source video data in a data format to obtain twin data, and matching the twin data with a three-dimensional model according to the assembly connection relation of each fire video monitoring point in a physical space and each utility tunnel subarea to obtain a utility tunnel whole-area monitoring visual view.
It should be further noted that, in the implementation process, the process of screening out video frame images containing suspicious regions from the video data of each fire video monitoring point includes:
acquiring video data of each fire video monitoring point, converting the acquired video data into continuous video frame images, setting reference video frame images of each fire video monitoring point, subtracting pixel values of corresponding positions of the video frame images and the reference video frame images to obtain difference values, converting the difference values into pixel values of binary images, and generating the binary images;
setting two pixel point position traversing pointers, and simultaneously starting traversing from the first pixel point position and the last pixel point position of the binary image area;
marking a pixel point location area with a pixel value larger than a preset pixel threshold value in the binary image as a suspicious area;
and marking a pixel point location area with a pixel value smaller than or equal to a preset pixel threshold value in the binary image as a normal area.
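The differencing and binarization steps above can be sketched as follows. The 8-bit grayscale frames given as 2-D lists, the tiny 2x2 frame size and the pixel threshold of 40 are illustrative assumptions:

```python
PIXEL_THRESHOLD = 40   # assumed preset pixel threshold

def binarize(frame, reference, threshold=PIXEL_THRESHOLD):
    """Subtract the reference frame pixel-wise and map each difference
    to a binary pixel value (255 = changed, 0 = unchanged)."""
    return [[255 if abs(p - r) > threshold else 0
             for p, r in zip(frow, rrow)]
            for frow, rrow in zip(frame, reference)]

def suspicious_points(binary):
    """Pixel locations whose binary value exceeds the threshold form the
    suspicious area; the remaining locations form the normal area."""
    return [(y, x) for y, row in enumerate(binary)
            for x, v in enumerate(row) if v == 255]

reference = [[10, 10], [10, 10]]
frame     = [[200, 12], [11, 10]]   # one bright, possibly flame, pixel
binary = binarize(frame, reference)
print(binary, suspicious_points(binary))   # [[255, 0], [0, 0]] [(0, 0)]
```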
It should be further described that, in the specific implementation process, the process of judging the number of flame sharp angles in the suspicious region through the three screening steps (setting an identification interval for flame sharp angles, comparing sharp-angle average values over different spans within the identification interval, and checking the approximate triangle form of each flame sharp angle) includes:
selecting a pixel point in the center of a suspicious region in a binary image as a center pixel point, establishing a two-dimensional coordinate system by taking the center pixel point as an origin, mapping all the pixel points of the suspicious region into the two-dimensional coordinate system, and obtaining Euclidean distances between each boundary pixel point and the center pixel point of the edge of the suspicious region;
constructing a unidirectional linked list, randomly selecting one boundary pixel point on the edge of the suspicious region in the binary image, and, starting from that point and proceeding clockwise, sequentially entering the two-dimensional coordinates and Euclidean distances of all boundary pixel points of the suspicious region edge into the unidirectional linked list; determining the identification interval of the flame sharp angle as n boundary pixel points according to the total number of boundary pixel points; and marking a boundary pixel point as a possible flame sharp angle when its Euclidean distance is larger than that of each of the n/2 boundary pixel points in its left and right neighborhoods in the unidirectional linked list;
respectively acquiring the sums S1, S2 and S3 of the Euclidean distances of the n/8, n/4 and n/2 boundary pixel points in the left and right neighborhoods of each possible flame sharp angle, and converting a possible flame sharp angle into a suspected flame sharp angle when S1 < S2 < S3;
and constructing an approximate triangle by taking the two-dimensional coordinates of the suspected flame sharp angles as vertexes and taking the two-dimensional coordinates of n boundary pixel points in the flame sharp angle identification interval as bases, setting the tangent value of the standard vertex angle half angle of the approximate triangle of the flame sharp angles, and converting the suspected flame sharp angles into the flame sharp angles when the tangent value of the vertex angle half angle of the approximate triangle of the suspected flame sharp angles is smaller than or equal to the tangent value of the standard vertex angle half angle.
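The third screening step, the approximate-triangle form check, can be sketched as follows. The helper names and the sample coordinates are assumptions for illustration: the apex stands for a suspected flame sharp angle, and the two base points stand for the endpoints of the n-pixel identification interval:

```python
import math

def half_apex_tangent(apex, base_a, base_b):
    """Tangent of half the apex angle of the approximate triangle built
    from a suspected sharp-angle pixel (apex) and the two endpoints of
    its identification interval (base)."""
    ax, ay = apex
    (x1, y1), (x2, y2) = base_a, base_b
    base = math.dist(base_a, base_b)
    # Height of the apex above the base line: twice the triangle area
    # (cross product magnitude) divided by the base length.
    height = abs((x2 - x1) * (ay - y1) - (ax - x1) * (y2 - y1)) / base
    return (base / 2) / height

def is_flame_sharp_angle(apex, base_a, base_b, tan_std):
    """Confirm a suspected sharp angle when its half-apex tangent is at
    most the standard value, i.e. the tip is narrow and flame-like."""
    return half_apex_tangent(apex, base_a, base_b) <= tan_std

# A tall, narrow tip passes; a flat bump fails (tan_std = 0.6 assumed).
print(half_apex_tangent((0, 2), (-1, 0), (1, 0)))            # 0.5
print(is_flame_sharp_angle((0, 2), (-1, 0), (1, 0), 0.6))    # True
print(is_flame_sharp_angle((0, 0.5), (-1, 0), (1, 0), 0.6))  # False
```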
It should be further noted that, in the implementation process, the process of judging whether a fire has occurred in the region shown in the video frame image based on the number of flame sharp angles in the suspicious area, and of judging the fire type, includes:
selecting the flame sharp-angle count of the video frame image at second t, and comparing it with the preset flame sharp-angle count;
when the flame sharp-angle count is larger than the preset count, marking the video frame image as a fire source image, and marking the suspicious region in the video frame image as a flame region;
when the flame sharp-angle count is smaller than or equal to the preset count, marking the video frame image as an image of another interference source;
if the video frame image at second t is a fire source image, respectively taking the differences between its flame sharp-angle count and the counts of the video frame images at seconds t-1, t-2 and t-3, and averaging these differences to obtain the sharp-angle change average value of the image at second t over the three consecutive preceding frames;
setting a sharp angle change average threshold value, and comparing the sharp angle change average value of the t second video frame image with the sharp angle change average threshold value;
if the sharp angle change average value is larger than the sharp angle change average threshold value, marking the fire source in the video frame image as an out-of-control fire source;
and if the sharp angle change average value is smaller than or equal to the sharp angle change average threshold value, marking the fire source in the video frame image as a stable fire source.
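The difference-average operation and the resulting fire-source classification can be sketched as below, assuming the sharp-angle change average is the mean of the signed differences between the count at second t and the three preceding counts (the exact formula and the threshold value are not fixed by the text):

```python
def sharp_angle_change_average(counts):
    """counts: flame sharp-angle counts at seconds t-3, t-2, t-1, t.
    Returns the mean difference between the frame at second t and each
    of the three consecutive preceding frames."""
    *prev, current = counts
    return sum(current - p for p in prev) / len(prev)

def classify_fire_source(counts, threshold):
    """Out-of-control if the sharp-angle change average exceeds the
    threshold; otherwise a stable fire source."""
    avg = sharp_angle_change_average(counts)
    return "out-of-control" if avg > threshold else "stable"

# A rapidly growing sharp-angle count marks an out-of-control fire source.
print(sharp_angle_change_average([4, 5, 6, 10]))           # 5.0
print(classify_fire_source([4, 5, 6, 10], threshold=3.0))  # out-of-control
print(classify_fire_source([5, 5, 5, 5], threshold=3.0))   # stable
```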
It should be further noted that, in the implementation process, the process of constructing a regression model from the video frame images in which a fire occurs and obtaining the fire variation trend prediction result includes:
sampling, at equal intervals, the continuous fire source images whose fire source is an out-of-control fire source to obtain k sample video frame images;
computing the flame area of each sample video frame image from the number of pixel points contained in its flame region, and computing the flame area centroid from the pixel values of the pixel points contained in the flame region;
performing curve fitting on the flame areas and flame area centroids of the k sample video frame images with a regression model, to obtain a flame area growth curve and a flame area centroid movement curve respectively;
and obtaining the flame area change trend and the flame spreading direction trend for the next video frame image from the flame area growth curve and the flame area centroid movement curve.
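The regression step can be sketched with an ordinary least-squares polynomial fit. The quadratic degree and NumPy's `polyfit` stand in for the patent's unspecified regression model; they are assumptions for illustration only.

```python
import numpy as np

def fit_fire_trend(areas, centroids):
    """Fit growth/movement curves over k equally spaced sample frames.

    areas: flame area (pixel count) per sample frame.
    centroids: (x, y) flame-area centroid per sample frame.
    Returns the predicted next-frame area, next-frame centroid, and the
    spreading-direction vector (displacement of the predicted centroid).
    """
    k = len(areas)
    t = np.arange(k)
    area_curve = np.polyfit(t, areas, 2)               # flame area growth curve
    cx_curve = np.polyfit(t, [c[0] for c in centroids], 2)
    cy_curve = np.polyfit(t, [c[1] for c in centroids], 2)
    next_area = np.polyval(area_curve, k)
    next_centroid = (np.polyval(cx_curve, k), np.polyval(cy_curve, k))
    # Spreading direction trend: where the centroid is heading next.
    dx = next_centroid[0] - centroids[-1][0]
    dy = next_centroid[1] - centroids[-1][1]
    return next_area, next_centroid, (dx, dy)
```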
It should be further noted that, in the implementation process, the process of visually displaying the fire variation trend prediction result on the comprehensive pipe rack whole-area monitoring visual view includes:
acquiring the analysis results of the video frame images of all fire video monitoring points; if the analysis result of a video frame image is a fire source image and the fire source is an out-of-control fire source, the comprehensive pipe rack whole-area monitoring visual view displays the comprehensive pipe rack subarea where the out-of-control fire source is located as a red alarm area;
if the analysis result of the video frame image is a fire source image and the fire source is a stable fire source, the comprehensive pipe rack whole-area monitoring visual view displays the comprehensive pipe rack subarea where the stable fire source is located as an orange alarm area;
and the comprehensive pipe rack whole-area monitoring visual view obtains the fire spreading arrival time and the fire spreading range for the non-alarm areas from the flame area change trend and the flame spreading direction trend of each red alarm area, and sets the non-alarm areas within the fire spreading range as time-phased early-warning areas according to the fire spreading arrival time.
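The time-phased early-warning assignment can be illustrated as follows. The phase boundaries (60 s, 300 s), the warning labels, and the reduction of the spreading trend to a scalar speed along the pipe rack are all assumptions made for this sketch.

```python
def phased_warning(distance, spread_speed, phases=(60, 300)):
    """Tag a non-alarm subarea with a time-phased early-warning level.

    distance: metres from the red alarm area to the subarea along the
    pipe rack; spread_speed: estimated flame spreading speed (m/s),
    derived from the centroid movement trend.  Phase boundaries are
    illustrative values, in seconds.
    """
    if spread_speed <= 0:
        return "no-warning"
    arrival_time = distance / spread_speed  # fire spreading arrival time (s)
    if arrival_time <= phases[0]:
        return "imminent"
    if arrival_time <= phases[1]:
        return "near-term"
    return "watch"
```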
The above embodiments are intended only to illustrate the technical method of the present application, not to limit it; those skilled in the art should understand that the technical method of the present application may be modified or equivalently substituted without departing from its spirit and scope.
Claims (8)
1. A comprehensive pipe rack fire safety early-warning method based on image processing, characterized by comprising the following steps:
step S1: acquiring pipe rack information of a current comprehensive pipe rack fire management area, and setting fire video monitoring points according to the pipe rack information;
step S2: constructing a comprehensive pipe rack whole-area monitoring visual view, screening out video frame images containing suspicious regions from the video data of each fire video monitoring point, and determining the number of flame sharp angles in the suspicious regions through three screening steps: setting an identification interval for flame sharp angles, comparing sharp angle average values at different stages within the identification interval, and testing the approximate triangle form of each flame sharp angle;
step S3: judging, based on the number of flame sharp angles in the suspicious region, whether a fire has occurred in the region shown in the video frame image, and determining the fire type;
step S4: constructing a regression model from the video frame images in which the fire occurs, obtaining a fire variation trend prediction result, and visually displaying the fire variation trend prediction result on the comprehensive pipe rack whole-area monitoring visual view.
2. The comprehensive pipe rack fire safety early-warning method based on image processing according to claim 1, wherein the process of acquiring the pipe rack information of the current comprehensive pipe rack fire management area and setting the fire video monitoring points according to the pipe rack information comprises:
obtaining the municipal pipeline laying characteristics of the current comprehensive pipe rack fire management area, extracting pipe rack information from the municipal pipeline laying characteristics, and splitting the comprehensive pipe rack fire management area into a plurality of comprehensive pipe rack subareas according to the pipe rack information;
selecting evaluation indices according to the pipe rack information and the position characteristics of each comprehensive pipe rack subarea, setting an index weight matrix for the evaluation indices, and obtaining, through the fuzzy comprehensive evaluation method, the membership matrix of each comprehensive pipe rack subarea with respect to the preset importance levels;
and obtaining the importance level of each comprehensive pipe rack subarea from the membership matrix and the index weight matrix, determining the number of fire video monitoring points according to the importance level of each subarea, and determining the distribution of the fire video monitoring points within each subarea according to the effective coverage area of the fire video monitoring points and the position characteristics of the subarea.
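The fuzzy comprehensive evaluation of claim 2 reduces to a weighted composition of the membership matrix. A minimal sketch, in which the level names are illustrative and the maximum-membership principle is assumed as the resolution rule:

```python
import numpy as np

def subarea_importance(weights, membership, levels=("low", "medium", "high")):
    """Fuzzy comprehensive evaluation of one pipe rack subarea.

    weights: index weight vector (sums to 1) over the evaluation indices.
    membership: membership matrix, rows = evaluation indices,
    columns = preset importance levels.  The composite vector B = W . R
    is resolved to a level by the maximum-membership principle.
    """
    w = np.asarray(weights, dtype=float)
    r = np.asarray(membership, dtype=float)
    b = w @ r                      # composite membership vector
    return levels[int(np.argmax(b))]
```

The subarea's importance level then determines how many fire video monitoring points it receives.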
3. The comprehensive pipe rack fire safety early-warning method based on image processing according to claim 2, wherein the process of constructing the comprehensive pipe rack whole-area monitoring visual view comprises:
obtaining the physical entities of the municipally laid pipelines in the current comprehensive pipe rack fire management area, forming a comprehensive pipe rack map from the position characteristics of, and the pipe rack connection relations among, the comprehensive pipe rack subareas in the current comprehensive pipe rack fire management area, and mapping the physical entities of the municipally laid pipelines onto the comprehensive pipe rack map through three-dimensional modeling to obtain a three-dimensional model;
and acquiring the multi-source video data of each fire video monitoring point in each comprehensive pipe rack subarea, preprocessing the data format of the multi-source video data to obtain twin data, and matching the twin data to the three-dimensional model according to the assembly connection relations of the fire video monitoring points with the comprehensive pipe rack subareas in physical space, to obtain the comprehensive pipe rack whole-area monitoring visual view.
4. The comprehensive pipe rack fire safety early-warning method based on image processing according to claim 3, wherein the process of screening out video frame images containing suspicious regions from the video data of each fire video monitoring point comprises:
acquiring the video data of each fire video monitoring point, converting the acquired video data into continuous video frame images, setting a reference video frame image for each fire video monitoring point, subtracting the pixel values at corresponding positions of a video frame image and the reference video frame image to obtain difference values, and converting the difference values into the pixel values of a binary image to generate the binary image;
setting two pixel-position traversal pointers that start traversing simultaneously from the first and the last pixel position of the binary image;
marking pixel regions of the binary image whose pixel values are greater than a preset pixel threshold as suspicious regions;
and marking pixel regions of the binary image whose pixel values are less than or equal to the preset pixel threshold as normal regions.
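The differencing-and-binarisation step of claim 4 can be sketched as below. Grayscale frames and the difference threshold of 40 are assumptions for illustration; the two-pointer traversal is omitted, since vectorised thresholding produces the same mask.

```python
import numpy as np

def suspicious_mask(frame, reference, diff_threshold=40):
    """Binarise a frame against the monitoring point's reference image.

    Pixels whose absolute grey-level difference from the reference
    exceeds diff_threshold (an assumed value) become foreground (255);
    connected foreground regions are the suspicious regions, and the
    remaining pixels (0) are the normal regions.
    """
    # Widen to int16 first so the subtraction of uint8 arrays cannot wrap.
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return np.where(diff > diff_threshold, 255, 0).astype(np.uint8)
```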
5. The comprehensive pipe rack fire safety early-warning method based on image processing according to claim 4, wherein the process of determining the number of flame sharp angles in the suspicious region through the three screening steps of setting an identification interval for flame sharp angles, comparing sharp angle average values at different stages within the identification interval, and testing the approximate triangle form of each flame sharp angle comprises:
selecting the pixel point at the center of the suspicious region in the binary image as the center pixel point, establishing a two-dimensional coordinate system with the center pixel point as the origin, mapping all pixel points of the suspicious region into the two-dimensional coordinate system, and obtaining the Euclidean distance between each boundary pixel point on the edge of the suspicious region and the center pixel point;
constructing a singly linked list, randomly selecting one boundary pixel point on the edge of the suspicious region in the binary image, entering the two-dimensional coordinates and Euclidean distances of all boundary pixel points on the edge of the suspicious region into the singly linked list clockwise from that starting point, determining the identification interval of a flame sharp angle as n boundary pixel points according to the total number of boundary pixel points, and marking each boundary pixel point whose Euclidean distance is greater than the Euclidean distances of the n/2 boundary pixel points in each of its left and right neighborhoods in the singly linked list as a possible flame sharp angle;
obtaining the Euclidean distance sums S1, S2 and S3 of the n/8, n/4 and n/2 boundary pixel points in the left and right neighborhoods of each possible flame sharp angle, and converting the possible flame sharp angle into a suspected flame sharp angle when S1 < S2 < S3;
and constructing an approximate triangle with the two-dimensional coordinates of the suspected flame sharp angle as the vertex and the two-dimensional coordinates of the n boundary pixel points in the flame sharp angle identification interval as the base, setting a standard tangent value for the half vertex angle of a flame sharp angle's approximate triangle, and confirming the suspected flame sharp angle as a flame sharp angle when the tangent of the half vertex angle of its approximate triangle is less than or equal to the standard tangent value.
6. The comprehensive pipe rack fire safety early-warning method based on image processing according to claim 5, wherein the process of judging, based on the number of flame sharp angles in the suspicious region, whether a fire has occurred in the region shown in the video frame image, and of determining the fire type, comprises:
selecting the number of flame sharp angles in the video frame image at second t and comparing it with a preset flame sharp angle number;
when the number of flame sharp angles is greater than the preset number, marking the video frame image as a fire source image and marking the suspicious region in the video frame image as a flame region;
when the number of flame sharp angles is less than or equal to the preset number, marking the video frame image as an image of some other interference source;
if the video frame image at second t is a fire source image, performing a difference-average operation between its number of flame sharp angles and the numbers of flame sharp angles in the video frame images at seconds t-1, t-2 and t-3, to obtain the sharp angle change average value of the video frame image at second t relative to the three immediately preceding frames;
setting a sharp angle change average threshold, and comparing the sharp angle change average value of the video frame image at second t with this threshold;
if the sharp angle change average value is greater than the threshold, marking the fire source in the video frame image as an out-of-control fire source;
and if the sharp angle change average value is less than or equal to the threshold, marking the fire source in the video frame image as a stable fire source.
7. The comprehensive pipe rack fire safety early-warning method based on image processing according to claim 6, wherein the process of constructing a regression model from the video frame images in which the fire occurs and obtaining the fire variation trend prediction result comprises:
sampling, at equal intervals, the continuous fire source images whose fire source is an out-of-control fire source to obtain k sample video frame images;
computing the flame area of each sample video frame image from the number of pixel points contained in its flame region, and computing the flame area centroid from the pixel values of the pixel points contained in the flame region;
performing curve fitting on the flame areas and flame area centroids of the k sample video frame images with a regression model, to obtain a flame area growth curve and a flame area centroid movement curve respectively;
and obtaining the flame area change trend and the flame spreading direction trend for the next video frame image from the flame area growth curve and the flame area centroid movement curve.
8. The comprehensive pipe rack fire safety early-warning method based on image processing according to claim 7, wherein the process of visually displaying the fire variation trend prediction result on the comprehensive pipe rack whole-area monitoring visual view comprises:
acquiring the analysis results of the video frame images of all fire video monitoring points; if the analysis result of a video frame image is a fire source image and the fire source is an out-of-control fire source, the comprehensive pipe rack whole-area monitoring visual view displays the comprehensive pipe rack subarea where the out-of-control fire source is located as a red alarm area;
if the analysis result of the video frame image is a fire source image and the fire source is a stable fire source, the comprehensive pipe rack whole-area monitoring visual view displays the comprehensive pipe rack subarea where the stable fire source is located as an orange alarm area;
and the comprehensive pipe rack whole-area monitoring visual view obtains the fire spreading arrival time and the fire spreading range for the non-alarm areas from the flame area change trend and the flame spreading direction trend of each red alarm area, and sets the non-alarm areas within the fire spreading range as time-phased early-warning areas according to the fire spreading arrival time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311268369.6A CN117011993A (en) | 2023-09-28 | 2023-09-28 | Comprehensive pipe rack fire safety early warning method based on image processing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117011993A true CN117011993A (en) | 2023-11-07 |
Family
ID=88562120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311268369.6A Pending CN117011993A (en) | 2023-09-28 | 2023-09-28 | Comprehensive pipe rack fire safety early warning method based on image processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117011993A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006252368A (en) * | 2005-03-14 | 2006-09-21 | Nohmi Bosai Ltd | Dustproof cover and viewing angle restriction adaptor for sensor |
CN101930541A (en) * | 2010-09-08 | 2010-12-29 | 大连古野软件有限公司 | Video-based flame detecting device and method |
CN102743830A (en) * | 2012-07-10 | 2012-10-24 | 西安交通大学 | Automatic electric switch cabinet fire extinguishing system and fire recognition method |
CN105788142A (en) * | 2016-05-11 | 2016-07-20 | 中国计量大学 | Video image processing-based fire detection system and detection method |
CN106845443A (en) * | 2017-02-15 | 2017-06-13 | 福建船政交通职业学院 | Video flame detecting method based on multi-feature fusion |
CN110135347A (en) * | 2019-05-16 | 2019-08-16 | 中国船舶重工集团公司第七0三研究所 | A kind of flame identification method based on video image |
CN209433517U (en) * | 2018-10-29 | 2019-09-24 | 西安工程大学 | It is a kind of based on more flame images and the fire identification warning device for combining criterion |
CN113819881A (en) * | 2021-09-09 | 2021-12-21 | 南阳中天防爆电气股份有限公司 | Fire source distance and map azimuth detection method for reconnaissance and inspection robot |
Non-Patent Citations (3)
Title |
---|
ZHOU QINGFENG; YANG XUANFANG; ZHANG XIANGYI: "Analysis of the morphological features of flame and interference images", Fire Science and Technology, no. 12 *
XI TINGYU; QIU XUANBING; SUN DONGYUAN; LI NING; LI CHUANLIANG; WANG GAO; YAN YU: "A fast flame recognition algorithm based on multi-feature logistic regression", Journal of Computer Applications, no. 07 *
ZHAO WEI; YU FANGFANG; FAN XIAOJING; ZHANG NANNAN: "Fire image recognition simulation for a UAV forest fire prevention ***", Computer Simulation, no. 09 *
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117711127A (en) * | 2023-11-08 | 2024-03-15 | 金舟消防工程(北京)股份有限公司 | Fire safety supervision method and system |
CN118015198A (en) * | 2024-04-09 | 2024-05-10 | 电子科技大学 | Pipe gallery fire risk assessment method based on convolutional neural network image processing |
CN118015198B (en) * | 2024-04-09 | 2024-06-18 | 电子科技大学 | Pipe gallery fire risk assessment method based on convolutional neural network image processing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN117011993A (en) | Comprehensive pipe rack fire safety early warning method based on image processing | |
CN108846335B (en) | Intelligent construction site area management and intrusion detection method and system based on video images | |
CN107767382B (en) | The extraction method and system of static three-dimensional map contour of building line | |
CN106930784B (en) | Tunnel monitoring method based on 3 D laser scanning | |
CN111080645B (en) | Remote sensing image semi-supervised semantic segmentation method based on generation type countermeasure network | |
CN107314819B (en) | A kind of detection of photovoltaic plant hot spot and localization method based on infrared image | |
CN101872526B (en) | Smoke and fire intelligent identification method based on programmable photographing technology | |
CN103491351A (en) | Intelligent video monitoring method for illegal buildings | |
CN103206957B (en) | The lane detection and tracking method of vehicular autonomous navigation | |
CN110084169B (en) | Illegal building identification method based on K-Means clustering and contour topology constraint | |
CN110990983A (en) | Water supply pipeline cross arrangement method | |
CN109753949B (en) | Multi-window traffic sign detection method based on deep learning | |
CN103886760B (en) | Real-time vehicle detecting system based on traffic video | |
CN101315701B (en) | Movement destination image partition method | |
CN103065494B (en) | Free parking space detection method based on computer vision | |
CN111076096B (en) | Gas pipe network leakage identification method and device | |
CN111369539B (en) | Building facade window detecting system based on multi-feature image fusion | |
CN111339905A (en) | CIM well lid state visual detection system based on deep learning and multi-view angle | |
CN103945197B (en) | Electric power facility external force damage prevention early warning scheme based on Video Motion Detection technology | |
CN105574468A (en) | Video flame detection method, device and system | |
CN114120141A (en) | All-weather remote sensing monitoring automatic analysis method and system thereof | |
CN110636281A (en) | Real-time monitoring camera shielding detection method based on background model | |
CN105374037A (en) | Checkerboard angular point automatic screening method of corner detection | |
CN110210428A (en) | A kind of smog root node detection method under remote complex environment based on MSER | |
CN103425959A (en) | Flame video detection method for identifying fire hazard |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||