CN116977634A - Fire smoke detection method based on laser radar point cloud background subtraction - Google Patents

Fire smoke detection method based on laser radar point cloud background subtraction

Info

Publication number
CN116977634A
Authority
CN
China
Prior art keywords
feature map
image
point cloud
size
multiplied
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310874836.3A
Other languages
Chinese (zh)
Other versions
CN116977634B (en)
Inventor
李泊宁 (Li Boning)
王力 (Wang Li)
张曦 (Zhang Xi)
李欣芮 (Li Xinrui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Fire Research Institute of MEM
Original Assignee
Shenyang Fire Research Institute of MEM
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Fire Research Institute of MEM filed Critical Shenyang Fire Research Institute of MEM
Priority to CN202310874836.3A priority Critical patent/CN116977634B/en
Publication of CN116977634A publication Critical patent/CN116977634A/en
Application granted granted Critical
Publication of CN116977634B publication Critical patent/CN116977634B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a fire smoke detection method based on laser radar point cloud background subtraction, which comprises the following steps: acquiring, through a laser radar, point cloud data of the monitored site in a smoke-free state as the background; preprocessing, including background subtraction, the real-time point cloud image acquired by the laser radar; segmenting the preprocessed image; performing smoke detection on each segmented image with the trained network model; and aggregating the results detected on the segmented images and displaying them on the original image. The method makes effective use of the point cloud echo data collected by the sensor, and has the advantages of high accuracy, long detection distance and good night-time monitoring performance.

Description

Fire smoke detection method based on laser radar point cloud background subtraction
Technical Field
The invention relates to the technical field of fire detection and alarm, combined with deep learning, and provides a fire smoke detection method based on laser radar point cloud background subtraction.
Background
According to domestic and foreign statistics, more than 10% of all fires worldwide are caused by common ignition sources in daily life, such as cigarette ends. In such fires, the initial stage typically involves a smoldering process that lasts for some time. Many common materials, such as paper, sawdust and fiber fabrics, are at risk of smoldering when they encounter high-temperature heat sources such as cigarette ends. Unlike an open fire, smoldering produces no obvious flame, but it does produce white or bluish-white smoke. If oxygen is sufficient, or if enough wind blows through, smoldering combustion is very likely to turn into open flame. Because smoldering generates no flame, it is extremely well concealed and difficult to discover in time at the initial stage. Detecting the smoke generated by smoldering therefore makes it possible to find the smoldering fire point at the initial stage of a fire, locate the hazard in time and handle it with effective means, greatly reducing the losses caused by fire spread.
Traditional smoke detection mainly relies on point-type smoke-sensing fire detectors, beam smoke-sensing fire detectors, image-type fire detectors and the like. The point-type smoke-sensing fire detector works on essentially the same principle as the beam fire detector: smoke particles entering the detection path attenuate the light between the emitting end and the receiving end; the sensor converts the change in light intensity into an electrical signal, and a fire is deemed to have occurred once the measured value reaches a preset threshold. These detectors are simple in design and easy to install, and to some extent solve the fire alarm problem in certain scenes. The image-type fire detector processes the images from a monitoring camera using image processing, deep learning and similar methods to recognize smoke, and has the advantage of wide coverage.
However, current fire detection methods still have problems: (1) Point-type and line-type fire detectors have limited coverage and typically require large-scale deployment to achieve effective detection. For example, the effective monitoring radius of a point-type smoke-sensing fire detector does not exceed ten meters, so a large number of units are needed to achieve all-round coverage of a monitored site; a line-type fire detector, as represented by the through-beam smoke-sensing fire detector, can only effectively monitor the region between the transmitter and the receiver and cannot cover the whole site. Both detection modes are also easily affected by dust, flying insects and the like, leading to a high false alarm rate. (2) The image-type fire detector can cover a large area of a monitored site by running image processing algorithms on the video stream, but because the shape of smoldering smoke is highly variable, the accuracy of the algorithm cannot be guaranteed, and at present no relevant standard measures the smoke detection performance of image-type fire detectors. Image-type fire detectors are also strongly disturbed by the environment: strong light, plastic bags and white objects easily cause false alarms, and in scenes with poor illumination, such as at night, their reliability drops or they fail to work at all.
Disclosure of Invention
In view of the above problems, the present invention proposes a new fire detection method: a fire smoke detection method based on laser radar point cloud background subtraction, the method comprising:
s1: acquiring a point cloud image when no smoke exists in a monitoring place through a laser radar, and taking the point cloud image as a background image;
s2: acquiring a real-time point cloud image acquired by a laser radar, and preprocessing the real-time point cloud image based on the background image;
s3: segmenting the preprocessed point cloud image to obtain a plurality of segmented images;
s4: respectively carrying out smoke detection on each segmented image by using the trained network model to obtain a smoke detection result corresponding to each segmented image;
s5: and aggregating and displaying smoke detection results corresponding to each segmented image on the real-time point cloud image.
Optionally, preprocessing the real-time point cloud image based on the background image in step S2 includes:
differencing the background image with the real-time point cloud image to obtain a first background-subtracted point cloud image;
differencing the background-subtracted point cloud image with the background image again to obtain a second point cloud image;
and performing a closing operation on the second point cloud image, using OpenCV's morphologyEx function with the MORPH_CLOSE operation, to obtain the preprocessed point cloud image.
Optionally, step S3 of segmenting the preprocessed point cloud image to obtain a plurality of segmented images includes:
segmenting the preprocessed point cloud image into two segmented images, namely a first segmented image and a second segmented image, wherein an overlapping region exists between the first segmented image and the second segmented image.
Optionally, step S4 of performing smoke detection on each segmented image using the trained network model to obtain a smoke detection result corresponding to each segmented image includes:
initializing two threads, including a first thread and a second thread;
reading the first segmented image and the second segmented image through the first thread and the second thread, respectively;
and performing smoke detection on the first segmented image and the second segmented image read by the first thread and the second thread, respectively, using a Shuffle-vit-YOLO network model, so as to obtain a first smoke detection frame corresponding to the first segmented image and a second smoke detection frame corresponding to the second segmented image.
Optionally, the Shuffle-vit-YOLO network model is composed of convolution, upsampling and concatenation operations, Shuffle-vit modules, and Shuffle-vit-a modules.
Optionally, the Shuffle-vit module and the Shuffle-vit-a module specifically execute the following steps:
step A01: performing a convolution operation on the input feature X using a grouped convolution with a 1×1 kernel to obtain a feature F01;
step A02: scattering and regrouping the feature F01 obtained in step A01 channel by channel (channel shuffle) using the torch.transpose function of the PyTorch framework to obtain a feature F02;
step A03: performing a convolution operation on the feature F02 obtained in step A02 using a depthwise separable convolution with a 3×3 kernel to obtain a feature F03;
step A04: performing a convolution operation on the feature F03 using a grouped convolution with a 1×1 kernel to obtain a feature F04;
step A05: adding the feature F04 obtained in step A04 to the input feature X to obtain a feature F05;
step A06: performing a ReLU operation on the feature F05 obtained in step A05 to obtain a feature F06;
step A07: performing a convolution operation on the input feature X using a depthwise separable convolution with a 3×3 kernel to obtain a feature F07;
step A08: performing a convolution operation on the feature F07 obtained in step A07 using a convolution with a 1×1 kernel to obtain a feature F08;
step A09: feeding the feature F08 obtained in step A08 into the Transformer module, in which the feature F08 is first subjected to a partitioning operation that turns it into a plurality of blocks, denoted P = Rearrange(F08);
wherein Rearrange denotes the rearrange function in the Einops function library;
step A10: in the Transformer module, concatenating the blocks P obtained in step A09 with a position code to obtain a feature F10 = Concat(P, pos);
wherein Concat denotes the concatenation operation in PyTorch, and pos denotes a random position code of dimension 1 generated using the randn function in PyTorch;
step A11: mapping the feature F10 obtained in step A10 to the matrices Q = Linear(F10), K = Linear(F10) and V = Linear(F10), respectively;
wherein Linear denotes the nn.Linear linear mapping function in the PyTorch framework, Q is the query, K is the key, and V is the value;
step A12: computing the self-attention of the feature matrices Q, K and V obtained in step A11 to obtain a feature F12 = softmax(d(Q, K)) · V;
wherein d(Q, K) denotes the Euclidean distance between the query matrix Q and the key matrix K, and softmax denotes the softmax function;
step A13: performing a splicing operation on the feature F08 obtained in step A08 and the feature F12 to obtain a feature F13;
step A14: performing a convolution operation on the feature F13 obtained in step A13 using a depthwise separable convolution with a 3×3 kernel to obtain a feature F14;
step A15: splicing the feature F06 obtained in step A06 with the feature F14 obtained in step A14 to obtain a feature F15, the output of the Shuffle-vit module;
step A16: adding the feature F06 obtained in step A06 to the feature F14 obtained in step A14 to obtain a feature F16, the output of the Shuffle-vit-a module.
Optionally, for the first segmented image or the second segmented image, performing smoke detection using the Shuffle-vit-YOLO network model includes:
step B01: resizing the read-in segmented image to a size of 640×640×3;
step B02: performing a convolution operation on the image obtained in step B01 to obtain a feature map of size 320×320×64;
step B03: performing a convolution operation on the feature map obtained in step B02 and sending the result to a Shuffle-vit module corresponding to steps A01-A14 to obtain a feature map of size 160×160×128;
step B04: performing a convolution operation on the feature map obtained in step B03 and sending the result to a Shuffle-vit module corresponding to steps A01-A15 to obtain a feature map of size 80×80×256;
step B05: performing a convolution operation on the feature map obtained in step B04 and sending the result to a Shuffle-vit module corresponding to steps A01-A15 to obtain a feature map of size 40×40×512;
step B06: performing a convolution operation on the feature map obtained in step B05 and sending the result to a Shuffle-vit module corresponding to steps A01-A15 to obtain a feature map of size 20×20×1024;
step B07: performing a convolution operation on the feature map obtained in step B06 to obtain a feature map of size 20×20×512;
step B08: performing an upsampling operation on the feature map obtained in step B07 to obtain a feature map of size 40×40×512;
step B09: splicing the feature map obtained in step B08 with the feature map obtained in step B05 to obtain a feature map of size 40×40×1024;
step B10: sending the feature map obtained in step B09 to a Shuffle-vit-a module corresponding to steps A01-A16 to obtain a feature map of size 40×40×512;
step B11: performing a convolution operation on the feature map obtained in step B10 to obtain a feature map of size 40×40×256;
step B12: performing an upsampling operation on the feature map obtained in step B11 to obtain a feature map of size 80×80×256;
step B13: splicing the feature map obtained in step B12 with the feature map obtained in step B04 to obtain a feature map of size 80×80×512;
step B14: sending the feature map obtained in step B13 to a Shuffle-vit-a module corresponding to steps A01-A16 to obtain a feature map of size 80×80×256;
step B15: sending the feature map obtained in step B14 to a YOLO head to obtain a detection result for smoke of a first size;
step B16: performing a convolution operation on the feature map obtained in step B14 to obtain a feature map of size 40×40×256;
step B17: splicing the feature map obtained in step B16 with the feature map obtained in step B11 and sending the result to a Shuffle-vit module corresponding to steps A01-A15 to obtain a feature map of size 40×40×512;
step B18: sending the feature map obtained in step B17 to a YOLO head to obtain a detection result for smoke of a second size, the second size being greater than the first size;
step B19: performing a convolution operation on the feature map obtained in step B17 to obtain a feature map of size 20×20×512;
step B20: splicing the feature map obtained in step B19 with the feature map obtained in step B07 and sending the result to a Shuffle-vit module corresponding to steps A01-A15 to obtain a feature map of size 20×20×1024;
step B21: sending the feature map obtained in step B20 to a YOLO head to obtain a detection result for smoke of a third size, the third size being greater than the second size;
step B22: synthesizing the detection results of steps B15, B18 and B21 to finally obtain the coordinates of the first smoke bounding box corresponding to the first segmented image or the coordinates of the second smoke bounding box corresponding to the second segmented image.
Optionally, step S5 of aggregating and displaying the smoke detection results corresponding to each segmented image on the real-time point cloud image includes:
fusing the first smoke bounding box and the second smoke bounding box, detected by the first thread and the second thread respectively, with the real-time point cloud image to obtain the final smoke alarm scene image.
The fire smoke detection algorithm based on laser radar point cloud background subtraction provided by the invention comprises four stages: data acquisition, preprocessing, detection and fusion. Data acquisition is responsible for obtaining data from the laser radar in real time; preprocessing prepares the point cloud image acquired by the sensor; detection runs the network model on the preprocessed images; and fusion merges the detection results into the final output. The method effectively exploits the laser radar's wide detection range, long detection distance and immunity to dark night-time conditions, achieving high accuracy and good night-time monitoring performance.
The foregoing is merely an overview of the technical solution of the present invention. In order that the technical means of the present invention may be understood more clearly and implemented in accordance with the contents of the specification, and in order to make the above and other objects, features and advantages of the present invention more readily apparent, preferred embodiments are described below.
The above, as well as additional objectives, advantages, and features of the present invention will become apparent to those skilled in the art from the following detailed description of a specific embodiment of the present invention when read in conjunction with the accompanying drawings.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 shows an overall flow diagram of an embodiment of the present invention;
FIG. 2 shows a schematic diagram of a Shuffle-vit module of an embodiment of the invention;
FIG. 3 shows a schematic diagram of a detection module according to an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
The embodiment of the invention provides a fire smoke detection method based on laser radar point cloud background subtraction, and the detailed process is described as follows.
Step S1, acquiring a point cloud image when no smoke exists in a monitoring place through a laser radar, and taking the point cloud image as a background image.
A point cloud image of the monitored site in a fire- and smoke-free state, denoted I_bg, is acquired through the laser radar and taken as the background image.
The laser radar hardware requirements are:
Measuring range: 150 m @ 10% reflectivity
Field of view: 120° (horizontal) × 25° (vertical)
Angular resolution: 0.18° (horizontal) × 0.23° (vertical)
Data rate: 450,000 points/second
Point size: 4 pixels
In step S2, a real-time point cloud image acquired by the laser radar is obtained and preprocessed based on the background image.
S2-1: acquiring real-time point cloud images of monitoring places through laser radar,/>The resolution of (2) is 2048×1080.
S2-2: for the real-time point cloud image obtained in the step S2-1And the background image obtained in the step S2-1Making difference to obtain a first point cloud image +.>
Step S2-3: point cloud image obtained by background subtraction in step S2-2Background image obtained with S2-1 +.>Making difference to obtain a second point cloud image after background subtraction>
Step S2-4: and (3) performing a closing operation on the image obtained in the step (S2-3) after the background subtraction, and taking the obtained result as a preprocessed point cloud image.
Wherein the method comprises the steps ofThe MORPH_CLOSE function in OpenCV is used.
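For illustration only, the preprocessing chain of steps S2-2 to S2-4 can be sketched in Python with OpenCV as follows; the use of absdiff for the differencing and the 5×5 structuring element are assumptions of this sketch, not values fixed by the invention:

```python
import cv2
import numpy as np

def preprocess(realtime_img: np.ndarray, background_img: np.ndarray) -> np.ndarray:
    """Background subtraction (S2-2, S2-3) followed by a morphological closing (S2-4)."""
    first_diff = cv2.absdiff(realtime_img, background_img)   # S2-2: difference with the background
    second_diff = cv2.absdiff(first_diff, background_img)    # S2-3: difference with the background again
    kernel = np.ones((5, 5), np.uint8)                       # illustrative structuring element
    return cv2.morphologyEx(second_diff, cv2.MORPH_CLOSE, kernel)  # S2-4: closing operation
```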
In step S3, the point cloud image preprocessed in step S2 is segmented to obtain a plurality of segmented images. In this embodiment, the preprocessed point cloud image is segmented into two segmented images, a first segmented image I_s1 and a second segmented image I_s2; an overlapping region exists between the first segmented image and the second segmented image.
Specifically, the image with a resolution of 2048×1080 is segmented into two parts I_s1 and I_s2, each of size 1080×1080, with a 112×1080 overlapping region between them.
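A minimal sketch of this segmentation, assuming a NumPy array in height×width order; the crop size follows directly from the 1080-pixel image height:

```python
import numpy as np

def split_with_overlap(img: np.ndarray):
    """Split a 2048x1080 point cloud image into two 1080x1080 crops with a 112x1080 overlap."""
    h, w = img.shape[:2]     # expected (1080, 2048)
    left = img[:, :h]        # columns 0..1079
    right = img[:, w - h:]   # columns 968..2047; overlap = 2*1080 - 2048 = 112 columns
    return left, right
```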
In step S4, smoke detection is performed on each segmented image using the trained network model to obtain a smoke detection result corresponding to each segmented image.
Step S4-1: two threads are initialized: the first Thread1 and the second Thread2 respectively read the first partial image obtained in step S3And a second slit image->Main thread main read step S2 is not preprocessed and segmented real-time point cloud image +.>And (5) carrying out subsequent fusion.
Step S4-2: first dividing the read image in the first Thread1 and the second Thread2 initialized in step S4-1, respectivelyAnd a second slit image->Smoke detection was performed using the Shuffle-vit-YOLO network model.
The Shuffle-vit-YOLO network model, referred to above as the trained network model, is composed of convolution, upsampling and concatenation operations together with Shuffle-vit and Shuffle-vit-a modules.
The Shuffle-vit and Shuffle-vit-a modules specifically include the following steps:
Step A01: a convolution operation is performed on the input feature X using a grouped convolution with a 1×1 kernel to obtain a feature F01.
Step A02: the feature F01 obtained in step A01 is scattered and regrouped channel by channel (channel shuffle) using the torch.transpose function of the PyTorch framework to obtain a feature F02.
Step A03: a convolution operation is performed on the feature F02 obtained in step A02 using a depthwise separable convolution with a 3×3 kernel to obtain a feature F03.
Step A04: a convolution operation is performed on the feature F03 using a grouped convolution with a 1×1 kernel to obtain a feature F04.
Step A05: the feature F04 obtained in step A04 is added to the input feature X to obtain a feature F05.
Step A06: a ReLU operation is performed on the feature F05 obtained in step A05 to obtain a feature F06.
Step A07: a convolution operation is performed on the input feature X using a depthwise separable convolution with a 3×3 kernel to obtain a feature F07.
Step A08: a convolution operation is performed on the feature F07 obtained in step A07 using a convolution with a 1×1 kernel to obtain a feature F08.
Step A09: the feature F08 obtained in step A08 is fed into the Transformer module, in which F08 is first subjected to a partitioning operation that turns it into a plurality of blocks, denoted P = Rearrange(F08).
Here Rearrange denotes the rearrange function in the Einops function library.
Step A10: in the Transformer module, the blocks P obtained in step A09 are concatenated with a position code to obtain a feature F10 = Concat(P, pos).
Here Concat denotes the concatenation operation in PyTorch, and pos denotes a random position code of dimension 1 generated using the randn function in PyTorch.
Step A11: the feature F10 obtained in step A10 is mapped to the matrices Q = Linear(F10) (query), K = Linear(F10) (key) and V = Linear(F10) (value), respectively.
Here Linear denotes the nn.Linear linear mapping function in the PyTorch framework.
Step A12: the self-attention of the feature matrices Q, K and V obtained in step A11 is computed to obtain a feature F12 = softmax(d(Q, K)) · V.
Here d(Q, K) denotes the Euclidean distance between the query matrix Q and the key matrix K, and softmax denotes the softmax function.
Step A13: a splicing operation is performed on the feature F08 obtained in step A08 and the feature F12 to obtain a feature F13.
Step A14: a convolution operation is performed on the feature F13 obtained in step A13 using a depthwise separable convolution with a 3×3 kernel to obtain a feature F14.
Step A15: the feature F06 obtained in step A06 is spliced with the feature F14 obtained in step A14 to obtain a feature F15, the output of the Shuffle-vit module.
Step A16: the feature F06 obtained in step A06 is added to the feature F14 obtained in step A14 to obtain a feature F16, the output of the Shuffle-vit-a module.
The Shuffle-vit-YOLO network model specifically includes the following steps:
step B01: the image feature map read in step a01 is adjusted to a size of 640×640×3.
Step B02: and (3) performing convolution operation on the image obtained in the step B01 to obtain a characteristic diagram with the size of 320 multiplied by 64.
Step B03: and (3) carrying out convolution operation on the feature map obtained in the step B02, and sending the feature map to a Shuffle-vit module corresponding to the step A01-the step A14 to obtain the feature map with the size of 160 multiplied by 128.
Step B04: and (3) carrying out convolution operation on the feature map obtained in the step B03, and sending the feature map to a Shuffle-vit module corresponding to the A01-A15 to obtain the feature map with the size of 80 multiplied by 256.
Step B05: and B04, carrying out convolution operation on the feature map and sending the feature map to a Shuffle-vit module corresponding to A01-A15 to obtain a feature map with the size of 40 multiplied by 512.
Step B06: and (3) carrying out convolution operation on the feature map obtained in the step B05 and sending the feature map to a Shuffle-vit module corresponding to the step A01-the step A15 to obtain a feature map with the size of 20 multiplied by 1024.
Step B07: and (3) performing convolution operation on the feature map obtained in the step B06 to obtain a feature map with the size of 20 multiplied by 512.
Step B08: the feature map obtained in step B07 is up-sampled to obtain a feature map having a size of 40×40×512.
Step B09: and (3) splicing the characteristic map obtained in the step B08 with the characteristic map obtained in the step B05 to obtain the characteristic map with the size of 40 multiplied by 1024.
Step B10: and (3) sending the feature map obtained in the step B09 to a Shuffle-vit-a module corresponding to the step A01-step A16 to obtain a feature map with the size of 40 multiplied by 512.
Step B11: and (3) performing convolution operation on the feature map obtained in the step (B10) to obtain a feature map with the size of 40 multiplied by 256.
Step B12: and (3) performing up-sampling operation on the feature map obtained in the step B11 to obtain a feature map with the size of 80 multiplied by 256.
Step B13: and (3) splicing the characteristic diagram obtained in the step B12 with the characteristic diagram obtained in the step B04 to obtain the characteristic diagram with the size of 80 multiplied by 512.
Step B14: and (3) sending the feature map obtained in the step B13 to a Shuffle-vit-a module corresponding to the step A01-step A16 to obtain the feature map with the size of 80 multiplied by 256.
Step B15: and B, sending the feature map obtained in the step B14 into a YOLO head to obtain a detection result of the small-size smoke.
Step B16: and (3) performing convolution operation on the feature map obtained in the step (B14) to obtain a feature map with the size of 40 multiplied by 256.
Step B17: and C, splicing the feature map obtained in the step B16 with the feature map obtained in the step B11, and sending the feature map to a Shuffle-vit module corresponding to the step A01-the step A15 to obtain the feature map with the size of 40 multiplied by 512.
Step B18: and B17, sending the feature map obtained in the step B17 into a YOLO head to obtain a detection result of medium-size smoke.
Step B19: and (3) performing convolution operation on the feature map obtained in the step B17 to obtain a feature map with the size of 20 multiplied by 512.
Step B20: and C, splicing the feature map obtained in the step B19 with the feature map obtained in the step B07, and sending the feature map to a Shuffle-vit module corresponding to the step A01-the step A15 to obtain the feature map with the size of 20 multiplied by 1024.
Step B21: and B20, sending the feature map obtained in the step into a YOLO head to obtain a detection result of the large-size smoke.
Step B22: and (3) integrating the smoke detection results in the step (B15), the step (B18) and the step (B21), and outputting the coordinates of the boundary box of the finally detected smoke.
In step S5, the smoke detection results corresponding to each segmented image are aggregated and displayed on the real-time point cloud image.
The first smoke bounding box and the second smoke bounding box, detected by the first thread Thread1 and the second thread Thread2 respectively, are fused with the real-time point cloud image I_t to obtain the final smoke alarm scene image.
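A minimal sketch of this fusion, assuming xyxy pixel-coordinate boxes and the 1080×1080 crops of step S3 (the 968-column offset follows from the 112-pixel overlap):

```python
def fuse_boxes(boxes_left, boxes_right, img_width=2048, crop=1080):
    """Map boxes from the two segmented images back into 2048x1080 scene coordinates."""
    offset = img_width - crop                 # the right crop starts at column 968
    fused = [tuple(b) for b in boxes_left]    # left-segment boxes already align with the scene
    for x1, y1, x2, y2 in boxes_right:
        fused.append((x1 + offset, y1, x2 + offset, y2))
    # duplicate detections inside the 112-pixel overlap can be removed with the NMS shown above
    return fused  # draw these boxes on the real-time point cloud image
```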
The fire smoke detection method based on laser radar point cloud background subtraction provided by the embodiment of the invention makes effective use of the point cloud echo data acquired by the sensor, and exploits the laser radar's wide detection range, long detection distance and immunity to dark night-time illumination to achieve high accuracy and good night-time monitoring performance.
It will be clear to those skilled in the art that the specific working processes of the above-described systems, devices, modules and units may refer to the corresponding processes in the foregoing method embodiments, and for brevity, the description is omitted here.
In addition, each functional unit in the embodiments of the present invention may be physically independent, two or more functional units may be integrated together, or all functional units may be integrated in one processing unit. The integrated functional units may be implemented in hardware or in software or firmware.
Those of ordinary skill in the art will appreciate that: the integrated functional units, if implemented in software and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied, in essence or in whole or in part, in the form of a software product stored in a storage medium, comprising instructions for causing a computing device (e.g., a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention when the instructions are executed. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
Alternatively, all or part of the steps of implementing the foregoing method embodiments may be implemented by hardware (such as a personal computer, a server, or a computing device such as a network device) associated with program instructions, where the program instructions may be stored on a computer-readable storage medium, and where the program instructions, when executed by a processor of the computing device, perform all or part of the steps of the method according to the embodiments of the present invention.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all technical features thereof can be replaced by others within the spirit and principle of the present invention; such modifications and substitutions do not depart from the scope of the invention.

Claims (8)

1. The fire smoke detection method based on laser radar point cloud background subtraction is characterized by comprising the following steps of:
s1: acquiring a point cloud image when no smoke exists in a monitoring place through a laser radar, and taking the point cloud image as a background image;
s2: acquiring a real-time point cloud image acquired by a laser radar, and preprocessing the real-time point cloud image based on the background image;
s3: segmenting the preprocessed point cloud image to obtain a plurality of segmented images;
s4: respectively carrying out smoke detection on each segmented image by using the trained network model to obtain a smoke detection result corresponding to each segmented image;
s5: and aggregating and displaying smoke detection results corresponding to each segmented image on the real-time point cloud image.
2. The method of claim 1, wherein preprocessing the real-time point cloud image based on the background image in step S2 comprises:
differencing the background image with the real-time point cloud image to obtain a first background-subtracted point cloud image;
differencing the background-subtracted point cloud image with the background image again to obtain a second point cloud image;
and performing a closing operation on the second point cloud image, using OpenCV's morphologyEx function with the MORPH_CLOSE operation, to obtain the preprocessed point cloud image.
3. The method of claim 1, wherein step S3 of segmenting the preprocessed point cloud image to obtain a plurality of segmented images comprises:
segmenting the preprocessed point cloud image into two segmented images, namely a first segmented image and a second segmented image, wherein an overlapping region exists between the first segmented image and the second segmented image.
4. The method according to claim 3, wherein step S4 of performing smoke detection on each segmented image using the trained network model to obtain a smoke detection result corresponding to each segmented image comprises:
initializing two threads, including a first thread and a second thread;
reading the first segmented image and the second segmented image through the first thread and the second thread, respectively;
and performing smoke detection on the first segmented image and the second segmented image read by the first thread and the second thread, respectively, using a Shuffle-vit-YOLO network model, so as to obtain a first smoke detection frame corresponding to the first segmented image and a second smoke detection frame corresponding to the second segmented image.
5. The method of claim 4, wherein the Shuffle-vit-YOLO network model is composed of convolution, upsampling and concatenation operations, Shuffle-vit modules, and Shuffle-vit-a modules.
6. The method of claim 5, wherein the Shuffle-vit module and the Shuffle-vit-a module specifically perform the following steps:
step A01: performing a convolution operation on the input feature X using a grouped convolution with a 1×1 kernel to obtain a feature F01;
step A02: scattering and regrouping the feature F01 obtained in step A01 channel by channel (channel shuffle) using the torch.transpose function of the PyTorch framework to obtain a feature F02;
step A03: performing a convolution operation on the feature F02 obtained in step A02 using a depthwise separable convolution with a 3×3 kernel to obtain a feature F03;
step A04: performing a convolution operation on the feature F03 using a grouped convolution with a 1×1 kernel to obtain a feature F04;
step A05: adding the feature F04 obtained in step A04 to the input feature X to obtain a feature F05;
step A06: performing a ReLU operation on the feature F05 obtained in step A05 to obtain a feature F06;
step A07: performing a convolution operation on the input feature X using a depthwise separable convolution with a 3×3 kernel to obtain a feature F07;
step A08: performing a convolution operation on the feature F07 obtained in step A07 using a convolution with a 1×1 kernel to obtain a feature F08;
step A09: feeding the feature F08 obtained in step A08 into the Transformer module, in which the feature F08 is first subjected to a partitioning operation that turns it into a plurality of blocks, denoted P = Rearrange(F08);
wherein Rearrange denotes the rearrange function in the Einops function library;
step A10: in the Transformer module, concatenating the blocks P obtained in step A09 with a position code to obtain a feature F10 = Concat(P, pos);
wherein Concat denotes the concatenation operation in PyTorch, and pos denotes a random position code of dimension 1 generated using the randn function in PyTorch;
step A11: mapping the feature F10 obtained in step A10 to the matrices Q = Linear(F10), K = Linear(F10) and V = Linear(F10), respectively;
wherein Linear denotes the nn.Linear linear mapping function in the PyTorch framework, Q is the query, K is the key, and V is the value;
step A12: computing the self-attention of the feature matrices Q, K and V obtained in step A11 to obtain a feature F12 = softmax(d(Q, K)) · V;
wherein d(Q, K) denotes the Euclidean distance between the query matrix Q and the key matrix K, and softmax denotes the softmax function;
step A13: performing a splicing operation on the feature F08 obtained in step A08 and the feature F12 to obtain a feature F13;
step A14: performing a convolution operation on the feature F13 obtained in step A13 using a depthwise separable convolution with a 3×3 kernel to obtain a feature F14;
step A15: splicing the feature F06 obtained in step A06 with the feature F14 obtained in step A14 to obtain a feature F15, the output of the Shuffle-vit module;
step A16: adding the feature F06 obtained in step A06 to the feature F14 obtained in step A14 to obtain a feature F16, the output of the Shuffle-vit-a module.
7. The method of claim 6, wherein, for the first segmented image or the second segmented image, performing smoke detection using the Shuffle-vit-YOLO network model comprises:
step B01: resizing the read-in segmented image to a size of 640×640×3;
step B02: performing a convolution operation on the image obtained in step B01 to obtain a feature map of size 320×320×64;
step B03: performing a convolution operation on the feature map obtained in step B02 and sending the result to a Shuffle-vit module corresponding to steps A01-A14 to obtain a feature map of size 160×160×128;
step B04: performing a convolution operation on the feature map obtained in step B03 and sending the result to a Shuffle-vit module corresponding to steps A01-A15 to obtain a feature map of size 80×80×256;
step B05: performing a convolution operation on the feature map obtained in step B04 and sending the result to a Shuffle-vit module corresponding to steps A01-A15 to obtain a feature map of size 40×40×512;
step B06: performing a convolution operation on the feature map obtained in step B05 and sending the result to a Shuffle-vit module corresponding to steps A01-A15 to obtain a feature map of size 20×20×1024;
step B07: performing a convolution operation on the feature map obtained in step B06 to obtain a feature map of size 20×20×512;
step B08: performing an upsampling operation on the feature map obtained in step B07 to obtain a feature map of size 40×40×512;
step B09: splicing the feature map obtained in step B08 with the feature map obtained in step B05 to obtain a feature map of size 40×40×1024;
step B10: sending the feature map obtained in step B09 to a Shuffle-vit-a module corresponding to steps A01-A16 to obtain a feature map of size 40×40×512;
step B11: performing a convolution operation on the feature map obtained in step B10 to obtain a feature map of size 40×40×256;
step B12: performing an upsampling operation on the feature map obtained in step B11 to obtain a feature map of size 80×80×256;
step B13: splicing the feature map obtained in step B12 with the feature map obtained in step B04 to obtain a feature map of size 80×80×512;
step B14: sending the feature map obtained in step B13 to a Shuffle-vit-a module corresponding to steps A01-A16 to obtain a feature map of size 80×80×256;
step B15: sending the feature map obtained in step B14 to a YOLO head to obtain a detection result for smoke of a first size;
step B16: performing a convolution operation on the feature map obtained in step B14 to obtain a feature map of size 40×40×256;
step B17: splicing the feature map obtained in step B16 with the feature map obtained in step B11 and sending the result to a Shuffle-vit module corresponding to steps A01-A15 to obtain a feature map of size 40×40×512;
step B18: sending the feature map obtained in step B17 to a YOLO head to obtain a detection result for smoke of a second size, the second size being greater than the first size;
step B19: performing a convolution operation on the feature map obtained in step B17 to obtain a feature map of size 20×20×512;
step B20: splicing the feature map obtained in step B19 with the feature map obtained in step B07 and sending the result to a Shuffle-vit module corresponding to steps A01-A15 to obtain a feature map of size 20×20×1024;
step B21: sending the feature map obtained in step B20 to a YOLO head to obtain a detection result for smoke of a third size, the third size being greater than the second size;
step B22: synthesizing the detection results of steps B15, B18 and B21 to finally obtain the coordinates of the first smoke bounding box corresponding to the first segmented image or the coordinates of the second smoke bounding box corresponding to the second segmented image.
8. The method according to any one of claims 4 to 7, wherein step S5 of aggregating and displaying the smoke detection results corresponding to each segmented image on the real-time point cloud image comprises:
fusing the first smoke bounding box and the second smoke bounding box, detected by the first thread and the second thread respectively, with the real-time point cloud image to obtain the final smoke alarm scene image.
CN202310874836.3A 2023-07-17 2023-07-17 Fire smoke detection method based on laser radar point cloud background subtraction Active CN116977634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310874836.3A CN116977634B (en) 2023-07-17 2023-07-17 Fire smoke detection method based on laser radar point cloud background subtraction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310874836.3A CN116977634B (en) 2023-07-17 2023-07-17 Fire smoke detection method based on laser radar point cloud background subtraction

Publications (2)

Publication Number Publication Date
CN116977634A true CN116977634A (en) 2023-10-31
CN116977634B CN116977634B (en) 2024-01-23

Family

ID=88470619

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310874836.3A Active CN116977634B (en) 2023-07-17 2023-07-17 Fire smoke detection method based on laser radar point cloud background subtraction

Country Status (1)

Country Link
CN (1) CN116977634B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107749067A (en) * 2017-09-13 2018-03-02 华侨大学 Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks
CN112633231A (en) * 2020-12-30 2021-04-09 珠海大横琴科技发展有限公司 Fire disaster identification method and device
CN112861635A (en) * 2021-01-11 2021-05-28 西北工业大学 Fire and smoke real-time detection method based on deep learning
CN113537226A (en) * 2021-05-18 2021-10-22 哈尔滨理工大学 Smoke detection method based on deep learning
CN113793472A (en) * 2021-09-15 2021-12-14 应急管理部沈阳消防研究所 Image type fire detector pose estimation method based on feature depth aggregation network
CN114386493A (en) * 2021-12-27 2022-04-22 天翼物联科技有限公司 Fire detection method, system, device and medium based on flame vision virtualization
CN114677406A (en) * 2021-11-29 2022-06-28 西安交远能源科技有限公司 Method for identifying petroleum flame from video stream by using accumulated frame difference and color statistics
CN114972732A (en) * 2022-05-17 2022-08-30 深圳市爱深盈通信息技术有限公司 Smoke and fire detection method, device, equipment and computer readable storage medium
CN115082817A (en) * 2021-03-10 2022-09-20 中国矿业大学(北京) Flame identification and detection method based on improved convolutional neural network
CN116092259A (en) * 2023-01-19 2023-05-09 北醒(北京)光子科技有限公司 Smoke identification method, processing method, device, storage medium and electronic equipment
CN116189037A (en) * 2022-12-23 2023-05-30 深圳太极数智技术有限公司 Flame detection identification method and device and terminal equipment
CN116434002A (en) * 2023-03-24 2023-07-14 国网河北省电力有限公司电力科学研究院 Smoke detection method, system, medium and equipment based on lightweight neural network

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107749067A (en) * 2017-09-13 2018-03-02 华侨大学 Fire hazard smoke detecting method based on kinetic characteristic and convolutional neural networks
CN112633231A (en) * 2020-12-30 2021-04-09 珠海大横琴科技发展有限公司 Fire disaster identification method and device
CN112861635A (en) * 2021-01-11 2021-05-28 西北工业大学 Fire and smoke real-time detection method based on deep learning
CN115082817A (en) * 2021-03-10 2022-09-20 中国矿业大学(北京) Flame identification and detection method based on improved convolutional neural network
CN113537226A (en) * 2021-05-18 2021-10-22 哈尔滨理工大学 Smoke detection method based on deep learning
CN113793472A (en) * 2021-09-15 2021-12-14 应急管理部沈阳消防研究所 Image type fire detector pose estimation method based on feature depth aggregation network
CN114677406A (en) * 2021-11-29 2022-06-28 西安交远能源科技有限公司 Method for identifying petroleum flame from video stream by using accumulated frame difference and color statistics
CN114386493A (en) * 2021-12-27 2022-04-22 天翼物联科技有限公司 Fire detection method, system, device and medium based on flame vision virtualization
CN114972732A (en) * 2022-05-17 2022-08-30 深圳市爱深盈通信息技术有限公司 Smoke and fire detection method, device, equipment and computer readable storage medium
CN116189037A (en) * 2022-12-23 2023-05-30 深圳太极数智技术有限公司 Flame detection identification method and device and terminal equipment
CN116092259A (en) * 2023-01-19 2023-05-09 北醒(北京)光子科技有限公司 Smoke identification method, processing method, device, storage medium and electronic equipment
CN116434002A (en) * 2023-03-24 2023-07-14 国网河北省电力有限公司电力科学研究院 Smoke detection method, system, medium and equipment based on lightweight neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
YAZAN AL-SMADI et al.: "Early Wildfire Smoke Detection Using Different YOLO Models", Machines, vol. 11, pages 1-18
DONG Wenqi: "Research on Fire Detection Algorithms Based on Video Images" (in Chinese), China Master's Theses Full-text Database, Engineering Science and Technology I, no. 02, pages 026-51
JIANG Rongqi et al.: "Lightweight Object Detection Algorithm for Small and Weak UAV Targets" (in Chinese), Laser & Optoelectronics Progress, vol. 59, no. 8, pages 1-12

Also Published As

Publication number Publication date
CN116977634B (en) 2024-01-23

Similar Documents

Publication Publication Date Title
CN111033231B (en) System and method for quantifying gas leakage
EP1364351B1 (en) Method and device for detecting fires based on image analysis
US10810858B2 (en) Infrared imaging systems and methods for gas leak detection
EP2686667B1 (en) Mwir sensor for flame detection
CN107223332B (en) Audio visual scene analysis based on acoustic camera
US10070053B2 (en) Method and camera for determining an image adjustment parameter
Krstinić et al. Histogram-based smoke segmentation in forest fire detection system
WO2019213279A1 (en) Infrared imaging systems and methods for oil leak detection
Ko et al. Survey of computer vision–based natural disaster warning systems
CN112113913B (en) Himapari 8 land fire point detection algorithm based on background threshold
US20220035003A1 (en) Method and apparatus for high-confidence people classification, change detection, and nuisance alarm rejection based on shape classifier using 3d point cloud data
CN116153016B (en) Multi-sensor fusion forest fire real-time monitoring and early warning device and method thereof
US11927944B2 (en) Method and system for connected advanced flare analytics
CN116977634B (en) Fire smoke detection method based on laser radar point cloud background subtraction
Saito et al. Cloud discrimination from sky images using a clear-sky index
JP2017188070A (en) Method for increasing reliability in monitoring systems
Liu et al. BriGuard: A lightweight indoor intrusion detection system based on infrared light spot displacement
Ho et al. Nighttime fire smoke detection system based on machine vision
CN112215122B (en) Fire detection method, system, terminal and storage medium based on video image target detection
CN112771568A (en) Infrared image processing method, device, movable platform and computer readable medium
US11921024B2 (en) Airborne particulate density determination using standard user equipment
CN109658359A (en) Aerosol detection system and its detection method
CN110505371B (en) Infrared shielding detection method and camera equipment
KR20090008621A (en) Method and apparatus for detecting a meaningful motion
JP3490196B2 (en) Image processing apparatus and method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant