CN112052821B - Fire-fighting channel safety detection method, device, equipment and storage medium - Google Patents

Fire-fighting channel safety detection method, device, equipment and storage medium

Info

Publication number
CN112052821B
CN112052821B (application CN202010970417.6A)
Authority
CN
China
Prior art keywords
fire
picture
fighting
channel
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010970417.6A
Other languages
Chinese (zh)
Other versions
CN112052821A (en)
Inventor
张子扬 (Zhang Ziyang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Visual Intelligence Innovation Center Co ltd
Original Assignee
Zhejiang Smart Video Security Innovation Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Smart Video Security Innovation Center Co Ltd filed Critical Zhejiang Smart Video Security Innovation Center Co Ltd
Priority to CN202010970417.6A priority Critical patent/CN112052821B/en
Publication of CN112052821A publication Critical patent/CN112052821A/en
Application granted granted Critical
Publication of CN112052821B publication Critical patent/CN112052821B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/60Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)
  • Alarm Systems (AREA)

Abstract

The invention discloses a fire-fighting channel safety detection method, which comprises the following steps: acquiring a fire-fighting channel picture to be detected and a reference picture; inputting the fire-fighting channel picture to be detected and the reference picture into a pre-trained feature extraction model that ignores illumination changes, to obtain a picture feature to be detected and a reference picture feature; and detecting whether an obstacle exists in the fire-fighting channel according to the similarity between the picture feature to be detected and the reference picture feature. In the disclosed method, a feature extraction model that ignores illumination changes is trained in advance and used to extract the feature of the picture to be detected and the feature of a reference picture taken when the fire-fighting channel is safe; whether an obstacle exists in the channel is then determined from the feature similarity. Interference caused by illumination changes is thereby eliminated, and detection accuracy is greatly improved.

Description

Fire-fighting channel safety detection method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of computer vision, and in particular to a fire-fighting channel safety detection method, device, equipment and storage medium.
Background
A fire-fighting channel allows firefighters to carry out rescue and evacuate trapped people when dangerous situations occur, and no unit or individual may occupy, block or seal it. However, because fire-fighting channels generally lack management, garbage, objects, vehicles and other obstacles often appear in them and block them, which can cause enormous harm to people's lives and property when a dangerous situation occurs. Detecting obstacles in a fire-fighting channel is therefore particularly important.
The prior art mainly relies on manual inspection, i.e. assigning dedicated staff to inspect a specific fire-fighting channel at regular intervals. Although this approach is simple and feasible and requires no equipment, it increases the burden on the staff and cannot discover potential safety hazards in the fire-fighting channel in time, so it achieves neither real-time nor automatic detection.
Disclosure of Invention
The embodiment of the disclosure provides a fire-fighting access safety detection method, a fire-fighting access safety detection device, fire-fighting access safety detection equipment and a storage medium. The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview and is intended to neither identify key/critical elements nor delineate the scope of such embodiments. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
In a first aspect, an embodiment of the present disclosure provides a fire-fighting access safety detection method, including:
acquiring a fire-fighting channel picture to be detected and a reference picture;
inputting a firefighting channel picture to be detected and a reference picture into a pre-trained feature extraction model which ignores illumination change to obtain picture features to be detected and reference picture features;
and detecting whether an obstacle exists in the fire-fighting channel according to the similarity of the picture features to be detected and the reference picture features.
Optionally, detecting whether an obstacle exists in the fire-fighting channel according to the similarity of the picture feature to be detected and the reference picture feature includes:
calculating cosine similarity between the picture features to be detected and the reference picture features;
and when the cosine similarity is smaller than or equal to a preset similarity threshold, determining that an obstacle exists in the fire-fighting channel.
Optionally, after determining that there is an obstacle in the fire-fighting access, further comprising:
and calculating the number of consecutive fire-fighting channel pictures in which the obstacle exists, and outputting alarm information when the number is greater than or equal to a preset number threshold.
Optionally, before inputting the firefighting channel picture to be detected and the reference picture into the pre-trained feature extraction model which ignores illumination variation, the method further comprises:
constructing a fire-fighting channel scene data set under different illumination conditions;
and training a feature extraction model which ignores illumination changes based on the data set.
Optionally, constructing the fire-fighting channel scene data set under different illumination conditions includes:
constructing a plurality of 3D fire-fighting channel scene models through 3D modeling software;
setting key frames and time-varying ambient light in the timeline of each model;
selecting a plurality of camera view angles for each 3D model, and exporting video frames under different illumination conditions for each camera view angle;
adding obstacles to the fire-fighting channel under the same camera view angle, and exporting video frames again;
and using the exported video frames as a training data set.
Optionally, the feature extraction model that ignores illumination changes is a deep learning based convolutional neural network model.
In a second aspect, embodiments of the present disclosure provide a fire-fighting access safety detection device, including:
the acquisition module is used for acquiring a fire-fighting channel picture to be detected and a reference picture;
the feature extraction module is used for inputting the firefighting channel picture to be detected and the reference picture into a pre-trained feature extraction model which ignores illumination change to obtain picture features to be detected and reference picture features;
the detection module is used for detecting whether an obstacle exists in the fire-fighting channel according to the similarity of the picture features to be detected and the reference picture features.
Optionally, the detection module includes:
the computing unit is used for computing cosine similarity between the picture features to be detected and the reference picture features;
and the determining unit is used for determining that an obstacle exists in the fire-fighting channel when the cosine similarity is smaller than or equal to a preset similarity threshold value.
In a third aspect, an embodiment of the present disclosure provides a fire-fighting access safety detection device, including a processor and a memory storing program instructions, the processor being configured to execute the fire-fighting access safety detection method provided by the above embodiment when executing the program instructions.
In a fourth aspect, embodiments of the present disclosure provide a computer readable medium having computer readable instructions stored thereon, the computer readable instructions being executable by a processor to implement the fire-fighting channel safety detection method provided by the above embodiments.
The technical scheme provided by the embodiment of the disclosure can comprise the following beneficial effects:
according to the fire-fighting channel safety detection method, firstly, the feature extraction model which ignores illumination change is trained in advance, then the picture feature to be detected and the reference picture feature under the condition of fire-fighting channel safety are extracted, whether an obstacle exists in the fire-fighting channel is detected based on feature similarity, the feature extraction model is focused on picture feature extraction, the influence of illumination change is ignored, interference caused by the illumination change is eliminated, detection accuracy is greatly improved, and the safety of the fire-fighting channel can be automatically detected in real time.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a flow diagram illustrating a fire-fighting channel safety detection method according to an exemplary embodiment;
FIG. 2 is a flow diagram illustrating a fire-fighting channel safety detection method according to an exemplary embodiment;
FIG. 3 is a schematic diagram illustrating different scene states according to an exemplary embodiment;
FIG. 4 is a schematic diagram of a fire-fighting channel safety detection device according to an exemplary embodiment;
FIG. 5 is a schematic structural diagram of a fire-fighting channel safety detection device according to an exemplary embodiment;
fig. 6 is a schematic diagram of a computer storage medium shown according to an example embodiment.
Detailed Description
So that the manner in which the features and techniques of the disclosed embodiments can be understood in more detail, a more particular description of the embodiments of the disclosure, briefly summarized below, may be had by reference to the appended drawings, which are not intended to be limiting of the embodiments of the disclosure. In the following description of the technology, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may still be practiced without these details. In other instances, well-known structures and devices may be shown simplified in order to simplify the drawing.
The embodiment of the disclosure trains a feature extraction model capable of ignoring illumination changes based on a deep learning method, avoids the problems that traditional methods are prone to in complex illumination environments, and makes it possible to compare changes in picture content in more complex and changeable illumination environments. The method can improve the accuracy of fire-fighting channel safety detection to a certain extent and increases support for special application scenarios.
The following describes in detail the fire-fighting access safety detection method, device, equipment and storage medium provided in the embodiments of the present application with reference to fig. 1 to fig. 6.
Referring to fig. 1, the method specifically includes the following steps:
s101, acquiring a fire fighting access picture to be detected and a reference picture.
A fire-fighting channel allows firefighters to carry out rescue and evacuate trapped people when dangerous situations occur, so detecting whether the channel is safe and unobstructed is of great significance for protecting people's lives and property. First, the fire-fighting channel picture to be detected and a reference picture are acquired. In some optional embodiments, cameras are installed at the entrance of and inside the fire-fighting channel to be detected, and both pictures are captured by these cameras: the fire-fighting channel picture to be detected is captured in real time, while the reference picture can be captured and stored in advance.
The reference picture is a scene picture of the fire-fighting channel when it is safe and unobstructed, for example scene pictures of the fire-fighting channel in its unobstructed state, both indoors and outdoors.
S102, inputting the firefighting channel picture to be detected and the reference picture into a pre-trained feature extraction model which ignores illumination change, and obtaining the picture feature to be detected and the reference picture feature.
A picture feature extraction model that ignores illumination is trained by deep learning. Specifically, a data set for model training is first constructed. The data set is generated by 3D modeling: different ambient lighting is applied to the same fire-fighting channel scene to obtain a series of unobstructed samples, and obstacles are then added to the channel to obtain blocked samples under different illumination conditions. As shown in fig. 3, scene state 1 and scene state 2 differ only by one carton, and each state appears under different illumination conditions.
Specifically, a plurality of different 3D fire-fighting channel scene models are first built with 3D modeling software, for example with the Blender modeling software; those skilled in the art may use other modeling software, and the embodiment of the disclosure is not limited in this respect.
Key frames and time-varying ambient light are then set on the timeline of each model in order to obtain samples under different illumination conditions. For each selected 3D model, several camera view angles are chosen, and video frames under different illumination conditions are exported for each camera view angle; obstacles are then added to the fire-fighting channel under the same camera view angle and video frames are exported again, giving blocked samples under different illumination conditions. The exported video frames are used as the training data set. By repeatedly selecting different 3D models and camera view angles and adding different obstacles, sufficient training data can be obtained, containing both samples in which the fire-fighting channel is safe and unobstructed and samples in which it is blocked, each under different illumination conditions.
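As an illustration only (not part of the original disclosure), the following Python sketch shows how frames under time-varying lighting might be exported from Blender's scripting interface; the light object name "Sun", the keyframed energy values, the frame range and the output path are all assumptions made for this example.

```python
# Hedged sketch: exporting fire-lane frames under changing illumination from Blender.
# Assumes the open .blend file already contains a fire-lane scene, a light object
# named "Sun" and an active camera; names and numeric values are illustrative only.
import bpy

scene = bpy.context.scene
sun = bpy.data.objects["Sun"]  # hypothetical light object in the scene

# Keyframe the light energy along the timeline to simulate time-varying ambient light.
for frame, energy in [(1, 1.0), (40, 5.0), (80, 0.3), (120, 2.0)]:
    sun.data.energy = energy
    sun.data.keyframe_insert(data_path="energy", frame=frame)

# Render a subset of frames for the current camera view angle.
scene.frame_start, scene.frame_end = 1, 120
for f in range(scene.frame_start, scene.frame_end + 1, 10):
    scene.frame_set(f)
    scene.render.filepath = f"//dataset/view01/frame_{f:04d}.png"
    bpy.ops.render.render(write_still=True)
```

Repeating the export for other camera objects, and again after placing an obstacle mesh in the lane, would yield the unobstructed/blocked sample pairs described above.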
The obtained training data are then augmented: for example, Gaussian noise is added to improve the robustness of the model, and flipping and translation can also be applied to increase the amount of training data and improve the generalization ability of the model.
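A minimal augmentation sketch along these lines, assuming a PyTorch/torchvision pipeline; the noise level and translation range are illustrative values, not those of the original disclosure.

```python
# Hedged sketch: Gaussian noise, horizontal flip and small translations on training frames.
import torch
import torchvision.transforms as T

class AddGaussianNoise:
    """Add zero-mean Gaussian noise to a float image tensor in [0, 1]."""
    def __init__(self, std: float = 0.02):
        self.std = std
    def __call__(self, x: torch.Tensor) -> torch.Tensor:
        return (x + torch.randn_like(x) * self.std).clamp(0.0, 1.0)

augment = T.Compose([
    T.RandomHorizontalFlip(p=0.5),                       # flipping
    T.RandomAffine(degrees=0, translate=(0.05, 0.05)),   # small translations
    T.ToTensor(),                                        # PIL image -> float tensor
    AddGaussianNoise(std=0.02),                          # noise for robustness
])
# Usage: tensor = augment(pil_image)
```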
Based on the obtained training data, a feature extraction model that ignores illumination changes is trained. The feature extraction model can be a deep-learning-based convolutional neural network model, for example a ResNet50 convolutional neural network.
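As a hedged illustration of such a model (the embodiment only names ResNet50 as one example), an embedding network could be built on a ResNet50 backbone as sketched below; the embedding dimension, the added projection head and the L2 normalization are assumptions of this sketch, not requirements of the disclosure.

```python
# Hedged sketch: a ResNet50-based embedding network for illumination-insensitive features.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class LaneEncoder(nn.Module):
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # weights=None in recent torchvision; older releases use pretrained=False
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()                 # drop the classification head
        self.backbone = backbone
        self.head = nn.Linear(2048, embed_dim)      # 2048 = ResNet50 pooled feature size

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, 3, H, W)
        f = self.head(self.backbone(x))
        return F.normalize(f, dim=1)                # unit-norm embeddings for similarity
```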
Assuming that a fire-fighting channel scene can be in one of two states, unobstructed or blocked, the optimization goal of the network is to make the picture features of the same scene in the same state close to each other while keeping the picture features of the same scene in different states as far apart as possible. The loss function is therefore defined as:
L=max(d(a,p)-d(a,n)+margin,0)
Its input is a triplet <a, p, n>, where a is the anchor sample, p is a sample of the same class as a, n is a sample of a different class from a, and d() is a distance function. The optimization objective of the network is to reduce the distance between a and p while increasing the distance between a and n.
For example, when a is a sample picture of an unobstructed fire-fighting channel, p is a sample picture of the same scene, also unobstructed, under different illumination conditions, and n is a sample picture of the same scene blocked by an obstacle.
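A direct sketch of this loss in PyTorch, taking d() as the Euclidean distance between embeddings (the disclosure does not fix the distance function) and an illustrative margin value:

```python
# Hedged sketch of L = max(d(a, p) - d(a, n) + margin, 0), averaged over a batch.
import torch
import torch.nn.functional as F

def triplet_loss(fa: torch.Tensor, fp: torch.Tensor, fn: torch.Tensor,
                 margin: float = 0.2) -> torch.Tensor:
    d_ap = F.pairwise_distance(fa, fp)   # distance anchor - positive
    d_an = F.pairwise_distance(fa, fn)   # distance anchor - negative
    return torch.clamp(d_ap - d_an + margin, min=0.0).mean()

# PyTorch also ships an equivalent built-in:
# loss = torch.nn.TripletMarginLoss(margin=0.2)(fa, fp, fn)
```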
Inputting the firefighting channel picture to be detected and the reference picture into a pre-trained feature extraction model which ignores illumination change, and obtaining the picture feature to be detected and the reference picture feature.
f_N = CNN(X_F, W*)
f_S = CNN(X_S, W*)
where CNN(·, W*) denotes the trained feature extraction model that ignores illumination changes, W* is the final set of parameters obtained by training the network, X_F is the fire-fighting channel picture to be detected, X_S is the reference picture, f_N is the extracted feature of the picture to be detected, and f_S is the reference picture feature.
This step trains a feature extraction model that is insensitive to illumination, so that picture features can be extracted in more complex and changeable illumination environments.
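For illustration, applying a trained encoder (such as the LaneEncoder sketched earlier, which is itself an assumption of these examples) to the picture to be detected X_F and the reference picture X_S might look as follows; the input resolution and preprocessing are assumed to match whatever was used during training.

```python
# Hedged sketch: computing f_N = CNN(X_F, W*) and f_S = CNN(X_S, W*) with a trained encoder.
import torch
import torchvision.transforms as T
from PIL import Image

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])  # assumed training preprocessing

@torch.no_grad()
def extract_feature(encoder: torch.nn.Module, image_path: str) -> torch.Tensor:
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)  # (1, 3, 224, 224)
    return encoder(img)

# f_n = extract_feature(encoder, "lane_to_detect.png")   # picture to be detected
# f_s = extract_feature(encoder, "lane_reference.png")   # reference picture
```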
S103, detecting whether an obstacle exists in the fire-fighting channel according to the similarity of the picture features to be detected and the reference picture features.
Specifically, the similarity between the picture feature to be detected and the reference picture feature is calculated. The embodiment of the disclosure uses cosine similarity; those skilled in the art can use other similarity measures. When the calculated cosine similarity is less than or equal to a preset similarity threshold, the difference between the two features is large and an obstacle is determined to exist in the fire-fighting channel. When the cosine similarity is greater than the preset similarity threshold, the two features are similar and the fire-fighting channel is determined to be unobstructed. The specific similarity threshold can be set by those skilled in the art.
The similarity is calculated as:
sim(f_N, f_S) = (f_N · f_S) / (‖f_N‖ ‖f_S‖)
When the calculated feature similarity is less than or equal to the preset similarity threshold, an obstacle is determined to exist in the fire-fighting channel picture detected at that moment. However, the obstacle may only appear temporarily, for example a pedestrian passing by, so the system does not send alarm information immediately; it must also check whether the obstacle stays in the fire-fighting channel. The number of consecutive fire-fighting channel pictures in which an obstacle exists is therefore counted. If a preset number of consecutive pictures all contain an obstacle, the obstacle is determined to be staying in the fire-fighting channel and the channel is likely to be blocked; the system then outputs alarm information to notify the relevant staff that an obstacle is in the fire-fighting channel.
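A compact sketch of this detection logic, with assumed threshold values and a placeholder alarm action (the actual thresholds and alarm channel are left to the implementer by the disclosure):

```python
# Hedged sketch: cosine similarity check plus a counter of consecutive "blocked" frames.
import torch
import torch.nn.functional as F

SIM_THRESHOLD = 0.85     # preset similarity threshold (assumed value)
COUNT_THRESHOLD = 30     # preset number of consecutive blocked frames (assumed value)

blocked_count = 0

def check_frame(f_n: torch.Tensor, f_s: torch.Tensor) -> None:
    """f_n: feature of the picture to be detected; f_s: reference picture feature."""
    global blocked_count
    sim = F.cosine_similarity(f_n, f_s, dim=-1).item()
    if sim <= SIM_THRESHOLD:          # large feature difference -> obstacle suspected
        blocked_count += 1
        if blocked_count >= COUNT_THRESHOLD:
            print("ALARM: obstacle staying in the fire-fighting channel")  # placeholder alarm
    else:
        blocked_count = 0             # channel unobstructed, reset the counter
```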
In some alternative embodiments, the alarm information is sent to an upper computer, which can be a personal computer in the duty room of the relevant staff or their mobile phone; after receiving the alarm information, the staff can go to the site in time to remove the obstacle.
Through this step, obstacles in the fire-fighting channel can be detected automatically and in real time and alarm information can be sent out, reducing the workload of the staff.
In order to facilitate understanding of the fire-fighting access safety detection method provided in the embodiments of the present application, the following description is made with reference to fig. 2.
As shown in fig. 2, the method includes:
s201, acquiring a fire-fighting channel picture to be detected and a reference picture, wherein the reference picture is a scene picture under the condition that the fire-fighting channel is safe and unobstructed.
S202, constructing a fire-fighting channel scene data set under different illumination conditions.
S203, training a feature extraction model which ignores illumination changes based on the data set.
S204, inputting the firefighting channel picture to be detected and the reference picture into a pre-trained feature extraction model which ignores illumination change, and obtaining the picture feature to be detected and the reference picture feature.
S205 calculates cosine similarity between the picture feature to be detected and the reference picture feature.
S206, judging whether the cosine similarity is less than or equal to a preset similarity threshold; if so, step S207 is executed to determine that an obstacle exists in the fire-fighting channel picture to be detected; if the cosine similarity is greater than the preset similarity threshold, step S201 is executed to acquire a fire-fighting channel picture to be detected and a reference picture.
S207, determining that an obstacle exists in the fire fighting access picture to be detected.
S208, calculating the number of consecutive fire-fighting channel pictures in which an obstacle exists.
S209, judging whether the number of consecutive fire-fighting channel pictures containing an obstacle is greater than or equal to a preset number threshold; if so, step S210 is executed to output alarm information; if the number is smaller than the preset number threshold, step S201 is executed to acquire a fire-fighting channel picture to be detected and a reference picture.
S210 outputs alarm information.
According to the fire-fighting channel detection method provided by the embodiment of the disclosure, a feature extraction model that ignores illumination changes is trained in advance and used to extract the feature of the picture to be detected and the feature of the reference picture taken when the fire-fighting channel is safe; whether an obstacle exists in the channel is then judged from the feature similarity. Because the feature extraction model focuses on the picture content and ignores the influence of illumination changes, interference caused by illumination changes is eliminated, the detection accuracy is greatly improved, the safety of the fire-fighting channel can be detected automatically and in real time, and alarm information can be sent out promptly.
In a second aspect, an embodiment of the present disclosure further provides a fire-fighting access safety detection device for performing the fire-fighting access safety detection method of the above embodiment, as shown in fig. 4, where the device includes:
an acquisition module 401, configured to acquire a fire-fighting access picture to be detected and a reference picture;
the feature extraction module 402 is configured to input a firefighting channel picture to be detected and a reference picture into a pre-trained feature extraction model for ignoring illumination changes, so as to obtain a picture feature to be detected and a reference picture feature;
the detection module 403 is configured to detect whether an obstacle exists in the fire-fighting channel according to the similarity between the image feature to be detected and the reference image feature.
Optionally, the detection module 403 includes:
the computing unit is used for computing cosine similarity between the picture features to be detected and the reference picture features;
and the determining unit is used for determining that an obstacle exists in the fire-fighting channel when the cosine similarity is smaller than or equal to a preset similarity threshold value.
It should be noted that when the fire-fighting channel safety detection device provided in the above embodiment performs the fire-fighting channel safety detection method, the division into the above functional modules is only used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, i.e. the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the fire-fighting channel safety detection device and the fire-fighting channel safety detection method provided in the foregoing embodiments belong to the same concept; their detailed implementation is described in the method embodiment and is not repeated here.
In a third aspect, an embodiment of the present disclosure further provides an electronic device corresponding to the fire-fighting access safety detection method provided in the foregoing embodiment, so as to execute the fire-fighting access safety detection method.
Referring to fig. 5, a schematic diagram of an electronic device according to some embodiments of the present application is shown. As shown in fig. 5, the electronic device includes: processor 500, memory 501, bus 502 and communication interface 503, processor 500, communication interface 503 and memory 501 being connected by bus 502; the memory 501 stores a computer program that can be executed on the processor 500, and when the processor 500 executes the computer program, the fire-fighting access safety detection method provided in any of the foregoing embodiments of the present application is executed.
The memory 501 may include a high-speed random access memory (RAM) and may further include a non-volatile memory, such as at least one magnetic disk memory. The communication connection between the system network element and at least one other network element is implemented via at least one communication interface 503 (which may be wired or wireless); the Internet, a wide area network, a local area network, a metropolitan area network, etc. may be used.
Bus 502 may be an ISA bus, a PCI bus, an EISA bus, or the like. The buses may be divided into address buses, data buses, control buses, etc. The memory 501 is configured to store a program, and the processor 500 executes the program after receiving an execution instruction, and the fire-fighting access safety detection method disclosed in any of the embodiments of the present application may be applied to the processor 500 or implemented by the processor 500.
The processor 500 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 500 or by instructions in the form of software. The processor 500 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The methods, steps and logic blocks disclosed in the embodiments of the present application may be implemented or performed by such a processor. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the embodiments of the present application may be executed directly by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 501; the processor 500 reads the information in the memory 501 and completes the steps of the above method in combination with its hardware.
The electronic equipment provided by the embodiment of the application and the fire-fighting access safety detection method provided by the embodiment of the application are the same in conception and have the same beneficial effects as the method adopted, operated or realized by the electronic equipment.
In a fourth aspect, an embodiment of the present application further provides a computer readable storage medium corresponding to the fire-fighting access safety detection method provided in the foregoing embodiment, referring to fig. 6, the computer readable storage medium is shown as an optical disc 600, on which a computer program (i.e. a program product) is stored, where the computer program, when executed by a processor, performs the fire-fighting access safety detection method provided in any of the foregoing embodiments.
It should be noted that examples of the computer readable storage medium may also include, but are not limited to, a phase change memory (PRAM), a Static Random Access Memory (SRAM), a Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, or other optical or magnetic storage medium, which will not be described in detail herein.
The computer readable storage medium provided by the above embodiment of the present application has the same beneficial effects as the method adopted, operated or implemented by the application program stored therein, because of the same inventive concept as the fire fighting access safety detection method provided by the embodiment of the present application.
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. A fire passage safety detection method, comprising:
acquiring a fire-fighting channel picture to be detected and a reference picture;
constructing a fire-fighting channel scene data set under different illumination conditions, comprising: constructing a plurality of 3D fire-fighting channel scene models through 3D modeling software; setting key frames and time-varying ambient light in the timeline of each model; selecting a plurality of camera view angles for each 3D model, and exporting video frames under different illumination conditions for each camera view angle; adding obstacles to the fire-fighting channel under the same camera view angle, and exporting video frames again; taking the exported video frames as a training data set; and training a feature extraction model that ignores illumination changes based on the data set;
wherein the loss function is represented by the following formula:
L=max(d(a,p)-d(a,n)+margin,0)
the input of the loss function is a triplet <a, p, n>, wherein a is an anchor sample, p represents a sample of the same class as a, and n represents a sample of a different class from a; when a is a sample picture of an unobstructed fire-fighting channel, p is a sample picture of the same scene, also unobstructed, under different illumination conditions, and n is a sample picture of the same scene blocked by an obstacle; d() is a distance function, and the optimization target of the network is to pull the distance between a and p closer and push the distance between a and n farther;
inputting the firefighting channel picture to be detected and the reference picture into a pre-trained feature extraction model which ignores illumination change to obtain picture features to be detected and reference picture features;
and detecting whether an obstacle exists in the fire fighting access according to the similarity of the picture features to be detected and the reference picture features.
2. The method according to claim 1, wherein the detecting whether an obstacle exists in the fire-fighting channel according to the similarity of the picture feature to be detected and the reference picture feature comprises:
calculating cosine similarity between the picture features to be detected and the reference picture features;
and when the cosine similarity is smaller than or equal to a preset similarity threshold, determining that an obstacle exists in the fire fighting channel.
3. The method of claim 2, wherein after determining that an obstacle is present in the fire passage, further comprising:
and calculating the number of consecutive fire-fighting channel pictures in which the obstacle exists, and outputting alarm information when the number is greater than or equal to a preset number threshold.
4. The method of claim 1, wherein the feature extraction model that ignores illumination changes is a deep learning based convolutional neural network model.
5. A fire channel safety detection device, comprising:
the acquisition module is used for acquiring a fire-fighting channel picture to be detected and a reference picture;
the model training module is used for constructing a fire-fighting channel scene data set under different illumination conditions, and comprises the following components: constructing a plurality of 3D fire-fighting channel scene models through 3D modeling software; setting key frames and time-varying ambient light in the timeline of each model; selecting a plurality of camera view angles for each 3D model, and exporting video frames under different illumination conditions for each camera view angle; adding obstacles to the fire-fighting channel under the same camera view angle, and exporting video frames again; taking the exported video frames as a training data set; and training a feature extraction model that ignores illumination changes based on the data set;
wherein the loss function is represented by the following formula:
L=max(d(a,p)-d(a,n)+margin,0)
the input of the loss function is a triplet <a, p, n>, wherein a is an anchor sample, p represents a sample of the same class as a, and n represents a sample of a different class from a; when a is a sample picture of an unobstructed fire-fighting channel, p is a sample picture of the same scene, also unobstructed, under different illumination conditions, and n is a sample picture of the same scene blocked by an obstacle; d() is a distance function, and the optimization target of the network is to pull the distance between a and p closer and push the distance between a and n farther;
the feature extraction module is used for inputting the firefighting channel picture to be detected and the reference picture into a pre-trained feature extraction model which ignores illumination change to obtain picture features to be detected and reference picture features;
and the detection module is used for detecting whether an obstacle exists in the fire-fighting channel according to the similarity of the picture features to be detected and the reference picture features.
6. The apparatus of claim 5, wherein the detection module comprises:
the computing unit is used for computing cosine similarity between the picture features to be detected and the reference picture features;
and the determining unit is used for determining that an obstacle exists in the fire fighting channel when the cosine similarity is smaller than or equal to a preset similarity threshold value.
7. A fire path safety detection apparatus comprising a processor and a memory storing program instructions, wherein the processor is configured, when executing the program instructions, to perform the fire path safety detection method of any one of claims 1 to 4.
8. A computer readable medium having stored thereon computer readable instructions executable by a processor to implement a fire channel safety detection method as claimed in any one of claims 1 to 4.
CN202010970417.6A 2020-09-15 2020-09-15 Fire-fighting channel safety detection method, device, equipment and storage medium Active CN112052821B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010970417.6A CN112052821B (en) 2020-09-15 2020-09-15 Fire-fighting channel safety detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010970417.6A CN112052821B (en) 2020-09-15 2020-09-15 Fire-fighting channel safety detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112052821A CN112052821A (en) 2020-12-08
CN112052821B (en) 2023-07-07

Family

ID=73604262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010970417.6A Active CN112052821B (en) 2020-09-15 2020-09-15 Fire-fighting channel safety detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112052821B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112668423B (en) * 2020-12-18 2024-05-28 平安科技(深圳)有限公司 Corridor sundry detection method and device, terminal equipment and storage medium
CN112966545A (en) * 2020-12-31 2021-06-15 杭州拓深科技有限公司 Average hash-based fire fighting channel occupancy monitoring method and device, electronic device and storage medium
CN112989930A (en) * 2021-02-04 2021-06-18 西安美格智联软件科技有限公司 Method, system, medium and terminal for automatically monitoring fire fighting channel blockage
CN113033367A (en) * 2021-03-18 2021-06-25 山东渤聚通云计算有限公司 Monitoring method and device based on fire-fighting site and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366374A (en) * 2013-07-12 2013-10-23 重庆大学 Fire fighting access obstacle detection method based on image matching
CN104463253A (en) * 2015-01-06 2015-03-25 电子科技大学 Fire fighting access safety detection method based on self-adaptation background study
CN108009528A (en) * 2017-12-26 2018-05-08 广州广电运通金融电子股份有限公司 Face authentication method, device, computer equipment and storage medium based on Triplet Loss
CN109190446A (en) * 2018-07-06 2019-01-11 西北工业大学 Pedestrian's recognition methods again based on triple focused lost function

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109241896B (en) * 2018-08-28 2022-08-23 腾讯数码(天津)有限公司 Channel safety detection method and device and electronic equipment
US11126870B2 (en) * 2018-10-18 2021-09-21 Cartica Ai Ltd. Method and system for obstacle detection
CN110399856B (en) * 2019-07-31 2021-09-14 上海商汤临港智能科技有限公司 Feature extraction network training method, image processing method, device and equipment
CN110443196A (en) * 2019-08-05 2019-11-12 上海天诚比集科技有限公司 Fire-fighting road occupying detection method based on SSIM algorithm

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103366374A (en) * 2013-07-12 2013-10-23 重庆大学 Fire fighting access obstacle detection method based on image matching
CN104463253A (en) * 2015-01-06 2015-03-25 电子科技大学 Fire fighting access safety detection method based on self-adaptation background study
CN108009528A (en) * 2017-12-26 2018-05-08 广州广电运通金融电子股份有限公司 Face authentication method, device, computer equipment and storage medium based on Triplet Loss
WO2019128367A1 (en) * 2017-12-26 2019-07-04 广州广电运通金融电子股份有限公司 Face verification method and apparatus based on triplet loss, and computer device and storage medium
CN109190446A (en) * 2018-07-06 2019-01-11 西北工业大学 Pedestrian's recognition methods again based on triple focused lost function

Also Published As

Publication number Publication date
CN112052821A (en) 2020-12-08

Similar Documents

Publication Publication Date Title
CN112052821B (en) Fire-fighting channel safety detection method, device, equipment and storage medium
US20190294881A1 (en) Behavior recognition
WO2021047306A1 (en) Abnormal behavior determination method and apparatus, terminal, and readable storage medium
EP2798578A2 (en) Clustering-based object classification
CN107122743B (en) Security monitoring method and device and electronic equipment
CN111368615B (en) Illegal building early warning method and device and electronic equipment
US20210166042A1 (en) Device and method of objective identification and driving assistance device
KR20190046351A (en) Method and Apparatus for Detecting Intruder
CN111160187B (en) Method, device and system for detecting left-behind object
CN111178119A (en) Intersection state detection method and device, electronic equipment and vehicle
CN111914656A (en) Personnel behavior detection method and device, electronic equipment and storage medium
CN110557628A (en) Method and device for detecting shielding of camera and electronic equipment
CN112330964A (en) Road condition information monitoring method and device
CN111914670A (en) Method, device and system for detecting left-over article and storage medium
Sefat et al. Implementation of vision based intelligent home automation and security system
CN113591758A (en) Human behavior recognition model training method and device and computer equipment
JP6621092B1 (en) Risk determination program and system
CN114359618A (en) Training method of neural network model, electronic equipment and computer program product
CN114926791A (en) Method and device for detecting abnormal lane change of vehicles at intersection, storage medium and electronic equipment
CN114155470A (en) River channel area intrusion detection method, system and storage medium
CN106803937B (en) Double-camera video monitoring method, system and monitoring device with text log
CN111008609B (en) Traffic light and lane matching method and device and electronic equipment
CN115830562B (en) Lane information determination method, computer device and medium
CN112001453A (en) Method and device for calculating accuracy of video event detection algorithm
CN111325198B (en) Video object feature extraction method and device, and video object matching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 311215 unit 1, building 1, area C, Qianjiang Century Park, ningwei street, Xiaoshan District, Hangzhou City, Zhejiang Province

Patentee after: Zhejiang Visual Intelligence Innovation Center Co.,Ltd.

Address before: 311215 unit 1, building 1, area C, Qianjiang Century Park, ningwei street, Xiaoshan District, Hangzhou City, Zhejiang Province

Patentee before: Zhejiang smart video security Innovation Center Co.,Ltd.

CP01 Change in the name or title of a patent holder
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20201208

Assignee: Institute of Information Technology, Zhejiang Peking University

Assignor: Zhejiang Visual Intelligence Innovation Center Co.,Ltd.

Contract record no.: X2024330000024

Denomination of invention: Fire safety inspection methods, devices, equipment, and storage media for fire exits

Granted publication date: 20230707

License type: Common License

Record date: 20240401

EE01 Entry into force of recordation of patent licensing contract