CN113361420A - Mine fire monitoring method, device and equipment based on robot and storage medium - Google Patents


Info

Publication number
CN113361420A
CN113361420A (application CN202110648237.0A)
Authority
CN
China
Prior art keywords
robot
flame
result
smoke
monitoring information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110648237.0A
Other languages
Chinese (zh)
Inventor
苗可彬
马建
张德胜
李泽芳
赵云龙
张维振
李起伟
黄增波
丰颖
刘梅华
朱文硕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Coal Research Institute CCRI
Original Assignee
China Coal Research Institute CCRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Coal Research Institute CCRI filed Critical China Coal Research Institute CCRI
Priority to CN202110648237.0A
Publication of CN113361420A
Legal status: Pending

Classifications

    • E21F 17/00 Methods or devices for use in mines or tunnels, not covered elsewhere
    • E21F 17/18 Special adaptations of signalling or alarm devices
    • E21F 5/00 Means or methods for preventing, binding, depositing, or removing dust; preventing explosions or fires
    • G06F 18/253 Pattern recognition; fusion techniques of extracted features
    • G06T 7/62 Image analysis; analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T 7/90 Image analysis; determination of colour characteristics
    • G06T 2207/10016 Video; image sequence
    • G06T 2207/10024 Color image
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Mining & Mineral Resources (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Geology (AREA)
  • Geochemistry & Mineralogy (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Geometry (AREA)
  • Fire-Detection Mechanisms (AREA)

Abstract

The disclosure provides a robot-based mine fire monitoring method, device, equipment and storage medium, relating to the field of computer technology. The specific implementation scheme is as follows: acquiring first monitoring information collected by first equipment in the robot, wherein the first equipment is used for collecting environmental information of the current position of the robot; acquiring second monitoring information collected by second equipment in the robot, wherein the second equipment is used for collecting smoke and sensitive gas information of the current position of the robot; processing the first monitoring information to obtain a first recognition result; processing the second monitoring information to obtain a second recognition result; and determining whether a fire exists at the current position of the robot according to the first recognition result and the second recognition result. In this way, features such as the environment, smoke and sensitive gas information of the robot's current position are fused, which can improve the reliability and accuracy of coal mine fire recognition.

Description

Mine fire monitoring method, device and equipment based on robot and storage medium
Technical Field
The disclosure relates to the technical field of computers, in particular to a mine fire monitoring method, device, equipment and storage medium based on a robot.
Background
Frequent coal mine fires pose a serious threat to mine production safety; once a fire breaks out, it can have severe consequences for safe production. In the related art, fire detection means are single-mode and easily affected by the environment: for example, in actual detection with the optical-fiber distributed temperature measurement method, monitoring blind areas easily arise, and the optical fiber is difficult to install and easily damaged. Therefore, how to find a reliable, effective and highly accurate mine fire monitoring means is a problem urgently to be solved in current mine safety.
Disclosure of Invention
The disclosure provides a mine fire monitoring method and device based on a robot, computer equipment and a storage medium.
According to a first aspect of the present disclosure, there is provided a robot-based mine fire monitoring method, comprising:
acquiring first monitoring information acquired by first equipment in the robot, wherein the first equipment is used for acquiring environmental information of the current position of the robot;
acquiring second monitoring information acquired by second equipment in the robot, wherein the second equipment is used for acquiring smoke and sensitive gas information of the current position of the robot;
processing the first monitoring information to obtain a first identification result;
processing the second monitoring information to obtain a second identification result;
and determining whether the current position of the robot has a fire or not according to the first recognition result and the second recognition result.
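The five steps above can be sketched as follows. This is a minimal illustration only: the function names, data shapes and threshold values are assumptions for exposition and are not taken from the claims.

```python
def monitor_fire(first_info, second_info):
    """Hypothetical sketch of the claimed five-step method.

    first_info: video-derived observations, e.g. {"flame": bool, "smoke": bool}
    second_info: sensor readings, e.g. {"smoke_ppm": float, "co_ppm": float}
    All thresholds below are illustrative, not taken from the patent.
    """
    # Steps 1-2: the monitoring information is assumed to be already acquired
    # from the robot's camera (first equipment) and sensors (second equipment).

    # Step 3: process the first monitoring information -> first recognition result.
    first_result = {"flame": first_info["flame"], "smoke": first_info["smoke"]}

    # Step 4: process the second monitoring information -> second recognition result.
    second_result = second_info["smoke_ppm"] > 50.0 or second_info["co_ppm"] > 24.0

    # Step 5: fuse the two results. Flame alone indicates fire; a smoke-based
    # determination requires the two channels to agree.
    if first_result["flame"]:
        return True
    return first_result["smoke"] and second_result
```

For instance, flame in the video alone triggers a fire determination, while video smoke without a corroborating sensor reading does not.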
According to a second aspect of the present disclosure, there is provided a robot-based mine fire monitoring device comprising:
the robot monitoring system comprises a first acquisition module, a second acquisition module and a monitoring module, wherein the first acquisition module is used for acquiring first monitoring information acquired by first equipment in the robot, and the first equipment is used for acquiring environmental information of the current position of the robot;
the second acquisition module is used for acquiring second monitoring information acquired by second equipment in the robot, wherein the second equipment is used for acquiring smoke and sensitive gas information of the current position of the robot;
the first processing module is used for processing the first monitoring information to obtain a first identification result;
the second processing module is used for processing the second monitoring information to obtain a second identification result;
and the determining module is used for determining whether the current position of the robot has a fire disaster or not according to the first recognition result and the second recognition result.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform a method as described in an embodiment of the above aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing a computer program, the instructions of which cause a computer to perform the method described in the embodiment of the above aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method of an embodiment of the above-mentioned aspect.
The robot-based mine fire monitoring method, device, equipment and storage medium provided by the disclosure have at least the following beneficial effects:
The device of the embodiment of the disclosure first acquires first monitoring information collected by first equipment in the robot, the first equipment being used for collecting environmental information of the robot's current position, and acquires second monitoring information collected by second equipment in the robot, the second equipment being used for collecting smoke and sensitive gas information of the robot's current position. It then processes the first monitoring information to obtain a first recognition result and processes the second monitoring information to obtain a second recognition result, and finally determines, according to the first recognition result and the second recognition result, whether a fire exists at the robot's current position. By combining the first recognition result and the second recognition result, features such as the environment, smoke and sensitive gas information of the robot's current position are fused, which can improve the reliability and accuracy of coal mine fire recognition; and because the method is robot-based, fire monitoring in different areas can be realized.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic flow chart of a robot-based mine fire monitoring method provided by the present disclosure;
FIG. 2 is a schematic flow diagram of another robot-based mine fire monitoring method provided by the present disclosure;
FIG. 3 is a schematic flow diagram of yet another robot-based mine fire monitoring method provided by the present disclosure;
fig. 4 is a block diagram of a robot-based mine fire monitoring device according to the present disclosure;
FIG. 5 is a block diagram of an electronic device used to implement an embodiment of the disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The robot-based mine fire monitoring method provided by the present disclosure may be performed by the robot-based mine fire monitoring device provided by the present disclosure, or by the robot provided by the present disclosure. In the following, the present disclosure is explained taking as an example the case where the robot-based mine fire monitoring device performs the method; the present disclosure is not limited thereto. The device is hereinafter simply referred to as "the device".
The following describes a robot-based mine fire monitoring method provided by the present disclosure in detail with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a robot-based mine fire monitoring method according to an embodiment of the present disclosure.
As shown in fig. 1, the robot-based mine fire monitoring method may include the steps of:
step 101, acquiring first monitoring information acquired by first equipment in the robot, wherein the first equipment is used for acquiring environmental information of the current position of the robot.
The first device may be a camera, such as a high definition camera, an infrared camera, a night vision camera, and the like, which is not limited thereto. It is understood that the first device may be configured to capture video of an environment where the robot is located to obtain the first monitoring information.
The first monitoring information can be video image information, and it can be understood that the device can monitor the environment of the mine position where the robot is located by using the first monitoring information, for example, whether flame, smoke, and the like exist in the environment is identified by using the first monitoring information, so that characteristic information indicating that the mine has fire hazard can be obtained, and the characteristic information is not limited.
It should be noted that, when the robot is patrolling or performing other operations, the device may collect information about the environment of the current location of the robot, for example, a camera may be used to capture the surrounding environment of the robot to obtain video information about the environment of the current location of the robot, so that the device may analyze the video information to monitor whether a fire occurs at the location of the robot.
Because the robot constantly changes position while patrolling the mine or performing other operations, the device can obtain first monitoring information at different positions in the mine, thereby realizing fire monitoring at each position of the mine and reducing blind areas.
And 102, acquiring second monitoring information acquired by second equipment in the robot, wherein the second equipment is used for acquiring smoke and sensitive gas information of the current position of the robot.
The second device may be a sensor, such as a photoelectric smoke sensor, a gas-sensitive smoke sensor, or any component having a certain measurement capability on smoke and sensitive gas, which is not limited in this disclosure.
The second monitoring information may be, but is not limited to, the concentration of smoke and the concentration of a sensitive gas.
It can be understood that a fire is usually accompanied by smoke, and when the fire is serious, the smoke concentration at the fire location is relatively high. The device can therefore photoelectrically monitor the smoke at the robot's current position through the smoke sensor, and then determine whether a fire has occurred at that position.
In addition, some sensitive gases, such as carbon monoxide, sulfur dioxide, methane, etc., are generally present in the mine, and in the event of a fire in the mine, a large amount of carbon monoxide, or other sensitive gases, is generally present. Therefore, the device can collect the sensitive gas in the air through the second equipment, so that the occurrence condition of the fire can be determined according to the variation or concentration of the sensitive gas, and the device is not limited in this respect.
It should be noted that, since a plurality of sensitive gases may appear in a mine during a fire, and characteristics of different types of sensitive gases may be different, the disclosure does not limit the second device and the second monitoring information.
Step 103, processing the first monitoring information to obtain a first identification result.
In order to identify a possible mine fire more accurately, the device may apply different processing to the first monitoring information to obtain recognition results for different aspects of a fire, such as a smoke recognition result and a flame recognition result, without limitation. Smoke and flame are the visible characteristics of a fire, and the device can process the first monitoring information to determine whether flame or smoke has appeared in the environment at the robot's position.
For example, to determine whether flames appear in the environment of the robot's current position, the device may preprocess the first monitoring information, that is, the video image, for example by graying, binarization, and filtering/denoising, and then calculate the area of the flame region in each frame of the video. If the flame area is smaller than a preset area threshold, it may be determined that no flame is present in the current video image. If the flame area is larger than the preset area threshold, the possibility that flame is present is high, and the device may further analyze the video image containing the flame to determine whether a real flame appears, for example by color analysis or gray-gradient analysis, which the present disclosure does not limit.
Or, if the device determines whether smoke is present in the environment of the current location, the device may perform preprocessing on the first monitoring information, that is, the video image, where the preprocessing may be different from the preprocessing performed by the device when determining whether flame is present in the environment of the current location of the robot, for example, graying, image smoothing, and the like may be performed on the video image, then the smoke motion region in the video image may be segmented, and then the smoke region is determined through the neural network generated by training, which is not limited.
And 104, processing the second monitoring information to obtain a second identification result.
The second recognition result can be a recognition result of the device on the information of smoke and sensitive gas at the current position of the robot.
For example, after the second equipment acquires the smoke information and sensitive gas information of the robot's current position, the smoke concentration and sensitive gas concentration produced by fire combustion may be determined. If the smoke concentration and/or the sensitive gas concentration is greater than a specified threshold, the second recognition result may be determined to be smoke; if it is less than the specified threshold, the second recognition result is no smoke, which is not limited herein.
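The threshold comparison for the second recognition result might look like the following sketch; the concentration units and threshold values are illustrative assumptions, as the patent only states that a specified threshold is compared against:

```python
def second_recognition_result(smoke_ppm, gas_ppm,
                              smoke_threshold=50.0, gas_threshold=24.0):
    """Return "smoke" if either the smoke concentration or the sensitive gas
    concentration exceeds its (assumed) threshold, else "no smoke"."""
    if smoke_ppm > smoke_threshold or gas_ppm > gas_threshold:
        return "smoke"
    return "no smoke"
```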
And 105, determining whether the current position of the robot has a fire or not according to the first recognition result and the second recognition result.
It should be noted that, in the present disclosure, the first identification result and the second identification result may be combined to implement composite monitoring of a fire, so as to improve reliability, effectiveness and accuracy of monitoring a mine fire, and to find a fire in a short time when a fire occurs in a mine.
Specifically, the first recognition result can be divided into a smoke recognition result and a flame recognition result. If the flame recognition result is that flame exists, the device can determine that a fire has occurred in the mine. The second recognition result is also a smoke recognition result; when determining whether a fire has occurred through smoke, the device needs to combine the two results. That is, only when the smoke recognition result in the first recognition result is that smoke exists and the second recognition result is also that smoke exists does this indicate that a fire has occurred in the mine.
It can be understood that the device can determine that the robot is in a fire if it detects flame, or if both smoke determinations indicate smoke, or if smoke and flame exist at the same time.
It should be noted that, after determining that the robot is currently located in a fire, the device may warn the fire through an alarm device capable of sending out any type of alarm information, and the disclosure is not limited herein.
The device of the embodiment of the disclosure first acquires first monitoring information collected by first equipment in the robot, the first equipment being used for collecting environmental information of the robot's current position, acquires second monitoring information collected by second equipment in the robot, the second equipment being used for collecting smoke and sensitive gas information of the robot's current position, then processes the first monitoring information to obtain a first recognition result and the second monitoring information to obtain a second recognition result, and finally determines whether a fire exists at the robot's current position according to the two results. By combining the first and second recognition results, features such as the environment, smoke and sensitive gas information of the robot's current position are fused, improving the reliability and accuracy of coal mine fire recognition; and because the method is robot-based, fire monitoring in different areas can be realized.
Fig. 2 is a schematic flow chart of another robot-based mine fire monitoring method according to another embodiment of the present disclosure.
As shown in fig. 2, the robot-based mine fire monitoring method may include the steps of:
step 201, acquiring first monitoring information acquired by first equipment in the robot, wherein the first equipment is used for acquiring environmental information of the current position of the robot.
Step 202, second monitoring information acquired by second equipment in the robot is acquired, wherein the second equipment is used for acquiring smoke and sensitive gas information of the current position of the robot.
It should be noted that, in steps 201 and 202, reference may be made to the specific implementation process of the foregoing embodiment, which is not described herein again.
Step 203, performing motion recognition on the first monitoring information to determine each frame of image containing the suspected smoke area.
The first monitoring information may be video information, that is, a video shot by the device at the current position of the robot, and the video may include environmental information around the robot.
It should be noted that, through motion recognition, the device can acquire the regions of the first monitoring information in which smoke changes, so that changed regions can be extracted from the image in real time for further processing, greatly improving monitoring efficiency and accuracy.
Specifically, the device may process each frame of image in the video to determine each frame of image in the video that includes the suspected smoke region. For example, each frame of image in the video may be preprocessed, such as graying, image smoothing, and the like, and then each frame of image obtained after preprocessing is subjected to region segmentation, such as mean background modeling and motion region extraction, so as to obtain each frame of image including a suspected smoke region.
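The mean-background modelling and motion-region extraction mentioned above can be sketched as follows, assuming frames are 2-D lists of grayscale values; the difference threshold is an assumption and the real pipeline would also include the graying and smoothing preprocessing:

```python
def motion_mask(frames, current, diff_threshold=25):
    """Mean-background modelling followed by motion-region extraction.

    frames: list of 2-D grayscale frames used to build the mean background.
    current: the frame to segment. Returns a 2-D 0/1 mask marking pixels
    whose deviation from the background mean exceeds the threshold.
    """
    h, w = len(current), len(current[0])
    # Per-pixel mean over the history frames forms the background model.
    background = [[sum(f[y][x] for f in frames) / len(frames) for x in range(w)]
                  for y in range(h)]
    # Pixels far from the background are candidate (suspected smoke) motion.
    return [[1 if abs(current[y][x] - background[y][x]) > diff_threshold else 0
             for x in range(w)] for y in range(h)]
```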
Step 204, inputting each frame of image containing the suspected smoke area into the trained neural network model to determine the smoke area contained in the first monitoring information.
It should be noted that, in order to better distinguish the suspected smoke region from the smoke region, the apparatus may input each frame of image including the suspected smoke region into a trained neural network model, such as a feature extractor, which is not limited to this, so that the trained neural network model may be used to extract the smoke region.
In order to improve the reliability of smoke-region monitoring, the neural network model can extract the smoke region in each frame containing a suspected smoke region by combining smoke motion features, smoke texture features and smoke fuzzy-model features. Using these rich smoke detail features to determine the smoke region improves reliability, reduces the possibility of false detection caused by smoke-like targets such as water mist, and enables smoke monitoring in complex environments.
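The patent leaves the combination of the three feature cues to a trained neural network. Purely as a rough illustration of multi-feature fusion, a hand-weighted linear combination of per-region feature scores (with made-up weights and threshold) could look like:

```python
def smoke_score(motion_feat, texture_feat, fuzzy_feat,
                weights=(0.4, 0.3, 0.3), threshold=0.5):
    """Illustrative fusion of the three smoke cues named in the text:
    motion, texture and fuzzy-model features, each assumed normalized to
    [0, 1]. In the patent this combination is learned by a neural network;
    the fixed weights here are an assumption for demonstration only."""
    score = sum(w * f for w, f in zip(weights, (motion_feat, texture_feat, fuzzy_feat)))
    return score >= threshold
```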
Step 205, determining a first recognition result according to the area of the smoke region.
Specifically, if, among the frames containing a suspected smoke region, an image appears in which the area of the smoke region is larger than a specified area threshold, it is determined that smoke is present in the environment where the robot is currently located. That is, if the area of the smoke region is larger than the specified area threshold, the first recognition result is that smoke exists; if it is smaller, the first recognition result is that no smoke exists.
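A sketch of this step, assuming each frame's segmented smoke region has already been reduced to an area in pixels; the area threshold is an illustrative assumption:

```python
def first_recognition_from_smoke(frame_smoke_areas, area_threshold=100):
    """Return "smoke" if the smoke region of any frame exceeds the
    (assumed) area threshold, else "no smoke"."""
    if any(area > area_threshold for area in frame_smoke_areas):
        return "smoke"
    return "no smoke"
```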
Step 206, the second monitoring information is processed to obtain a second identification result.
Optionally, the device may determine, according to the second monitoring information, the smoke concentration and the concentration of the sensitive gas at the current position of the inspection robot, and then determine whether there is smoke at the current position of the robot according to the concentrations of the smoke and the sensitive gas.
Specifically, the device may determine the concentration of smoke in the air and the concentration of the sensitive gas through a smoke sensor and a photoelectric monitoring device, which is not limited herein.
Specifically, a smoke concentration threshold and a sensitive gas concentration threshold may be preset, if the concentration of smoke and/or sensitive gas is greater than a specified threshold, it is determined that smoke exists at the current position of the robot, and if the concentration of smoke and/or sensitive gas is less than the specified threshold, it is determined that no smoke exists at the current position of the robot.
And step 207, determining whether the current position of the robot has a fire disaster or not according to the first recognition result and the second recognition result.
Specifically, if both the first recognition result and the second recognition result are smoke, or if only one of them is smoke, it is indicated that a fire exists at the robot's current position; only when both results are no smoke is it indicated that no fire exists at the robot's current position.
The device of this embodiment first acquires first monitoring information collected by first equipment in the robot, the first equipment being used for collecting environmental information of the robot's current position, and second monitoring information collected by second equipment in the robot, the second equipment being used for collecting smoke and sensitive gas information of the robot's current position. It then performs motion recognition on the first monitoring information to determine each frame containing a suspected smoke region, inputs those frames into the trained neural network model to determine the smoke region contained in the first monitoring information, and finally determines, according to the first recognition result and the second recognition result, whether a fire exists at the robot's current position. Because the first and second recognition results are combined, the device can determine from different smoke-related angles whether the mine where the robot is located is on fire, effectively improving monitoring accuracy and reliability.
Fig. 3 is a schematic flow chart of another robot-based mine fire monitoring method according to an embodiment of the present disclosure.
As shown in fig. 3, the robot-based mine fire monitoring method may include the steps of:
step 301, acquiring first monitoring information acquired by first equipment in the robot, wherein the first equipment is used for acquiring environmental information of a current position of the robot.
Step 302, second monitoring information acquired by second equipment in the robot is acquired, wherein the second equipment is used for acquiring smoke and sensitive gas information of the current position of the robot.
It should be noted that, in steps 301 and 302, reference may be made to the specific implementation process of the foregoing embodiment, which is not described herein again.
Step 303, preprocessing the first monitoring information to determine a flame region included in the video information.
Specifically, the device may preprocess the first monitoring information, that is, the video image information, for example by graying, binarization and denoising, and then calculate the area of the flame region in the video image, which the present disclosure does not limit.
Step 304, determining a first flame judgment result according to the area of the flame area.
Specifically, the device may compare the area of the flame region with a specified area threshold to obtain the first flame determination result. In the present disclosure, the case where the area of the flame region is greater than the specified threshold may be taken as the first specified result; if the area of the flame region is less than or equal to the specified threshold, the first flame determination result is not the first specified result.
Optionally, in a case where the first flame determination result is not the first specified result, the device may determine that the first recognition result is that there is no flame at the robot's current position.
Optionally, in a case that the first determination result is the first specified result, the apparatus may further process the video image to determine whether there is a flame at the current position of the robot.
Step 305, in the case that the first flame determination result is the first specified result, a first color analysis is performed on the flame region to determine a second flame determination result.
Specifically, if the current first flame determination result is the first specified result, it indicates that the area of the flame region is larger than the specified threshold, and the device may perform a first color analysis on the flame region, for example, RGB color analysis (R represents red, G represents green, and B represents blue), that is, perform a color analysis on the image of the flame region using the RGB color as a criterion.
It is understood that the device may also utilize the HSV color space or HLS color space to analyze the flame region, which is not limited herein.
In the present disclosure, the case where the RGB color values of the flame-region image are greater than the preset RGB color threshold may be taken as the second specified result; when the RGB color values of the image in the flame region are greater than the preset RGB color threshold, the device may further process the image in the flame region.
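A frequently used RGB flame heuristic of this kind checks that the red channel is bright and dominant; the exact rule below (R ≥ G ≥ B plus a fixed red threshold, then a minimum fraction of flame-colored pixels in the region) is an assumption for illustration, not the patent's specific threshold test:

```python
def rgb_flame_pixel(r, g, b, red_threshold=190):
    """Heuristic RGB flame test: red channel bright and dominant.

    The R >= G >= B ordering and the fixed red_threshold are
    illustrative assumptions, not values from this disclosure.
    """
    return r >= red_threshold and r >= g >= b


def rgb_flame_ratio(pixels, min_ratio=0.5):
    """Second flame determination sketch: True (i.e. the second
    specified result) when the fraction of flame-colored pixels in
    the region reaches min_ratio (also an assumed value)."""
    hits = sum(1 for p in pixels if rgb_flame_pixel(*p))
    return hits / len(pixels) >= min_ratio
```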
Optionally, in a case that the second flame determination result is not the second specified result, the device may determine that the first recognition result is that there is no flame at the robot's current position.
Step 306, in the case that the second flame determination result is the second specified result, a second color analysis is performed on the flame region to determine a third flame determination result.
Specifically, if the current second flame determination result is the second specified result, the RGB color values of the image in the flame region are greater than the preset RGB color threshold, and the device may perform a second color analysis on the flame region, for example a YCbCr flame color analysis (Y is the luminance signal, Cb the blue-difference chroma component, and Cr the red-difference chroma component), that is, a color analysis of the flame-region image using YCbCr color as the criterion.
In the disclosure, the case where the YCbCr color values are greater than the preset YCbCr color threshold may be taken as the third specified result; if the YCbCr color values are greater than the preset YCbCr color threshold, the device may further process the image of the flame region.
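The YCbCr check can be sketched with the standard full-range BT.601 conversion followed by a common flame heuristic (bright and reddish: Y > Cb and Cr > Cb); the heuristic itself is an assumption, not the disclosure's exact threshold comparison:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr


def ycbcr_flame_pixel(r, g, b):
    """Common YCbCr flame heuristic: the pixel is bright (Y > Cb) and
    reddish (Cr > Cb). This rule is an illustrative assumption, not
    the patent's preset YCbCr threshold test."""
    y, cb, cr = rgb_to_ycbcr(r, g, b)
    return y > cb and cr > cb
```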
Optionally, in a case that the third flame determination result is not the third specified result, the device may determine that the first recognition result is that there is no flame at the robot's current position.
Step 307, in the case that the third flame determination result is the third specified result, a flame sharp-corner analysis is performed on the flame region to determine the first recognition result.
Specifically, if the third flame determination result is the third specified result, that is, the YCbCr color values are greater than the preset YCbCr color threshold, the device may perform a gray-gradient analysis and a sharp-corner analysis on the flame-region image; the disclosure is not limited in this respect.
A sharp corner is a protrusion on the edge of the flame contour, and its vertex is a local extreme point of the flame image. Flame sharp corners are long and narrow, and when a mine fire occurs the number of sharp corners jumps irregularly, so the device can judge whether a fire has occurred by identifying the number of sharp corners.
Therefore, the device may determine that the first recognition result is that there is a flame at the robot's current position when the number of sharp corners in the flame region reaches a specified threshold, and that there is no flame when the number of sharp corners is smaller than the specified threshold.
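The sharp-corner count can be approximated by counting local extreme points along the upper contour of the binarized flame silhouette; representing the contour as per-column heights and fixing a corner threshold are simplifying assumptions of this sketch:

```python
def count_sharp_corners(contour_heights):
    """Count local maxima (candidate flame tips) along the upper edge
    of a binarized flame region. contour_heights[i] is the height of
    the flame silhouette at column i; a 'sharp corner' here is a
    strict local extreme point, a simplified stand-in for the
    narrow-tip test described in the text."""
    corners = 0
    h = contour_heights
    for i in range(1, len(h) - 1):
        if h[i] > h[i - 1] and h[i] > h[i + 1]:
            corners += 1
    return corners


def first_recognition_from_corners(contour_heights, corner_threshold=3):
    """First recognition result sketch: flame present when the number
    of sharp corners reaches the threshold (threshold is an assumed
    value, not taken from this disclosure)."""
    return count_sharp_corners(contour_heights) >= corner_threshold
```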
Step 308, the second monitoring information is processed to obtain a second recognition result.
Step 309, whether a fire exists at the robot's current position is determined according to the first recognition result and the second recognition result.
It should be noted that, for the specific implementation of steps 308 and 309, reference may be made to steps 104 and 105 of the foregoing embodiment, and details are not repeated here.
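One plausible fusion of the two results is a conservative logical AND, declaring a fire only when both the video-based and the sensor-based recognitions agree; the disclosure only states that the two results are combined, so this exact rule is an assumption:

```python
def fire_at_position(first_recognition, second_recognition):
    """Fuse the video-based flame result with the sensor-based smoke
    result. Requiring both to agree (logical AND) is one conservative
    choice; the disclosure does not fix the combination rule, so this
    is an illustrative assumption."""
    return first_recognition and second_recognition
```

An OR combination would instead favor sensitivity over false-alarm suppression; which trade-off is appropriate depends on the deployment.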
The device in this embodiment of the disclosure first acquires first monitoring information collected by a first device in the robot, where the first device collects environmental information of the robot's current position, and second monitoring information collected by a second device, where the second device collects smoke and sensitive-gas information of the robot's current position. It then determines a first flame determination result according to the area of the flame region; in a case where the first flame determination result is the first specified result, it performs a first color analysis on the flame region to determine a second flame determination result; in a case where the second flame determination result is the second specified result, it performs a second color analysis on the flame region to determine a third flame determination result; and in a case where the third flame determination result is the third specified result, it performs a sharp-corner analysis on the flame region and determines that the first recognition result is that there is a flame at the robot's current position when the number of sharp corners reaches the specified threshold. The device then processes the second monitoring information to obtain a second recognition result, and finally determines, according to the first recognition result and the second recognition result, whether a fire exists at the robot's current position. Because the first recognition result and the second recognition result are combined, the device can determine from both the smoke and the flame perspectives whether a fire has broken out in the mine where the robot is located, which effectively improves the accuracy and reliability of monitoring.
In order to realize the embodiment, the embodiment of the disclosure also provides a mine fire monitoring device based on the robot. Fig. 4 is a schematic structural diagram of a mine fire monitoring device based on a robot according to an embodiment of the present disclosure.
As shown in fig. 4, the robot-based mine fire monitoring apparatus includes: a first obtaining module 410, a second obtaining module 420, a first processing module 430, a second processing module 440, and a determining module 450.
A first obtaining module 410, configured to obtain first monitoring information collected by a first device in the robot, where the first device is used to collect environmental information of a current location of the robot;
a second obtaining module 420, configured to obtain second monitoring information collected by a second device in the robot, where the second device is used to collect information about smoke and sensitive gas at a current position of the robot;
the first processing module 430 is configured to process the first monitoring information to obtain a first identification result;
the second processing module 440 is configured to process the second monitoring information to obtain a second identification result;
the determining module 450 is configured to determine whether a fire is present at the current location of the robot according to the first recognition result and the second recognition result.
Optionally, the first monitoring information is video information, and the first processing module is specifically configured to:
performing motion recognition on the first monitoring information to determine each frame of image containing a suspected smoke area;
inputting each frame of image containing a suspected smoke area into a trained neural network model to determine the smoke area contained in the first monitoring information;
and determining a first recognition result according to the area of the smoke area.
Optionally, the first monitoring information is video information, and the first processing module includes:
the first determining unit is used for preprocessing the first monitoring information to determine a flame area contained in the video information;
the second determining unit is used for determining a first flame judgment result according to the area of the flame area;
the third determining unit is used for performing first color analysis on the flame area to determine a second flame judgment result under the condition that the first flame judgment result is the first specified result;
the fourth determining unit is used for performing second color analysis on the flame area to determine a third flame judgment result under the condition that the second flame judgment result is the second specified result;
and the fifth determining unit is used for analyzing the flame sharp angle of the flame area under the condition that the third flame judgment result is the third specified result, and determining the first identification result.
Optionally, the third determining unit is further configured to:
and under the condition that the first flame determination result is not the first specified result, determining that the first recognition result is that there is no flame at the current position of the robot.
Optionally, the fourth determining unit is further configured to:
in a case where the second flame determination result is not the second specified result, the first recognition result is determined to be flameless.
Optionally, the fifth determining unit is further configured to:
and determining that the first recognition result is flameless if the third flame determination result is not the third specified result.
Optionally, the second processing module is specifically configured to:
determining, according to the second monitoring information, the smoke concentration and the sensitive-gas concentration at the current position of the robot;
and determining whether the current position of the robot has smoke or not according to the concentrations of the smoke and the sensitive gas.
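A minimal sketch of this sensor-side decision might compare the measured concentrations against fixed limits; the choice of CO as the sensitive gas and both limit values are illustrative assumptions, not thresholds from this disclosure:

```python
def second_recognition(smoke_mg_m3, co_ppm,
                       smoke_limit=2.0, co_limit=24.0):
    """Second recognition result sketch: smoke is deemed present when
    either the smoke concentration or the sensitive-gas concentration
    (here CO, as an example) exceeds its limit. Both limit values are
    assumed for illustration only."""
    return smoke_mg_m3 > smoke_limit or co_ppm > co_limit
```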
The device of the embodiment of the disclosure firstly acquires first monitoring information acquired by first equipment in the robot, wherein the first equipment is used for acquiring the environment of the current position of the robot, acquires second monitoring information acquired by second equipment in the robot, wherein the second equipment is used for acquiring smoke and sensitive gas information of the current position of the robot, then processes the first monitoring information to acquire a first identification result, and processes the second monitoring information to acquire a second identification result, and finally determines whether a fire disaster exists in the current position of the robot according to the first identification result and the second identification result. Therefore, the first recognition result and the second recognition result are combined, the characteristics of the environment, smoke, sensitive gas information and the like of the current position of the robot are fused, the reliability and the accuracy of coal mine fire recognition can be improved, and fire monitoring in different areas can be realized due to the fact that the method is based on the robot.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the device 500 comprises a computing unit 501, which may perform various appropriate actions and processes in accordance with a computer program stored in a read-only memory (ROM) 502 or a computer program loaded from a storage unit 508 into a random access memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The computing unit 501 performs the various methods and processes described above, such as a robot-based mine fire monitoring method. For example, in some embodiments, the robot-based mine fire monitoring method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into RAM 503 and executed by the computing unit 501, one or more steps of the robot-based mine fire monitoring method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the robot-based mine fire monitoring method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), the internet, and blockchain networks.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system and overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (17)

1. A mine fire monitoring method based on a robot is characterized by comprising the following steps:
acquiring first monitoring information acquired by first equipment in the robot, wherein the first equipment is used for acquiring environmental information of the current position of the robot;
acquiring second monitoring information acquired by second equipment in the robot, wherein the second equipment is used for acquiring smoke and sensitive gas information of the current position of the robot;
processing the first monitoring information to obtain a first identification result;
processing the second monitoring information to obtain a second identification result;
and determining whether the current position of the robot has a fire or not according to the first recognition result and the second recognition result.
2. The method of claim 1, wherein the first monitoring information is video information, and the processing the first monitoring information to obtain the first recognition result comprises:
performing motion recognition on the first monitoring information to determine each frame of image containing a suspected smoke area;
inputting each frame of image containing the suspected smoke area into a trained neural network model to determine the smoke area contained in the first monitoring information;
and determining a first recognition result according to the area of the smoke region.
3. The method of claim 1, wherein the first monitoring information is video information, and the processing the first monitoring information to obtain the first recognition result comprises:
preprocessing the first monitoring information to determine a flame region contained in the video information;
determining a first flame judgment result according to the area of the flame area;
under the condition that the first flame judgment result is a first specified result, performing first color analysis on the flame area to determine a second flame judgment result;
under the condition that the second flame judgment result is a second specified result, performing second color analysis on the flame area to determine a third flame judgment result;
and under the condition that the third flame judgment result is a third specified result, performing flame sharp-corner analysis on the flame region to determine the first recognition result.
4. The method of claim 3, wherein after the determining a first flame judgment result according to the area of the flame region, the method further comprises:
under the condition that the first flame judgment result is not the first specified result, determining that the first recognition result is that the current position of the robot is flameless.
5. The method of claim 3, wherein after the performing a first color analysis on the flame region to determine a second flame judgment result, the method further comprises:
determining that the first recognition result is flameless if the second flame judgment result is not the second specified result.
6. The method of claim 3, wherein after the performing a second color analysis on the flame region to determine a third flame judgment result, the method further comprises:
determining that the first recognition result is flameless if the third flame judgment result is not the third specified result.
7. The method according to any one of claims 1 to 6, wherein the processing the second monitoring information to obtain a second recognition result comprises:
according to the second monitoring information, determining the smoke concentration and the concentration of sensitive gas of the current position of the inspection robot;
and determining whether the current position of the robot has smoke or not according to the concentrations of the smoke and the sensitive gas.
8. A mine fire monitoring device based on robot, its characterized in that includes:
the robot monitoring system comprises a first acquisition module, a second acquisition module and a monitoring module, wherein the first acquisition module is used for acquiring first monitoring information acquired by first equipment in the robot, and the first equipment is used for acquiring environmental information of the current position of the robot;
the second acquisition module is used for acquiring second monitoring information acquired by second equipment in the robot, wherein the second equipment is used for acquiring smoke and sensitive gas information of the current position of the robot;
the first processing module is used for processing the first monitoring information to obtain a first identification result;
the second processing module is used for processing the second monitoring information to obtain a second identification result;
and the determining module is used for determining whether the current position of the robot has a fire disaster or not according to the first recognition result and the second recognition result.
9. The apparatus of claim 8, wherein the first monitoring information is video information, and the first processing module is specifically configured to:
performing motion recognition on the first monitoring information to determine each frame of image containing a suspected smoke area;
inputting each frame of image containing the suspected smoke area into a trained neural network model to determine the smoke area contained in the first monitoring information;
and determining a first recognition result according to the area of the smoke region.
10. The apparatus of claim 8, wherein the first monitoring information is video information, and the first processing module comprises:
a first determining unit, configured to pre-process the first monitoring information to determine a flame region included in the video information;
the second determining unit is used for determining a first flame judgment result according to the area of the flame region;
the third determining unit is used for performing first color analysis on the flame area to determine a second flame judgment result under the condition that the first flame judgment result is a first specified result;
a fourth determining unit, configured to perform a second color analysis on the flame region to determine a third flame judgment result when the second flame judgment result is a second specified result;
and the fifth determining unit is used for performing flame sharp angle analysis on the flame area to determine the first recognition result under the condition that the third flame judgment result is a third specified result.
11. The apparatus of claim 10, wherein the third determining unit is further configured to:
and under the condition that the first flame judgment result is not the first specified result, determining that the first recognition result is that the current position of the robot is flameless.
12. The apparatus of claim 10, wherein the fourth determining unit is further configured to:
determining that the first recognition result is flameless if the second flame judgment result is not the second specified result.
13. The apparatus of claim 10, wherein the fifth determining unit is further configured to:
determining that the first recognition result is flameless if the third flame judgment result is not the third specified result.
14. The apparatus according to any one of claims 8 to 13, wherein the second processing module is specifically configured to:
according to the second monitoring information, determining the smoke concentration and the concentration of sensitive gas of the current position of the inspection robot;
and determining whether the current position of the robot has smoke or not according to the concentrations of the smoke and the sensitive gas.
15. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
16. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-7.
17. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN202110648237.0A 2021-06-10 2021-06-10 Mine fire monitoring method, device and equipment based on robot and storage medium Pending CN113361420A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110648237.0A CN113361420A (en) 2021-06-10 2021-06-10 Mine fire monitoring method, device and equipment based on robot and storage medium


Publications (1)

Publication Number Publication Date
CN113361420A true CN113361420A (en) 2021-09-07

Family

ID=77533776

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110648237.0A Pending CN113361420A (en) 2021-06-10 2021-06-10 Mine fire monitoring method, device and equipment based on robot and storage medium

Country Status (1)

Country Link
CN (1) CN113361420A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108295407A (en) * 2017-12-21 2018-07-20 山东康威通信技术股份有限公司 Robot cable piping lane scene fire alarm and extinguishing method, device, system
CN109145689A (en) * 2017-06-28 2019-01-04 南京理工大学 A kind of robot fire detection method
CN111111074A (en) * 2019-12-16 2020-05-08 山东康威通信技术股份有限公司 Fire extinguishing scheduling method and system for power tunnel fire-fighting robot
US20220362939A1 (en) * 2019-10-24 2022-11-17 Ecovacs Commercial Robotics Co., Ltd. Robot positioning method and apparatus, intelligent robot, and storage medium


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114446002A (en) * 2022-01-17 2022-05-06 厦门理工学院 Fire on-line monitoring method, device, medium and system
CN114446002B (en) * 2022-01-17 2023-10-31 厦门理工学院 Fire on-line monitoring method, device, medium and system
CN114506221A (en) * 2022-03-03 2022-05-17 西南交通大学 Tunnel fire scene environment detection system and method based on high-temperature superconducting magnetic levitation
CN114506221B (en) * 2022-03-03 2023-08-08 西南交通大学 Tunnel fire scene environment detection system and method based on high-temperature superconductive magnetic levitation
CN116630843A (en) * 2023-04-13 2023-08-22 安徽中科数智信息科技有限公司 Fire prevention supervision and management method and system for fire rescue
CN116630843B (en) * 2023-04-13 2024-05-17 安徽中科数智信息科技有限公司 Fire prevention supervision and management method and system for fire rescue

Similar Documents

Publication Publication Date Title
CN113361420A (en) Mine fire monitoring method, device and equipment based on robot and storage medium
CN111392619B (en) Tower crane early warning method, device and system and storage medium
CN113283344B (en) Mining conveyor belt deviation detection method based on semantic segmentation network
CN111178424A (en) Petrochemical production site safety compliance real-time detection system and method
CN115620192A (en) Method and device for detecting wearing of safety rope in aerial work
CN112257604A (en) Image detection method, image detection device, electronic equipment and storage medium
CN113343779A (en) Environment anomaly detection method and device, computer equipment and storage medium
CN114663672B (en) Method and system for detecting corrosion of steel member of power transmission line tower
CN110378421B (en) Coal mine fire identification method based on convolutional neural network
CN114445398A (en) Method and device for monitoring state of side protection plate of hydraulic support of coal mining machine
CN117129815B (en) Comprehensive detection method and system for multi-degradation insulator based on Internet of things
CN116682162A (en) Robot detection algorithm based on real-time video stream
CN116503494A (en) Infrared image generation method, device, equipment and storage medium
CN115965625A (en) Instrument detection device based on visual identification and detection method thereof
CN113780178A (en) Road detection method, road detection device, electronic equipment and storage medium
CN114708498A (en) Image processing method, image processing apparatus, electronic device, and storage medium
Netto et al. Early Defect Detection in Conveyor Belts using Machine Vision.
CN115474441A (en) Obstacle detection method, apparatus, device, and computer storage medium
CN114155589B (en) Image processing method, device, equipment and storage medium
CN110263661B (en) Flame detection method and device based on new color space and fast-LOF
CN115775365A (en) Controlled smoke and fire interference identification method and device for historical relic and ancient building and computing equipment
CN118262270A (en) Early warning information determining method, device, server and storage medium
CN118172698A (en) Recognition method, device, equipment and medium for monitoring sentry abnormal behavior
CN115731386A (en) Method and device for identifying lamp cap type of traffic signal lamp and automatic driving vehicle
CN114863538A (en) Abnormal behavior identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination