CN112183397A - Method for identifying guardrail-sitting behavior based on a dilated convolutional neural network - Google Patents

Method for identifying guardrail-sitting behavior based on a dilated convolutional neural network Download PDF

Info

Publication number
CN112183397A
CN112183397A (application CN202011062063.1A)
Authority
CN
China
Prior art keywords
neural network
pedestrian
sitting
pedestrians
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011062063.1A
Other languages
Chinese (zh)
Inventor
陈友明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Honghe Communication Co ltd
Original Assignee
Sichuan Honghe Communication Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Honghe Communication Co ltd filed Critical Sichuan Honghe Communication Co ltd
Priority to CN202011062063.1A
Publication of CN112183397A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Alarm Systems (AREA)

Abstract

The invention discloses a method for identifying guardrail-sitting behavior based on a dilated convolutional neural network, comprising the following steps: the monitoring system captures real-time video of the area around the fuel dispenser and takes a snapshot at a preset interval to obtain a set of real-time images; pedestrians in the real-time image set are detected with the YOLOv3 algorithm and extracted from the images to obtain a pedestrian image set; pedestrian classes are defined according to the arm angles of the pedestrians in the images, where 0 denotes a normal pedestrian, 1 denotes a pedestrian sitting on the guardrail, and 2 denotes another abnormal pedestrian; a dilated convolutional neural network is constructed and trained on the pedestrian image set to obtain a trained dilated convolutional neural network; the trained dilated convolutional neural network then judges the pedestrian image set: if the output is 1, the pedestrian is judged to be sitting on the guardrail; if the output is 2, it is judged that no pedestrian is sitting on the guardrail.

Description

Method for identifying guardrail-sitting behavior based on a dilated convolutional neural network
Technical Field
The invention relates to the technical field of image processing, and in particular to a method for identifying guardrail-sitting behavior based on a dilated convolutional neural network.
Background
When a gas station is built, cameras are installed in the fuel dispenser area according to security requirements, and the safe operation of the station's dispensers is checked by reviewing the cameras' video recordings.
Customers often sit on the guardrail beside the fuel dispenser to rest. In that situation the guardrail's protective function fails, which poses a serious safety hazard to the station's operation.
The prior art relies entirely on manual intervention: staff watch the cameras to see whether anyone is on the guardrail beside the fuel dispenser, and there is no objective, automated, accurate processing method. This manual approach has three problems:
1. Labor cost is high, since staff must monitor in real time.
2. The risk of error is high; manual inspection inevitably produces mistakes due to occasional fatigue or inattention.
3. Supervisors have essentially no way to oversee and manage the monitoring personnel.
Disclosure of Invention
To solve the prior-art problem that behavior around the fuel dispenser is supervised only through manual intervention, the invention provides a method for identifying guardrail-sitting behavior based on a dilated convolutional neural network.
The invention is realized by the following technical scheme:
the method for identifying the behavior of the sitting protective fence based on the cavity convolutional neural network comprises the following steps of:
s1: the monitoring system collects real-time videos of the area near the oiling machine in real time, and captures an image every preset time to obtain a real-time image set;
s2: detecting the pedestrians in the real-time image set by using a YOLO V3 algorithm, and extracting the pedestrians in the image to obtain a pedestrian image set;
s3: defining the classes of the pedestrians according to the arm angles of the pedestrians in the image, wherein 0 represents a normal pedestrian, 1 represents a pedestrian sitting on the guard rail, and 2 represents other pedestrians in abnormal conditions;
s4: constructing a cavity convolution neural network, and training the pedestrian image set by using the cavity convolution neural network to obtain a trained cavity convolution neural network;
s5: judging a pedestrian image set by using the trained hole convolution neural network, and if the output is 1, judging that the pedestrian has a sitting protective guard behavior; if the output is 2, judging that no pedestrian sits on the protective guard.
Further, on the basis of the above scheme: the monitoring system in step S1 comprises a plurality of cameras; each camera is installed at a horizontal distance of 8-12 meters from the fuel dispenser it monitors and at a height of 3-5 meters above the ground.
Further, on the basis of the above scheme: the specific method for extracting pedestrians from an image in step S2 is to crop each pedestrian out of the image, generating an image containing only that pedestrian.
Further, on the basis of the above scheme, step S4 comprises the following sub-steps:
S41: selecting a training dataset and a validation dataset;
S42: defining a dilated convolution kernel with kernel size 5 × n and parameter count 3 × n;
S43: building the dilated convolutional neural network, whose input is 256 × 256 × 3; after 5 rounds of convolution and pooling the output is 1 × 1 × 3, giving in real time the probabilities that the data belong to classes 0, 1 and 2;
S44: defining a loss function Loss, calculated by the formula below:
(the formula for Loss is shown as an image in the original publication and is not reproduced here)
where m is the number of network output classes, the quantity entering the loss is the output of the network's fully connected layer (its symbol is lost in this translation), a and b are network hyperparameters, and y is the ground-truth label of the data;
S45: training on the training set by gradient descent through the loss function to optimize the dilated convolutional neural network;
S46: validating the validation set with the dilated convolutional neural network; when the validation accuracy exceeds 95% and no longer improves, training ends and the trained dilated convolutional neural network is obtained.
Further, on the basis of the above scheme: the training dataset in step S41 contains 20,000 labeled images and the validation dataset contains 2,000 labeled images.
Further, on the basis of the above scheme: the training dataset and the validation dataset in step S41 each contain the three classes of data 0, 1 and 2 in a ratio of 1:2:1.
Further, on the basis of the above scheme: the network hyperparameters a and b in step S44 take the values a = 5 and b = 1.
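The Loss formula itself appears only as an image in the publication, so its exact form is not recoverable here. One common construction consistent with emphasizing class 1 and with the stated hyperparameters a = 5 and b = 1 is a class-weighted cross-entropy; the sketch below is an assumption, not the patented formula.

```python
# Assumed class-weighted cross-entropy: weight a on class 1 (guardrail-sitting),
# weight b on classes 0 and 2. Not the patent's exact formula.
import numpy as np

def weighted_cross_entropy(logits, label, a=5.0, b=1.0):
    """Cross-entropy over the m = 3 classes with extra weight on class 1."""
    z = np.asarray(logits, dtype=float)
    p = np.exp(z - z.max())
    p /= p.sum()                      # softmax over the class logits
    weight = a if label == 1 else b   # emphasize the guardrail-sitting class
    return -weight * np.log(p[label])

loss_sitting = weighted_cross_entropy([0.0, 0.0, 0.0], label=1)
loss_normal = weighted_cross_entropy([0.0, 0.0, 0.0], label=0)
print(loss_sitting / loss_normal)  # 5.0: class-1 errors cost a/b = 5x more
```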
Compared with the prior art, the invention has the following advantages and beneficial effects:
The invention overcomes the prior-art defect that only manual management is available, which delays supervision of guardrail risk-control information and prevents informed decisions. It provides a new intelligent supervision algorithm that perceives the risk of the guardrail being sat on in time, reduces management cost, saves labor cost, and effectively improves the safety management of the gas station.
Drawings
A further understanding of the embodiments of the invention may be obtained from the claims and from the following description of the preferred embodiments taken in conjunction with the accompanying drawings. Individual features of the different embodiments shown in the figures may be combined in any desired manner without going beyond the scope of the invention. In the drawings:
FIG. 1 is a logic flow diagram of the invention;
FIG. 2 is a schematic diagram of the camera arrangement in an embodiment;
FIG. 3 shows a convolution kernel of the invention;
FIG. 4 shows the dilated convolutional neural network.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
To make the objects, technical solutions and advantages of the invention clearer, the invention is described in further detail below with reference to embodiments and the accompanying drawings; the exemplary embodiments and their description are only used to explain the invention and are not meant to limit it.
Embodiment:
As shown in FIG. 1, in this embodiment the method for identifying guardrail-sitting behavior based on a dilated convolutional neural network comprises the following steps:
S1: the monitoring system captures real-time video of the area around the fuel dispenser and takes a snapshot at a preset interval to obtain a set of real-time images;
S2: pedestrians in the real-time image set are detected with the YOLOv3 algorithm and extracted from the images to obtain a pedestrian image set;
S3: pedestrian classes are defined according to the arm angles of the pedestrians in the images, where 0 denotes a normal pedestrian, 1 denotes a pedestrian sitting on the guardrail, and 2 denotes another abnormal pedestrian;
S4: a dilated convolutional neural network is constructed and trained on the pedestrian image set to obtain a trained dilated convolutional neural network;
S5: the trained dilated convolutional neural network judges the pedestrian image set: if the output is 1, the pedestrian is judged to be sitting on the guardrail; if the output is 2, it is judged that no pedestrian is sitting on the guardrail.
As shown in FIG. 2, the monitoring system in step S1 comprises a plurality of cameras; each camera is installed at a horizontal distance of 10 meters from the fuel dispenser it monitors and at a height of 3 meters above the ground.
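As a quick sanity check on the embodiment's mounting geometry (10 m horizontal offset, 3 m height), the camera's line-of-sight distance and depression angle toward the base of the fuel dispenser follow from basic trigonometry; the figures below are illustrative and not stated in the patent.

```python
# Line-of-sight distance and depression angle for a camera mounted 10 m away
# horizontally and 3 m above the ground (illustrative arithmetic only).
import math

horizontal, height = 10.0, 3.0
line_of_sight = math.hypot(horizontal, height)                 # sqrt(10^2 + 3^2)
depression_deg = math.degrees(math.atan2(height, horizontal))  # angle below horizontal
print(round(line_of_sight, 2), round(depression_deg, 1))       # 10.44 16.7
```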
Preferably, the specific method for extracting pedestrians from an image in step S2 is to crop each pedestrian out of the image, generating an image containing only that pedestrian.
Preferably, step S4 comprises the following sub-steps:
S41: selecting a training dataset and a validation dataset;
S42: as shown in FIG. 3, defining a dilated convolution kernel with kernel size 5 × n and parameter count 3 × n;
S43: as shown in FIG. 4, building the dilated convolutional neural network, whose input is 256 × 256 × 3; after 5 rounds of convolution and pooling the output is 1 × 1 × 3, giving in real time the probabilities that the data belong to classes 0, 1 and 2;
S44: the method is particularly concerned with accuracy on class 1 and not with confusion between the other two classes, so a custom weighted loss function is designed to make the network focus its learning on the class of interest; the loss function Loss is defined by the formula below:
(the formula for Loss is shown as an image in the original publication and is not reproduced here)
where m is the number of network output classes, the quantity entering the loss is the output of the network's fully connected layer (its symbol is lost in this translation), a and b are network hyperparameters, and y is the ground-truth label of the data;
S45: training on the training set by gradient descent through the loss function to optimize the dilated convolutional neural network;
S46: validating the validation set with the dilated convolutional neural network; when the validation accuracy exceeds 95% and no longer improves, training ends and the trained dilated convolutional neural network is obtained.
Preferably, the training dataset in step S41 contains 20,000 labeled images and the validation dataset contains 2,000 labeled images.
Preferably, the training dataset and the validation dataset in step S41 each contain the three classes of data 0, 1 and 2 in a ratio of 1:2:1.
Preferably, the network hyperparameters a and b in step S44 take the values a = 5 and b = 1.
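A quick shape check on S43: five stride-2 convolution-plus-pooling stages reduce a 256 × 256 input to 8 × 8, so the stated 1 × 1 × 3 output implies a final global pooling or fully connected stage (S44 does mention a fully connected layer; the exact head is not spelled out in the text). The arithmetic:

```python
# Trace the spatial size of a 256 x 256 input through 5 halving stages.
side = 256
for stage in range(5):
    side //= 2        # each convolution + pooling stage halves the spatial size
print(side)           # 8: spatial size after the 5 stages, before the final head
```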
As can be seen from the embodiment, the invention overcomes the problems that, with only manual management available in the prior art, supervision of guardrail risk-control information lags and cannot support decisions. It provides a new intelligent supervision algorithm so that the risk of the guardrail being sat on is perceived in time, management cost is reduced, labor cost is saved, and the safety management of the gas station is effectively improved.
The above embodiments further describe the objects, technical solutions and advantages of the invention in detail. It should be understood that they are only examples of the invention and are not intended to limit its scope; all equivalent structures or equivalent processes derived from the content of this specification and the drawings, whether applied directly or indirectly in other related technical fields, fall within the scope of the invention.

Claims (7)

1. A method for identifying guardrail-sitting behavior based on a dilated convolutional neural network, characterized by comprising the following steps:
S1: the monitoring system captures real-time video of the area around the fuel dispenser and takes a snapshot at a preset interval to obtain a set of real-time images;
S2: pedestrians in the real-time image set are detected with the YOLOv3 algorithm and extracted from the images to obtain a pedestrian image set;
S3: pedestrian classes are defined according to the arm angles of the pedestrians in the images, where 0 denotes a normal pedestrian, 1 denotes a pedestrian sitting on the guardrail, and 2 denotes another abnormal pedestrian;
S4: a dilated convolutional neural network is constructed and trained on the pedestrian image set to obtain a trained dilated convolutional neural network;
S5: the trained dilated convolutional neural network judges the pedestrian image set: if the output is 1, the pedestrian is judged to be sitting on the guardrail; if the output is 2, it is judged that no pedestrian is sitting on the guardrail.
2. The method for identifying guardrail-sitting behavior based on a dilated convolutional neural network of claim 1, wherein the monitoring system in step S1 comprises a plurality of cameras; each camera is installed at a horizontal distance of 8-12 meters from the fuel dispenser it monitors and at a height of 3-5 meters above the ground.
3. The method for identifying guardrail-sitting behavior based on a dilated convolutional neural network of claim 1, wherein the specific method for extracting pedestrians from an image in step S2 is to crop each pedestrian out of the image, generating an image containing only that pedestrian.
4. The method for identifying guardrail-sitting behavior based on a dilated convolutional neural network of claim 1, wherein step S4 comprises the following sub-steps:
S41: selecting a training dataset and a validation dataset;
S42: defining a dilated convolution kernel with kernel size 5 × n and parameter count 3 × n;
S43: building the dilated convolutional neural network, whose input is 256 × 256 × 3; after 5 rounds of convolution and pooling the output is 1 × 1 × 3, giving in real time the probabilities that the data belong to classes 0, 1 and 2;
S44: defining a loss function Loss, calculated by the formula below:
(the formula for Loss is shown as an image in the original publication and is not reproduced here)
where m is the number of network output classes, the quantity entering the loss is the output of the network's fully connected layer (its symbol is lost in this translation), a and b are network hyperparameters, and y is the ground-truth label of the data;
S45: training on the training set by gradient descent through the loss function to optimize the dilated convolutional neural network;
S46: validating the validation set with the dilated convolutional neural network; when the validation accuracy exceeds 95% and no longer improves, training ends and the trained dilated convolutional neural network is obtained.
5. The method for identifying guardrail-sitting behavior based on a dilated convolutional neural network of claim 4, wherein the training dataset in step S41 contains 20,000 labeled images and the validation dataset contains 2,000 labeled images.
6. The method for identifying guardrail-sitting behavior based on a dilated convolutional neural network of claim 4, wherein the training dataset and the validation dataset in step S41 each contain the three classes of data 0, 1 and 2 in a ratio of 1:2:1.
7. The method for identifying guardrail-sitting behavior based on a dilated convolutional neural network of claim 4, wherein the network hyperparameters a and b in step S44 take the values a = 5 and b = 1.
CN202011062063.1A 2020-09-30 2020-09-30 Method for identifying guardrail-sitting behavior based on a dilated convolutional neural network Pending CN112183397A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011062063.1A CN112183397A (en) 2020-09-30 2020-09-30 Method for identifying guardrail-sitting behavior based on a dilated convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011062063.1A CN112183397A (en) 2020-09-30 2020-09-30 Method for identifying guardrail-sitting behavior based on a dilated convolutional neural network

Publications (1)

Publication Number Publication Date
CN112183397A true CN112183397A (en) 2021-01-05

Family

ID=73947476

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011062063.1A Pending CN112183397A (en) 2020-09-30 2020-09-30 Method for identifying guardrail-sitting behavior based on a dilated convolutional neural network

Country Status (1)

Country Link
CN (1) CN112183397A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109284733A * 2018-10-15 2019-01-29 浙江工业大学 Shopping-guide omission-behavior monitoring method based on YOLO and multitask convolutional neural networks
US20190130580A1 * 2017-10-26 2019-05-02 Qualcomm Incorporated Methods and systems for applying complex object detection in a video analytics system
CN109993122A * 2019-04-02 2019-07-09 中国石油大学(华东) Escalator-riding pedestrian abnormal-behavior detection method based on deep convolutional neural networks
CN110119656A * 2018-02-07 2019-08-13 中国石油化工股份有限公司 Intelligent monitoring system and on-site monitoring method for personnel violations at operation sites
CN110414320A * 2019-06-13 2019-11-05 温州大学激光与光电智能制造研究院 Method and system for supervising production safety
CN110866453A * 2019-10-22 2020-03-06 同济大学 Real-time crowd stable-state identification method and device based on a convolutional neural network
CN111105047A * 2019-12-12 2020-05-05 陕西瑞海工程智慧数据科技有限公司 Operation and maintenance monitoring method and device, electronic equipment and storage medium
CN111223129A * 2020-01-10 2020-06-02 深圳中兴网信科技有限公司 Detection method, detection device, monitoring equipment and computer-readable storage medium
CN111461075A * 2020-05-09 2020-07-28 于珂 Guardrail-crossing detection method combining a deep neural network and a blockchain
CN111553329A * 2020-06-14 2020-08-18 深圳天海宸光科技有限公司 Machine-vision-based intelligent safety processing system and method for gas stations
WO2020164270A1 * 2019-02-15 2020-08-20 平安科技(深圳)有限公司 Deep-learning-based pedestrian detection method, system and apparatus, and storage medium
CN111582056A * 2020-04-17 2020-08-25 上善智城(苏州)信息科技有限公司 Automatic detection method for fire-fighting equipment at gas-station oil-unloading operation sites


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
元黎明: "Research on the detection and application of construction workers' unsafe behaviors based on image recognition technology" (基于图像识别技术的建筑工人不安全行为检测及应用研究), China Master's Theses Full-text Database, Engineering Science and Technology I (《中国优秀硕士学位论文全文数据库工程科技Ⅰ辑》), no. 09, pages 026-7 *
蒋红权: "Innovative application design and implementation of intelligent high-definition video *** for gas stations" (加油站智能高清视频***创新应用设计实现), 《信息***工程》, pages 116-118 *

Similar Documents

Publication Publication Date Title
CN106199276B Intelligent diagnosis system and method for abnormal information in a power-information acquisition system
CN111241959B Method for detecting personnel not wearing safety helmets in construction-site video streams
CN107666594A Method for real-time monitoring of violating operations in video surveillance
CN108762228A Multi-condition fault monitoring method based on distributed PCA
CN100474878C Image quality prediction method and apparatus and fault diagnosis system
CN106950945B Fault detection method based on a variable-dimension independent component analysis model
JPH1145919A Manufacture of semiconductor substrate
CN109858389A Elevator people-counting method and system based on deep learning
CN109858367A Automated visual detection method and system for workers' unsafe behavior of crossing supports
CN112004061A Computer-vision-based intelligent monitoring method for oil-discharge flow conformity
KR20080070543A Early warning method for estimating inferiority in an automatic production line
CN109858886A Analysis method for improving control-measure success rates based on ensemble learning
CN103135519A Instrument status display device and instrument status display method
CN112990870A Patrol file generation method and device based on nuclear power equipment, and computer equipment
CN110458794B Quality detection method and device for rail-train accessories
CN106779096A Active early-warning system for repair-report situations in power distribution networks
CN111401131A Image processing method and device for tunnel pipe galleries, computer equipment and storage medium
CN116229560B Abnormal behavior recognition method and system based on human body posture
CN116700193A Intelligent monitoring and management system and method for factory workshops
CN112132092A Fire extinguisher and fire blanket identification method based on a convolutional neural network
CN116563781A Image monitoring and diagnosis method for inspection robots
CN113361686A Robot inspection method integrating multilayer heterogeneous multimodal convolutional neural networks
CN114912678A Online automatic detection and early-warning method and system for abnormal power-grid regulation operations
CN112183397A Method for identifying guardrail-sitting behavior based on a dilated convolutional neural network
CN109917184A Electricity-theft detection method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination