CN107396094B - Automatic detection method for single-camera damage in a multi-camera monitoring system - Google Patents

Automatic detection method for single-camera damage in a multi-camera monitoring system

Info

Publication number
CN107396094B
Authority
CN
China
Prior art keywords
camera
class
probability
network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710704123.7A
Other languages
Chinese (zh)
Other versions
CN107396094A (en)
Inventor
袁泽峰
李恒宇
饶进军
丁长权
谢少荣
罗均
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN201710704123.7A
Publication of CN107396094A
Application granted
Publication of CN107396094B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an automatic detection method for single-camera damage in a multi-camera monitoring system. The specific steps are as follows: (1) acquire images and create the training/test sample set; (2) construct the neural network; (3) train the network parameters while observing the network error, until the network converges; (4) deploy the trained network and monitor in real time. The method is based on convolutional neural networks, an efficient recognition approach that has developed rapidly and attracted wide attention in recent years. Raw video images pass through a series of feature-processing stages such as convolutional layers, pooling layers and activation layers, and the network finally outputs the image classification. The method can completely replace manual inspection, and offers automatic, real-time and efficient detection with high accuracy, requires no additional hardware, and is low in cost.

Description

Automatic detection method for single-camera damage in a multi-camera monitoring system
Technical field
The present invention relates to an automatic detection method for single-camera damage in a multi-camera monitoring system, and belongs to the field of video monitoring.
Background art
A video monitoring system is the physical basis for real-time monitoring of key departments and important places in every industry. Through it, management departments can obtain valid image data, and the course of sudden abnormal events can be monitored and recorded in time, so as to provide efficient and timely command, rapidly deploy police forces and solve cases. A video monitoring system composed of multiple cameras realizes the dual functions of monitoring and communication, and comprehensively meets the monitoring and emergency-command needs of fields such as traffic, water conservancy, oil fields, banking and telecommunications. Meeting these needs necessarily requires that every camera in the monitoring system work normally. Since a monitoring system usually monitors around the clock, 24 hours a day without interruption, the system must automatically notify maintenance personnel in real time to repair or replace a camera as soon as it is damaged.
In existing video monitoring systems, the maintenance and monitoring of cameras rely mainly on manual work. Real-time monitoring cannot be achieved, and the damaged camera cannot be identified promptly and accurately; the cameras have to be checked one by one, which is time-consuming, laborious and inefficient.
Summary of the invention
In view of the defects in the prior art, the object of the present invention is to provide an automatic detection method for single-camera damage in a multi-camera monitoring system, which offers automatic, real-time and efficient detection with high accuracy, requires no additional hardware, and is low in cost.
The technical solution adopted by the present invention to solve the technical problem is as follows:
An automatic detection method for single-camera damage in a multi-camera monitoring system, with the following specific steps:
(1) Acquire images and create the training/test sample set;
(1a) Acquire original images: simultaneously capture n images from each of the m monitoring cameras, where m and n are positive integers, scale them to 300 × 300, and place them correspondingly into m folders, where the n pictures in each folder are numbered from 1 to n in order of acquisition time;
(1b) Create damaged images: for each number from 1 to n, decide with 50% probability whether to damage that numbered picture. If yes, randomly choose one of the m folders, each folder being selected with probability 1/m, and overlay a solid-color image block of random shape and random color on the numbered picture in the chosen folder; the block covers at least 30% of the picture area and represents camera damage. Every image generates a text file of the same name whose content is the picture label: 1 if the picture is damaged, 0 otherwise;
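To illustrate step (1b), the following Python sketch overlays a solid-color block covering at least 30% of a 300 × 300 picture and writes the matching label file. The rectangular block shape, the file naming and the use of NumPy/Pillow are assumptions made for illustration; the patent only requires a random shape, a random color and at least 30% coverage.

    import random
    from pathlib import Path

    import numpy as np
    from PIL import Image

    def damage_image(img_path: str) -> None:
        """Overlay a random solid-color block (>= 30% of the area) and write the label file."""
        img = np.array(Image.open(img_path).convert("RGB"))   # expected 300 x 300 x 3
        h, w, _ = img.shape
        while True:
            bh, bw = random.randint(1, h), random.randint(1, w)
            if bh * bw >= 0.3 * h * w:                         # block covers >= 30% of the picture
                break
        top = random.randint(0, h - bh)
        left = random.randint(0, w - bw)
        color = [random.randint(0, 255) for _ in range(3)]     # random solid color
        img[top:top + bh, left:left + bw] = color
        Image.fromarray(img).save(img_path)
        Path(img_path).with_suffix(".txt").write_text("1")     # damaged -> label 1

    def mark_undamaged(img_path: str) -> None:
        Path(img_path).with_suffix(".txt").write_text("0")     # undamaged -> label 0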
(1c) Training stage: input the m images with the same number, one from each of the m folders, in order; at the concatenation layer the m images of size 300 × 300 × 3 are merged along the channel dimension into 300 × 300 × (3 × m), and the sample label is set to the index of the folder whose numbered image carries label 1. For example, for a certain number, if the image label in folder 1 is 1 and the labels of the images with that number in the remaining m−1 folders are 0, the merged data label is set to 1, representing a class-1 sample; if the image labels for that number are 0 in all m folders, the merged label is 0, representing a class-0 sample.
(2) Construct the neural network;
The input of the network is the real-time image frames of the m cameras, fed in a fixed order, where every image is scaled to 300 × 300 × 3, i.e. 300 pixels wide, 300 pixels high, with 3 channels per color image. The following concatenation layer merges the m camera images along the channel dimension into 300 × 300 × (3 × m). Finally, the fully connected layer outputs m+1 values, which respectively represent the probabilities of the m+1 classes: if the probability of class 0 is the largest, all m cameras work normally; if the probability of class 1 is the largest, camera 1 works abnormally; if the probability of class 2 is the largest, camera 2 works abnormally; if the probability of class 3 is the largest, camera 3 works abnormally; and so on, if the probability of class m is the largest, camera m works abnormally.
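As an illustration of this architecture, below is a minimal PyTorch sketch: the m camera frames are concatenated along the channel axis, passed through a few convolution/pooling/activation stages, and classified by a fully connected head with m+1 outputs. The framework, layer counts and channel widths are assumptions; the patent does not prescribe them.

    import torch
    import torch.nn as nn

    class CameraDamageNet(nn.Module):
        """Classifies which of m cameras (if any) is damaged: m+1 classes."""
        def __init__(self, m: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3 * m, 32, kernel_size=5, stride=2, padding=2),  # 300 -> 150
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),                                           # 150 -> 75
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),                                           # 75 -> 37
                nn.Conv2d(64, 128, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),                                   # 128 x 1 x 1
            )
            self.classifier = nn.Linear(128, m + 1)  # Zi = Wi * X + b for each class i

        def forward(self, frames):
            # frames: list of m tensors, each of shape (batch, 3, 300, 300), in fixed camera order
            x = torch.cat(frames, dim=1)   # concatenation layer: channels become 3 * m
            x = self.features(x).flatten(1)
            return self.classifier(x)      # raw class scores for the m+1 classes

    m = 4
    net = CameraDamageNet(m)
    frames = [torch.randn(1, 3, 300, 300) for _ in range(m)]
    logits = net(frames)
    print(logits.shape)  # torch.Size([1, 5])

For m = 4 the concatenated input has 12 channels and the head outputs 5 scores, matching the class scheme described above.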
(3) Train the neural network parameters while observing the network error, until the network converges;
(3a) Input the samples into the network, merge the images at the concatenation layer and generate the sample labels, then run the neural network operations; the fully connected layer outputs m+1 values according to the following formula, representing the probabilities of the m+1 classes;
Zi = Wi * X + b
where X is the output matrix of the layer preceding the fully connected output layer, Wi is the weight matrix of the i-th output unit of the fully connected layer, b is the bias of the fully connected layer, and Zi is the output value of the i-th output unit; i takes m+1 values from 0 to m, respectively representing the probabilities of the m+1 classes;
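For concreteness, a tiny numeric check of the formula Zi = Wi * X + b, assuming a hypothetical 3-dimensional feature vector X and m = 4 (five output units), written with NumPy:

    import numpy as np

    X = np.array([0.5, -1.0, 2.0])   # output of the layer before the fully connected layer
    W = np.random.randn(5, 3)        # one weight row Wi per output unit, i = 0..4 (m = 4)
    b = np.random.randn(5)           # bias of the fully connected layer
    Z = W @ X + b                    # Zi = Wi * X + b, one score per class
    print(Z)                         # five values: class scores for classes 0..4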
(3b) Input the class probability outputs of the fully connected layer to the SoftmaxWithLoss layer, compute the loss value of the network according to the following formula, and train the network parameters by back-propagation:
loss = -log( exp(Zy) / Σk exp(Zk) )
where k indicates the class, y is the sample label, and Zi is the output value corresponding to the i-th class.
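A minimal, self-contained sketch of the SoftmaxWithLoss computation and one back-propagation step, here expressed with PyTorch's cross-entropy (an assumption; the patent names only the SoftmaxWithLoss layer, which corresponds to softmax followed by the negative log-likelihood above):

    import torch
    import torch.nn.functional as F

    def softmax_with_loss(logits, labels):
        """Cross-entropy over softmax probabilities: -log(exp(Zy) / sum_k exp(Zk))."""
        return F.cross_entropy(logits, labels)

    # toy check: m = 4 cameras -> 5 classes, one sample labelled class 1 (camera 1 damaged)
    logits = torch.tensor([[0.2, 2.5, 0.1, -0.3, 0.0]], requires_grad=True)  # Z0 .. Z4
    labels = torch.tensor([1])
    loss = softmax_with_loss(logits, labels)
    loss.backward()                  # gradients flow back to train the network parameters
    print(float(loss), logits.grad)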
(4) Deploy the trained network and monitor in real time;
Deploy the trained network. The input of the network is the real-time image frames of the m cameras, fed in a fixed order, where every image is scaled to 300 × 300 × 3, i.e. 300 pixels wide, 300 pixels high, with 3 channels per color image. The following concatenation layer merges the m camera images along the channel dimension into 300 × 300 × (3 × m), and then the convolutional, pooling and activation layers are applied. Finally, the fully connected layer outputs m+1 values representing the probabilities of the m+1 classes: if the probability of class 0 is the largest, all m cameras work normally; if the probability of class 1 is the largest, camera 1 works abnormally; if the probability of class 2 is the largest, camera 2 works abnormally; if the probability of class 3 is the largest, camera 3 works abnormally; and so on, if the probability of class m is the largest, camera m works abnormally. In this way, automatic detection of single-camera damage in the multi-camera monitoring system is realized.
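A possible real-time monitoring loop around the trained network is sketched below; the frame-grabbing callback and the reporting lines are hypothetical placeholders, and only the argmax over the m+1 class probabilities comes from the method itself.

    import time
    import torch

    def monitor(net, grab_frames, interval_s=1.0):
        """Run the trained network on live frames and report which camera (if any) is damaged.

        grab_frames() is a hypothetical callback returning a list of m tensors of shape
        (1, 3, 300, 300), one per camera, in the fixed camera order.
        """
        net.eval()
        with torch.no_grad():
            while True:
                frames = grab_frames()
                probs = torch.softmax(net(frames), dim=1)   # m+1 class probabilities
                cls = int(probs.argmax(dim=1))
                if cls == 0:
                    print("all cameras work normally")
                else:
                    print(f"camera {cls} works abnormally -> notify maintenance")
                time.sleep(interval_s)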
Compared with the prior art, the invention has the following advantages:
The method of the present invention is based on convolutional neural networks, an efficient recognition approach that has developed rapidly and attracted wide attention in recent years. Raw video images pass through a series of feature-processing stages such as convolutional layers, pooling layers and activation layers, and the network finally outputs the image classification. The method can completely replace manual inspection, and offers automatic, real-time and efficient detection with high accuracy, requires no additional hardware, and is low in cost.
Brief description of the drawings
Fig. 1 is the workflow diagram of the method in the embodiment of the present invention.
Fig. 2 is a schematic diagram of the training samples and their class labels in the method of the present invention.
Fig. 3 is a structure diagram of the neural network used in the method of the present invention.
Specific embodiment
Specific embodiments of the present invention are further described below with reference to the accompanying drawings.
Referring to Fig. 1, the embodiment of the present invention is described in further detail with m = 4 and n = 20000:
An automatic detection method for single-camera damage in a multi-camera monitoring system, with the following specific steps:
Step 1: acquire images and create the training/test sample set:
Step 1a: acquire original images: simultaneously capture 20000 images from each of the four monitoring cameras, scale them to 300 × 300, and place them correspondingly into four folders, where the 20000 pictures in each folder are numbered from 00001 to 20000 in order of acquisition time.
Step 1b: create damaged images: for each number from 00001 to 20000, decide with 50% probability whether to damage that numbered picture. If yes, randomly choose one of the four folders, each folder being selected with probability 25%, and overlay a solid-color image block of random shape and random color on the numbered picture in the chosen folder; the block covers at least 30% of the picture area and represents camera damage. Every image generates a text file of the same name whose content is the picture label: 1 if the picture is damaged, 0 otherwise.
Step 1c: training stage: input the four images with the same number, one from each of the four folders, in order; at the concatenation layer the four 300 × 300 × 3 images are merged along the channel dimension into 300 × 300 × 12, and the sample label is set to the index of the folder whose numbered image carries label 1. For example, for a certain number, if the image label in folder 1 is 1 and the labels of the images with that number in the other three folders are 0, the merged data label is set to 1, representing a class-1 sample; if the image labels for that number are 0 in all four folders, the merged label is 0, representing a class-0 sample. Fig. 2 summarizes the classes and labels of this method: class 0 means all four cameras work normally; class 1 means the first of the four cameras is damaged; class 2 means the second camera is damaged; class 3 means the third camera is damaged; class 4 means the fourth camera is damaged.
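A minimal sketch of how a merged training sample and its class label could be assembled for the m = 4 case; the folder layout and label files follow Steps 1a and 1b above, while the .jpg extension and the use of NumPy are assumptions:

    from pathlib import Path

    import numpy as np
    from PIL import Image

    def make_sample(folders, number: str):
        """Merge the four same-numbered images on the channel axis and derive the class label."""
        images, label = [], 0
        for idx, folder in enumerate(folders, start=1):     # folders ordered as camera 1..4
            img = np.array(Image.open(Path(folder) / f"{number}.jpg").convert("RGB"))
            images.append(img)
            if (Path(folder) / f"{number}.txt").read_text().strip() == "1":
                label = idx                                  # class i: camera i damaged
        merged = np.concatenate(images, axis=2)              # 300 x 300 x 12
        return merged, label                                 # label 0: all cameras normal

    # sample, label = make_sample(["cam1", "cam2", "cam3", "cam4"], "00001")

Because Step 1b damages at most one folder per number, at most one of the four label files contains 1, so the merged label is unambiguous.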
Step 2: construct the neural network used in this method:
Fig. 3 is the network structure diagram of this method. The input of the network is the real-time image frames of the four cameras, fed in a fixed order, where every image is scaled to 300 × 300 × 3, i.e. 300 pixels wide, 300 pixels high, with 3 channels per color image. The following concatenation layer merges the four camera images along the channel dimension into 300 × 300 × 12. Finally, the fully connected layer outputs 5 values, which respectively represent the probabilities of the five classes: if the probability of class 0 is the largest, all four cameras work normally; if the probability of class 1 is the largest, camera 1 works abnormally; if the probability of class 2 is the largest, camera 2 works abnormally; if the probability of class 3 is the largest, camera 3 works abnormally; if the probability of class 4 is the largest, camera 4 works abnormally.
Step 3: train the network with the samples:
Step 3a: input the samples from Step 1 into the network, merge the images at the concatenation layer and generate the sample labels, then run the neural network operations; the fully connected layer outputs 5 values according to the following formula, representing the probabilities of the five classes;
Zi = Wi * X + b
where X is the output matrix of the layer preceding the fully connected output layer, Wi is the weight matrix of the i-th output unit of the fully connected layer, b is the bias of the fully connected layer, and Zi is the output value of the i-th output unit; i takes 5 values from 0 to 4, respectively representing the probabilities of the five classes.
Step 3b: input the class probability outputs of the fully connected layer to the SoftmaxWithLoss layer, compute the loss value of the network according to the following formula, and train the network parameters by back-propagation:
loss = -log( exp(Zy) / Σk exp(Zk) )
where k indicates the class, y is the sample label, and Zi is the output value corresponding to the i-th class. During training the network error keeps decreasing and the measured accuracy keeps improving; when the network converges, training is complete and a set of network parameters with high detection accuracy is obtained.
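A possible training loop that monitors the declining loss and stops once it has converged might look like the sketch below; the optimizer settings, the convergence criterion and the data loader that yields the four frame batches with their class labels are assumptions for illustration:

    import torch
    import torch.nn.functional as F

    def train(net, loader, epochs=50, lr=1e-3, patience=3, tol=1e-4):
        """Train with SGD + softmax cross-entropy, stopping when the loss stops improving."""
        opt = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9)
        best, stale = float("inf"), 0
        for epoch in range(epochs):
            total = 0.0
            for frames, labels in loader:          # frames: list of 4 tensors (batch, 3, 300, 300)
                loss = F.cross_entropy(net(frames), labels)   # labels: classes 0..4
                opt.zero_grad()
                loss.backward()                    # back-propagate the loss
                opt.step()
                total += float(loss)
            avg = total / max(len(loader), 1)
            print(f"epoch {epoch}: loss {avg:.4f}")  # the network error should decline over training
            if best - avg > tol:
                best, stale = avg, 0
            else:
                stale += 1
                if stale >= patience:              # loss has converged
                    break
        return net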
Step 4: deploy and apply the trained network:
Deploy the trained network. The input of the network is the real-time image frames of the four cameras, fed in a fixed order, where every image is scaled to 300 × 300 × 3, i.e. 300 pixels wide, 300 pixels high, with 3 channels per color image. The following concatenation layer merges the four camera images along the channel dimension into 300 × 300 × 12, and then the convolutional, pooling and activation layers are applied. Finally, the fully connected layer outputs 5 values representing the probabilities of the five classes: if the probability of class 0 is the largest, all four cameras work normally; if the probability of class 1 is the largest, camera 1 works abnormally; if the probability of class 2 is the largest, camera 2 works abnormally; if the probability of class 3 is the largest, camera 3 works abnormally; if the probability of class 4 is the largest, camera 4 works abnormally. In this way, automatic detection of single-camera damage in the multi-camera monitoring system is realized.

Claims (1)

1. An automatic detection method for single-camera damage in a multi-camera monitoring system, characterized in that the specific steps are as follows:
(1) Acquire images and create the training/test sample set, with the following sub-steps:
(1a) Acquire original images: simultaneously capture n images from each of the m monitoring cameras, where m and n are positive integers, scale them to 300 × 300, and place them correspondingly into m folders, where the n pictures in each folder are numbered from 1 to n in order of acquisition time;
(1b) Create damaged images: for each number from 1 to n, decide with 50% probability whether to damage that numbered picture; if yes, randomly choose one of the m folders, each folder being selected with probability 1/m, and overlay a solid-color image block of random shape and random color on the numbered picture in the chosen folder, the block covering at least 30% of the picture area and representing camera damage; every image generates a text file of the same name whose content is the picture label: 1 if the picture is damaged, 0 otherwise;
(1c) Training stage: input the m images with the same number, one from each of the m folders, in order; at the concatenation layer the m images of size 300 × 300 × 3 are merged along the channel dimension into 300 × 300 × (3 × m), and the sample label is set to the index of the folder whose numbered image carries label 1;
(2) Construct the neural network used in this method, with the following sub-steps:
(2a) The input of the network is the real-time image frames of the m cameras, fed in a fixed order, where every image is scaled to 300 × 300 × 3, i.e. 300 pixels wide, 300 pixels high, with 3 channels per color image; the following concatenation layer merges the m camera images along the channel dimension into 300 × 300 × (3 × m); finally, the fully connected layer outputs m+1 values, which respectively represent the probabilities of the m+1 classes: if the probability of class 0 is the largest, all m cameras work normally; if the probability of class 1 is the largest, camera 1 works abnormally; if the probability of class 2 is the largest, camera 2 works abnormally; if the probability of class 3 is the largest, camera 3 works abnormally; and so on, if the probability of class m is the largest, camera m works abnormally;
(3) Train the network with the samples, with the following sub-steps:
(3a) Input the samples into the network, merge the images at the concatenation layer and generate the sample labels, then run the neural network operations; the fully connected layer outputs m+1 values according to the following formula, representing the probabilities of the m+1 classes:
Zi = Wi * X + b
where X is the output matrix of the layer preceding the fully connected output layer, Wi is the weight matrix of the i-th output unit of the fully connected layer, b is the bias of the fully connected layer, and Zi is the output value of the i-th output unit; i takes m+1 values from 0 to m, respectively representing the probabilities of the m+1 classes;
(3b) Input the class probability outputs of the fully connected layer to the SoftmaxWithLoss layer, compute the loss value of the network according to the following formula, and train the network parameters by back-propagation:
loss = -log( exp(Zy) / Σk exp(Zk) )
where k indicates the class, y is the sample label, and Zi is the output value corresponding to the i-th class;
(4) Deploy and apply the trained network, with the following sub-step:
(4a) Deploy the trained network; the input of the network is the real-time image frames of the m cameras, fed in a fixed order, where every image is scaled to 300 × 300 × 3, i.e. 300 pixels wide, 300 pixels high, with 3 channels per color image; the following concatenation layer merges the m camera images along the channel dimension into 300 × 300 × (3 × m), and then the convolutional, pooling and activation layer operations are applied; finally, the fully connected layer outputs m+1 values representing the probabilities of the m+1 classes: if the probability of class 0 is the largest, all m cameras work normally; if the probability of class 1 is the largest, camera 1 works abnormally; if the probability of class 2 is the largest, camera 2 works abnormally; if the probability of class 3 is the largest, camera 3 works abnormally; and so on, if the probability of class m is the largest, camera m works abnormally; in this way, automatic detection of single-camera damage in the multi-camera monitoring system is realized.
CN201710704123.7A 2017-08-17 2017-08-17 Automatic detection method for single-camera damage in a multi-camera monitoring system Active CN107396094B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710704123.7A CN107396094B (en) 2017-08-17 2017-08-17 Automatic detection method for single-camera damage in a multi-camera monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710704123.7A CN107396094B (en) 2017-08-17 2017-08-17 Automatic detection method for single-camera damage in a multi-camera monitoring system

Publications (2)

Publication Number Publication Date
CN107396094A CN107396094A (en) 2017-11-24
CN107396094B (en) 2019-02-22

Family

ID=60353113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710704123.7A Active CN107396094B (en) 2017-08-17 2017-08-17 Automatic detection method for single-camera damage in a multi-camera monitoring system

Country Status (1)

Country Link
CN (1) CN107396094B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109063761B (en) * 2018-07-20 2020-11-03 北京旷视科技有限公司 Diffuser falling detection method and device and electronic equipment
CN110868586A (en) * 2019-11-08 2020-03-06 北京转转精神科技有限责任公司 Automatic detection method for defects of camera

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101282481A (en) * 2008-05-09 2008-10-08 中国传媒大学 Method for evaluating video quality based on artificial neural net
US7457458B1 (en) * 1999-11-26 2008-11-25 Inb Vision Ag. Method and apparatus for defining and correcting image data
CN102098530A (en) * 2010-12-02 2011-06-15 惠州Tcl移动通信有限公司 Method and device for automatically distinguishing quality of camera module
CN106650919A (en) * 2016-12-23 2017-05-10 国家电网公司信息通信分公司 Information system fault diagnosis method and device based on convolutional neural network
CN106650932A (en) * 2016-12-23 2017-05-10 郑州云海信息技术有限公司 Intelligent fault classification method and device for data center monitoring system
CN106686377A (en) * 2016-12-30 2017-05-17 佳都新太科技股份有限公司 Algorithm for determining video key area based on deep neural network
CN106709511A (en) * 2016-12-08 2017-05-24 华中师范大学 Urban rail transit panoramic monitoring video fault detection method based on depth learning
CN106991668A (en) * 2017-03-09 2017-07-28 南京邮电大学 A kind of evaluation method of day net camera shooting picture

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10057928C1 (en) * 2000-11-22 2002-02-21 Inb Vision Ag Surface fault detection method uses evaluation of matrix camera image of object surface via neural network
US9734567B2 (en) * 2015-06-24 2017-08-15 Samsung Electronics Co., Ltd. Label-free non-reference image quality assessment via deep neural network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7457458B1 (en) * 1999-11-26 2008-11-25 Inb Vision Ag. Method and apparatus for defining and correcting image data
CN101282481A (en) * 2008-05-09 2008-10-08 中国传媒大学 Method for evaluating video quality based on artificial neural net
CN102098530A (en) * 2010-12-02 2011-06-15 惠州Tcl移动通信有限公司 Method and device for automatically distinguishing quality of camera module
CN106709511A (en) * 2016-12-08 2017-05-24 华中师范大学 Urban rail transit panoramic monitoring video fault detection method based on depth learning
CN106650919A (en) * 2016-12-23 2017-05-10 国家电网公司信息通信分公司 Information system fault diagnosis method and device based on convolutional neural network
CN106650932A (en) * 2016-12-23 2017-05-10 郑州云海信息技术有限公司 Intelligent fault classification method and device for data center monitoring system
CN106686377A (en) * 2016-12-30 2017-05-17 佳都新太科技股份有限公司 Algorithm for determining video key area based on deep neural network
CN106991668A (en) * 2017-03-09 2017-07-28 南京邮电大学 A kind of evaluation method of day net camera shooting picture

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of Image Quality Diagnosis Technology for Integrated Video Surveillance of the Beijing-Shanghai High-Speed Railway; Sun Qiliang; China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology; 2015-02-15; full text

Also Published As

Publication number Publication date
CN107396094A (en) 2017-11-24

Similar Documents

Publication Publication Date Title
CN113469953B (en) Transmission line insulator defect detection method based on improved YOLOv4 algorithm
CN107194396A (en) Method for early warning is recognized based on the specific architecture against regulations in land resources video monitoring system
CN107396094B (en) Automatic detection method for single-camera damage in a multi-camera monitoring system
CN109858389A (en) Vertical ladder demographic method and system based on deep learning
CN111062278B (en) Abnormal behavior identification method based on improved residual error network
CN110346699A (en) Insulator arc-over information extracting method and device based on ultraviolet image processing technique
CN106934319A (en) People's car objective classification method in monitor video based on convolutional neural networks
CN114723750B (en) Transmission line strain clamp defect detection method based on improved YOLOX algorithm
CN110113327A (en) A kind of method and device detecting DGA domain name
CN112924471A (en) Equipment fault diagnosis system and diagnosis method thereof
CN109800712A (en) A kind of vehicle detection method of counting and equipment based on depth convolutional neural networks
CN112883929A (en) Online video abnormal behavior detection model training and abnormal detection method and system
CN116503318A (en) Aerial insulator multi-defect detection method, system and equipment integrating CAT-BiFPN and attention mechanism
CN116256586A (en) Overheat detection method and device for power equipment, electronic equipment and storage medium
CN110599458A (en) Underground pipe network detection and evaluation cloud system based on convolutional neural network
CN110866453B (en) Real-time crowd steady state identification method and device based on convolutional neural network
CN109145743A (en) A kind of image-recognizing method and device based on deep learning
CN108664886A (en) A kind of fast face recognition method adapting to substation's disengaging monitoring demand
CN114881665A (en) Method and system for identifying electricity stealing suspected user based on target identification algorithm
CN114155551A (en) Improved pedestrian detection method and device based on YOLOv3 under complex environment
CN116503398B (en) Insulator pollution flashover detection method and device, electronic equipment and storage medium
CN116883717A (en) Platen checking method and device, computer readable storage medium and computer equipment
CN110533328A (en) A kind of project scene traffic control method, apparatus, medium and terminal device
CN111144392A (en) Neural network-based extremely-low-power-consumption optical target detection method and device
CN109738437A (en) A kind of metallic film capacitor self-healing point measuring device and method

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant