CN109977840A - Airport surface monitoring method based on deep learning - Google Patents

Airport surface monitoring method based on deep learning

Info

Publication number
CN109977840A
CN109977840A (application number CN201910213187.6A)
Authority
CN
China
Prior art keywords
scene
target
deep learning
training
monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910213187.6A
Other languages
Chinese (zh)
Inventor
李炜 (Li Wei)
黄国新 (Huang Guoxin)
梁斌斌 (Liang Binbin)
张心言 (Zhang Xinyan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Chuanda Zhisheng Software Co Ltd
Wisesoft Co Ltd
Original Assignee
Sichuan Chuanda Zhisheng Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Chuanda Zhisheng Software Co Ltd filed Critical Sichuan Chuanda Zhisheng Software Co Ltd
Priority to CN201910213187.6A priority Critical patent/CN109977840A/en
Publication of CN109977840A publication Critical patent/CN109977840A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an airport surface monitoring method based on deep learning. The method uses a deep learning network to recognize targets on the airport surface: simply by learning from a large set of airport surface images, it achieves high-accuracy target recognition. Hardware retrofit costs are also low, since surveillance cameras need only be added in places where the surface is not yet covered by monitoring to achieve full surface monitoring and target recognition. In particular, the weights of the deep learning network are randomly initialized and classification training is performed on an airport surface dataset, improving recognition accuracy for individual surface targets; an anchors mechanism is also introduced, reducing training time and accelerating the convergence of the weight parameters; and matching target-prediction scales are used at different layers, greatly improving recognition correctness. The airport surface monitoring method based on deep learning of the invention therefore not only solves the inefficiency of traditional tower-based monitoring but also saves investment in hardware.

Description

Airport surface monitoring method based on deep learning
Technical field
The invention belongs to the field of airport surface monitoring, and more particularly relates to an airport surface monitoring method based on deep learning.
Background art
Surveillance of targets on the airport surface is a precondition for the safe and efficient operation of the aircraft, vehicles, and workers on it. For a long time, airport surface monitoring systems in China have relied mainly on tower controllers' direct visual observation, supplemented by voice communication, to monitor and manage the aircraft and vehicles on the surface. As the market demand of China's air transport keeps expanding and traffic volume grows rapidly, managing the airport surface by naked-eye observation has become less and less efficient, and it is difficult to monitor in real time and detect sudden threats effectively.
At present, the surface monitoring systems of large airports usually comprise three kinds of surveillance: surface movement radar, multilateration (MLAT), and Automatic Dependent Surveillance-Broadcast (ADS-B). Surface movement radar places high demands on weather conditions, obstacles, and terrain; to reduce coverage gaps while enlarging the monitored area, a series of ultra-high masts must be erected at different locations, with a surface movement radar mounted on top of each. The radar itself, however, can only detect and locate a target, not identify the type of the detected target; buildings such as boarding bridges and terminals block it, so its monitoring range still contains blind zones; accurate aircraft identification and flight-progress tracking are possible only after comparison with flight-plan data and secondary radar data; and surface vehicles cannot be brought within the monitored scope at all. As for multilateration and ADS-B, both achieve high-precision positioning and monitoring only through a wireless communication network and a transceiver mounted on the monitored target; for non-cooperative targets without such transceivers, such as most vehicles operating on the surface and flight crews, neither multilateration nor ADS-B can achieve effective positioning and monitoring. Moreover, both systems are expensive to build and to maintain.
Summary of the invention
The object of the invention is to provide an airport surface monitoring method that uses a deep learning network to recognize targets on the airport surface.
To achieve the above object of the invention, the invention provides the following technical scheme:
An airport surface monitoring method based on deep learning, comprising the following steps:
Step 1: read the video captured by the cameras monitoring the airport surface, extract frames from the video that has been read, and save each extracted frame as a separate surface image;
Step 2: annotate every surface image and generate a corresponding text file; the text file contains the bounding-box location information and category information of each target in the corresponding surface image;
Step 3: randomly initialize the weights of each classifier layer in the training network, perform classification training on the annotated surface images, and obtain a pre-trained model;
Step 4: compute the anchors parameters from the annotated surface images;
Step 5: modify the prediction scales of the various target types in the annotated surface images;
Step 6: modify the configuration file of the pre-trained model according to the computed anchors parameters, train a convolutional residual network on the surface images processed in step 5, and obtain the trained model;
Step 7: use the trained model to identify all targets appearing in the video to be examined, and output recognition-result images containing the bounding-box information and category information of each detected target.
According to a specific embodiment, in step 7 of the airport surface monitoring method based on deep learning of the invention, a recognition-result image containing a target's bounding-box information and category information is output only when the target's confidence and IoU value reach the set thresholds.
According to a specific embodiment, in step 4 of the airport surface monitoring method based on deep learning of the invention, a certain number of cluster centers is initialized for the annotated surface images, the position of each cluster center is computed with the k-means algorithm, and the anchors parameter of each cluster center is determined accordingly.
According to a specific embodiment, in step 3 of the airport surface monitoring method based on deep learning of the invention, the darknet-53 classifier, with the weights of each network layer randomly initialized, is trained on the ImageNet dataset; when the top-1 metric reaches the set threshold, a successful pre-trained model is generated.
According to a specific embodiment, in step 6 of the airport surface monitoring method based on deep learning of the invention, network training is complete when the loss value falls below 1.
Compared with the prior art, the invention has the following beneficial effects:
The airport surface monitoring method proposed by the invention uses the YOLO deep learning network to recognize targets on the airport surface. Simply by learning from a large set of airport surface images, it achieves high-accuracy recognition, and hardware retrofit costs are low: surveillance cameras need only be added where the surface is not yet covered by monitoring to achieve full surface monitoring and target recognition. In particular, the weights of the deep learning network are randomly initialized and classification training is performed on an airport surface dataset, improving recognition accuracy for individual surface targets; an anchors mechanism is introduced, reducing training time and accelerating the convergence of the weight parameters; and matching target-prediction scales are used at different layers, greatly improving recognition correctness. The airport surface monitoring method based on deep learning of the invention therefore not only solves the inefficiency of traditional tower-based monitoring but also saves investment in hardware.
Brief description of the drawings:
Fig. 1 is a flow chart of the airport surface monitoring method of the invention;
Fig. 2 is a structural diagram of the deep learning network used by the invention;
Fig. 3 is a recognition-result image output when the airport surface monitoring method of the invention is used.
Specific embodiments
The invention is described in further detail below with reference to test examples and specific embodiments. This should not be understood as limiting the scope of the subject matter of the invention to the following embodiments; all techniques realized on the basis of the content of the invention fall within the scope of the invention.
As shown in Fig. 1, the airport surface monitoring method based on deep learning of the invention comprises the following steps:
Step 1: read the video captured by the cameras monitoring the airport surface, extract frames from it, and save each extracted frame as a separate surface image. Specifically, the surface video that is read is mainly the video recorded by the monitoring system's recorder; it may also be read from a storage device on which the video recorded by the monitoring system's recorder has been stored.
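The frame-extraction step can be sketched as follows. This is a minimal sketch assuming OpenCV (cv2) is available; the output paths, file-name pattern, and the sample-every-25-frames rate are illustrative choices, not taken from the patent.

```python
# Sketch of step 1: pull individual frames out of a surveillance video and
# save each one as a surface image. OpenCV (cv2) is an assumed dependency;
# the file-name pattern and sampling rate are illustrative.
import os

def sampled_indices(n_frames: int, every_n: int):
    """Indices of the frames that extract_frames below would keep."""
    return list(range(0, n_frames, every_n))

def extract_frames(video_path: str, out_dir: str, every_n: int = 25) -> int:
    """Save every `every_n`-th frame of `video_path` into `out_dir` as JPEG."""
    import cv2  # imported lazily so the sketch loads without OpenCV installed
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = index = 0
    while True:
        ok, frame = cap.read()
        if not ok:                       # end of stream
            break
        if index % every_n == 0:         # keep one frame per sampling interval
            cv2.imwrite(os.path.join(out_dir, f"scene_{saved:06d}.jpg"), frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```

Sampling rather than saving every frame keeps the dataset size manageable while still covering the slow-moving surface traffic.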
Step 2: annotate every surface image and generate a corresponding text file containing the bounding-box location information and category information of each target in the image. Specifically, targets such as aircraft, motor vehicles, and staff in the surface image are enclosed in target boxes and assigned the category they belong to; as shown in Fig. 3, an aircraft is enclosed in a box and labelled with the category "Airplane". At the same time, a text file containing the bounding-box position and category information of each target is generated: each line of the file corresponds to one target and holds five parameters, the first being the target's category code, and the remaining four the target box's centre coordinates, width, and height, each expressed as a fraction of the whole image.
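The five-parameter label line described above matches the widely used YOLO annotation format, and can be sketched as below; the class code and image dimensions in the comments are illustrative, not values from the patent.

```python
# Sketch of the step-2 annotation format: one line per target, five numbers:
# class id, then box centre x, centre y, width, height, all expressed as
# fractions of the image size. Class ids here are illustrative.
def to_label_line(cls_id, cx, cy, bw, bh, img_w, img_h):
    """Convert a pixel-space box centre/size into one normalized label line."""
    return f"{cls_id} {cx / img_w:.6f} {cy / img_h:.6f} {bw / img_w:.6f} {bh / img_h:.6f}"

def parse_label_line(line):
    """Inverse: split a label line back into a class id and four ratios."""
    parts = line.split()
    return int(parts[0]), [float(p) for p in parts[1:]]
```

Normalizing by the image size makes the labels independent of the camera resolution, so images from different cameras can share one training set.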
Step 3: randomly initialize the weights of each classifier layer in the training network, perform classification training on the annotated surface images, and obtain a pre-trained model. Specifically, the darknet-53 classifier, with the weights of each network layer randomly initialized, is trained on the ImageNet dataset, and classification training is performed on the annotated surface images until the loss value reaches a certain level. Because there are relatively few airport-surface target categories, only the top-1 metric is considered when testing classifier performance, i.e. the fraction of all tests in which the predicted category is correct; when the top-1 metric reaches the set threshold, a qualified pre-trained model has been obtained.
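The top-1 metric used above reduces to a simple accuracy computation, sketched below; the score lists in the test are illustrative.

```python
# Sketch of the "top-1 only" evaluation from step 3: the share of test
# samples whose highest-scoring class equals the ground-truth class.
def top1_accuracy(scores, labels):
    """scores: per-sample lists of class scores; labels: true class indices."""
    correct = sum(
        1 for s, y in zip(scores, labels)
        if max(range(len(s)), key=s.__getitem__) == y  # argmax == truth
    )
    return correct / len(labels)
```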
Step 4: compute the anchors parameters from the annotated surface images. Specifically, a certain number of cluster centers is initialized for the annotated surface images, the position of each cluster center is computed with the k-means algorithm, and the anchors parameter of each cluster center is determined accordingly. Each anchor parameter corresponds to one target box whose centre lies at a cluster-center position; an anchor parameter comprises an aspect ratio (e.g. 1:1 or 1:2) and a target-box size (e.g. 64 or 128, in pixels). In some special cases the same cluster-center position may carry multiple anchors, but in general every cluster center has the same number of anchors.
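The k-means anchor computation can be sketched as follows. The patent does not specify the distance measure; this sketch uses the 1 - IoU distance common in YOLO-style anchor clustering, with naive initialization and a fixed iteration count standing in for a convergence test, so it is an illustration rather than the patented procedure.

```python
# Sketch of step 4: cluster the annotated box sizes with k-means to obtain
# anchor (width, height) pairs. Similarity between a box and a centroid is
# the IoU of the two sizes when anchored at the same corner.
def iou_wh(a, b):
    """IoU of two boxes given as (w, h), both anchored at the same corner."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def kmeans_anchors(boxes, k, iters=100):
    """boxes: list of (w, h) pairs; returns k anchor sizes, sorted."""
    centroids = boxes[:k]                     # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for box in boxes:
            best = max(range(k), key=lambda i: iou_wh(box, centroids[i]))
            clusters[best].append(box)        # assign to most-similar centroid
        centroids = [                         # recompute centroid as the mean
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return sorted(centroids)
```

Clustering on box sizes rather than positions is what lets the resulting anchors generalize to targets anywhere in the frame.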
Step 5: modify the prediction scales of the various target types in the annotated surface images. Specifically, among the airport-surface targets (aircraft, motor vehicles, and staff), the great majority of the targets occupying the largest image regions are aircraft, whose aspect ratios across all viewing angles and orientations on the surface lie between 1:1 and 3:1; motor vehicles, which occupy fairly large image regions, have aspect ratios of about 2:1; and staff, who occupy the smallest image regions, have aspect ratios of about 1:4. Accordingly, different prediction-box aspect ratios are used when detecting targets of different scales.
As shown in Fig. 2, detection heads 1, 2, and 3 detect aircraft, motor vehicles, and staff respectively. Detection head 1 has the smallest feature map, 13x13, so it is used to identify the larger aircraft targets, and its prediction sizes are modified to 1:1 and 3:1. Detection head 2, with a 26x26 feature map, identifies motor vehicles, and its prediction size is modified to 2:1. Similarly, detection head 3 has a 52x52 feature map, the largest of the three, and works best on small targets; it is used to detect staff, and its prediction size is modified to 1:4.
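The per-head scale assignments above can be written down as a small configuration table. The grid sizes and aspect ratios come from the description; the base anchor size of 64 pixels and the key names are illustrative assumptions, since the patent does not give them.

```python
# Sketch of step 5's per-layer prediction scales: each detection head in
# Fig. 2 is responsible for one target class and carries aspect ratios
# matched to that class. Base anchor size (64 px) is an assumed value.
SCALE_CONFIG = {
    "detect1": {"grid": 13, "targets": ["aircraft"], "aspect_ratios": [(1, 1), (3, 1)]},
    "detect2": {"grid": 26, "targets": ["vehicle"],  "aspect_ratios": [(2, 1)]},
    "detect3": {"grid": 52, "targets": ["worker"],   "aspect_ratios": [(1, 4)]},
}

def anchors_for(layer, base=64):
    """Turn a layer's aspect ratios into concrete (w, h) anchors at a base size."""
    return [(base * w, base * h) for w, h in SCALE_CONFIG[layer]["aspect_ratios"]]
```

Coarse grids see large receptive fields (hence aircraft), while the fine 52x52 grid resolves the small, tall silhouettes of staff.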
Step 6: modify the configuration file of the pre-trained model according to the computed anchors parameters, train a convolutional residual network on the surface images processed in step 5, and obtain the trained model. Specifically, network training is complete, and the trained model obtained, when the loss value falls below 1.
Step 7: use the trained model to identify all targets appearing in the video to be examined, and output recognition-result images containing the bounding-box information and category information of each detected target. Specifically, the video stream to be examined is fed into the trained model produced in step 6; the stream is frame-sampled, each frame image is divided into a 7x7 grid of rectangular regions for the convolution operations, and each region is responsible for predicting target centre positions and widths and heights. A prediction is considered a valid target if and only if its confidence and IoU value reach the set thresholds; otherwise it is an erroneous or redundant target.
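The validity rule in this step, requiring both confidence and IoU to reach their thresholds, can be sketched as below. The threshold values of 0.5 are illustrative, since the patent leaves the set thresholds unspecified.

```python
# Sketch of step 7's filtering rule: a prediction counts as a valid target
# only when both its confidence and its IoU with the matched box reach the
# set thresholds. Boxes are (x1, y1, x2, y2); thresholds are assumed values.
def iou(a, b):
    """Intersection over union of two corner-format boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def is_valid_target(confidence, pred_box, matched_box,
                    conf_thresh=0.5, iou_thresh=0.5):
    """True when the detection passes both thresholds; else error/redundant."""
    return confidence >= conf_thresh and iou(pred_box, matched_box) >= iou_thresh
```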

Claims (5)

1. An airport surface monitoring method based on deep learning, characterized by comprising the following steps:
Step 1: reading the video captured by the cameras monitoring the airport surface, extracting frames from the video that has been read, and saving each extracted frame as a separate surface image;
Step 2: annotating every surface image and generating a corresponding text file, the text file containing the bounding-box location information and category information of each target in the corresponding surface image;
Step 3: randomly initializing the weights of each classifier layer in the training network, performing classification training on the annotated surface images, and obtaining a pre-trained model;
Step 4: computing the anchors parameters from the annotated surface images;
Step 5: modifying the prediction scales of the various target types in the annotated surface images;
Step 6: modifying the configuration file of the pre-trained model according to the computed anchors parameters, training a convolutional residual network on the surface images processed in step 5, and obtaining the trained model;
Step 7: using the trained model to identify all targets appearing in the video to be examined, and outputting recognition-result images containing the bounding-box information and category information of each detected target.
2. The airport surface monitoring method based on deep learning of claim 1, characterized in that, in step 7, a recognition-result image containing a target's bounding-box information and category information is output when the target's confidence and IoU value reach the set thresholds.
3. The airport surface monitoring method based on deep learning of claim 1, characterized in that, in step 4, a certain number of cluster centers is initialized for the annotated surface images, the position of each cluster center is computed with the k-means algorithm, and the anchors parameter of each cluster center is determined accordingly.
4. The airport surface monitoring method based on deep learning of claim 1, characterized in that, in step 3, the darknet-53 classifier, with the weights of each network layer randomly initialized, is trained on the ImageNet dataset; and when the top-1 metric reaches the set threshold, the pre-trained model is obtained.
5. The airport surface monitoring method based on deep learning of claim 1, characterized in that, in step 6, network training is complete when the loss value falls below 1.
CN201910213187.6A 2019-03-20 2019-03-20 A kind of airport scene monitoring method based on deep learning Pending CN109977840A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910213187.6A CN109977840A (en) 2019-03-20 2019-03-20 A kind of airport scene monitoring method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910213187.6A CN109977840A (en) 2019-03-20 2019-03-20 A kind of airport scene monitoring method based on deep learning

Publications (1)

Publication Number Publication Date
CN109977840A true CN109977840A (en) 2019-07-05

Family

ID=67079654

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910213187.6A Pending CN109977840A (en) 2019-03-20 2019-03-20 A kind of airport scene monitoring method based on deep learning

Country Status (1)

Country Link
CN (1) CN109977840A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111210474A (en) * 2020-02-26 2020-05-29 上海麦图信息科技有限公司 Method for acquiring real-time ground position of airplane in airport
CN111462534A (en) * 2020-03-16 2020-07-28 温州大学大数据与信息技术研究院 Airport moving target detection system and method based on intelligent perception analysis
CN111610517A (en) * 2020-06-09 2020-09-01 电子科技大学 Secondary radar signal processing method based on deep four-channel network
CN113343933A (en) * 2021-07-06 2021-09-03 安徽水天信息科技有限公司 Airport scene monitoring method based on video target identification and positioning
CN113596473A (en) * 2021-07-28 2021-11-02 浙江大华技术股份有限公司 Video compression method and device

Citations (6)

Publication number Priority date Publication date Assignee Title
CN107330410A (en) * 2017-07-03 2017-11-07 南京工程学院 Method for detecting abnormality based on deep learning under complex environment
CN107527009A (en) * 2017-07-11 2017-12-29 浙江汉凡软件科技有限公司 A kind of remnant object detection method based on YOLO target detections
CN107657224A (en) * 2017-09-19 2018-02-02 武汉大学 A kind of multilayer parallel network SAR image Aircraft Targets detection method based on part
CN108090442A (en) * 2017-12-15 2018-05-29 四川大学 A kind of airport scene monitoring method based on convolutional neural networks
CN109241904A (en) * 2018-08-31 2019-01-18 平安科技(深圳)有限公司 Text region model training, character recognition method, device, equipment and medium
CN109255286A (en) * 2018-07-21 2019-01-22 哈尔滨工业大学 A kind of quick detection recognition method of unmanned plane optics based on YOLO deep learning network frame

Patent Citations (6)

Publication number Priority date Publication date Assignee Title
CN107330410A (en) * 2017-07-03 2017-11-07 南京工程学院 Method for detecting abnormality based on deep learning under complex environment
CN107527009A (en) * 2017-07-11 2017-12-29 浙江汉凡软件科技有限公司 A kind of remnant object detection method based on YOLO target detections
CN107657224A (en) * 2017-09-19 2018-02-02 武汉大学 A kind of multilayer parallel network SAR image Aircraft Targets detection method based on part
CN108090442A (en) * 2017-12-15 2018-05-29 四川大学 A kind of airport scene monitoring method based on convolutional neural networks
CN109255286A (en) * 2018-07-21 2019-01-22 哈尔滨工业大学 A kind of quick detection recognition method of unmanned plane optics based on YOLO deep learning network frame
CN109241904A (en) * 2018-08-31 2019-01-18 平安科技(深圳)有限公司 Text region model training, character recognition method, device, equipment and medium

Non-Patent Citations (2)

Title
Joseph Redmon et al., "YOLOv3: An Incremental Improvement", arXiv:1804.02767v1 [cs.CV] *
kk123k, "YOLO v3 Explained" (YOLO v3详解), CSDN *

Cited By (9)

Publication number Priority date Publication date Assignee Title
CN111210474A (en) * 2020-02-26 2020-05-29 上海麦图信息科技有限公司 Method for acquiring real-time ground position of airplane in airport
CN111210474B (en) * 2020-02-26 2023-05-23 上海麦图信息科技有限公司 Method for acquiring real-time ground position of airport plane
CN111462534A (en) * 2020-03-16 2020-07-28 温州大学大数据与信息技术研究院 Airport moving target detection system and method based on intelligent perception analysis
CN111462534B (en) * 2020-03-16 2021-06-15 温州大学大数据与信息技术研究院 Airport moving target detection system and method based on intelligent perception analysis
CN111610517A (en) * 2020-06-09 2020-09-01 电子科技大学 Secondary radar signal processing method based on deep four-channel network
CN111610517B (en) * 2020-06-09 2022-06-07 电子科技大学 Secondary radar signal processing method based on deep four-channel network
CN113343933A (en) * 2021-07-06 2021-09-03 安徽水天信息科技有限公司 Airport scene monitoring method based on video target identification and positioning
CN113596473A (en) * 2021-07-28 2021-11-02 浙江大华技术股份有限公司 Video compression method and device
CN113596473B (en) * 2021-07-28 2023-06-13 浙江大华技术股份有限公司 Video compression method and device

Similar Documents

Publication Publication Date Title
CN109977840A (en) A kind of airport scene monitoring method based on deep learning
CN112101088B (en) Unmanned aerial vehicle electric power automatic inspection method, device and system
CN108009473A (en) Based on goal behavior attribute video structural processing method, system and storage device
CN105447459A (en) Unmanned plane automation detection target and tracking method
CN108446630A (en) Airfield runway intelligent control method, application server and computer storage media
CN201159903Y (en) License plate recognition device
CN115603466B (en) Ship shore power system based on artificial intelligence visual identification
CN105404867B (en) A kind of substation isolating-switch state identification method of view-based access control model
CN108680833B (en) Composite insulator defect detection system based on unmanned aerial vehicle
CN111488803A (en) Airport target behavior understanding system integrating target detection and target tracking
CN104573659A (en) Driver call-making and call-answering monitoring method based on svm
CN111783579B (en) Unmanned aerial vehicle visual analysis-based detection system for crossing fence by constructors
CN109740412A (en) A kind of signal lamp failure detection method based on computer vision
CN114049624B (en) Ship cabin intelligent detection method and system based on machine vision
CN113076899B (en) High-voltage transmission line foreign matter detection method based on target tracking algorithm
EP3680608A1 (en) Antenna downward inclination angle measurement method based on multi-scale detection algorithm
CN111079694A (en) Counter assistant job function monitoring device and method
CN115880231A (en) Power transmission line hidden danger detection method and system based on deep learning
CN112507760A (en) Method, device and equipment for detecting violent sorting behavior
CN105868776A (en) Transformer equipment recognition method and device based on image processing technology
CN116363573A (en) Transformer substation equipment state anomaly identification method and system
CN117218756B (en) Intelligent safety pick-up system and method based on face recognition
CN117292111A (en) Offshore target detection and positioning system and method combining Beidou communication
CN117197978A (en) Forest fire monitoring and early warning system based on deep learning
CN103198326B (en) A kind of image classification method of transmission line of electricity helicopter routing inspection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20190705