CN108920711B - Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide


Info

Publication number
CN108920711B
CN108920711B (application CN201810825689.XA)
Authority
CN
China
Prior art keywords
marked
unmanned aerial vehicle
labeling
client
Prior art date
Legal status
Active
Application number
CN201810825689.XA
Other languages
Chinese (zh)
Other versions
CN108920711A (en)
Inventor
胡天江
周勇
周晗
赵框
唐邓清
常远
周正元
方强
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN201810825689.XA
Publication of CN108920711A
Application granted
Publication of CN108920711B

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/10 Simultaneous control of position or course in three dimensions
    • G05D1/101 Simultaneous control of position or course in three dimensions specially adapted for aircraft

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A deep learning label data generation method for unmanned aerial vehicle take-off and landing guidance, in which an administrator client establishes a database system, defines labeling requirements and dispatches tasks. Each user logs into a labeling client, receives labeling tasks and labeling requirements through it, manually labels each scene image to be labeled, and stores each labeled scene image in the database system in xml format, the database system being updated in real time. After all scene images to be labeled have been labeled, the auditor logs into the auditor client, accesses the database system through the network, and audits the labeling results (i.e., the labeled scene images). By releasing labeling tasks over the network and automatically auditing labeling results with the designed auditing method, the invention greatly improves labeling efficiency and the reliability of labeling results, and effectively meets the practical need of deep learning for large-scale sample labeling.

Description

Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide
Technical Field
The invention mainly relates to the design of autonomous take-off and landing guidance systems for unmanned aerial vehicles, and in particular to a deep learning label data generation method for unmanned aerial vehicle take-off and landing guidance.
Background
The unmanned aerial vehicle take-off and landing guidance system aims to solve the problem of autonomous take-off and landing in weak-GPS or GPS-denied environments. The guidance system acquires scene images containing the unmanned aerial vehicle target during take-off and landing through a camera, extracts the target area and anchor point coordinates of the unmanned aerial vehicle from the images, and applies methods such as computer vision measurement and filtering estimation to solve the pose of the unmanned aerial vehicle in world coordinates, thereby guiding it to take off and land autonomously. Extracting the unmanned aerial vehicle target area and anchor point coordinates from the image is therefore an essential function of the guidance system.
Methods that extract the unmanned aerial vehicle target area and anchor point coordinates from hand-crafted features such as corners and edges suffer from weak applicability and parameter sensitivity, so a deep learning scheme is proposed to remove the parameter dependence and improve scene applicability. A deep learning method that automatically extracts the unmanned aerial vehicle target and anchor points requires a label data set to be constructed, and because deep learning sample data are large in scale, a label data generation tool that is convenient to interact with, efficient to operate and able to run over a network is urgently needed.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a deep learning label data generation method for unmanned aerial vehicle take-off and landing guidance.
In order to achieve the technical purpose, the technical scheme of the invention is as follows:
The deep learning label data generation method for unmanned aerial vehicle take-off and landing guidance comprises the following steps:
(1) Establishing a database system.
The database system is established by the administrator client, which also manages it: pictures can be uploaded to the database system, pictures in it can be deleted, stored pictures can be queried, and labeling results can be exported.
All scene images to be labeled are stored in the database system. They comprise scene images captured by the camera during unmanned aerial vehicle take-off and landing that contain the unmanned aerial vehicle target and have not yet been labeled, as well as such images that have already been manually labeled one or more times.
(2) According to the task requirements, the administrator client determines the unmanned aerial vehicle target area to be labeled and the anchor point coordinates.
The anchor points may be selected from eight feature points: the nose, left wing, right wing, left tail, right tail, left landing gear, middle landing gear and right landing gear.
(3) The administrator client dynamically distributes the scene images to be labeled to each labeling client through the network.
All scene images to be labeled are sorted in ascending order of the number of times they have been manually labeled; images captured by the camera during unmanned aerial vehicle take-off and landing that contain the unmanned aerial vehicle target and have not been labeled have a count of 0.
When distributing the scene images to be labeled, the priority order is as follows: unlabeled scene images are first randomly distributed to the labeling clients, and the remaining scene images to be labeled are then randomly distributed in ascending order of their manual labeling count, which ensures that every scene image to be labeled gets labeled. Random distribution means that, in the current labeling round, a scene image to be labeled may be randomly assigned to more than one labeling client; consequently, some scene images may be labeled multiple times in the same round. A minimal sketch of this distribution policy is given below.
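The sketch assumes each image is pushed to a fixed number of clients per round; the function names and the `copies` parameter are illustrative, since the patent does not fix how many clients receive each image.

```python
import random
from collections import defaultdict

def distribute_images(images, label_counts, clients, copies=2):
    """Assign images to labeling clients, fewest-labeled images first.

    images:       list of image ids to distribute this round
    label_counts: dict image id -> times already labeled (0 if never labeled)
    clients:      list of labeling-client ids
    copies:       clients per image this round (assumed, not from the patent)
    """
    assignments = defaultdict(list)
    # Ascending label count puts never-labeled images (count 0) first.
    for img in sorted(images, key=lambda i: label_counts.get(i, 0)):
        # Each image may go to more than one client in the current round.
        for c in random.sample(clients, min(copies, len(clients))):
            assignments[c].append(img)
    return dict(assignments)
```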
(4) Each labeling client receives a labeling task and labeling requirements; the labeling task consists of the scene images to be labeled issued to that client by the administrator client, and the labeling requirements are the unmanned aerial vehicle target area to be labeled and the anchor point coordinates determined in step (2).
Each labeling client is used to manually label each scene image to be labeled, i.e., to box-select the unmanned aerial vehicle target area and the anchor point coordinates in each image; each labeled scene image is stored in the database system in xml format, and the database system is updated in real time. An illustrative serialization sketch follows.
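The patent specifies only that results are stored as xml; the record layout below, including all tag names, is a hypothetical sketch built around the eight anchor points of step (2).

```python
import xml.etree.ElementTree as ET

# Hypothetical tag names; the patent fixes only the eight anchor points.
ANCHORS = ["nose", "left_wing", "right_wing", "left_tail",
           "right_tail", "left_gear", "middle_gear", "right_gear"]

def save_annotation(path, image_file, bbox, anchors):
    """Serialize one labeled image: UAV bounding box plus 8 anchor points.

    bbox:    (xmin, ymin, xmax, ymax) of the box-selected UAV target area
    anchors: dict anchor name -> (x, y) pixel coordinates
    """
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = image_file
    box = ET.SubElement(root, "bndbox")
    for tag, val in zip(("xmin", "ymin", "xmax", "ymax"), bbox):
        ET.SubElement(box, tag).text = str(val)
    pts = ET.SubElement(root, "anchors")
    for name in ANCHORS:
        x, y = anchors[name]
        pt = ET.SubElement(pts, name)
        pt.set("x", str(x))
        pt.set("y", str(y))
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)
```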
(5) Auditing the labeling result
After all scene images to be labeled have been labeled, the auditor client accesses the database system through the network and audits the labeling results (i.e., the labeled scene images).
Auditing may be performed either manually or automatically.
The invention provides an automatic auditing method, which comprises the following steps:
To eliminate the influence of individual outlier samples on the labeling result, a statistical averaging method is adopted; it is implemented as follows:

For a labeled scene image stored in the database system, if it has been labeled $N$ times, $N$ groups of unmanned aerial vehicle anchor-point coordinate samples are obtained. Let the extracted coordinates of the $i$-th anchor point be $(x_i^{(k)}, y_i^{(k)})$, $k = 1, 2, \ldots, N$.

The abscissa is processed as follows. First obtain the maximum $x_i^{\max}$ and minimum $x_i^{\min}$ of the $N$ abscissas, and divide the interval $[x_i^{\min}, x_i^{\max}]$ equally into $N-1$ sub-intervals, each of length

$$\Delta x_i = \frac{x_i^{\max} - x_i^{\min}}{N - 1}.$$

The left endpoint of the $j$-th sub-interval of the $i$-th anchor abscissa is then $x_{i,j} = x_i^{\min} + (j-1)\,\Delta x_i$, and the distribution probability of $x_i$ over the $j$-th sub-interval is

$$p(x_{i,j}) = \frac{n_{i,j}}{N}, \qquad j = 1, \ldots, N-1, \tag{1}$$

where $n_{i,j}$ is the number of samples falling in $[x_{i,j},\, x_{i,j} + \Delta x_i)$.

After obtaining $p(x_{i,j})$, a threshold $\bar{p}$ is set, and sample points whose sub-interval probability is lower than $\bar{p}$ are rejected, yielding a new point set $\{\tilde{x}_i^{(k)}\}$, $k = 1, \ldots, N_p$, where $N_p$ is the number of points in the new set. The statistical average of the abscissas is then obtained by formula (2):

$$\bar{x}_i = \frac{1}{N_p} \sum_{k=1}^{N_p} \tilde{x}_i^{(k)}. \tag{2}$$

After obtaining the statistical average $\bar{x}_i$ of the abscissas, the statistical average $\bar{y}_i$ of the ordinates is obtained in the same way, giving the center point $(\bar{x}_i, \bar{y}_i)$.

Then, taking $(\bar{x}_i, \bar{y}_i)$ as the circle center and $r$ pixel values as the radius, an anchor coordinate $(x_i, y_i)$ submitted by a user is valid when it satisfies formula (3); when the condition is not met, the user is prompted that the labeling is wrong and the labeling result is rejected:

$$(x_i - \bar{x}_i)^2 + (y_i - \bar{y}_i)^2 \le r^2, \tag{3}$$

where $r$ is a threshold set according to the accuracy requirement.
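For concreteness, a minimal Python sketch of this automatic audit follows. It assumes the $N$ labels of one anchor are available as an (N, 2) array and defaults the rejection threshold to $1/(N-1)$; the function names and that default are illustrative, not taken from the patent.

```python
import numpy as np

def audit_anchor(samples, r, p_thresh=None):
    """Statistical-average audit of N labels (x, y) of one anchor point.

    samples:  array-like of shape (N, 2), N >= 2 labeled coordinates
    r:        acceptance radius in pixels (formula (3))
    p_thresh: sub-interval probability below which samples are rejected
    Returns the consensus center and a boolean validity mask per sample.
    """
    samples = np.asarray(samples, dtype=float)
    N = len(samples)
    if p_thresh is None:
        p_thresh = 1.0 / (N - 1)          # illustrative default

    def trimmed_mean(v):
        v_min, v_max = v.min(), v.max()
        if v_max == v_min:
            return v.mean()
        # N edges -> N-1 equal sub-intervals over [min, max].
        edges = np.linspace(v_min, v_max, N)
        counts, _ = np.histogram(v, bins=edges)
        p = counts / N                     # formula (1): bin probabilities
        bins = np.clip(np.digitize(v, edges) - 1, 0, N - 2)
        kept = v[p[bins] >= p_thresh]      # reject low-probability samples
        return kept.mean() if kept.size else v.mean()  # formula (2)

    cx, cy = trimmed_mean(samples[:, 0]), trimmed_mean(samples[:, 1])
    # Formula (3): a label is valid if it lies inside the radius-r circle.
    valid = (samples[:, 0] - cx) ** 2 + (samples[:, 1] - cy) ** 2 <= r ** 2
    return (cx, cy), valid
```

With, say, ten labels of the nose anchor and r = 15, `audit_anchor(samples, 15)` returns the consensus center and flags any label farther than 15 pixels from it as a labeling error.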
Compared with the prior art, the invention can produce the following technical effects:
The deep learning label data generation system for unmanned aerial vehicle take-off and landing guidance disclosed by the invention releases labeling tasks over the network and audits labeling results automatically with the designed algorithm, which greatly improves labeling efficiency and the reliability of labeling results and effectively meets the practical need of deep learning for large-scale sample labeling. The main features of the invention are: first, tasks are released over the network, fully exploiting the advantages of open-source crowdsourcing and reaching a wider group of labeling users; second, the labeling priority of image sources is set more scientifically: by counting how many times the same image source has been labeled and prioritizing those labeled fewer times, the system avoids leaving some image sources unlabeled; third, the system has an auditing function, and erroneous labels are eliminated through manual or algorithmic auditing, making the label data more reliable. The label data generation tool designed by the invention has important application value for quickly and accurately acquiring deep learning label data sets.
Drawings
FIG. 1 is a block diagram of the system architecture of the present invention.
FIG. 2 is a flow chart of the present invention.
Detailed Description
The technical scheme of the invention is further illustrated below with reference to the accompanying drawings.
Referring to Fig. 1 and Fig. 2, the deep learning label data generation method for unmanned aerial vehicle take-off and landing guidance proceeds as follows:
(1) Establishing a database system.
An administrator logs into the administrator client to establish the database system and manage it: pictures can be uploaded to the database system, pictures in it can be deleted, stored pictures can be queried, and labeling results can be exported.
All scene images to be labeled are stored in the database system. They comprise scene images captured by the camera during unmanned aerial vehicle take-off and landing that contain the unmanned aerial vehicle target and have not yet been labeled, as well as such images that have already been manually labeled one or more times.
(2) According to the task requirements, the administrator defines the labeling requirements through the administrator client; that is, the administrator client determines the unmanned aerial vehicle target area to be labeled and the anchor point coordinates.
The anchor points may be selected from eight feature points: the nose, left wing, right wing, left tail, right tail, left landing gear, middle landing gear and right landing gear.
(3) Dispatching tasks.
The administrator client dynamically distributes the scene images to be labeled to each labeling client through the network.
All scene images to be labeled are sorted in ascending order of the number of times they have been manually labeled; images captured by the camera during unmanned aerial vehicle take-off and landing that contain the unmanned aerial vehicle target and have not been labeled have a count of 0.
When distributing the scene images to be labeled, the priority order is as follows: unlabeled scene images are first randomly distributed to the labeling clients, and the remaining scene images to be labeled are then randomly distributed in ascending order of their manual labeling count, which ensures that every scene image to be labeled gets labeled. Random distribution means that, in the current labeling round, a scene image to be labeled may be randomly assigned to more than one labeling client; consequently, some scene images may be labeled multiple times in the same round.
(4) Each user logs into a labeling client and receives a labeling task and labeling requirements through it; the labeling task consists of the scene images to be labeled issued to each labeling client by the administrator client, and the labeling requirements are the unmanned aerial vehicle target area to be labeled and the anchor point coordinates determined in step (2).
Each labeling client is used to manually label each scene image to be labeled, i.e., to box-select the unmanned aerial vehicle target area and the anchor point coordinates in each image; each labeled scene image is stored in the database system in xml format, and the database system is updated in real time.
(5) Auditing the labeling result
After all scene images to be labeled have been labeled, the auditor logs into the auditor client, accesses the database system through the network, and audits the labeling results (i.e., the labeled scene images).
Auditing may be performed either manually or automatically.
The invention provides an automatic auditing method, which comprises the following steps:
To eliminate the influence of individual outlier samples on the labeling result, a statistical averaging method is adopted; it is implemented as follows:

For a labeled scene image stored in the database system, if it has been labeled $N$ times, $N$ groups of unmanned aerial vehicle anchor-point coordinate samples are obtained. Let the extracted coordinates of the $i$-th anchor point be $(x_i^{(k)}, y_i^{(k)})$, $k = 1, 2, \ldots, N$.

The abscissa is processed as follows. First obtain the maximum $x_i^{\max}$ and minimum $x_i^{\min}$ of the $N$ abscissas, and divide the interval $[x_i^{\min}, x_i^{\max}]$ equally into $N-1$ sub-intervals, each of length

$$\Delta x_i = \frac{x_i^{\max} - x_i^{\min}}{N - 1}.$$

The left endpoint of the $j$-th sub-interval of the $i$-th anchor abscissa is then $x_{i,j} = x_i^{\min} + (j-1)\,\Delta x_i$, and the distribution probability of $x_i$ over the $j$-th sub-interval is

$$p(x_{i,j}) = \frac{n_{i,j}}{N}, \qquad j = 1, \ldots, N-1, \tag{1}$$

where $n_{i,j}$ is the number of samples falling in $[x_{i,j},\, x_{i,j} + \Delta x_i)$.

After obtaining $p(x_{i,j})$, a threshold $\bar{p}$ is set, and sample points whose sub-interval probability is lower than $\bar{p}$ are rejected, yielding a new point set $\{\tilde{x}_i^{(k)}\}$, $k = 1, \ldots, N_p$, where $N_p$ is the number of points in the new set. The statistical average of the abscissas is then obtained by formula (2):

$$\bar{x}_i = \frac{1}{N_p} \sum_{k=1}^{N_p} \tilde{x}_i^{(k)}. \tag{2}$$

After obtaining the statistical average $\bar{x}_i$ of the abscissas, the statistical average $\bar{y}_i$ of the ordinates is obtained in the same way, giving the center point $(\bar{x}_i, \bar{y}_i)$.

Then, taking $(\bar{x}_i, \bar{y}_i)$ as the circle center and $r$ pixel values as the radius, an anchor coordinate $(x_i, y_i)$ submitted by a user is valid when it satisfies formula (3); when the condition is not met, the user is prompted that the labeling is wrong and the labeling result is rejected:

$$(x_i - \bar{x}_i)^2 + (y_i - \bar{y}_i)^2 \le r^2, \tag{3}$$

where $r$ is a threshold set according to the accuracy requirement.
The deep learning label data generation system for unmanned aerial vehicle take-off and landing guidance comprises an administrator client, labeling clients and an auditor client, which are connected through network communication.
and the administrator client establishes a database system, and stores all scene images to be marked, which are shot by the camera and contain unmanned aerial vehicle targets in the taking-off and landing processes of the unmanned aerial vehicles, into the database system. And simultaneously, according to task requirements, determining a target area and anchor point coordinates of the unmanned aerial vehicle to be marked, and issuing marking tasks and marking requirements to each marking client through a network, wherein the marking tasks are scene images to be marked which are issued to each marking client by an administrator client, and the marking requirements are the number and type of image marking anchor points and the marking times of each image which are established by the administrator client when the task is issued according to the task requirements. The marked scene image, namely the marking result, is stored in the database system in an xml format, and the database system is updated in real time. The corresponding relation between the labeling client and the labeling result can be determined through the xml file stored in the database system.
The labeling client receives the labeling task and labeling requirements issued by the administrator client through the network. Logging into the website through a browser on the labeling client, the user manually labels each scene image to be labeled in the task, i.e., box-selects the unmanned aerial vehicle target area and the anchor point coordinates in each image, and the labeling result is stored in xml format. All labeling results from all labeling clients are sent to the database system established by the administrator client for storage.
The auditor client audits all labeling results stored in the database system by accessing it. Auditing may be performed manually or by the automatic algorithm.
The above description is only a preferred embodiment of the present invention and is not intended to limit it; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included in its scope of protection.

Claims (5)

1. A deep learning label data generation method for unmanned aerial vehicle take-off and landing guidance, characterized by comprising the following steps:
(1) establishing a database system;
the method comprises the steps that a database system is established by an administrator client, and all scene images to be marked are stored in the database system, wherein the scene images to be marked comprise scene images which are not marked and contain unmanned aerial vehicle targets in the taking-off and landing processes of the unmanned aerial vehicles shot by cameras, and scene images which are marked more than once and contain the unmanned aerial vehicle targets in the taking-off and landing processes of the unmanned aerial vehicles shot by the cameras;
(2) according to task requirements, determining an unmanned aerial vehicle target area to be marked and anchor point coordinates by an administrator client;
(3) dynamically distributing scene images to be annotated to each annotation client by the administrator client through a network;
all scene images to be labeled are sorted in ascending order of the number of times they have been manually labeled, wherein scene images captured by the camera during unmanned aerial vehicle take-off and landing that contain the unmanned aerial vehicle target and have not been labeled have a count of 0;
when distributing the scene images to be labeled, the priority order is as follows: unlabeled scene images are first randomly distributed to the labeling clients, and the remaining scene images to be labeled are then randomly distributed in ascending order of their manual labeling count, ensuring that every scene image to be labeled is labeled; random distribution means that, in the current labeling round, a scene image to be labeled may be randomly assigned to more than one labeling client for labeling;
(4) each annotation client receives an annotation task and an annotation requirement, wherein the annotation task is a scene image to be annotated, which is issued to each annotation client by an administrator client, and the annotation requirement is the target area of the unmanned aerial vehicle to be annotated and the anchor point coordinate determined in the step (2);
each labeling client manually labels each scene image to be labeled, i.e., box-selects the unmanned aerial vehicle target area and the anchor point coordinates in each image; each labeled scene image is stored in the database system in xml format, and the database system is updated in real time;
(5) auditing the labeling result;
after all scene images to be labeled have been labeled, the auditor client accesses the database system through the network and audits the labeling results, namely the labeled scene images.
2. The deep learning label data generation method for unmanned aerial vehicle take-off and landing guidance according to claim 1, characterized in that: the administrator client manages the database system; pictures can be uploaded to it, pictures in it can be deleted, stored pictures can be queried, and labeling results can be exported.
3. The deep learning label data generation method for unmanned aerial vehicle take-off and landing guidance according to claim 1, characterized in that: in step (2), eight feature points, namely the nose, left wing, right wing, left tail, right tail, left landing gear, middle landing gear and right landing gear, are selected as anchor points.
4. The deep learning label data generation method for unmanned aerial vehicle take-off and landing guidance according to claim 1, characterized in that: in step (5), auditing is performed manually or by an automatic auditing method.
5. The deep learning label data generation method for unmanned aerial vehicle take-off and landing guidance according to claim 4, characterized in that the automatic auditing method in step (5) is as follows:
for a labeled scene image stored in the database system, if it has been labeled $N$ times, $N$ groups of unmanned aerial vehicle anchor-point coordinate samples are obtained; the extracted coordinates of the $i$-th anchor point are denoted $(x_i^{(k)}, y_i^{(k)})$, $k = 1, 2, \ldots, N$;

the abscissa is processed as follows: first obtain the maximum $x_i^{\max}$ and minimum $x_i^{\min}$ of the $N$ abscissas, and divide the interval $[x_i^{\min}, x_i^{\max}]$ equally into $N-1$ sub-intervals, each of length

$$\Delta x_i = \frac{x_i^{\max} - x_i^{\min}}{N - 1};$$

the left endpoint of the $j$-th sub-interval of the $i$-th anchor abscissa is then $x_{i,j} = x_i^{\min} + (j-1)\,\Delta x_i$, and the distribution probability of $x_i$ is:

$$p(x_{i,j}) = \frac{n_{i,j}}{N}, \qquad j = 1, \ldots, N-1, \tag{1}$$

where $n_{i,j}$ is the number of samples falling in $[x_{i,j},\, x_{i,j} + \Delta x_i)$;

after obtaining $p(x_{i,j})$, a threshold $\bar{p}$ is set, and sample points whose sub-interval probability is lower than $\bar{p}$ are rejected, yielding a new point set $\{\tilde{x}_i^{(k)}\}$, $k = 1, \ldots, N_p$, where $N_p$ is the number of points in the new set; the statistical average of the $N$ abscissas is obtained by formula (2):

$$\bar{x}_i = \frac{1}{N_p} \sum_{k=1}^{N_p} \tilde{x}_i^{(k)}; \tag{2}$$

after obtaining the statistical average $\bar{x}_i$ of the abscissas, the statistical average $\bar{y}_i$ of the $N$ ordinates is obtained in the same way, giving the center point $(\bar{x}_i, \bar{y}_i)$;

then, taking $(\bar{x}_i, \bar{y}_i)$ as the circle center and $r$ pixel values as the radius, an anchor coordinate $(x_i, y_i)$ obtained from a user is valid when it satisfies formula (3); when the condition is not met, the user is prompted that the labeling is wrong and the labeling result is rejected;

$$(x_i - \bar{x}_i)^2 + (y_i - \bar{y}_i)^2 \le r^2, \tag{3}$$

where $r$ is a threshold set according to the accuracy requirement.
CN201810825689.XA 2018-07-25 2018-07-25 Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide Active CN108920711B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810825689.XA CN108920711B (en) 2018-07-25 2018-07-25 Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810825689.XA CN108920711B (en) 2018-07-25 2018-07-25 Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide

Publications (2)

Publication Number Publication Date
CN108920711A CN108920711A (en) 2018-11-30
CN108920711B true CN108920711B (en) 2021-09-24

Family

ID=64416638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810825689.XA Active CN108920711B (en) 2018-07-25 2018-07-25 Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide

Country Status (1)

Country Link
CN (1) CN108920711B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110058756B (en) * 2019-04-19 2021-03-02 北京朗镜科技有限责任公司 Image sample labeling method and device
CN112347947A (en) * 2020-11-10 2021-02-09 厦门长江电子科技有限公司 Image data processing system and method integrating intelligent detection and automatic test
CN113010739B (en) * 2021-03-18 2024-01-26 北京奇艺世纪科技有限公司 Video tag auditing method and device and electronic equipment
CN112990202B (en) * 2021-05-08 2021-08-06 中国人民解放军国防科技大学 Scene graph generation method and system based on sparse representation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2778819A1 (en) * 2013-03-12 2014-09-17 Thomson Licensing Method for shooting a film performance using an unmanned aerial vehicle
US9476730B2 (en) * 2014-03-18 2016-10-25 Sri International Real-time system for multi-modal 3D geospatial mapping, object recognition, scene annotation and analytics
CN108230240B (en) * 2017-12-31 2020-07-31 厦门大学 Method for obtaining position and posture in image city range based on deep learning

Also Published As

Publication number Publication date
CN108920711A (en) 2018-11-30

Similar Documents

Publication Publication Date Title
CN108920711B (en) Deep learning label data generation method oriented to unmanned aerial vehicle take-off and landing guide
CN110580475A (en) line diagnosis method based on unmanned aerial vehicle inspection, electronic device and storage medium
CN107223246B (en) Image labeling method and device and electronic equipment
CN111784685A (en) Power transmission line defect image identification method based on cloud edge cooperative detection
US20180108182A1 (en) Context-aware tagging for augmented reality environments
CN106767812A (en) A kind of interior semanteme map updating method and system based on Semantic features extraction
CN112132197A (en) Model training method, image processing method, device, computer equipment and storage medium
CN113034025B (en) Remote sensing image labeling system and method
CN109784272A (en) A kind of container identifying system and container recognition methods
CN112817755A (en) Edge cloud cooperative deep learning target detection method based on target tracking acceleration
CN110865654A (en) Power grid unmanned aerial vehicle inspection defect processing method
CN108762936A (en) Distributed computing system based on artificial intelligence image recognition and method
CN110414375A (en) Recognition methods, device, storage medium and the electronic equipment of low target
CN113298042A (en) Method and device for processing remote sensing image data, storage medium and computer equipment
CN113033386A (en) High-resolution remote sensing image-based transmission line channel hidden danger identification method and system
CN112508193B (en) Deep learning platform
CN114253284A (en) Unmanned aerial vehicle automatic control method, device, equipment and storage medium
CN111783552B (en) Live three-dimensional model singulation method and device, storage medium and electronic equipment
CN109409392A (en) The method and device of picture recognition
CN113781524A (en) Target tracking system and method based on two-dimensional label
CN114241202A (en) Method and device for training dressing classification model and method and device for dressing classification
CN110633702A (en) Unmanned aerial vehicle-based line maintenance charge calculation method, server and storage medium
CN114782805B (en) Unmanned plane patrol oriented human in-loop hybrid enhanced target recognition method
CN111291597A (en) Image-based crowd situation analysis method, device, equipment and system
KR101918820B1 (en) Unmanned Aerial Vehicle Homing Control Method using Scene Recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant