CN110490155A - Method for detecting unmanned aerial vehicle in no-fly airspace - Google Patents

Method for detecting unmanned aerial vehicle in no-fly airspace

Info

Publication number
CN110490155A
CN110490155A CN201910782216.0A
Authority
CN
China
Prior art keywords
unmanned plane
size
loss
target
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910782216.0A
Other languages
Chinese (zh)
Other versions
CN110490155B (en)
Inventor
叶润
闫斌
甘雨涛
青辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Sichuan Agricultural University
Original Assignee
University of Electronic Science and Technology of China
Sichuan Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China and Sichuan Agricultural University
Priority to CN201910782216.0A priority Critical patent/CN110490155B/en
Publication of CN110490155A publication Critical patent/CN110490155A/en
Application granted granted Critical
Publication of CN110490155B publication Critical patent/CN110490155B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of unmanned aerial vehicles (UAVs) in no-fly airspace, and in particular relates to a method for detecting UAVs in a no-fly airspace. The invention detects UAVs entering a no-fly airspace in real time and with high accuracy; by detecting UAV flights it aims to effectively reduce unauthorized ("black") flights. It also makes it possible to discover UAVs flying in the airspace faster and more accurately, to implement countermeasures sooner, and to minimize both the losses caused by unauthorized flights and the probability of safety accidents caused by UAVs. The detection results can capture small UAV targets, accurately identify what the target is and its approximate location, and the algorithm achieves real-time performance, leaving a considerable reaction time for handling unauthorized UAV flights in time.

Description

Method for detecting unmanned aerial vehicle in no-fly airspace
Technical field
The invention belongs to the technical field of unmanned aerial vehicles (UAVs) in no-fly airspace, and in particular relates to a method for detecting UAVs in a no-fly airspace.
Background technique
In recent years, UAVs have been widely used in many industries and have brought great convenience. At the same time they have caused problems: safety incidents triggered by unauthorized ("black") UAV flights have occurred repeatedly across the country and even around the world, such as UAVs interfering with air traffic, smuggling, and intruding into sensitive areas, seriously threatening national defence and public safety. This directly exposes defects and loopholes in UAV regulation technology. In order to supervise UAV flights effectively and reduce the safety accidents caused by unauthorized flights, this document proposes a new method for detecting UAVs in no-fly airspace. At present, small UAV targets at long range are difficult to detect, are easily confused with birds or other similar objects, and must also be detected in real time. The present invention therefore aims to accurately distinguish distant small targets from interfering objects while meeting real-time requirements.
Summary of the invention
The invention proposes a brand-new method for detecting UAVs in a no-fly airspace; the whole method realizes UAV detection with deep learning. The method is built on the YOLOv3 object detection model and, by effectively improving the framework, achieves accurate real-time UAV detection. The proposed pipeline consists of four parts: sample acquisition and preprocessing, the overall network structure, prediction result processing, and loss function calculation.
1. Sample acquisition and preprocessing
Samples may be pictures downloaded from the Internet or collected by the user, but the collected pictures must differ in background, UAV size and UAV model. The collected pictures are then annotated: the UAV targets are marked with the rectangular boxes commonly used in object detection, and the label data are saved. Before an image is fed into the network it is preprocessed; the processing mainly includes image cropping, scaling, flipping, shifting, brightness adjustment and added noise, so that the input image has a fixed size such as 416*416, and the label data are transformed accordingly. These steps greatly increase the quantity and diversity of the training data, improve the robustness of the model, and give the network better generalization on more complex images.
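A minimal preprocessing sketch of the steps just described, assuming OpenCV and NumPy; the flip probability, brightness range and noise level are illustrative assumptions, while the fixed 416×416 input size follows the text. Bounding-box labels are transformed together with the image.

```python
import cv2
import numpy as np

def preprocess(image, boxes, out_size=416):
    """Illustrative augmentation: flip, brightness jitter, noise, then resize to a
    fixed 416x416 input. `boxes` is an (N, 4) array of [x1, y1, x2, y2] pixel
    coordinates and is adjusted together with the image."""
    h, w = image.shape[:2]

    # Horizontal flip (50% chance, assumed) -- box x-coordinates are mirrored.
    if np.random.rand() < 0.5:
        image = image[:, ::-1].copy()
        boxes = boxes.copy()
        boxes[:, [0, 2]] = w - boxes[:, [2, 0]]

    # Brightness jitter and additive Gaussian noise (assumed magnitudes).
    image = np.clip(image.astype(np.float32) * np.random.uniform(0.8, 1.2), 0, 255)
    image = np.clip(image + np.random.normal(0, 5, image.shape), 0, 255)

    # Resize to the fixed network input size; boxes are scaled accordingly.
    image = cv2.resize(image.astype(np.uint8), (out_size, out_size))
    boxes = boxes * np.array([out_size / w, out_size / h, out_size / w, out_size / h])
    return image, boxes
```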
2. Overall network structure
After the preprocessed input image is obtained, it is fed into the network for processing. The overall network consists of three parts: the backbone network, the feature fusion network and the prediction network.
2.1 Backbone network
The backbone network extracts features from the input image and obtains richer features of the target. Its structure is shown in the backbone module of Fig. 1. The network uses darknet53 as a template and mainly contains 52 convolutional layers. Its input is the preprocessed image, which passes once through the 52 convolutional layers; the feature maps of the 26th, 43rd and 52nd layers are selected as the three feature maps for the subsequent feature fusion. Their sizes are 52×52, 26×26 and 13×13 respectively.
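The three sizes follow directly from the overall strides of the selected layers (8, 16 and 32) for the 416×416 input described above; a one-line sanity check:

```python
def feature_map_sizes(input_size=416, strides=(8, 16, 32)):
    """Spatial sizes of the three backbone outputs used for fusion.
    For a 416x416 input this gives 52x52, 26x26 and 13x13."""
    return [input_size // s for s in strides]

print(feature_map_sizes())  # [52, 26, 13]
```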
2.2 Feature fusion network
The feature fusion part performs a fusion operation on the three feature maps produced by the backbone network, so that the network can adapt to targets of different sizes. Its structure is shown in the feature fusion module of Fig. 1. The network first processes the 13×13 feature map, passing it through 5 convolutions and outputting a 13×13 feature map. That 13×13 map is then upsampled to 26×26 and concatenated with the 26×26 feature map from the backbone; after 5 convolutional layers this yields a 26×26 feature map. This map is in turn upsampled to 52×52, concatenated with the 52×52 feature map from the backbone, and passed through 5 convolutional layers to obtain a 52×52 feature map. These operations produce three fused feature maps of sizes 52×52, 26×26 and 13×13.
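A compact PyTorch-style sketch of the fusion path described above (five convolutions, then top-down upsampling and concatenation at each scale); the channel widths and nearest-neighbour upsampling are illustrative assumptions, while the 13→26→52 order and the counts of convolutions come from the text.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out, n=5):
    """n successive 3x3 convolutions (BatchNorm omitted for brevity)."""
    layers = []
    for i in range(n):
        layers.append(nn.Conv2d(c_in if i == 0 else c_out, c_out, 3, padding=1))
        layers.append(nn.LeakyReLU(0.1))
    return nn.Sequential(*layers)

class FeatureFusion(nn.Module):
    """Fuses the 13x13, 26x26 and 52x52 backbone maps top-down."""
    def __init__(self, c13=1024, c26=512, c52=256):
        super().__init__()
        self.conv13 = conv_block(c13, 512)
        self.conv26 = conv_block(512 + c26, 256)
        self.conv52 = conv_block(256 + c52, 128)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")

    def forward(self, f52, f26, f13):
        p13 = self.conv13(f13)                                      # 13x13 fused map
        p26 = self.conv26(torch.cat([self.up(p13), f26], dim=1))   # 26x26 fused map
        p52 = self.conv52(torch.cat([self.up(p26), f52], dim=1))   # 52x52 fused map
        return p52, p26, p13
```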
2.3 Prediction network
The structure of the prediction network is shown in the prediction module of Fig. 1. The network consists of only three branches of two convolutions each. The input of each branch is one of the fused feature maps, and the prediction has three parts: the position regression values of the target, the confidence of the target and the class of the target. The output sizes of the three branches are 13×13×3×(4+1+1), 26×26×3×(4+1+1) and 52×52×3×(4+1+1), where 13, 26 and 52 are the feature-map sizes, 3 is the number of prior boxes, 4 is the number of position regression values, the middle 1 is the target confidence and the last 1 is the class; since there is only the single class UAV, this value is 1.
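A minimal sketch of one prediction branch under the same assumptions as the fusion sketch above; the two convolutions per branch and the 3×(4+1+1) = 18 output channels per grid cell follow the text, the intermediate channel width is an assumption.

```python
import torch
import torch.nn as nn

def prediction_branch(c_in, n_anchors=3, n_values=6):  # 4 box values + 1 conf + 1 class
    """Two convolutions producing n_anchors * (4+1+1) channels per grid cell."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_in * 2, 3, padding=1),
        nn.LeakyReLU(0.1),
        nn.Conv2d(c_in * 2, n_anchors * n_values, 1),
    )

head13 = prediction_branch(512)
out = head13(torch.zeros(1, 512, 13, 13))
print(out.shape)  # torch.Size([1, 18, 13, 13]) -> 13 x 13 x 3 x (4+1+1)
```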
3. Prediction result processing
The raw network predictions do not directly give the position and class of the target; they must be processed to obtain the desired result. The conversion mainly consists of position regression and candidate selection. The input of this part is the three groups of predicted values produced by the prediction network, and the same processing is applied to each group in turn.
3.1 Position regression
Because training the network directly on the label data works poorly, the label data are trained in an indirect way. Rectangular prior boxes are defined in advance: each grid cell of a feature map has 3 prior boxes of fixed sizes, and the prior box sizes differ between feature maps.
As shown in Fig. 2, among the 4 position regression values, (tx, ty) are the translation predictions and (tw, th) are the scaling predictions; they fine-tune the coordinate position and the width/height of the prior box so that after fine-tuning the predicted rectangle overlaps the annotated box as well as possible. In Fig. 2 the dashed box with sides Pw, Ph is the prior box, the solid box with sides bw, bh is the annotated box, and the unlabelled dashed rectangle is the predicted box.
To further improve the prediction accuracy, the present invention optimizes the prediction method on the basis of YOLOv3: four additional reference points are introduced, so there are four fine-tuning directions, which helps obtain a better overlap with the annotated box. As shown in Fig. 3, the dashed box Pw, Ph is the prior box and the solid box bw, bh is the annotated box; the left figure illustrates the original algorithm and the right figure the improved algorithm. Each grid cell gains four corner points (α, β, λ, δ). The present invention uses the hyperbolic tangent function tanh x to map the translation predictions (tx, ty) into the range (-1, 1). The four added corner points yield four groups of predicted rectangles (bx, by, bw, bh). The formulas are as follows, where (tx, ty, tw, th) are the predicted values, (cx, cy) is the offset between the top-left corner of the grid cell and the top-left corner of the feature map, (pw, ph) is the width and height of the prior box, (bw, bh) is the width and height of the predicted box, and i ∈ (α, β, λ, δ):
α corner point:
β corner point:
λ corner point:
δ corner point:
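The corner-specific formulas are reproduced as images in the original publication. A plausible reconstruction from the surrounding definitions, following the YOLOv3 convention with σ(·) replaced by tanh(·) and each reference corner i ∈ {α, β, λ, δ} contributing its own offset (c_x^i, c_y^i), would be:

```latex
% Hedged reconstruction -- the published formulas are images and may differ in detail.
% For each reference corner i in {alpha, beta, lambda, delta} with offset (c_x^i, c_y^i):
\begin{aligned}
b_x^{i} &= \tanh(t_x) + c_x^{i}, & b_y^{i} &= \tanh(t_y) + c_y^{i},\\
b_w^{i} &= p_w\, e^{t_w},        & b_h^{i} &= p_h\, e^{t_h}.
\end{aligned}
```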
Taking the 13×13 feature map as an example, the regression formulas above yield 13×13×3 rectangles. The sizes of these rectangles are expressed relative to the feature map, so they must be mapped to the 416×416 network input size and finally back to the size of the original image.
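An illustrative helper for this rescaling step (grid units, then the 416×416 network input, then the original image); the original image dimensions used here are arbitrary example values.

```python
def rescale_box(box, grid_size, net_size=416, orig_w=1920, orig_h=1080):
    """Map a (bx, by, bw, bh) box from feature-map units (e.g. a 13x13 grid)
    to network-input pixels and then to the original image size."""
    bx, by, bw, bh = box
    stride = net_size / grid_size          # 416 / 13 = 32 pixels per grid cell
    bx, by, bw, bh = bx * stride, by * stride, bw * stride, bh * stride
    sx, sy = orig_w / net_size, orig_h / net_size
    return bx * sx, by * sy, bw * sx, bh * sy
```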
3.2 Candidate selection
The above is the position regression process; here the confidence and the class are processed. Taking the 13×13×3×6 prediction as an example, 13×13×3 confidence predictions are obtained; from these, only the predictions whose confidence exceeds a given threshold (0.5) are kept. This greatly reduces the number of predictions and makes the result more accurate. The class needs no processing, because there is only one class. After this selection there may still be many interfering rectangles: they all predict the target but with different degrees of overlap, so a prediction-box filtering step is needed to remove the duplicates.
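A short sketch of the confidence pre-selection, keeping only predictions above the 0.5 threshold; the (S, S, 3, 6) layout follows the 13×13×3×6 example above.

```python
import numpy as np

def select_confident(pred, threshold=0.5):
    """pred: array of shape (S, S, 3, 6) = (tx, ty, tw, th, conf, class).
    Returns the flat list of predictions whose confidence exceeds the threshold."""
    flat = pred.reshape(-1, 6)
    return flat[flat[:, 4] > threshold]

pred = np.random.rand(13, 13, 3, 6)
kept = select_confident(pred)
print(kept.shape)  # (N, 6) with N <= 13*13*3
```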
3.3 Prediction box filtering
Among the predicted UAV bounding boxes, the same target may correspond to several prediction boxes, so the redundant boxes must be filtered out to obtain the best one. This is commonly done with the NMS (non-maximum suppression) algorithm. However, several UAVs may overlap during detection, and plain NMS then fails to detect the overlapping UAV targets simultaneously; therefore Soft-NMS is chosen as the prediction-box filtering algorithm for UAV prediction.
In its computation, traditional NMS directly filters out the worse of two overlapping prediction boxes. Soft-NMS does not discard that box; instead it replaces its original score with a slightly lower one. The formula is as follows:
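The formula itself is reproduced as an image in the original publication; the standard linear Soft-NMS score decay, consistent with the description that follows, can be written as:

```latex
s_i =
\begin{cases}
s_i, & \mathrm{IoU}(M, b_i) < N_t,\\
s_i \bigl(1 - \mathrm{IoU}(M, b_i)\bigr), & \mathrm{IoU}(M, b_i) \ge N_t.
\end{cases}
```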
When the intersection-over-union (IoU) between a neighbouring prediction box bi and the box M exceeds a given threshold Nt, the score of bi is reduced linearly: the closer a prediction box is to M, the more its score is reduced, while boxes far from M are treated as unrelated and left unchanged.
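A compact Python sketch of that linear decay, assuming boxes in [x1, y1, x2, y2] form; the overlap threshold Nt matches the description, the small final score cut-off is an illustrative assumption.

```python
import numpy as np

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms_linear(boxes, scores, nt=0.5, score_thresh=0.001):
    """Linear Soft-NMS: instead of removing boxes that overlap the current best
    box M by more than nt, decay their scores by a factor (1 - IoU)."""
    boxes, scores = list(boxes), list(scores)
    keep = []
    while boxes:
        m = int(np.argmax(scores))
        keep.append((boxes[m], scores[m]))
        best = boxes.pop(m)
        scores.pop(m)
        for i in range(len(boxes)):
            o = iou(best, boxes[i])
            if o >= nt:
                scores[i] *= (1.0 - o)
        # drop boxes whose decayed score falls below a small cut-off
        boxes = [b for b, s in zip(boxes, scores) if s > score_thresh]
        scores = [s for s in scores if s > score_thresh]
    return keep
```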
4. Loss function calculation
Since training the convolutional network is an optimization process, a loss function is needed to obtain better predictions. The loss calculation is divided into three parts: the position regression loss, the target confidence loss and the target class loss. If the centre point of a ground-truth box falls in some grid cell of a feature map, the three prior boxes of that cell are responsible for predicting the image rectangle. The overlap between each of the three prior boxes and the ground truth is computed, and the prior box with the maximum overlap is taken as the regression box used for prediction. When computing the position regression loss and the class loss, only the prior box with the maximum overlap and its class are used, i.e. the condition 1^{obj} in the formula below. For the target confidence loss, both the confidence loss of the maximum-overlap prior box and the loss of the prior boxes with smaller overlap are considered, the latter expressing that those prior boxes should not be used to predict the target, i.e. the condition 1^{noobj}; the total target confidence loss is the sum of the two parts. The inputs of the loss calculation are the three groups of predicted values from the prediction network and the label data (ground truth) of the corresponding image. The total loss formula is as follows:
In the formula, 1_{ija}^{obj} indicates that the a-th prior box of grid cell i, offset to corner point j, is used to predict a UAV, and 1_{ija}^{noobj} indicates that it is not; s² is the size of the feature map, B is the number of prior boxes (B = 3), 4 means that the regression is performed to 4 corner points, (tx, ty, tw, th, t0, s) are the predicted values, the corresponding hatted quantities are the ground-truth values (the coordinate ground truth and the class ground truth), c ∈ {0, 1} is the total number of classes, σ(t0) is the confidence of the predicted bounding box, and BCE is the binary cross-entropy function. λcoord = 1 is the weight of the coordinate loss, while λobj = 5 and λnoobj = 0.5 are the loss weights with and without a target respectively.
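The total loss formula is likewise reproduced as an image in the original publication. The sketch below is a plausible reconstruction from the description (squared-error coordinate loss for the responsible priors only, binary cross-entropy for confidence and class, weights λcoord = 1, λobj = 5, λnoobj = 0.5), not the exact published expression.

```python
import torch
import torch.nn.functional as F

def uav_loss(pred_box, true_box, pred_conf, pred_cls, true_cls, obj_mask, noobj_mask,
             lambda_coord=1.0, lambda_obj=5.0, lambda_noobj=0.5):
    """Hedged reconstruction of the described loss for one scale.
    pred_box/true_box: (M, 4); pred_conf/pred_cls/true_cls: (M,);
    obj_mask/noobj_mask: boolean (M,) selecting responsible / non-responsible priors."""
    # 1) Coordinate loss: only priors responsible for a target (obj_mask) contribute.
    loss_coord = lambda_coord * F.mse_loss(pred_box[obj_mask], true_box[obj_mask],
                                           reduction="sum")

    # 2) Confidence loss: responsible priors should predict 1, non-responsible priors 0.
    conf = torch.sigmoid(pred_conf)
    loss_obj = lambda_obj * F.binary_cross_entropy(
        conf[obj_mask], torch.ones_like(conf[obj_mask]), reduction="sum")
    loss_noobj = lambda_noobj * F.binary_cross_entropy(
        conf[noobj_mask], torch.zeros_like(conf[noobj_mask]), reduction="sum")

    # 3) Class loss: a single "UAV" class, again only for responsible priors.
    loss_cls = F.binary_cross_entropy(
        torch.sigmoid(pred_cls[obj_mask]), true_cls[obj_mask], reduction="sum")

    return loss_coord + loss_obj + loss_noobj + loss_cls
```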
The beneficial effects of the present invention are that it detects UAVs in a no-fly airspace in real time and accurately; by detecting UAV flights it effectively reduces unauthorized ("black") flights. It also makes it possible to discover UAVs flying in the airspace faster and more accurately, to implement countermeasures sooner, and to minimize both the losses caused by unauthorized UAV flights and the probability of safety accidents caused by UAVs. The detection results can capture small UAV targets, accurately identify what the target is and its approximate location, and the algorithm achieves real-time performance, leaving a considerable reaction time for handling unauthorized UAV flights in time.
Description of the drawings
Fig. 1: overall network structure
Fig. 2: schematic diagram of position and shape regression
Fig. 3: schematic diagram of the optimized position regression
Fig. 4: overall system flow chart
Fig. 5: schematic diagram of the feature fusion network
Fig. 6: example UAV detection result
Specific embodiment
The present invention is described in further detail below with reference to the accompanying drawings.
Fig. 4 is the overall flow chart; the technical solution of the present invention is explained following this flow chart.
1) The raw data are split into a training set and a test set at a ratio of 7:3. The training set is used to train the network and the test set to evaluate the trained model.
2) The training set is preprocessed; the preprocessing includes image cropping, scaling, flipping, shifting, brightness adjustment, added noise and normalization, producing fixed-size 416*416 input images. The label data of the images are transformed accordingly. The images are then grouped into batches and fed into the network.
3) The feature extraction network in the figure comprises the backbone network and the feature fusion network; this part mainly extracts features from the input data for the subsequent prediction network. The feature extraction network produces three feature maps of different sizes: 13*13, 26*26 and 52*52.
4) The prediction network makes predictions on each of the three feature maps produced by the feature extraction network; the prediction results are handled differently at different stages. In the training stage the loss is computed between the predicted values and the processed label data; this loss consists of three parts: the rectangle-box loss, the confidence loss and the class loss. In the inference (prediction) stage the predicted values are mapped to rectangles on the image and processed accordingly to obtain the final predicted rectangles in the original image. Since predictions are made at three scales, the three partial results must be merged to obtain the final result.
Finally comes model testing. The input is no longer the training set but the test set, for which the only preprocessing required is normalization and resizing to the fixed size 416*416. The data then pass through the feature extraction network, the prediction network and the prediction result processing to obtain the final result. The final result is compared with the actual label data to compute the performance indicators of the model, mainly the average precision and the detection speed. Since the present invention targets small-object detection, the accuracy for small targets is also computed separately.
The present invention proposes a new method for detecting UAVs in a no-fly airspace. It comprises the overall network architecture, the UAV sample augmentation design, the bounding-box prediction design, the overall loss function, multi-scale detection, the prediction-box filtering design and the UAV class prediction; the UAV detection model is finally obtained through sample training. The sample augmentation design enriches the UAV sample set and enhances the robustness of the trained model. The bounding-box prediction design adds four reference corner points, increasing the number of predicted UAV bounding boxes so that the final prediction gives more accurate bounding-box positions and aspect ratios. The multi-scale detection design covers both the training stage and the prediction stage, improving multi-scale detection performance and small-target detection. Finally, the bounding-box filtering design adopts the Soft-NMS algorithm, avoiding the situation where overlapping UAVs cannot be predicted. Experiments with the trained UAV detection model give AP50, AP75 and APS values of 1.00, 0.85 and 0.83 respectively, with an inference time of 0.030 s. It can be seen that the proposed UAV detection method satisfies real-time detection while achieving high detection accuracy. Therefore, the new no-fly airspace UAV detection method proposed here has broad application prospects in the field of UAV supervision.

Claims (1)

1. A method for detecting unmanned aerial vehicles (UAVs) in a no-fly airspace, characterized by comprising the following steps:
S1, sample acquisition and preprocessing: UAV flight images are acquired and annotated; the UAV targets are marked with the horizontal rectangular boxes used in object detection and the label data are saved; the images are preprocessed, including image cropping, scaling, flipping, shifting, brightness adjustment and added noise, to obtain sample images of a fixed size;
S2, a UAV recognition network is trained by a deep learning method, the UAV recognition network comprising a backbone network, a feature fusion network and a prediction network, specifically:
S21, feature extraction is performed on the input sample images by the backbone network; the backbone network uses darknet53 as a template and comprises 52 convolutional layers; its input is the preprocessed sample image, which passes once through the 52 convolutional layers, and the feature maps of the 26th, 43rd and 52nd layers are selected as the feature maps for feature fusion, their sizes being 52×52, 26×26 and 13×13 respectively;
S22, the feature fusion network fuses the three feature maps produced by the backbone network: the 13×13 feature map is first processed through 5 convolutions to output a 13×13 fused feature map; the 13×13 fused feature map is then upsampled to a 26×26 sampled feature map, which is concatenated with the 26×26 feature map from the backbone network and passed through 5 convolutional layers to obtain a 26×26 fused feature map; the 26×26 fused feature map is upsampled to a 52×52 sampled feature map, which is concatenated with the 52×52 feature map from the backbone network and passed through 5 convolutional layers to obtain a 52×52 fused feature map; these operations yield three fused feature maps of sizes 52×52, 26×26 and 13×13;
S23, the prediction network comprises three branches of two convolutions each; the input of each branch is one of the fused feature maps, and the prediction comprises three parts: the coordinate regression values (tx, ty, tw, th) of the target, the confidence t0 of the target and the class of the target; the output sizes of the three branches are 13×13×3×(4+1+1), 26×26×3×(4+1+1) and 52×52×3×(4+1+1), where 13, 26 and 52 are the feature-map sizes, 3 is the number of rectangular prior boxes, 4 is the number of position regression values, the middle 1 is the target confidence and the last 1 is the class, there being only the single class UAV;
Loss function: the loss calculation is divided into three parts: the position regression loss, the target confidence loss and the target class loss; if the centre point of a ground-truth box falls in some grid cell of a feature map, the three prior boxes of that cell are responsible for predicting the image rectangle; the overlap between each of the three prior boxes and the ground truth is computed and the prior box with the maximum overlap is taken as the regression box used for prediction; when computing the position regression loss and the class loss only the prior box with the maximum overlap and its class are used; for the target confidence loss both the confidence loss of the maximum-overlap prior box and the loss of the prior boxes with smaller overlap are considered, the latter expressing that those prior boxes should not be used to predict the target; the total target confidence loss is the sum of the two parts; the inputs are the three groups of predicted values from the prediction network and the label data of the corresponding image; the loss formula is as follows:
wherein 1_{ija}^{obj} indicates that the a-th prior box of grid cell i, offset to corner point j, is used to predict a UAV bounding box, and 1_{ija}^{noobj} indicates that it is not; s² is the size of the feature map, B is the number of rectangular prior boxes, (tx, ty, tw, th, t0, s) are the predicted values, s being the predicted class probability; the corresponding hatted quantities are the ground-truth values, namely the coordinate ground truth and the class ground truth; c ∈ {0, 1} is the total number of classes; σ(t0) is the confidence of the predicted bounding box; BCE is the binary cross-entropy function; λcoord = 1 is the weight of the coordinate loss, and λobj = 5 and λnoobj = 0.5 are the loss weights with and without a target respectively;
S3, UAV images are predicted with the trained UAV recognition network to obtain three groups of predicted values;
S4, the predicted values are processed to obtain the UAV detection result, specifically:
S41, position regression: among the 4 position regression values, (tx, ty) are the translation predictions and (tw, th) are the scaling predictions, which fine-tune the coordinate position and the width/height of the prior box so that after fine-tuning the rectangle overlaps the annotated box as well as possible;
S42, candidate selection: the predictions whose confidence exceeds a given threshold are selected;
S43, prediction box filtering: among the predicted rectangles that pass the candidate selection there are still many, and the same predicted UAV target may correspond to several prediction boxes, so the redundant prediction boxes must be filtered to obtain the best one; the redundant boxes are removed by the Soft-NMS method, so that finally each target in the image corresponds to exactly one prediction box.
CN201910782216.0A 2019-08-23 2019-08-23 Method for detecting unmanned aerial vehicle in no-fly airspace Active CN110490155B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910782216.0A CN110490155B (en) 2019-08-23 2019-08-23 Method for detecting unmanned aerial vehicle in no-fly airspace

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910782216.0A CN110490155B (en) 2019-08-23 2019-08-23 Method for detecting unmanned aerial vehicle in no-fly airspace

Publications (2)

Publication Number Publication Date
CN110490155A true CN110490155A (en) 2019-11-22
CN110490155B CN110490155B (en) 2022-05-17

Family

ID=68553079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910782216.0A Active CN110490155B (en) 2019-08-23 2019-08-23 Method for detecting unmanned aerial vehicle in no-fly airspace

Country Status (1)

Country Link
CN (1) CN110490155B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274894A (en) * 2020-01-15 2020-06-12 太原科技大学 Improved YOLOv3-based method for detecting on-duty state of personnel
CN111832508A (en) * 2020-07-21 2020-10-27 桂林电子科技大学 DIE_GA-based low-illumination target detection method
CN112597905A (en) * 2020-12-25 2021-04-02 北京环境特性研究所 Unmanned aerial vehicle detection method based on skyline segmentation
CN116389783A (en) * 2023-06-05 2023-07-04 四川农业大学 Live broadcast linkage control method, system, terminal and medium based on unmanned aerial vehicle

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015096806A1 (en) * 2013-12-29 2015-07-02 刘进 Attitude determination, panoramic image generation and target recognition methods for intelligent machine
CN106846926A (en) * 2017-04-13 2017-06-13 电子科技大学 A kind of no-fly zone unmanned plane method for early warning
WO2017167282A1 (en) * 2016-03-31 2017-10-05 纳恩博(北京)科技有限公司 Target tracking method, electronic device, and computer storage medium
CN109002777A (en) * 2018-06-29 2018-12-14 电子科技大学 A kind of infrared small target detection method towards complex scene
CN109255286A (en) * 2018-07-21 2019-01-22 哈尔滨工业大学 A kind of quick detection recognition method of unmanned plane optics based on YOLO deep learning network frame
CN109389086A (en) * 2018-10-09 2019-02-26 北京科技大学 Detect the method and system of unmanned plane silhouette target
CN109598290A (en) * 2018-11-22 2019-04-09 上海交通大学 A kind of image small target detecting method combined based on hierarchical detection
CN109740662A (en) * 2018-12-28 2019-05-10 成都思晗科技股份有限公司 Image object detection method based on YOLO frame
CN109753903A (en) * 2019-02-27 2019-05-14 北航(四川)西部国际创新港科技有限公司 A kind of unmanned plane detection method based on deep learning
CN109919058A (en) * 2019-02-26 2019-06-21 武汉大学 A kind of multisource video image highest priority rapid detection method based on Yolo V3
CN110033050A (en) * 2019-04-18 2019-07-19 杭州电子科技大学 A kind of water surface unmanned boat real-time target detection calculation method

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015096806A1 (en) * 2013-12-29 2015-07-02 刘进 Attitude determination, panoramic image generation and target recognition methods for intelligent machine
WO2017167282A1 (en) * 2016-03-31 2017-10-05 纳恩博(北京)科技有限公司 Target tracking method, electronic device, and computer storage medium
CN106846926A (en) * 2017-04-13 2017-06-13 电子科技大学 A kind of no-fly zone unmanned plane method for early warning
CN109002777A (en) * 2018-06-29 2018-12-14 电子科技大学 A kind of infrared small target detection method towards complex scene
CN109255286A (en) * 2018-07-21 2019-01-22 哈尔滨工业大学 A kind of quick detection recognition method of unmanned plane optics based on YOLO deep learning network frame
CN109389086A (en) * 2018-10-09 2019-02-26 北京科技大学 Detect the method and system of unmanned plane silhouette target
CN109598290A (en) * 2018-11-22 2019-04-09 上海交通大学 A kind of image small target detecting method combined based on hierarchical detection
CN109740662A (en) * 2018-12-28 2019-05-10 成都思晗科技股份有限公司 Image object detection method based on YOLO frame
CN109919058A (en) * 2019-02-26 2019-06-21 武汉大学 A kind of multisource video image highest priority rapid detection method based on Yolo V3
CN109753903A (en) * 2019-02-27 2019-05-14 北航(四川)西部国际创新港科技有限公司 A kind of unmanned plane detection method based on deep learning
CN110033050A (en) * 2019-04-18 2019-07-19 杭州电子科技大学 A kind of water surface unmanned boat real-time target detection calculation method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
NASSIM AMMOUR et al.: "Deep Learning Approach for Car Detection in UAV Imagery", Remote Sensing *
任一可: "Research on the Legal Regulation of Civilian UAVs in China", China Masters' Theses Full-text Database, Social Sciences I *
刘永姣 et al.: "Insulator Tracking and Ranging Algorithm Based on Correlation Filtering", Science Technology and Engineering *
祝思君: "Research on Target Recognition Methods for UAV Remote Sensing Images Based on Deep Learning", China Masters' Theses Full-text Database, Information Science and Technology *
闫斌 et al.: "Research on UAV Early-Warning Algorithms for No-Fly Zones", Application Research of Computers *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111274894A (en) * 2020-01-15 2020-06-12 太原科技大学 Improved YOLOv3-based method for detecting on-duty state of personnel
CN111832508A (en) * 2020-07-21 2020-10-27 桂林电子科技大学 DIE_GA-based low-illumination target detection method
CN111832508B (en) * 2020-07-21 2022-04-05 桂林电子科技大学 DIE_GA-based low-illumination target detection method
CN112597905A (en) * 2020-12-25 2021-04-02 北京环境特性研究所 Unmanned aerial vehicle detection method based on skyline segmentation
CN116389783A (en) * 2023-06-05 2023-07-04 四川农业大学 Live broadcast linkage control method, system, terminal and medium based on unmanned aerial vehicle
CN116389783B (en) * 2023-06-05 2023-08-11 四川农业大学 Live broadcast linkage control method, system, terminal and medium based on unmanned aerial vehicle

Also Published As

Publication number Publication date
CN110490155B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
CN110490155A (en) A kind of no-fly airspace unmanned plane detection method
CN111723748B (en) Infrared remote sensing image ship detection method
CN112818903B (en) Small sample remote sensing image target detection method based on meta-learning and cooperative attention
CN107818326B (en) A kind of ship detection method and system based on scene multidimensional characteristic
CN109886312B (en) Bridge vehicle wheel detection method based on multilayer feature fusion neural network model
CN109800631A (en) Fluorescence-encoded micro-beads image detecting method based on masked areas convolutional neural networks
CN109948415A (en) Remote sensing image object detection method based on filtering background and scale prediction
CN110135267A (en) A kind of subtle object detection method of large scene SAR image
CN107316058A (en) Improve the method for target detection performance by improving target classification and positional accuracy
CN110210463A (en) Radar target image detecting method based on Precise ROI-Faster R-CNN
Wang et al. A deep-learning-based sea search and rescue algorithm by UAV remote sensing
CN107527352A (en) Remote sensing Ship Target contours segmentation and detection method based on deep learning FCN networks
CN109684906B (en) Method for detecting red fat bark beetles based on deep learning
CN103353988B (en) Allos SAR scene Feature Correspondence Algorithm performance estimating method
CN111709329B (en) Unmanned aerial vehicle measurement and control signal high-speed recognition method based on deep learning
CN107967474A (en) A kind of sea-surface target conspicuousness detection method based on convolutional neural networks
CN110674674A (en) Rotary target detection method based on YOLO V3
CN107067410A (en) A kind of manifold regularization correlation filtering method for tracking target based on augmented sample
CN110110618A (en) A kind of SAR target detection method based on PCA and global contrast
CN115937659A (en) Mask-RCNN-based multi-target detection method in indoor complex environment
CN117830788B (en) Image target detection method for multi-source information fusion
Zhang et al. A precise apple leaf diseases detection using BCTNet under unconstrained environments
CN109558803B (en) SAR target identification method based on convolutional neural network and NP criterion
Zhou et al. ASSD-YOLO: a small object detection method based on improved YOLOv7 for airport surface surveillance
CN112084941A (en) Target detection and identification method based on remote sensing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant