CN107895386A - Multi-platform joint autonomous target recognition method - Google Patents

Multi-platform joint autonomous target recognition method Download PDF

Info

Publication number
CN107895386A
CN107895386A
Authority
CN
China
Prior art keywords
platform
target
image
depth information
dimensional image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711123377.6A
Other languages
Chinese (zh)
Inventor
张原
王军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Aircraft Design and Research Institute of AVIC
Original Assignee
Xian Aircraft Design and Research Institute of AVIC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Aircraft Design and Research Institute of AVIC filed Critical Xian Aircraft Design and Research Institute of AVIC
Priority to CN201711123377.6A priority Critical patent/CN107895386A/en
Publication of CN107895386A publication Critical patent/CN107895386A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/48 Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-platform joint autonomous target recognition method comprising the following steps. Step 1: obtain a two-dimensional image containing depth information for each recognition target. Step 2: train a CNN with the depth-bearing two-dimensional images of the recognition targets, thereby presetting the CNN network parameters. Step 3: the aircraft detects a suspected target and obtains a two-dimensional image of the suspected target containing depth information. Step 4: the aircraft passes the suspected target's depth-bearing two-dimensional image to the trained CNN, which judges whether the suspected target is one of the preset recognition targets. The multi-platform joint autonomous target recognition method of the present application integrates the optical detection devices of multiple platforms, enabling the carrier aircraft to obtain multi-angle optical imagery of a suspected target through inter-platform communication, and achieves autonomous recognition of targets of interest (TOI) by multiple aircraft in formation.

Description

Multi-platform joint autonomous target recognition method
Technical field
The present invention relates to the technical field of avionics systems, and in particular to a multi-platform joint autonomous target recognition method.
Background technology
When reconnaissance and combat aircraft carry out missions, particularly air-to-ground combat missions, they frequently need to observe and identify targets of interest (TOI); to date this task has mainly been performed manually.
It is therefore desirable to have a technical solution that overcomes, or at least mitigates, at least one of the above drawbacks of the prior art.
Summary of the invention
The object of the present invention is to provide a multi-platform joint autonomous target recognition method that overcomes, or at least mitigates, at least one of the above drawbacks of the prior art.
To achieve this object, the present invention provides a multi-platform joint autonomous target recognition method comprising the following steps:
Step 1: preset a plurality of recognition targets, and obtain a two-dimensional image containing depth information for each recognition target;
Step 2: train the CNN with the depth-bearing two-dimensional image of each recognition target, thereby presetting the CNN network parameters;
Step 3: the aircraft detects a suspected target and obtains a two-dimensional image of the suspected target containing depth information;
Step 4: the depth-bearing two-dimensional image of the suspected target obtained by the aircraft is passed to the trained CNN, which judges whether the suspected target is one of the preset recognition targets.
Preferably, step 1 is specifically:
Step 11: preset a plurality of recognition targets and perform optical detection of each preset target from each platform, ensuring that the optical detection devices of two or more platforms keep one or more targets within their sensor fields of view from different observation angles; optical detection of each preset target yields images I1, I2, ..., In, which form the image set I;
Step 12: the images and the spatial coordinates of the image-capturing platforms are passed over the data link to the platform that will carry out the strike or reconnaissance task;
After receiving the images, the task platform performs SIFT matching on every pair of images in the image set I and obtains the set S of matched feature points;
S is then processed: from the relative positions of the feature points in S, a distribution vector V = [k, d] is computed, where k is the slope of the line connecting a pair of matched feature points across the two images and d is the length of that line; according to the required recognition precision, the region of highest feature-point density is selected and the feature points outside it are discarded, forming the new feature-point set Sni;
The acquired image is converted to HSV; the V channel is extracted and its tonal range stretched to 0-255, forming a grayscale image;
The new feature-point set Sni is clustered, the number of clusters C being chosen according to the recognition precision, yielding C clusters;
Step 13: the centre of each cluster is computed; within the feature points of each cluster, Euclidean distances are computed and the feature point closest to the cluster centre is taken as the centre of that cluster, and the transverse and longitudinal parallaxes of that point between the two images are computed;
Depth recovery is then performed for each feature region according to the camera characteristics, using the formula
X/u = Y/v = L/f
where X, Y and L are the transverse, longitudinal and along-optical-axis distances between the target and the observation platform, u and v are the corresponding image-plane coordinates of the feature point, and f is the current camera focal length;
Step 14: the obtained depth information is normalised and distributed by feature region into a matrix matching the scale of the original image, which is multiplied element-wise with the grayscale image to obtain the two-dimensional image containing depth information.
Preferably, step 2 is: the CNN is trained with the two-dimensional images obtained in step 14.
Preferably, step 4 is specifically:
The formula (Σ Ri·Si) / (Σ Si) is evaluated; when the result exceeds δ, the suspected target is judged to be one of the preset recognition targets; wherein
Ri is the CNN recognition score for feature region i, Si is the area of feature region i, and δ is a user-defined preset value.
The multi-platform joint autonomous target recognition method of the present application integrates the optical detection devices of multiple platforms, enabling the carrier aircraft to obtain multi-angle optical imagery of a suspected target through inter-platform communication and achieving autonomous recognition of TOI by multiple aircraft in formation; it can be used on unmanned aerial vehicles or intelligent aircraft, reducing the dependence on human operators.
Brief description of the drawings
Fig. 1 is a schematic flow diagram of the multi-platform joint autonomous target recognition method of one embodiment of the present application.
Fig. 2 is a schematic horizontal-projection diagram of the two-platform TOI recognition process.
Fig. 3 is a schematic vertical-projection diagram of the two-platform TOI recognition process.
Embodiment
To make the purpose, technical solution and advantages of the implementation of the present invention clearer, the technical solution in the embodiments of the present invention is described in further detail below with reference to the accompanying drawings of the embodiments. Throughout the drawings, identical or similar reference numbers denote identical or similar elements or elements having identical or similar functions. The described embodiments are a subset, not the entirety, of the embodiments of the present invention. The embodiments described below with reference to the drawings are exemplary, are intended to explain the present invention, and are not to be construed as limiting it. All other embodiments obtained by a person of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the scope of protection of the present invention. Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
In the description of the present invention, it is to be understood that orientation or positional terms such as "centre", "longitudinal", "transverse", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" are based on the orientations or positional relationships shown in the drawings, are used only to facilitate and simplify the description of the present invention, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they are therefore not to be construed as limiting the scope of protection of the present invention.
Fig. 1 is a schematic flow diagram of the multi-platform joint autonomous target recognition method of one embodiment of the present application.
The multi-platform joint autonomous target recognition method shown in Fig. 1 comprises the following steps:
Step 1: preset a plurality of recognition targets, and obtain a two-dimensional image containing depth information for each recognition target;
Step 2: train the CNN with the depth-bearing two-dimensional image of each recognition target, thereby presetting the CNN network parameters;
Step 3: the aircraft detects a suspected target and obtains a two-dimensional image of the suspected target containing depth information;
Step 4: the depth-bearing two-dimensional image of the suspected target obtained by the aircraft is passed to the trained CNN, which judges whether the suspected target is one of the preset recognition targets.
In the present embodiment, step 1 is specifically:
Step 11: preset a plurality of recognition targets and perform optical detection of each preset target from each platform, ensuring that the optical detection devices of two or more platforms keep one or more targets within their sensor fields of view from different observation angles; optical detection of each preset target yields images I1, I2, ..., In, which form the image set I;
Step 12: the images and the spatial coordinates of the image-capturing platforms are passed over the data link to the platform that will carry out the strike or reconnaissance task;
After receiving the images, the task platform performs SIFT matching on every pair of images in the image set I and obtains the set S of matched feature points;
S is then processed: from the relative positions of the feature points in S, a distribution vector V = [k, d] is computed, where k is the slope of the line connecting a pair of matched feature points across the two images and d is the length of that line; according to the required recognition precision, the region of highest feature-point density is selected and the feature points outside it are discarded, forming the new feature-point set Sni;
The acquired image is converted to HSV; the V channel is extracted and its tonal range stretched to 0-255, forming a grayscale image;
The new feature-point set Sni is clustered, the number of clusters C being chosen according to the recognition precision, yielding C clusters;
Step 13: the centre of each cluster is computed; within the feature points of each cluster, Euclidean distances are computed and the feature point closest to the cluster centre is taken as the centre of that cluster, and the transverse and longitudinal parallaxes of that point between the two images are computed;
Depth recovery is then performed for each feature region according to the camera characteristics, using the formula
X/u = Y/v = L/f
where X, Y and L are the transverse, longitudinal and along-optical-axis distances between the target and the observation platform, u and v are the corresponding image-plane coordinates of the feature point, and f is the current camera focal length;
Step 14: the obtained depth information is normalised and distributed by feature region into a matrix matching the scale of the original image, which is multiplied element-wise with the grayscale image to obtain the two-dimensional image containing depth information.
In the present embodiment, step 2 is: the CNN is trained with the two-dimensional images obtained in step 14.
In the present embodiment, step 4 is specifically:
The formula (Σ Ri·Si) / (Σ Si) is evaluated; when the result exceeds δ, the suspected target is judged to be one of the preset recognition targets; wherein
Ri is the CNN recognition score for feature region i, Si is the area of feature region i, and δ is a user-defined preset value.
The present application is further elaborated below by way of an example; it should be understood that the example does not constitute any limitation of the present application.
Fig. 2 is a schematic horizontal-projection diagram of the two-platform TOI recognition process.
Fig. 3 is a schematic vertical-projection diagram of the two-platform TOI recognition process.
Joint target recognition by two aircraft, with FA as the task platform, is taken as an example and described in further detail below.
Platform FA and platform FB communicate periodically; the communication content includes the current longitude, latitude and altitude of the carrier aircraft and the roll, pitch and focal length of the optical axis of its optical detection device, ensuring that the detection regions of the two aircraft's detection devices have an overlapping range;
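For illustration only (not part of the original disclosure), a minimal sketch of this periodic inter-platform status message; the field names and the dataclass representation are assumptions, since the patent specifies only the content of the communication:

```python
from dataclasses import dataclass

@dataclass
class PlatformStatus:
    """Hypothetical periodic status message exchanged between FA and FB."""
    lon_deg: float    # current longitude of the carrier aircraft
    lat_deg: float    # current latitude
    alt_m: float      # current altitude
    roll_deg: float   # roll of the optical detection device's axis
    pitch_deg: float  # pitch of the optical detection device's axis
    focal_mm: float   # current focal length of the optical device
```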
Step 1: preset a plurality of recognition targets and obtain a two-dimensional image containing depth information for each recognition target. Specifically, step 11: preset a plurality of recognition targets and perform optical detection of each preset target from each platform, ensuring that the optical detection devices of the two platforms keep one or more targets within their sensor fields of view from different observation angles; optical detection of each preset target yields images I1, I2, ..., In, which form the image set I;
Step 12: after receiving the images, the task platform performs SIFT matching on every pair of images in the image set I and obtains the set S of matched feature points;
S is then processed: from the relative positions of the feature points in S, a distribution vector V = [k, d] is computed, where k is the slope of the line connecting a pair of matched feature points across the two images and d is the length of that line; according to the required recognition precision, the region of highest feature-point density is selected and the feature points outside it are discarded, forming the new feature-point set Sni;
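For illustration only, a minimal Python sketch of this matching-and-filtering step, assuming OpenCV's SIFT implementation, Lowe's ratio test, and a median-based selection of the densest region of the (k, d) distribution; none of these specific choices are prescribed by the patent:

```python
import cv2
import numpy as np

def matched_points(img_a, img_b, ratio=0.75):
    """Return matched keypoint coordinates between two grayscale images."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    good = []
    for pair in cv2.BFMatcher().knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])  # Lowe's ratio test discards ambiguous matches
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    return pts_a, pts_b

def densest_subset(pts_a, pts_b, keep=0.6):
    """Keep the matches whose V = [k, d] vector lies nearest the distribution mode."""
    diff = pts_b - pts_a
    k = np.arctan2(diff[:, 1], diff[:, 0])           # slope of the match line
    d = np.hypot(diff[:, 0], diff[:, 1])             # length of the match line
    v = np.stack([k, d / max(d.max(), 1e-6)], axis=1)
    centre = np.median(v, axis=0)                    # mode approximated by the median
    order = np.argsort(np.linalg.norm(v - centre, axis=1))
    idx = order[: max(1, int(len(order) * keep))]    # densest fraction of matches
    return pts_a[idx], pts_b[idx]
```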
The acquired image is converted to HSV; the V channel is extracted and its tonal range stretched to 0-255, forming a grayscale image;
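A minimal sketch of the V-channel grayscale extraction, assuming BGR input as produced by cv2.imread:

```python
import cv2
import numpy as np

def v_channel_gray(img_bgr):
    """Convert to HSV, keep the V channel, stretch it to the full 0-255 range."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2].astype(np.float32)
    v = (v - v.min()) / max(float(v.max() - v.min()), 1e-6) * 255.0
    return v.astype(np.uint8)
```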
The new feature-point set Sni is clustered, the number of clusters being set to 4 according to the recognition precision, yielding 4 clusters;
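A sketch of the clustering step; scikit-learn's KMeans is an assumption standing in for whichever clustering algorithm the platforms actually use:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_points(pts, n_clusters=4):
    """Cluster 2-D feature-point coordinates; return labels and cluster centres."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pts)
    return km.labels_, km.cluster_centers_
```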
The centre of each cluster is computed; within the feature points of each cluster, Euclidean distances are computed and the feature point closest to the cluster centre is taken as the centre of that cluster, and the transverse and longitudinal parallaxes of that point between the two images are computed;
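A sketch of selecting each cluster's most central match and measuring its parallax between the two views; pts_a and pts_b are the filtered match coordinates from the earlier sketch:

```python
import numpy as np

def cluster_parallax(pts_a, pts_b, labels, centres):
    """Per cluster: pick the member nearest the centre, return its (du, dv) parallax."""
    parallax = []
    for c, centre in enumerate(centres):
        members = np.where(labels == c)[0]
        dists = np.linalg.norm(pts_a[members] - centre, axis=1)  # Euclidean distance
        rep = members[np.argmin(dists)]                          # most central match
        du, dv = pts_b[rep] - pts_a[rep]   # transverse and longitudinal parallax
        parallax.append((float(du), float(dv)))
    return parallax
```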
Depth recovery is performed for each feature region from the obtained parallaxes according to the camera characteristics (focal length, sensor size, etc.); here depth refers to the length of the perpendicular from the target to the line connecting the two optical image-acquisition platforms; the calculation formula is:
X/u = Y/v = L/f
where X, Y and L are the transverse, longitudinal and along-optical-axis (depth) distances between the target and the observation platform, u and v are the corresponding image-plane coordinates of the feature point, and f is the current camera focal length;
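A sketch of the depth recovery, combining the patent's pinhole relation X/u = Y/v = L/f with the standard stereo-baseline relation L = f·B/d; the baseline B between the two platforms is an assumption here (the patent derives the platform geometry from the exchanged spatial coordinates):

```python
def recover_depth(u, v, disparity, f_px, baseline_m):
    """Recover X, Y, L (metres): transverse, longitudinal and depth distances."""
    L = f_px * baseline_m / disparity  # standard stereo relation L = f*B/d
    X = u * L / f_px                   # from X/u = L/f
    Y = v * L / f_px                   # from Y/v = L/f
    return X, Y, L
```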
Step 14: the obtained depth information is normalised and distributed by feature region into a matrix matching the scale of the original image, which is multiplied element-wise with the grayscale image to obtain the two-dimensional image containing depth information;
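A sketch of building the depth-augmented image of step 14; the labels_map argument, assigning each pixel to a feature region, is an assumption about how the per-cluster depths are distributed over the image:

```python
import numpy as np

def depth_weighted_image(gray, labels_map, depths):
    """gray: HxW uint8 image; labels_map: HxW int region index (-1 = background);
    depths: per-region depth values. Returns the depth-weighted 2-D image."""
    d = np.asarray(depths, dtype=np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-6)  # normalise to [0, 1]
    weight = np.zeros(gray.shape, dtype=np.float32)
    for c, w in enumerate(d):
        weight[labels_map == c] = w        # paint each region's normalised depth
    return gray.astype(np.float32) * weight  # element-wise product with grayscale
```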
Step 2: preset the CNN network parameters by training;
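A minimal training sketch; the patent does not specify the CNN architecture, so the layer sizes, class count and PyTorch framework below are all assumptions:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy CNN over single-channel depth-weighted images; sizes are assumptions."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(8),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):  # x: (B, 1, H, W) depth-weighted images
        return self.classifier(self.features(x).flatten(1))

def train_step(model, batch, labels, optimiser, loss_fn=nn.CrossEntropyLoss()):
    """One gradient step on a batch of labelled recognition-target images."""
    optimiser.zero_grad()
    loss = loss_fn(model(batch), labels)
    loss.backward()
    optimiser.step()
    return loss.item()
```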
Step 3: the trained CNN is used to recognise the two-dimensional images containing depth information, yielding a recognition result for each image segment;
Step 4: the aircraft detects a suspected target and obtains its two-dimensional image containing depth information by the method of steps 11 to 14 above;
The following formula is then evaluated: (Σ Ri·Si) / (Σ Si), with δ = 0.7; when the result exceeds δ, the captured image is deemed to contain the target to be identified.
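A sketch of this decision rule as reconstructed above: an area-weighted mean of the per-region CNN scores compared against δ = 0.7:

```python
import numpy as np

def target_present(scores, areas, delta=0.7):
    """scores R_i: per-region CNN recognition scores; areas S_i: region areas."""
    scores = np.asarray(scores, dtype=float)
    areas = np.asarray(areas, dtype=float)
    return (scores * areas).sum() / areas.sum() > delta
```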
Finally, it should be noted that the above embodiments merely illustrate the technical solution of the present invention and do not limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solution to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (4)

  1. A multi-platform joint autonomous target recognition method, characterised in that the method comprises the following steps:
    Step 1: preset a plurality of recognition targets, and obtain a two-dimensional image containing depth information for each recognition target;
    Step 2: train the CNN with the depth-bearing two-dimensional image of each recognition target, thereby presetting the CNN network parameters;
    Step 3: the aircraft detects a suspected target and obtains a two-dimensional image of the suspected target containing depth information;
    Step 4: the depth-bearing two-dimensional image of the suspected target obtained by the aircraft is passed to the trained CNN, which judges whether the suspected target is one of the preset recognition targets.
  2. The multi-platform joint autonomous target recognition method according to claim 1, characterised in that step 1 is specifically:
    Step 11: preset a plurality of recognition targets and perform optical detection of each preset target from each platform, ensuring that the optical detection devices of two or more platforms keep one or more targets within their sensor fields of view from different observation angles; optical detection of each preset target yields images I1, I2, ..., In, which form the image set I;
    Step 12: the images and the spatial coordinates of the image-capturing platforms are passed over the data link to the platform that will carry out the strike or reconnaissance task;
    After receiving the images, the task platform performs SIFT matching on every pair of images in the image set I and obtains the set S of matched feature points;
    S is then processed: from the relative positions of the feature points in S, a distribution vector V = [k, d] is computed, where k is the slope of the line connecting a pair of matched feature points across the two images and d is the length of that line; according to the required recognition precision, the region of highest feature-point density is selected and the feature points outside it are discarded, forming the new feature-point set Sni;
    The acquired image is converted to HSV; the V channel is extracted and its tonal range stretched to 0-255, forming a grayscale image;
    The new feature-point set Sni is clustered, the number of clusters C being chosen according to the recognition precision, yielding C clusters;
    Step 13: the centre of each cluster is computed; within the feature points of each cluster, Euclidean distances are computed and the feature point closest to the cluster centre is taken as the centre of that cluster, and the transverse and longitudinal parallaxes of that point between the two images are computed;
    Depth recovery is then performed for each feature region according to the camera characteristics, using the formula
    X/u = Y/v = L/f
    where X, Y and L are the transverse, longitudinal and along-optical-axis distances between the target and the observation platform, u and v are the corresponding image-plane coordinates of the feature point, and f is the current camera focal length;
    Step 14: the obtained depth information is normalised and distributed by feature region into a matrix matching the scale of the original image, which is multiplied element-wise with the grayscale image to obtain the two-dimensional image containing depth information.
  3. The multi-platform joint autonomous target recognition method according to claim 2, characterised in that step 2 is: the CNN is trained with the two-dimensional images obtained in step 14.
  4. The multi-platform joint autonomous target recognition method according to claim 1, characterised in that step 4 is specifically:
    The formula (Σ Ri·Si) / (Σ Si) is evaluated; when the result exceeds δ, the suspected target is judged to be one of the preset recognition targets; wherein
    Ri is the CNN recognition score for feature region i, Si is the area of feature region i, and δ is a user-defined preset value.
CN201711123377.6A 2017-11-14 2017-11-14 Multi-platform joint autonomous target recognition method Pending CN107895386A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711123377.6A CN107895386A (en) 2017-11-14 2017-11-14 Multi-platform joint autonomous target recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711123377.6A CN107895386A (en) 2017-11-14 2017-11-14 Multi-platform joint autonomous target recognition method

Publications (1)

Publication Number Publication Date
CN107895386A true CN107895386A (en) 2018-04-10

Family

ID=61804438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711123377.6A Pending CN107895386A (en) 2017-11-14 2017-11-14 Multi-platform joint autonomous target recognition method

Country Status (1)

Country Link
CN (1) CN107895386A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105391970A (en) * 2014-08-27 2016-03-09 Metaio GmbH Method and system for determining at least one image feature in at least one image
WO2016100814A1 (en) * 2014-12-19 2016-06-23 United Technologies Corporation Multi-modal sensor data fusion for perception systems
CN104715254A (en) * 2015-03-17 2015-06-17 东南大学 Generic object recognition method based on fusion of 2D and 3D SIFT features
CN105224942A (en) * 2015-07-09 2016-01-06 华南农业大学 A kind of RGB-D image classification method and system
CN105654732A (en) * 2016-03-03 2016-06-08 上海图甲信息科技有限公司 Road monitoring system and method based on depth image
CN105759834A (en) * 2016-03-09 2016-07-13 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences System and method for actively capturing a low-altitude small unmanned aerial vehicle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
温熙 (Wen Xi): "Research on indoor positioning technology combining Kinect and an inertial navigation system", China Masters' Theses Full-text Database, Information Science and Technology *
蔺敏 (Lin Min): "Multi-platform-based aerial manoeuvring target tracking technology", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110288659A (en) * 2019-05-27 2019-09-27 魏运 Depth imaging and information acquisition method based on binocular vision
CN110288659B (en) * 2019-05-27 2021-07-13 魏运 Depth imaging and information acquisition method based on binocular vision

Similar Documents

Publication Publication Date Title
CN108919838B (en) Binocular vision-based automatic tracking method for power transmission line of unmanned aerial vehicle
Koc-San et al. Automatic citrus tree extraction from UAV images and digital surface models using circular Hough transform
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
CN104049641B (en) Automatic landing method, device and aircraft
CN110163904A (en) Object marking method, control method for movement, device, equipment and storage medium
CN107463890B (en) Forward target detection and tracking method based on a monocular forward-looking camera
CN108596165B (en) Road traffic marking detection method and system based on unmanned plane low latitude Aerial Images
CN109460709A (en) RTG obstacle detection method based on fusion of RGB and D information
CN109583415A (en) A kind of traffic lights detection and recognition methods merged based on laser radar with video camera
CN115661204B (en) Collaborative searching and tracking positioning method for moving target by unmanned aerial vehicle cluster
CN105373135A (en) Method and system for guiding airplane docking and identifying airplane type based on machine vision
CN106650701A (en) Binocular vision-based method and apparatus for detecting barrier in indoor shadow environment
CN110658539B (en) Vehicle positioning method, device, vehicle and computer readable storage medium
CN108986148B (en) Method for realizing multi-intelligent-trolley collaborative search, identification and tracking of specific target group
CN103617328A (en) Airplane three-dimensional attitude computation method
CN107527037A (en) Blue-green algae identification and analysis system based on unmanned aerial vehicle remote sensing data
CN112198899A (en) Road detection method, equipment and storage medium based on unmanned aerial vehicle
CN110378210A (en) Vehicle and licence-plate detection and distance measurement method based on lightweight YOLOv3 and fusion of long- and short-focal-length cameras
CN109697428B (en) Unmanned aerial vehicle identification and positioning system based on RGB _ D and depth convolution network
CN111126116A (en) Unmanned ship river channel garbage identification method and system
CN112037252A (en) Eagle eye vision-based target tracking method and system
CN103884281A (en) Patrol device obstacle detection method based on active structured light
CN112464933A (en) Intelligent recognition method for small dim target of ground-based staring infrared imaging
CN107895386A (en) Multi-platform joint autonomous target recognition method
CN111178743A (en) Method for autonomous cooperative observation and cooperative operation of unmanned aerial vehicle cluster

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20180410)