CN109214463A - Terrain classification method based on co-training - Google Patents

Terrain classification method based on co-training

Info

Publication number
CN109214463A
CN109214463A
Authority
CN
China
Prior art keywords
sample
haptic signal
terrain
image
obtains
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201811119967.6A
Other languages
Chinese (zh)
Inventor
刘阳
刘珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui fruit Intelligent Technology Co., Ltd.
Original Assignee
Hefei Best Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Best Control Technology Co Ltd filed Critical Hefei Best Control Technology Co Ltd
Priority to CN201811119967.6A priority Critical patent/CN109214463A/en
Publication of CN109214463A publication Critical patent/CN109214463A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a terrain classification method based on co-training. Following the co-training strategy, two initial support vector machine models are first trained on the labeled haptic data and vision data, respectively. The unlabeled haptic data and vision data are then fed into the two initial classifiers: the high-confidence predictions of the haptic classifier are used to label the corresponding vision samples, and the high-confidence predictions of the vision classifier are used to label the corresponding haptic samples. The newly labeled samples are added to the corresponding training sets, and the two classifiers are retrained on the enlarged training sets. This procedure is repeated until a preset iteration threshold is reached. During the robot's actual operation, the finally trained vision classifier can be used to identify the terrain type of the ground the robot is about to traverse.

Description

Terrain classification method based on co-training
Technical field
The present invention relates to the field of robotics, and in particular to a terrain classification method based on co-training.
Background technique
Unlike the indoor structured environments a robot may work in, the terrain and ground surfaces in outdoor field scenes are diverse and complex, and have a greater effect on the robot's locomotion performance. Accurate classification of the surrounding terrain is a key factor in whether the robot can move autonomously. Touch and vision are the perceptual modes humans commonly rely on when judging the surrounding terrain, so we attempt to let the robot classify its terrain environment with a similar perceptual model. Touch can only sense the terrain the robot is currently on, whereas vision can perceive terrain over a larger range, including the ground the robot is about to traverse. In the present invention, we use a small amount of labeled haptic and vision data while making full use of a large amount of unlabeled data, training the classifiers by co-training two classifiers. This reduces the dependence of classifier training on labeled data and at the same time enables the robot to judge the terrain type of the ground it is about to traverse.
Summary of the invention
The present invention overcomes the deficiencies of the prior art by solving the training problem of vision- and vibration-based terrain classification when labels are missing.
To solve the above problem, the invention discloses a terrain classification method based on co-training, which specifically includes the following steps:
Step S1: Have the robot travel for a period of time on each kind of terrain in its working environment, while simultaneously collecting the haptic signal time series output by the touch sensors mounted on the robot's feet and the ground image sequence recorded by a camera facing the ground ahead. Split the haptic signal time series collected on each terrain into segments of α sampling points, obtaining the set of haptic signal segments for each terrain. Perform feature extraction on each haptic signal segment to obtain the haptic signal sample set of each terrain; the union of the haptic signal sample sets of all terrains is denoted A = {a_ι}. Perform feature extraction on the images in the ground image sequence of each terrain to obtain the image feature sample set of each terrain; the union over all terrains is denoted B = {b_ι}, where a_ι and b_ι denote the haptic signal sample obtained by the touch sensor and the image feature sample obtained by the camera at the same place, ι = 1, 2, …, N, with N the number of samples in A and in B. Label the samples in these sample sets with terrain numbers 1, 2, …, J, where J is the total number of terrain types, obtaining the labeled sample set {A, B, Ψ}, where Ψ = {ψ_ι} is the terrain-type set corresponding to A and B, and ψ_ι ∈ {1, 2, …, J} is the terrain type corresponding to a_ι and b_ι;
Step S2: Have the robot walk randomly in its working environment and collect the haptic signal time series and the ground image sequence during walking. Split the haptic signal time series into segments of α sampling points to obtain the set of haptic signal segments collected while walking, and perform feature extraction on these segments to obtain the haptic signal sample set, denoted C = {c_κ}. Perform feature extraction on the images in the ground image sequence collected while walking to obtain the image feature sample set, denoted D = {d_κ}, where c_κ and d_κ denote the haptic signal sample obtained by the touch sensor and the image feature sample obtained by the camera at the same place, κ = 1, 2, …, M, with M the number of samples in C and in D. This yields the unlabeled sample set {C, D};
Step S3: Set the training iteration index e = 0. Denote the training sets of the classifier based on haptic signals and of the classifier based on ground images by L^(1) and L^(2), respectively, and set L^(1) = {A, Ψ}, L^(2) = {B, Ψ};
Step S4: Train the classifier based on haptic signals and the classifier based on ground images on the training sets L^(1) and L^(2), respectively, using a support vector machine as the classifier model; denote the trained classifiers C^(1) and C^(2);
Step S5: Randomly select n_c samples from the haptic signal sample set C and input them to classifier C^(1), obtaining a terrain prediction and a prediction confidence for each of the n_c samples. The n_c′ predictions with the highest confidence are used to label the corresponding image feature samples, forming the labeled image feature sample set V^(2). Randomly select n_d samples from the image feature sample set D and input them to classifier C^(2), obtaining a terrain prediction and a prediction confidence for each of the n_d samples. The n_d′ predictions with the highest confidence are used to label the corresponding haptic signal samples, forming the labeled haptic signal sample set V^(1). Add V^(1) and V^(2) to the training sets of C^(1) and C^(2) respectively, i.e. L^(1) ← L^(1) ∪ V^(1) and L^(2) ← L^(2) ∪ V^(2). Then remove the haptic signal samples that make up V^(1) from C, and remove the image feature samples that make up V^(2) from D;
Step S6: Set e ← e + 1. If e is less than the preset iteration threshold T, repeat Steps S4 and S5; otherwise classifier training ends, yielding the final classifiers C^(1) and C^(2). During the robot's actual operation, each acquired ground image is passed through feature extraction and input to classifier C^(2); the output is the robot's prediction of the terrain type it is about to traverse.
Compared with the prior art, the invention has the following advantages:
1. The category of the terrain about to be traversed can be predicted, so the robot can adjust its locomotion strategy in advance and reduce the time needed to adapt to terrain changes;
2. Classifiers are trained separately from two perceptual modes and then co-trained, making full use of unlabeled data to improve classifier accuracy;
3. Because the training process depends only weakly on labeled data, the time required for data labeling is greatly reduced.
Detailed description of the invention
Fig. 1 is a flowchart of the present invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is described in detail below with reference to the accompanying drawing and a specific embodiment.
A terrain classification method based on co-training, as shown in Fig. 1, specifically includes the following steps:
Step S1: Have the robot travel for a period of time on each kind of terrain in its working environment, while simultaneously collecting the haptic signal time series output by the touch sensors mounted on the robot's feet and the ground image sequence recorded by a camera facing the ground ahead. Split the haptic signal time series collected on each terrain into segments of α sampling points, obtaining the set of haptic signal segments for each terrain. Perform feature extraction on each haptic signal segment to obtain the haptic signal sample set of each terrain; the union of the haptic signal sample sets of all terrains is denoted A = {a_ι}. Perform feature extraction on the images in the ground image sequence of each terrain to obtain the image feature sample set of each terrain; the union over all terrains is denoted B = {b_ι}, where a_ι and b_ι denote the haptic signal sample obtained by the touch sensor and the image feature sample obtained by the camera at the same place, ι = 1, 2, …, N, with N the number of samples in A and in B. Label the samples in these sample sets with terrain numbers 1, 2, …, J, where J is the total number of terrain types, obtaining the labeled sample set {A, B, Ψ}, where Ψ = {ψ_ι} is the terrain-type set corresponding to A and B, and ψ_ι ∈ {1, 2, …, J} is the terrain type corresponding to a_ι and b_ι;
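The splitting of a haptic signal time series into segments of α sampling points can be sketched as follows; the function name and the value α = 4 are illustrative, not taken from the patent, and trailing samples that do not fill a whole segment are simply dropped here.

```python
import numpy as np

def segment_series(series, alpha):
    # Split a 1-D haptic signal time series into consecutive,
    # non-overlapping segments of alpha sampling points each.
    series = np.asarray(series)
    n_segments = len(series) // alpha   # trailing remainder is dropped
    return series[: n_segments * alpha].reshape(n_segments, alpha)

segments = segment_series(np.arange(10), alpha=4)
```

Each row of `segments` is one haptic signal segment, ready for feature extraction.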
Step S2: Have the robot walk randomly in its working environment and collect the haptic signal time series and the ground image sequence during walking. Split the haptic signal time series into segments of α sampling points to obtain the set of haptic signal segments collected while walking, and perform feature extraction on these segments to obtain the haptic signal sample set, denoted C = {c_κ}. Perform feature extraction on the images in the ground image sequence collected while walking to obtain the image feature sample set, denoted D = {d_κ}, where c_κ and d_κ denote the haptic signal sample obtained by the touch sensor and the image feature sample obtained by the camera at the same place, κ = 1, 2, …, M, with M the number of samples in C and in D. This yields the unlabeled sample set {C, D};
Step S3: Set the training iteration index e = 0. Denote the training sets of the classifier based on haptic signals and of the classifier based on ground images by L^(1) and L^(2), respectively, and set L^(1) = {A, Ψ}, L^(2) = {B, Ψ};
Step S4: Train the classifier based on haptic signals and the classifier based on ground images on the training sets L^(1) and L^(2), respectively, using a support vector machine as the classifier model; denote the trained classifiers C^(1) and C^(2);
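A minimal sketch of Step S4 with scikit-learn (the patent does not name a library, so this choice is an assumption); synthetic arrays stand in for the labeled sample sets A, B and the label set Ψ. `probability=True` is enabled so that class-probability confidences are available for Step S5.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 8))     # labeled haptic feature samples (synthetic)
B = rng.normal(size=(30, 12))    # paired labeled image feature samples (synthetic)
Psi = np.repeat([1, 2, 3], 10)   # terrain labels, J = 3 terrain types

# One SVM per perceptual mode, each trained on its own view of the labeled data.
clf_haptic = SVC(probability=True).fit(A, Psi)
clf_vision = SVC(probability=True).fit(B, Psi)
```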
Step S5: Randomly select n_c samples from the haptic signal sample set C and input them to classifier C^(1), obtaining a terrain prediction and a prediction confidence for each of the n_c samples. The n_c′ predictions with the highest confidence are used to label the corresponding image feature samples, forming the labeled image feature sample set V^(2). Randomly select n_d samples from the image feature sample set D and input them to classifier C^(2), obtaining a terrain prediction and a prediction confidence for each of the n_d samples. The n_d′ predictions with the highest confidence are used to label the corresponding haptic signal samples, forming the labeled haptic signal sample set V^(1). Add V^(1) and V^(2) to the training sets of C^(1) and C^(2) respectively, i.e. L^(1) ← L^(1) ∪ V^(1) and L^(2) ← L^(2) ∪ V^(2). Then remove the haptic signal samples that make up V^(1) from C, and remove the image feature samples that make up V^(2) from D;
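One half of the label exchange in Step S5 can be sketched as below; the other direction is symmetric. The function name and the use of `predict_proba` as the confidence measure are assumptions — the patent only requires some per-prediction confidence score.

```python
import numpy as np
from sklearn.svm import SVC

def cross_label(clf, X_unlabeled, X_other_view, n_pick, n_keep, rng):
    # Randomly pick n_pick unlabeled samples, predict their terrain labels,
    # keep the n_keep most confident predictions, and return the paired
    # samples from the other view together with those pseudo-labels,
    # plus the picked indices so the caller can prune the unlabeled pools.
    idx = rng.choice(len(X_unlabeled), size=n_pick, replace=False)
    proba = clf.predict_proba(X_unlabeled[idx])
    conf = proba.max(axis=1)
    top = np.argsort(conf)[-n_keep:]                  # most confident picks
    labels = clf.classes_[proba.argmax(axis=1)][top]
    chosen = idx[top]
    return X_other_view[chosen], labels, chosen

rng = np.random.default_rng(1)
C = rng.normal(size=(40, 8))    # unlabeled haptic features (synthetic)
D = rng.normal(size=(40, 12))   # paired unlabeled image features (synthetic)
clf_haptic = SVC(probability=True).fit(rng.normal(size=(30, 8)),
                                       np.repeat([1, 2, 3], 10))

# Haptic classifier pseudo-labels the paired image samples.
V2_samples, V2_labels, used = cross_label(clf_haptic, C, D,
                                          n_pick=10, n_keep=3, rng=rng)
```

The returned image samples and pseudo-labels form V^(2); they would be appended to L^(2) and the corresponding entries removed from D.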
Step S6: Set e ← e + 1. If e is less than the preset iteration threshold T, repeat Steps S4 and S5; otherwise classifier training ends, yielding the final classifiers C^(1) and C^(2). During the robot's actual operation, each acquired ground image is passed through feature extraction and input to classifier C^(2); the output is the robot's prediction of the terrain type it is about to traverse.
In implementing the present invention, various feature extraction methods can be used for the haptic signal segments and ground images. For example, a haptic signal segment can be transformed with the fast Fourier transform (FFT) to obtain the distribution of its amplitude over frequency; arranging these amplitudes into a vector in order of increasing frequency realizes the feature extraction of the haptic signal segment. For a ground image, its color histogram can be extracted and the frequencies of the different colors arranged into a vector, realizing the feature extraction of the ground image.
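The two example extractors described above can be sketched as follows; the bin count and the use of the one-sided FFT are implementation choices, not requirements of the patent.

```python
import numpy as np

def haptic_features(segment):
    # FFT amplitudes of a haptic signal segment, ordered from low to
    # high frequency (one-sided spectrum of a real-valued signal).
    return np.abs(np.fft.rfft(segment))

def image_features(image, bins=8):
    # Per-channel color histogram of an RGB image, normalised to
    # frequencies and concatenated into one feature vector.
    feats = []
    for ch in range(image.shape[2]):
        hist, _ = np.histogram(image[..., ch], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

seg = np.sin(2 * np.pi * 5 * np.arange(64) / 64)  # synthetic segment, alpha = 64
img = np.zeros((16, 16, 3), dtype=np.uint8)       # synthetic all-black image
```

For a segment of α samples the haptic feature vector has α/2 + 1 entries, and the image feature vector has 3 × bins entries.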
The above embodiment is provided only for the purpose of describing the present invention and is not intended to limit the scope of the invention. The scope of the invention is defined by the following claims. Various equivalent replacements and modifications made without departing from the spirit and principles of the present invention shall all fall within the scope of the present invention.

Claims (1)

1. A terrain classification method based on co-training, characterized in that it specifically includes the following steps:
Step S1: have the robot travel for a period of time on each kind of terrain in its working environment, while simultaneously collecting the haptic signal time series output by the touch sensors mounted on the robot's feet and the ground image sequence recorded by a camera facing the ground ahead; split the haptic signal time series collected on each terrain into segments of α sampling points, obtaining the set of haptic signal segments for each terrain; perform feature extraction on each haptic signal segment to obtain the haptic signal sample set of each terrain, the union over all terrains being denoted A = {a_ι}; perform feature extraction on the images in the ground image sequence of each terrain to obtain the image feature sample set of each terrain, the union over all terrains being denoted B = {b_ι}, where a_ι and b_ι denote the haptic signal sample obtained by the touch sensor and the image feature sample obtained by the camera at the same place, ι = 1, 2, …, N, with N the number of samples in A and in B; label the samples in these sample sets with terrain numbers 1, 2, …, J, where J is the total number of terrain types, obtaining the labeled sample set {A, B, Ψ}, where Ψ = {ψ_ι} is the terrain-type set corresponding to A and B and ψ_ι ∈ {1, 2, …, J} is the terrain type corresponding to a_ι and b_ι;
Step S2: have the robot walk randomly in its working environment and collect the haptic signal time series and the ground image sequence during walking; split the haptic signal time series into segments of α sampling points to obtain the set of haptic signal segments collected while walking, and perform feature extraction on these segments to obtain the haptic signal sample set, denoted C = {c_κ}; perform feature extraction on the images in the ground image sequence collected while walking to obtain the image feature sample set, denoted D = {d_κ}, where c_κ and d_κ denote the haptic signal sample obtained by the touch sensor and the image feature sample obtained by the camera at the same place, κ = 1, 2, …, M, with M the number of samples in C and in D; this yields the unlabeled sample set {C, D};
Step S3: set the training iteration index e = 0; denote the training sets of the classifier based on haptic signals and of the classifier based on ground images by L^(1) and L^(2), respectively, and set L^(1) = {A, Ψ}, L^(2) = {B, Ψ};
Step S4: train the classifier based on haptic signals and the classifier based on ground images on the training sets L^(1) and L^(2), respectively, using a support vector machine as the classifier model, the trained classifiers being denoted C^(1) and C^(2);
Step S5: randomly select n_c samples from the haptic signal sample set C and input them to classifier C^(1), obtaining a terrain prediction and a prediction confidence for each of the n_c samples, the n_c′ predictions with the highest confidence being used to label the corresponding image feature samples and form the labeled image feature sample set V^(2); randomly select n_d samples from the image feature sample set D and input them to classifier C^(2), obtaining a terrain prediction and a prediction confidence for each of the n_d samples, the n_d′ predictions with the highest confidence being used to label the corresponding haptic signal samples and form the labeled haptic signal sample set V^(1); add V^(1) and V^(2) to the training sets of C^(1) and C^(2) respectively, i.e. L^(1) ← L^(1) ∪ V^(1) and L^(2) ← L^(2) ∪ V^(2); then remove the haptic signal samples that make up V^(1) from C, and remove the image feature samples that make up V^(2) from D;
Step S6: set e ← e + 1; if e is less than the preset iteration threshold T, repeat Steps S4 and S5; otherwise classifier training ends, yielding the final classifiers C^(1) and C^(2); during the robot's actual operation, each acquired ground image is passed through feature extraction and input to classifier C^(2), and the output is the robot's prediction of the terrain type it is about to traverse.
CN201811119967.6A 2018-09-25 2018-09-25 Terrain classification method based on co-training Withdrawn CN109214463A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811119967.6A CN109214463A (en) 2018-09-25 2018-09-25 Terrain classification method based on co-training

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811119967.6A CN109214463A (en) 2018-09-25 2018-09-25 Terrain classification method based on co-training

Publications (1)

Publication Number Publication Date
CN109214463A true CN109214463A (en) 2019-01-15

Family

ID=64981404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811119967.6A Withdrawn CN109214463A (en) 2018-09-25 2018-09-25 Terrain classification method based on co-training

Country Status (1)

Country Link
CN (1) CN109214463A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147780A (en) * 2019-05-28 2019-08-20 山东大学 Real-time field robot terrain recognition method and system based on hierarchical terrain
CN110705630A (en) * 2019-09-27 2020-01-17 聚时科技(上海)有限公司 Semi-supervised learning type target detection neural network training method, device and application
CN110737339A (en) * 2019-10-28 2020-01-31 福州大学 Visual-tactile interaction model construction method based on deep learning

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102404249A (en) * 2011-11-18 2012-04-04 北京语言大学 Method and device for filtering spam e-mails based on co-training
CN104268557A (en) * 2014-09-15 2015-01-07 西安电子科技大学 Polarization SAR classification method based on cooperative training and depth SVM
CN104834944A (en) * 2015-05-26 2015-08-12 杭州尚青科技有限公司 Cooperative training-based city region air quality estimation method
US20150356341A1 (en) * 2013-01-07 2015-12-10 Bae Systems Plc Fusion of multi-spectral and range image data
CN107644235A (en) * 2017-10-24 2018-01-30 广西师范大学 Image automatic annotation method based on semi-supervised learning
CN107977667A (en) * 2016-10-21 2018-05-01 西安电子科技大学 SAR target discrimination method based on semi-supervised co-training

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102404249A (en) * 2011-11-18 2012-04-04 北京语言大学 Method and device for filtering spam e-mails based on co-training
US20150356341A1 (en) * 2013-01-07 2015-12-10 Bae Systems Plc Fusion of multi-spectral and range image data
CN104268557A (en) * 2014-09-15 2015-01-07 西安电子科技大学 Polarization SAR classification method based on cooperative training and depth SVM
CN104834944A (en) * 2015-05-26 2015-08-12 杭州尚青科技有限公司 Cooperative training-based city region air quality estimation method
CN107977667A (en) * 2016-10-21 2018-05-01 西安电子科技大学 SAR target discrimination method based on semi-supervised co-training
CN107644235A (en) * 2017-10-24 2018-01-30 广西师范大学 Image automatic annotation method based on semi-supervised learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Blum A. et al.: "Combining labeled and unlabeled data with co-training", Proceedings of the 11th Annual Conference on Computational Learning Theory *
Zhou Guangtong et al.: "Fingerprint image segmentation algorithm based on co-training", Journal of Shandong University (Engineering Science) *
Sun Yuchao et al.: "Design of a vision-based terrain classification algorithm for mobile robots using a tape model", Medical & Health Equipment *
Meng Jian: "Research and implementation of motion control methods for quadruped robots in complex terrain environments", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147780A (en) * 2019-05-28 2019-08-20 山东大学 Real-time field robot terrain recognition method and system based on hierarchical terrain
CN110147780B (en) * 2019-05-28 2021-01-01 山东大学 Real-time field robot terrain identification method and system based on hierarchical terrain
CN110705630A (en) * 2019-09-27 2020-01-17 聚时科技(上海)有限公司 Semi-supervised learning type target detection neural network training method, device and application
CN110737339A (en) * 2019-10-28 2020-01-31 福州大学 Visual-tactile interaction model construction method based on deep learning
CN110737339B (en) * 2019-10-28 2021-11-02 福州大学 Visual-tactile interaction model construction method based on deep learning

Similar Documents

Publication Publication Date Title
CN104063712B Vehicle information extraction method and system
Vazquez et al. Virtual and real world adaptation for pedestrian detection
CN107451607B Identity recognition method for typical figures based on deep learning
CN109583483B (en) Target detection method and system based on convolutional neural network
CN104143079B Face attribute recognition method and system
CN107423760A Deep learning object detection method based on pre-segmentation and regression
CN104123529B (en) human hand detection method and system
CN113240691A (en) Medical image segmentation method based on U-shaped network
CN109214463A Terrain classification method based on co-training
CN107103326A Co-saliency detection method based on superpixel clustering
CN105956560A Vehicle model recognition method based on pooled multi-scale deep convolutional features
CN110689000B Vehicle license plate recognition method based on license plate samples generated in complex environments
CN107169985A Moving target detection method based on symmetric inter-frame difference and context update
CN108537269A Weakly interactive object detection deep learning method and system
CN104794479B Chinese text detection method for natural scene images based on local stroke width transform
CN110516633A Lane line detection method and system based on deep learning
CN111414954B (en) Rock image retrieval method and system
CN106960181A Pedestrian attribute recognition method based on RGBD data
CN106612457B Video sequence alignment method and system
CN106910188A Method for detecting airport runways in remote sensing images based on deep learning
CN103903256B (en) Depth estimation method based on relative height-depth clue
CN106156750A Image-based vehicle search method using convolutional neural networks
CN108549901A Fast iterative object detection method based on deep learning
CN104598907A Stroke width map based method for extracting Chinese character data from images
CN112329559A (en) Method for detecting homestead target based on deep convolutional neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
TA01 Transfer of patent application right

Effective date of registration: 20190415

Address after: 235000 Fenghuang Road, Lantau Peak Economic Development Zone, Xiangshan District, Huaibei, Anhui, 7

Applicant after: Anhui fruit Intelligent Technology Co., Ltd.

Address before: 230000 Public Rent Room 110, No. 1 Building, North Export Processing Zone, Binhe District, West Qinglongtan Road, East Feiguang Road, Hefei Economic and Technological Development Zone, Anhui Province

Applicant before: Hefei best control technology Co., Ltd.

TA01 Transfer of patent application right
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20190115

WW01 Invention patent application withdrawn after publication