CN105022990B - A rapid water-surface target detection method for unmanned surface vehicle applications - Google Patents

A rapid water-surface target detection method for unmanned surface vehicle applications

Info

Publication number
CN105022990B
CN105022990B (granted publication of application CN201510368994.7A; also published as CN105022990A)
Authority
CN
China
Prior art keywords
target
super-pixel block
area
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510368994.7A
Other languages
Chinese (zh)
Other versions
CN105022990A (en)
Inventor
肖阳 (Xiao Yang)
曹治国 (Cao Zhiguo)
李畅 (Li Chang)
方智文 (Fang Zhiwen)
朱磊 (Zhu Lei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology
Priority to CN201510368994.7A
Publication of CN105022990A
Application granted
Publication of CN105022990B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a rapid water-surface target detection method for unmanned surface vehicle (USV) applications, belonging to the cross-disciplinary field of digital image processing and control systems. The method obtains object candidate regions through objectness analysis; because the candidate regions may contain false alarms, a saliency analysis is then used to obtain salient regions, and the objectness and saliency results are combined to reject the false alarms and obtain the accurate target locations. The method requires no information about specific target types, so it generalizes well; compared with other existing target detection algorithms it is substantially better both in detection quality and in speed, and it provides important guidance for the automatic obstacle avoidance of USVs.

Description

A rapid water-surface target detection method for unmanned surface vehicle applications
Technical field
The invention belongs to the cross-disciplinary field of digital image processing and control systems, and more particularly relates to a rapid water-surface target detection method for unmanned surface vehicle (USV) applications.
Background technology
In recent years, research and development of unmanned surface vehicles (USVs) has attracted strong interest from major naval powers; representative examples are the US "Spartan" USV and the Israeli "Protector" USV. Whether viewed from a civilian or a military angle, China's demand for USVs is growing steadily, and the need is particularly urgent in fields such as territorial-water patrol and the suppression of piracy and smuggling. In the autonomous navigation of a USV, rapid detection of water-surface targets is the basis of automatic obstacle avoidance. Several commonly used target detection methods are described below:
(1) Target detection based on local feature matching
Local-feature-based target detection usually describes the target and the image to be detected by keypoints and the information in keypoint neighborhoods, or by feature information extracted from local regions.
In 2004, David Lowe published the well-known SIFT (Scale-Invariant Feature Transform) local feature descriptor in IJCV, which adapts well to changes in scale, rotation, affine distortion and viewpoint. The algorithm detects extrema in the Laplacian scale space, built by filtering an image pyramid with differences of Gaussian kernels, as feature points, and describes each point with a 128-dimensional local descriptor, giving it good adaptability and robustness in applications.
(2) Structure-based target detection
The structure of an object reflects target information well. Objects are usually composed of structural parts: a person consists of a head, torso and limbs, a face consists of facial features, and a vehicle consists of a body and wheels. Such structural information allows a target to be detected accurately in complex scenes.
In 2010, Pedro Felzenszwalb published the DPM (Deformable Part Model) in PAMI. DPM divides a target into several parts; at detection time it judges whether an object is the target according to how well each part matches and the positional relationships between parts. DPM is one of the best current detection algorithms and won the VOC detection challenge for several consecutive years.
(3) Target detection based on deep learning
The concept of deep learning derives from research on artificial neural networks. Deep learning combines low-level features to form more abstract high-level representations of attribute categories or features, thereby discovering distributed feature representations of the data. The CNN (convolutional neural network) is currently the most widely used deep learning model.
In 2014, Ross Girshick proposed the R-CNN method at CVPR, combining object candidate regions with a CNN for target detection. R-CNN splits detection into two parts: finding object candidate regions and recognizing targets. It replaces the fully connected layers of the CNN with an SVM classifier and uses the front part of the CNN for feature extraction. R-CNN achieved very good results in the field of target detection and has become one of its important branches.
Although many target detection algorithms exist today, feature matching, DPM and R-CNN all generalize poorly: they are effective for detecting a single target class, such as one particular type of ship. In the autonomous navigation of a USV, however, the target types faced are numerous (pleasure boats, sailing boats, warships, buoys, floating debris, reefs, and so on), and the pose and viewpoint of each target vary greatly, so current detection algorithms cannot adapt well to real natural scenes. Moreover, because USVs are oriented toward practical application, the real-time requirements on the algorithm are relatively high, while the complexity of DPM and R-CNN is too high to meet them.
In summary, although many related detection algorithms exist, their limited generality and high complexity make them difficult to apply to the automatic obstacle avoidance of USVs.
Summary of the invention
In view of the above defects or improvement needs of the prior art, the present invention provides a rapid water-surface target detection method for USV applications, so as to realize the automatic obstacle avoidance and autonomous navigation of a USV. The invention requires no information about specific target types, so it generalizes well; at the same time its complexity is relatively low, so it can detect in real time the various obstacles encountered during autonomous navigation.
The present invention provides a rapid water-surface target detection method for USV applications, comprising the following steps:
Step 1: train an intra-layer classifier and an inter-layer classifier, wherein the intra-layer classifier judges at each layer of the constructed scale space whether the current candidate region is a target region, and the inter-layer classifier is used for weighted scoring across different layers;
Step 2: perform objectness analysis on the original image using the intra-layer and inter-layer classifiers to obtain the final object candidate regions, including the following sub-steps:
(2-1) apply scale transformations to the original image to build a pyramid model, obtaining images of different scales, denoted L1, L2, ..., LM, where M is the number of layers of the constructed scale space;
(2-2) in each layer image Li, extract a fixed-size region at each position using a sliding window, compute the NG feature of the region, and compute the region's score with the intra-layer classifier, obtaining the object candidate regions of each layer;
(2-3) use the inter-layer classifier to assign weighted scores to the object candidate regions obtained at different layers, and sort them by the weighted scores;
(2-4) apply non-maximum suppression to the object candidate regions to obtain the final object candidate regions;
Step 3: train a random forest regressor and multi-scale fusion weights, wherein the random forest regressor is used to compute the saliency value of each super-pixel block after segmentation, and the multi-scale fusion weights are used to fuse the saliency maps obtained at different scales;
Step 4: perform saliency analysis on the original image using the random forest regressor and the multi-scale fusion weights to obtain the final saliency map;
Step 5: according to the final object candidate regions and the final saliency map, reject the candidate regions that are false alarms, finally obtaining the accurate locations of the targets.
In general, compared with the prior art, the above technical scheme conceived by the present invention has the following beneficial effects:
The present invention can rapidly detect the various obstacles encountered during the autonomous navigation of a USV. By processing the images captured by the camera on board, it perceives the surrounding environment in real time and realizes autonomous navigation. For each image acquired by the camera, object candidate regions are obtained through objectness analysis, and saliency knowledge is combined to reject target false alarms. Compared with other existing detection algorithms, the invention is substantially better both in detection quality and in speed, and provides important guidance for the automatic obstacle avoidance of USVs.
Description of the drawings
Fig. 1 is the flow chart of the rapid water-surface target detection method for USV applications according to the present invention;
Fig. 2 is the flow chart of the objectness analysis in the detection phase of the present invention;
Fig. 3 shows a result obtained after processing by the objectness algorithm of the present invention;
Fig. 4 is the flow chart of the saliency analysis in the detection phase of the present invention;
Fig. 5 shows a result of the saliency analysis in the detection phase of the present invention.
Detailed description of the embodiments
To make the purpose, technical scheme and advantages of the present invention clearer, the invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and do not limit it. In addition, the technical features involved in the embodiments described below can be combined with each other as long as they do not conflict.
The present invention is divided into three parts. First, an objectness model is trained and used to perform objectness analysis on the image to be detected, obtaining object candidate regions; these candidates may contain some false alarms. Then, a saliency model is trained and used to perform saliency analysis on the image, obtaining a saliency map. Finally, objectness and saliency are combined to reject the false alarms.
Fig. 1 shows the flow chart of the rapid water-surface target detection method for USV applications according to the present invention, which specifically includes the following steps:
Step 1: train the objectness model. This training stage has two goals: training the intra-layer classifier and training the inter-layer classifier. The intra-layer classifier judges at each layer whether the current candidate region is a target region; the inter-layer classifier is used for weighted scoring across layers. In this embodiment, PASCAL VOC 2007 is used as the training set; it contains 10000 images, of which 5000 are used for training and 5000 for testing. Step 1 specifically includes the following sub-steps:
(1-1) Train the intra-layer classifier. Directly extract the target regions from the training samples and compress each into a block of fixed size to serve as a positive sample; in this embodiment, blocks of 8 × 8 pixels are used uniformly. Randomly sample candidate blocks from the training images; a candidate block whose overlap with the target region is below a fixed threshold serves as a negative sample. In this embodiment the fixed threshold is 50%;
(1-2) Train the inter-layer classifier. Rescale the training samples to obtain images of different layers, randomly take 8 × 8 blocks and map them back to the original image according to the compression factor; a block whose overlap with the target region exceeds 50% serves as a positive sample, otherwise it serves as a negative sample.
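As an illustrative sketch only (not part of the patented embodiment), the positive/negative sampling rule of steps (1-1) and (1-2) can be expressed as follows; intersection-over-union is assumed as the overlap measure, since the patent does not name one:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def label_block(block, targets, thresh=0.5):
    """+1 (positive sample) if the block overlaps some target region
    above the 50% threshold of the embodiment, -1 (negative) otherwise."""
    best = max((iou(block, t) for t in targets), default=0.0)
    return 1 if best >= thresh else -1
```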
Step 2: perform objectness analysis on the original image using the objectness model trained in Step 1. Objectness analysis is a method for quickly obtaining object candidate regions. Fig. 2 shows the flow chart of the objectness analysis in the detection phase, which specifically includes the following sub-steps:
(2-1) Apply scale transformations to the original image to build a pyramid model, obtaining images of different sizes, denoted L1, L2, ..., LM, where M is the number of layers of the constructed scale space; in this embodiment M is 33;
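A minimal sketch of the pyramid construction of step (2-1); nearest-neighbor resampling and the choice of smallest scale are assumptions made here for brevity (only M = 33 is stated in the embodiment):

```python
import numpy as np

def build_pyramid(img, num_layers=33, min_scale=0.1):
    """Resize a 2-D image to a range of scales between 1.0 and
    min_scale, using nearest-neighbor index resampling."""
    h, w = img.shape[:2]
    layers = []
    for s in np.linspace(1.0, min_scale, num_layers):
        nh, nw = max(1, int(h * s)), max(1, int(w * s))
        ys = (np.arange(nh) * h / nh).astype(int)  # source rows
        xs = (np.arange(nw) * w / nw).astype(int)  # source columns
        layers.append(img[ys][:, xs])
    return layers
```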
(2-2) In each layer image Li (i = 1, 2, ..., M), extract an 8 × 8 region at each position using a sliding window, compute the normalized gradients (Normed Gradients, hereinafter NG) feature of the region, and compute the region's score with the intra-layer classifier. The score measures how likely the position is to be an object candidate region, and in this way the object candidate regions of each layer are obtained. In this embodiment, when computing the NG feature, the horizontal gradient maximum over all channels is taken as gx and the vertical gradient maximum as gy, and the feature value of each point is computed by the formula min(|gx| + |gy|, 255);
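The NG feature of step (2-2) can be sketched as below; the forward-difference gradient operator and the order of the absolute-value and channel-maximum operations are assumptions, since the patent only specifies min(|gx| + |gy|, 255):

```python
import numpy as np

def ng_feature(img):
    """Normed-gradients feature map for an H x W x C uint8 image:
    per-channel forward-difference gradients, channel-wise maxima
    of |gx| and |gy|, then min(|gx| + |gy|, 255) at each pixel."""
    f = img.astype(np.int32)
    gx = np.zeros(f.shape, np.int32)
    gy = np.zeros(f.shape, np.int32)
    gx[:, :-1] = f[:, 1:] - f[:, :-1]   # horizontal difference
    gy[:-1, :] = f[1:, :] - f[:-1, :]   # vertical difference
    gx = np.abs(gx).max(axis=2)         # max over channels
    gy = np.abs(gy).max(axis=2)
    return np.minimum(gx + gy, 255).astype(np.uint8)
```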
(2-3) After step (2-2), each candidate position of each layer image Li has a score that measures how likely it is to be an object candidate region. Since many layers are constructed in the present invention, the inter-layer classifier is used to assign a weighted score to the object candidate regions obtained at different layers, and the candidates are sorted by the weighted score; the higher the weighted score, the more likely the region contains a target;
(2-4) After step (2-3), in order to reduce the heavy overlap between candidate regions, non-maximum suppression is applied to the object candidate regions, obtaining the final object candidate regions. Fig. 3 shows a result obtained by the objectness algorithm; the left side is the original image and the right side is the result. Comparing the two, the objectness algorithm of the present invention obtains the object candidate regions well, but some false alarms remain, so the candidate regions must be processed further.
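The suppression of step (2-4) can be sketched as standard greedy non-maximum suppression over the weighted scores; the IoU threshold value is an assumed parameter not given in the patent:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop remaining boxes overlapping it above iou_thresh, repeat.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep
```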
Step 3: train the saliency model. This training stage has two goals: training the random forest regressor and training the multi-scale fusion weights. The original image is first segmented at multiple scales with the graph-based partitioning algorithm (Graph-based Segmentation). Many independent regions are obtained at each scale; after segmentation they are uniformly referred to here as super-pixel blocks. The random forest regressor computes the saliency value of each super-pixel block after segmentation; the multi-scale fusion weights fuse the saliency maps obtained at different scales into the final saliency map. In this embodiment, MSRA-B is used as the training set; it contains 5000 images, each with a corresponding hand-labeled ground-truth map. Step 3 specifically includes the following sub-steps:
(3-1) Train the random forest regressor. Using the classical Graph-based Segmentation method, perform N-layer multi-scale segmentation of the original image; in this embodiment N is 15. At each scale, for each super-pixel block R obtained after segmentation, find the corresponding region H in the hand-labeled ground-truth map. If the labeled pixels contained in region H all belong to the foreground/background, label R as foreground/background; otherwise discard R. Here foreground refers to the target region and background refers to the non-target region. A standard random forest regressor is learned from the labeled training samples;
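The foreground/background labeling rule of step (3-1) (label a super-pixel block only when all of its ground-truth pixels agree, discard mixed blocks) can be sketched as:

```python
import numpy as np

def label_superpixels(seg, gt):
    """seg: H x W array of super-pixel ids; gt: H x W binary ground
    truth (1 = foreground). Returns {id: 1 or 0} for pure blocks;
    mixed blocks are discarded, as in step (3-1)."""
    labels = {}
    for sp in np.unique(seg):
        pix = gt[seg == sp]
        if pix.all():
            labels[sp] = 1      # pure foreground block
        elif not pix.any():
            labels[sp] = 0      # pure background block
        # mixed blocks are skipped
    return labels
```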
(3-2) Train the multi-scale fusion weights. Let {S1, S2, ..., SN} be the multi-scale saliency maps obtained for each training sample and G the corresponding hand-labeled ground truth. The multi-scale linear fusion weights wn are trained by least squares, where argmin denotes taking the wn that minimize the squared error:

{wn} = argmin || w1·S1 + w2·S2 + ... + wN·SN − G ||²
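The least-squares fit of step (3-2) can be sketched with an ordinary linear solve; stacking the flattened saliency maps as columns of a design matrix is an implementation choice, not dictated by the patent:

```python
import numpy as np

def fit_fusion_weights(sal_maps, gt):
    """Least-squares fit of linear fusion weights w so that
    sum_n w[n] * sal_maps[n] approximates the ground truth gt.
    sal_maps: list of N arrays of shape H x W; gt: H x W array."""
    A = np.stack([s.ravel() for s in sal_maps], axis=1)  # pixels x N
    w, *_ = np.linalg.lstsq(A, gt.ravel(), rcond=None)
    return w
```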
Step 4: perform saliency analysis on the original image using the saliency model trained in Step 3. Saliency starts from human visual cognition and is a visual model built from physiology and psychology; it therefore reflects well the attention-attracting information in a scene. Fig. 4 shows the flow chart of the saliency analysis in the detection phase, which specifically includes the following sub-steps:
(4-1) Using the classical Graph-based Segmentation method, perform N-layer multi-scale segmentation of the original image; let the resulting segmentations be T1, T2, ..., TN, where each segmented layer image Ti consists of several independent super-pixel blocks;
(4-2) For each super-pixel block in each segmented layer image Ti, compute three categories of features: region attribute features, region contrast features, and region-versus-background contrast features. For the region attribute features, compute the color, texture and histogram features of the block in different color spaces (RGB, LAB, HSV). For the region contrast features, compute the contrast between the block and all its adjacent blocks, using the chi-square distance between histogram features and the absolute difference for non-histogram features. For the region-versus-background contrast features, take the border regions of the image as the background and compute the contrast between the block and the background with the same computation as the region contrast features. Finally concatenate the three categories of features as the feature of the block;
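The contrast computations of step (4-2) can be sketched as below; averaging the per-neighbor distances into one value is an assumption, since the patent does not state how the per-neighbor contrasts are aggregated:

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def region_contrast(feat, neighbor_feats, is_hist):
    """Contrast of one block against its neighbors: chi-square
    distance for histogram features, absolute difference otherwise,
    averaged over the neighbors (assumed aggregation)."""
    if is_hist:
        d = [chi_square(feat, nf) for nf in neighbor_feats]
    else:
        d = [abs(feat - nf) for nf in neighbor_feats]
    return sum(d) / len(d)
```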
(4-3) Extract the corresponding features according to step (4-2) and perform regression with the random forest regressor trained in Step 3 to obtain the saliency value of each super-pixel block in each segmented layer image Ti; this finally yields the saliency map Ci of each layer image Ti;
(4-4) Linearly weight the obtained multi-scale saliency maps {C1, C2, ..., CN} with the multi-scale fusion weights trained in Step 3 to obtain the final saliency map. Fig. 5 shows a result of the saliency analysis in the detection phase; the left side is the original image and the right side is the result. Comparing the two, saliency captures the salient regions of the image well. In the result, the brighter a location, the stronger its saliency and the more likely it is to be a target; the object candidate regions can therefore be analyzed further through saliency to obtain the accurate target locations.
Step 5: After Step 2, the object candidate regions are available; after Step 4, the saliency map of the targets is available. Because the candidate regions obtained in Step 2 contain much false-alarm information, each candidate region is further verified with the saliency map obtained in Step 4; the candidate regions that are false alarms are rejected, and the accurate locations of the targets are finally obtained.
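Step 5's rejection can be sketched by scoring each candidate box with the mean saliency inside it; the mean-saliency criterion and the threshold value are assumptions, as the patent only states that the saliency map verifies each candidate:

```python
import numpy as np

def reject_false_alarms(boxes, saliency, thresh=0.3):
    """Keep only candidate boxes whose mean saliency exceeds a
    threshold; boxes are (x1, y1, x2, y2) in pixel coordinates."""
    kept = []
    for (x1, y1, x2, y2) in boxes:
        if saliency[y1:y2, x1:x2].mean() > thresh:
            kept.append((x1, y1, x2, y2))
    return kept
```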
Those skilled in the art will readily understand that the above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement and improvement made within the spirit and principles of the present invention shall be included within its scope of protection.

Claims (6)

1. A rapid water-surface target detection method for unmanned surface vehicle applications, characterized by comprising:
Step 1: train an intra-layer classifier and an inter-layer classifier, wherein the intra-layer classifier judges at each layer of the constructed scale space whether the current candidate region is a target region, and the inter-layer classifier is used for weighted scoring across different layers;
Step 2: perform objectness analysis on the original image using the intra-layer and inter-layer classifiers to obtain the final object candidate regions, including the following sub-steps:
(2-1) apply scale transformations to the original image to build a pyramid model, obtaining images of different scales, denoted L1, L2, ..., LM, where M is the number of layers of the constructed scale space;
(2-2) in each layer image Li, extract a fixed-size region at each position using a sliding window, compute the NG feature of the region, and compute the region's score with the intra-layer classifier, obtaining the object candidate regions of each layer;
(2-3) use the inter-layer classifier to assign weighted scores to the object candidate regions obtained at different layers, and sort them by the weighted scores;
(2-4) apply non-maximum suppression to the object candidate regions to obtain the final object candidate regions;
Step 3: train a random forest regressor and multi-scale fusion weights, wherein the random forest regressor is used to compute the saliency value of each super-pixel block after segmentation, and the multi-scale fusion weights are used to fuse the saliency maps obtained at different scales; Step 3 includes the following sub-steps:
(3-1) train the random forest regressor: perform multi-scale segmentation of the original image with the graph-based partitioning algorithm; at each scale, for each super-pixel block obtained after segmentation, find the corresponding region in the hand-labeled ground-truth map; if the labeled pixels contained in the corresponding region all belong to the foreground/background, label the super-pixel block as foreground/background, otherwise discard it, wherein the foreground refers to the target region and the background refers to the non-target region; learn a standard random forest regressor from the labeled training samples;
(3-2) train the multi-scale fusion weights wn by the following formula:

{wn} = argmin || w1·S1 + w2·S2 + ... + wN·SN − G ||²

wherein {S1, S2, ..., SN} denotes the multi-scale saliency maps obtained for each training sample; G denotes the corresponding hand-labeled ground truth; and argmin denotes taking the multi-scale fusion weights that minimize the squared error;
Step 4: perform saliency analysis on the original image using the random forest regressor and the multi-scale fusion weights to obtain the final saliency map;
Step 5: according to the final object candidate regions and the final saliency map, reject the candidate regions that are false alarms, finally obtaining the accurate locations of the targets.
2. The method of claim 1, characterized in that Step 1 includes the following sub-steps:
(1-1) train the intra-layer classifier: directly extract the target regions from the training samples and compress each into a block of fixed size as a positive sample; randomly sample candidate blocks from the training samples, a candidate block whose overlap with the target region is below a fixed threshold serving as a negative sample;
(1-2) train the inter-layer classifier: rescale the training samples to obtain images of different layers, randomly take blocks of the fixed size and map them back to the original image according to the compression factor; a block whose overlap with the target region exceeds the fixed threshold serves as a positive sample, otherwise it serves as a negative sample.
3. The method of claim 1, characterized in that, when computing the NG feature in step (2-2), the horizontal gradient maximum over all channels is taken as gx and the vertical gradient maximum as gy, and the feature value of each point is computed by the formula min(|gx| + |gy|, 255).
4. The method of any one of claims 1 to 3, characterized in that Step 4 includes the following sub-steps:
(4-1) perform N-layer multi-scale segmentation of the original image with the graph-based partitioning algorithm, the resulting segmentations being denoted T1, T2, ..., TN, wherein each segmented layer image Ti consists of several independent super-pixel blocks;
(4-2) for each super-pixel block in each segmented layer image Ti, compute three categories of features: region attribute features, region contrast features, and region-versus-background contrast features;
(4-3) extract the corresponding features according to step (4-2) and perform regression with the random forest regressor to obtain the saliency value of each super-pixel block in each segmented layer image Ti, finally obtaining the saliency map Ci of each layer image Ti;
(4-4) linearly weight the obtained multi-scale saliency maps {C1, C2, ..., CN} with the multi-scale fusion weights to obtain the final saliency map.
5. The method of claim 4, characterized in that, in step (4-2), for the region attribute features, the color, texture and histogram features of the super-pixel block in different color spaces are computed; for the region contrast features, the contrast between the super-pixel block and all its adjacent blocks is computed, using the chi-square distance between histogram features and the absolute difference for non-histogram features; for the region-versus-background contrast features, the border regions of the image are taken as the background and the contrast between the super-pixel block and the background is computed with the same computation as the region contrast features.
6. The method of claim 4, characterized in that, in step (4-2), the three categories of features are concatenated as the feature of the super-pixel block.
CN201510368994.7A 2015-06-29 2015-06-29 A rapid water-surface target detection method for unmanned surface vehicle applications Active CN105022990B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510368994.7A CN105022990B (en) 2015-06-29 2015-06-29 A rapid water-surface target detection method for unmanned surface vehicle applications

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510368994.7A CN105022990B (en) 2015-06-29 2015-06-29 A rapid water-surface target detection method for unmanned surface vehicle applications

Publications (2)

Publication Number Publication Date
CN105022990A (en) 2015-11-04
CN105022990B (en) 2018-09-21

Family

ID=54412945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510368994.7A Active CN105022990B (en) 2015-06-29 2015-06-29 A rapid water-surface target detection method for unmanned surface vehicle applications

Country Status (1)

Country Link
CN (1) CN105022990B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170206426A1 (en) * 2016-01-15 2017-07-20 Ford Global Technologies, Llc Pedestrian Detection With Saliency Maps
CN106444759A * 2016-09-29 2017-02-22 浙江嘉蓝海洋电子有限公司 Automatic return-voyage method and automatic return-voyage system for an unmanned boat
CN106530324A * 2016-10-21 2017-03-22 华中师范大学 Video object tracking method simulating visual-cortex mechanisms
CN108303747B (en) * 2017-01-12 2023-03-07 清华大学 Inspection apparatus and method of detecting a gun
CN106845408B (en) * 2017-01-21 2023-09-01 浙江联运知慧科技有限公司 Street garbage identification method under complex environment
CN107506766B (en) * 2017-08-25 2020-03-17 东软医疗***股份有限公司 Image segmentation method and device
CN107844750B (en) * 2017-10-19 2020-05-19 华中科技大学 Water surface panoramic image target detection and identification method
CN108121991B (en) * 2018-01-06 2022-11-15 北京航空航天大学 Deep learning ship target detection method based on edge candidate region extraction
CN108399430B (en) * 2018-02-28 2019-09-27 电子科技大学 A kind of SAR image Ship Target Detection method based on super-pixel and random forest
CN108681691A (en) * 2018-04-09 2018-10-19 上海大学 A kind of marine ships and light boats rapid detection method based on unmanned water surface ship
CN108765458B (en) * 2018-04-16 2022-07-12 上海大学 Sea surface target scale self-adaptive tracking method of high-sea-condition unmanned ship based on correlation filtering
CN109117838B (en) * 2018-08-08 2021-10-12 哈尔滨工业大学 Target detection method and device applied to unmanned ship sensing system
CN109242884B (en) * 2018-08-14 2020-11-20 西安电子科技大学 Remote sensing video target tracking method based on JCFNet network
CN109544568A (en) * 2018-11-30 2019-03-29 长沙理工大学 Destination image partition method, device and equipment
CN110174895A (en) * 2019-05-31 2019-08-27 中国船舶重工集团公司第七0七研究所 A kind of verification of unmanned boat Decision of Collision Avoidance and modification method
CN110188474A (en) * 2019-05-31 2019-08-30 中国船舶重工集团公司第七0七研究所 Decision of Collision Avoidance method based on unmanned surface vehicle
CN110118561A (en) * 2019-06-10 2019-08-13 华东师范大学 A kind of unmanned boat paths planning method and unmanned boat
CN112417931B (en) * 2019-08-23 2024-01-26 河海大学常州校区 Method for detecting and classifying water surface objects based on visual saliency
CN111429435A (en) * 2020-03-27 2020-07-17 王程 Rapid and accurate cloud content detection method for remote sensing digital image
ES2912040A1 (en) * 2020-11-24 2022-05-24 Iglesias Rodrigo Garcia Delivery system of a consumer good (Machine-translation by Google Translate, not legally binding)
CN113177358B (en) * 2021-04-30 2022-06-03 燕山大学 Soft measurement method for cement quality based on fuzzy fine-grained feature extraction

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271514A (en) * 2007-03-21 2008-09-24 株式会社理光 Image detection method and device for fast object detection and objective output
CN103729848A (en) * 2013-12-28 2014-04-16 北京工业大学 Hyperspectral remote sensing image small target detection method based on spectrum saliency
CN104392228A (en) * 2014-12-19 2015-03-04 中国人民解放军国防科学技术大学 Unmanned aerial vehicle image target class detection method based on conditional random field model

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8773535B2 (en) * 2010-12-08 2014-07-08 GM Global Technology Operations LLC Adaptation for clear path detection using reliable local model updating

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Salient region detection: an integration approach based on image pyramid and region property; Lingfu Kong et al.; IET Computer Vision; 2015-02-05; Vol. 9, No. 1, pp. 85-97 *
Airport detection in remote sensing images combining visual saliency and spatial pyramid; Guo Lei et al.; Journal of Northwestern Polytechnical University; 2014-02-15; Vol. 32, No. 1, pp. 98-101 *

Similar Documents

Publication Publication Date Title
CN105022990B (en) A kind of waterborne target rapid detection method based on unmanned boat application
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN109740460B (en) Optical remote sensing image ship detection method based on depth residual error dense network
CN104392228B (en) Unmanned plane image object class detection method based on conditional random field models
CN107463890B (en) A kind of Foregut fermenters and tracking based on monocular forward sight camera
CN108121991A (en) A kind of deep learning Ship Target Detection method based on the extraction of edge candidate region
CN105046206B (en) Based on the pedestrian detection method and device for moving prior information in video
CN104537689B (en) Method for tracking target based on local contrast conspicuousness union feature
CN103400156A (en) CFAR (Constant False Alarm Rate) and sparse representation-based high-resolution SAR (Synthetic Aperture Radar) image ship detection method
CN106803070A (en) A kind of port area Ship Target change detecting method based on remote sensing images
CN112381870B (en) Binocular vision-based ship identification and navigational speed measurement system and method
CN110008900B (en) Method for extracting candidate target from visible light remote sensing image from region to target
CN102156881B (en) Method for detecting salvage target based on multi-scale image phase information
CN110472500A (en) A kind of water surface sensation target fast algorithm of detecting based on high speed unmanned boat
CN104036250A (en) Video pedestrian detecting and tracking method
CN106203439B (en) The homing vector landing concept of unmanned plane based on marker multiple features fusion
CN108073940B (en) Method for detecting 3D target example object in unstructured environment
CN107045630B (en) RGBD-based pedestrian detection and identity recognition method and system
CN108681691A (en) A kind of marine ships and light boats rapid detection method based on unmanned water surface ship
CN105354547A (en) Pedestrian detection method in combination of texture and color features
CN103810487A (en) Method and system for target detection and identification of aerial ocean images
Hu et al. Fast face detection based on skin color segmentation using single chrominance Cr
Sravanthi et al. Efficient image-based object detection for floating weed collection with low cost unmanned floating vehicles
CN110334703B (en) Ship detection and identification method in day and night image
Wang et al. Deep learning-based human activity analysis for aerial images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant