CN108846446A - Object detection method based on a multi-path dense feature fusion fully convolutional network - Google Patents

Object detection method based on a multi-path dense feature fusion fully convolutional network

Info

Publication number
CN108846446A
CN108846446A (application CN201810721733.2A)
Authority
CN
China
Prior art keywords
target
feature
object detection
dense feature
convolutional network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810721733.2A
Other languages
Chinese (zh)
Other versions
CN108846446B (en)
Inventor
黄守志
李小雨
饶丰
姜竹青
门爱东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Academy of Broadcasting Science of the State Administration of Press, Publication, Radio, Film and Television (SAPPRFT)
Beijing University of Posts and Telecommunications
Academy of Broadcasting Science of SAPPRFT
Original Assignee
Academy of Broadcasting Science of the State Administration of Press, Publication, Radio, Film and Television (SAPPRFT)
Beijing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by the Academy of Broadcasting Science of SAPPRFT and Beijing University of Posts and Telecommunications
Priority to CN201810721733.2A priority Critical patent/CN108846446B/en
Publication of CN108846446A publication Critical patent/CN108846446A/en
Application granted granted Critical
Publication of CN108846446B publication Critical patent/CN108846446B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to an object detection method based on a multi-path dense feature fusion fully convolutional network. A deep convolutional neural network extracts hierarchical multi-scale feature maps carrying different feature information; bottom-up bypass connections perform bottom-up feature fusion; top-down dense bypass connections perform top-down dense feature fusion; target candidate boxes of different sizes and aspect ratios are constructed; a binary classifier reduces the easy background samples among the target candidate boxes, and a multi-task loss function jointly optimizes the binary classifier, the multi-class classifier, and the bounding-box regressor. The invention extracts image features with a deep convolutional neural network, improves feature representation with the multi-path dense feature fusion method, constructs a fully convolutional network for object detection, and proposes strategies for reducing redundant easy background samples and for multi-task joint optimization, improving the detection accuracy of the algorithm and obtaining good object detection results.

Description

Object detection method based on a multi-path dense feature fusion fully convolutional network
Technical field
The invention belongs to the field of computer-vision object detection, and in particular concerns an object detection method based on a multi-path dense feature fusion fully convolutional network.
Background technique
Humans derive more than 80% of their information about the material world through vision. Images and video, as vivid depictions of objective things, are important carriers of multimedia information. Object detection, one of the core research topics of computer vision, analyzes extracted target features to obtain the class and location of targets. It draws on cutting-edge techniques from image processing, pattern recognition, artificial intelligence, and computer vision, and is widely applied in intelligent transportation systems, intelligent surveillance, human-computer interaction, autonomous driving, image retrieval, intelligent robotics, and many other areas.
Object detection extracts and analyzes target features in images or video, identifies the targets, represents them as bounding boxes, and thereby supports follow-up tasks such as tracking and understanding. As a foundational task of computer vision, its performance directly affects higher-level tasks such as target tracking, action recognition, and behavior understanding. However, targets in images usually appear at multiple scales and in varied forms, and are subject to natural environmental effects such as illumination changes, occlusion, and complex backgrounds, so object detection based on computer vision remains a significant challenge requiring further study.
Before deep learning became widespread in computer vision, traditional object detection methods generally relied on complex hand-crafted features, such as the scale-invariant feature transform (SIFT) and the histogram of oriented gradients (HoG), to capture target-related information from the raw input and realize detection. Because of the morphological diversity of targets, illumination changes, and complex backgrounds, designing a robust feature by hand is difficult, and traditional features do not adapt well. Traditional detection models depend heavily on the specific detection task, and their separation of feature extraction from classifier training prevents them from obtaining feature descriptions that better match target properties. Thanks to dramatically faster computing hardware, the advent of large datasets, and the development of deep learning, detection performance has improved markedly. Popular object detection algorithms now all use convolutional neural networks for feature extraction. In 2012, researchers at the University of Toronto used a convolutional neural network (CNN) to win both the object detection and the image classification tracks of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), with an error rate far below conventional machine-learning methods, after which convolutional neural networks came into wide use in computer vision.
In 2014, a team at the University of California, Berkeley combined region proposals with convolutional neural networks to propose R-CNN, which markedly improved detection precision and became the canonical scheme for region-proposal-based detection; research on detection algorithms in the following years was based mainly on convolutional neural networks. Faster R-CNN further proposed sharing convolutional features between the region proposal network and the detection network, removing the bottleneck of generating candidate regions. In 2017, FAIR proposed FPN, which exploits the inherent layered features of a deep convolutional network to build a feature pyramid for multi-scale detection. In 2016, a University of Washington team proposed YOLO, a new detection method that treats the whole detection pipeline as a regression problem: a single, simple end-to-end network maps the original image directly to target locations and classes. YOLO is fast but less precise than region-proposal methods. YOLO uses only the topmost features for recognition; the subsequently proposed SSD instead predicts from different layers of the convolutional network to address multi-scale detection. DSSD, proposed in 2017, uses deconvolution to introduce additional contextual information and improve detection precision.
In conclusion although the development that algorithm of target detection have passed through decades has been achieved for good effect, convolution The appearance of neural network is even more target detection precision improvement is very much, but many problems or to be improved, for example, how Target signature information is more effectively enriched, simple background sample of redundancy etc. how is reduced.
Summary of the invention
It is an object of the invention to overcome the deficiencies of the prior art and to propose a well-designed, high-accuracy object detection method based on a multi-path dense feature fusion fully convolutional network.
The invention solves its technical problem through the following technical solution:
An object detection method based on a multi-path dense feature fusion fully convolutional network, comprising the following steps:
Step 1: extract hierarchical multi-scale feature maps carrying different feature information using a deep convolutional neural network;
Step 2: perform bottom-up feature fusion on the hierarchical multi-scale features generated in step 1, using bottom-up bypass connections based on pooling;
Step 3: perform top-down dense feature fusion on the hierarchical multi-scale features generated in step 2, using top-down dense bypass connections based on deconvolution;
Step 4: construct target candidate boxes of different sizes and aspect ratios on the multi-scale feature maps generated in step 3;
Step 5: reduce the easy background samples among the target candidate boxes using a binary classifier, and jointly optimize the binary classifier, the multi-class classifier, and the bounding-box regressor with a multi-task loss function, realizing image classification and target localization.
The concrete implementation of step 1 comprises the following steps:
(1) Construct a fully convolutional network for feature extraction: remove the fully connected layers from a convolutional neural network originally used for image classification, and add two new convolutional layers;
(2) Input pictures with ground-truth target boxes into the convolutional neural network, generating the corresponding hierarchical multi-scale feature maps carrying different feature information.
The concrete implementation of step 2 comprises the following steps:
(1) Add a 3×3×512 convolutional layer on top of each initial layered feature, so that the channel dimensions of the layered features are consistent;
(2) Add batch normalization layers to weaken the influence of differing layer distributions and accelerate network training;
(3) First add a max-pooling layer to the shallowest feature so that its spatial dimension is halved, then superimpose it element-wise on the next-higher-level feature via a bypass connection to realize feature fusion;
(4) Iterate step (3) bottom-up to realize bottom-up feature fusion.
The concrete implementation of step 3 comprises the following steps:
(1) Add a deconvolution layer to each higher-level feature so that its spatial dimension increases to match that of the adjacent lower layer;
(2) Superimpose the deconvolved feature map element-wise on the adjacent lower-layer feature;
(3) Fuse all higher-level features using the dense bypass connection scheme.
Step 4 is implemented according to the following principles:
(1) Construct smaller target candidate boxes on shallow feature maps and larger target candidate boxes on high-level feature maps;
(2) Construct target candidate boxes with a variety of different aspect ratios.
The concrete implementation of step 5 comprises the following steps:
(1) Construct a binary classifier that scores whether a candidate box contains a target, for hard-sample mining;
(2) Jointly optimize and train the binary classifier, the multi-class classifier, and the bounding-box regressor with a multi-task loss function, realizing image classification and target localization.
The advantages and positive effects of the invention are:
The invention applies a multi-path dense feature fusion method to a deep convolutional neural network, enriching feature expressiveness through forward and backward dense connections, and then uses the multi-layer multi-scale features for multi-scale target detection; it also generates a binary classifier that scores possible target positions, realizing hard-sample mining. Exploiting the powerful representational ability of deep convolutional neural networks, the invention constructs a multi-path dense feature fusion fully convolutional network for object detection and proposes a method for reducing redundant easy background samples, improving the detection accuracy of the algorithm and obtaining good object detection results.
Detailed description of the invention
Fig. 1 is the framework of the bottom-up feature fusion method proposed by the invention;
Fig. 2 is the framework of the top-down multi-path dense feature fusion method proposed by the invention;
Fig. 3 is the overall object detection structure proposed by the invention.
Specific embodiment
Embodiments of the invention are further described below in conjunction with the drawings.
An object detection method based on a multi-path dense feature fusion fully convolutional network, as shown in Fig. 3, comprises the following steps:
Step 1: extract hierarchical multi-scale feature maps carrying different feature information using a convolutional neural network.
The concrete implementation of this step is as follows:
(1) Construct a fully convolutional network for feature extraction: remove the fully connected layers from a convolutional neural network originally used for image classification, and add two new convolutional layers; as the layers deepen, the spatial dimension of each resulting feature map is correspondingly halved;
(2) Input pictures with ground-truth target boxes into the convolutional neural network, generating the corresponding hierarchical multi-scale feature maps carrying different feature information.
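As an illustrative sketch (a simplification, not the patented network itself), the hierarchy of feature-map resolutions produced when each successive stage halves the previous one can be computed as follows; the input size and stage count are assumptions for illustration:

```python
def feature_map_sizes(input_size, num_stages):
    """Spatial size of each hierarchical feature map when every stage
    halves the previous stage's resolution (integer division)."""
    sizes = []
    size = input_size
    for _ in range(num_stages):
        size //= 2          # each deeper layer halves the spatial dimension
        sizes.append(size)
    return sizes

# e.g. a 512-pixel input through 5 stages yields progressively coarser maps
print(feature_map_sizes(512, 5))  # [256, 128, 64, 32, 16]
```

Each of these maps carries different feature information: the large shallow maps keep fine spatial detail, while the small deep maps carry more semantic content.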
Step 2: perform bottom-up feature fusion on the multi-layer features generated in step 1, using bottom-up bypass connections based on pooling.
As shown in Fig. 1, the concrete implementation of this step is as follows:
(1) First add a 3×3×512 convolutional layer on top of each initial layered feature, so that the channel dimensions of the layered features are consistent, which facilitates the subsequent feature fusion;
(2) Add batch normalization layers to weaken the influence of differing layer distributions and accelerate network training;
(3) To fuse the extracted multi-layer multi-scale features, first add a max-pooling layer to the shallowest feature so that its spatial dimension is halved, then superimpose it element-wise on the next-higher-level feature via a bypass connection to realize feature fusion;
(4) Iterate step (3) bottom-up to realize bottom-up feature fusion.
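The pool-and-add bypass of step (3) can be sketched in plain Python as follows; the 4×4 single-channel map and its values are illustrative assumptions (the patent operates on multi-channel CNN feature maps whose channel dimensions have already been aligned by the 3×3×512 convolutions):

```python
def max_pool_2x2(fmap):
    """2x2, stride-2 max pooling on an H x W feature map (list of rows)."""
    h, w = len(fmap), len(fmap[0])
    return [[max(fmap[2 * i][2 * j], fmap[2 * i][2 * j + 1],
                 fmap[2 * i + 1][2 * j], fmap[2 * i + 1][2 * j + 1])
             for j in range(w // 2)] for i in range(h // 2)]

def bottom_up_fuse(shallow, deeper):
    """Halve the shallow map's resolution by max pooling, then add it
    element-wise to the next-higher-level map (the bypass connection)."""
    pooled = max_pool_2x2(shallow)
    return [[pooled[i][j] + deeper[i][j] for j in range(len(deeper[0]))]
            for i in range(len(deeper))]

shallow = [[1, 2, 3, 4],
           [5, 6, 7, 8],
           [9, 10, 11, 12],
           [13, 14, 15, 16]]          # 4x4 shallow map
deeper = [[1, 1], [1, 1]]             # 2x2 higher-level map
print(bottom_up_fuse(shallow, deeper))  # [[7, 9], [15, 17]]
```

Iterating this operation from the shallowest level upward yields the bottom-up fusion of step (4).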
Step 3: perform top-down dense feature fusion on the multi-layer features generated in step 2, using top-down dense bypass connections based on deconvolution.
As shown in Fig. 2, the concrete implementation of this step is as follows:
(1) Add a deconvolution layer to each higher-level feature so that its spatial dimension increases to match that of the adjacent lower layer;
(2) Superimpose the deconvolved feature map element-wise on the adjacent lower-layer feature;
(3) To realize denser feature fusion, use the dense bypass connection scheme: a shallow layer's fused feature comes not only from the adjacent higher-level feature but from the fusion of all higher-level features.
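A minimal sketch of this dense top-down fusion, with fixed nearest-neighbour upsampling standing in for the learned deconvolution layer (an assumption made so the example stays self-contained); the constant-valued three-level pyramid is likewise illustrative:

```python
def upsample_2x(fmap):
    """Nearest-neighbour 2x upsampling -- a fixed stand-in here for the
    learned deconvolution layer described in the patent."""
    out = []
    for row in fmap:
        doubled = [v for v in row for _ in range(2)]
        out.append(doubled)
        out.append(list(doubled))
    return out

def dense_top_down_fuse(pyramid):
    """Dense top-down fusion: each level receives the element-wise sum of
    ALL higher (coarser) levels, upsampled to its resolution -- not just
    the adjacent one.  `pyramid` is ordered fine -> coarse."""
    fused = []
    for i, feat in enumerate(pyramid):
        acc = [list(row) for row in feat]
        for higher in pyramid[i + 1:]:
            up = higher
            while len(up) < len(feat):       # upsample 2x until sizes match
                up = upsample_2x(up)
            for r in range(len(acc)):
                for c in range(len(acc[0])):
                    acc[r][c] += up[r][c]
        fused.append(acc)
    return fused

# fine -> coarse pyramid of constant maps with values 1, 2, 3
pyramid = [[[1] * 8 for _ in range(8)],
           [[2] * 4 for _ in range(4)],
           [[3] * 2 for _ in range(2)]]
fused = dense_top_down_fuse(pyramid)
print(fused[0][0][0], fused[1][0][0], fused[2][0][0])  # 6 5 3
```

The finest level accumulates contributions from every coarser level (1+2+3), which is what distinguishes the dense scheme from adjacent-only top-down fusion.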
Step 4: construct target candidate boxes of different sizes and aspect ratios on the multi-scale feature maps generated in step 3.
The concrete implementation of this step is as follows:
(1) Considering that neurons in different layers of a convolutional neural network have different receptive fields, design smaller target candidate boxes for shallow features and larger target candidate boxes for high-level features;
(2) Considering the diversity of target aspect ratios, design a variety of different aspect ratios to enrich the candidate-box types.
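The two principles above can be sketched as a candidate-box generator; the image size, box size, and aspect-ratio set are illustrative assumptions rather than values fixed by the patent:

```python
import math

def candidate_boxes(feature_size, image_size, box_size, aspect_ratios):
    """Generate (cx, cy, w, h) candidate boxes: one box per feature-map cell
    and aspect ratio, centred on the cell, each with area box_size**2.
    Per the principle above, shallow (high-resolution) maps get a small
    box_size and deep maps a large one."""
    stride = image_size / feature_size
    boxes = []
    for i in range(feature_size):
        for j in range(feature_size):
            cx, cy = (j + 0.5) * stride, (i + 0.5) * stride
            for ar in aspect_ratios:
                w = box_size * math.sqrt(ar)   # wider box for ar > 1
                h = box_size / math.sqrt(ar)   # taller box for ar < 1
                boxes.append((cx, cy, w, h))
    return boxes

# shallow 4x4 map on a 256-pixel image: small boxes in 3 aspect ratios
shallow_boxes = candidate_boxes(4, 256, 32, [0.5, 1.0, 2.0])
print(len(shallow_boxes))  # 4 * 4 * 3 = 48
```

Scaling `box_size` by the ratio of width to height at a fixed area keeps box sizes comparable across aspect ratios, a common design choice in anchor-based detectors.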
Step 5: reduce the easy background samples among the target candidate boxes using a binary classifier, and jointly optimize the binary classifier, the multi-class classifier, and the bounding-box regressor with a multi-task loss function, realizing image classification and target localization.
(1) Since the target candidate boxes contain many redundant easy background samples, design a binary classifier that scores whether a candidate box contains a target, realizing hard-sample mining;
(2) Jointly optimize and train the binary classifier, the multi-class classifier, and the bounding-box regressor with a multi-task loss function, realizing image classification and target localization.
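A hedged sketch of this step: filtering easy background samples by a binary objectness score, and a combined loss over the three heads for one positive candidate box. The threshold, the equal loss weights, and the smooth-L1 box term are assumptions for illustration, not values specified by the patent:

```python
import math

def filter_easy_backgrounds(objectness_scores, threshold=0.1):
    """Keep only candidate boxes whose binary objectness score exceeds the
    threshold, discarding redundant easy background samples (step 5(1)).
    The threshold value is illustrative."""
    return [i for i, s in enumerate(objectness_scores) if s > threshold]

def smooth_l1(x):
    """Smooth-L1: quadratic near zero, linear for large errors."""
    return 0.5 * x * x if abs(x) < 1.0 else abs(x) - 0.5

def multitask_loss(obj_score, cls_probs, cls_label, box_deltas, box_targets,
                   w_obj=1.0, w_cls=1.0, w_box=1.0):
    """Joint loss over the binary classifier, multi-class classifier, and
    bounding-box regressor for one positive box: two cross-entropy terms
    plus a smooth-L1 box-regression term, with assumed weights."""
    obj_loss = -math.log(obj_score)              # box does contain a target
    cls_loss = -math.log(cls_probs[cls_label])   # multi-class cross-entropy
    box_loss = sum(smooth_l1(d - t) for d, t in zip(box_deltas, box_targets))
    return w_obj * obj_loss + w_cls * cls_loss + w_box * box_loss

kept = filter_easy_backgrounds([0.02, 0.9, 0.05, 0.7])
print(kept)  # [1, 3]
```

Optimizing all three terms jointly lets the shared features serve objectness scoring, classification, and localization at once, rather than training each head separately.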
The method of the invention is tested below to illustrate its experimental effect.
Test environment: Ubuntu 16.04, Python 2.7, GTX 1080 Ti GPU
Test data: the PASCAL VOC object detection dataset. The targets it contains are all common categories from daily life, 20 classes in total: humans; animals (bird, cat, cow, dog, horse, sheep); vehicles (airplane, bicycle, boat, bus, car, motorcycle, train); and indoor objects (bottle, chair, dining table, potted plant, sofa, television). The PASCAL VOC2007 detection dataset contains 9,963 pictures with 24,640 labeled target objects in total.
Test index: the invention mainly uses the mAP (mean average precision) index to evaluate detection results. mAP is the standard accuracy measure for object detection and the most common index for evaluating detection algorithms; tests comparing different algorithms demonstrate that the invention obtains good results in the field of object detection.
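For reference, the average precision for one class under the PASCAL VOC2007 protocol uses 11-point interpolation over recall; a sketch of that computation follows (mAP is this value averaged over the 20 classes):

```python
def voc07_average_precision(recalls, precisions):
    """11-point interpolated average precision as used in PASCAL VOC 2007:
    average, over recall levels 0.0, 0.1, ..., 1.0, of the maximum
    precision achieved at any recall >= that level."""
    ap = 0.0
    for t in [i / 10.0 for i in range(11)]:
        candidates = [p for r, p in zip(recalls, precisions) if r >= t]
        ap += max(candidates) if candidates else 0.0
    return ap / 11.0

# toy precision/recall curve for one class (values are illustrative)
recalls = [0.1, 0.4, 0.7, 1.0]
precisions = [1.0, 0.8, 0.6, 0.5]
print(round(voc07_average_precision(recalls, precisions), 3))  # 0.7
```

Interpolating with the maximum precision at each recall level makes the metric insensitive to small wiggles in the precision/recall curve.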
The test results are as follows:
Table 1. Experimental results of different feature fusion algorithms
Method | Training set | Test set | mAP (%)
Original features | 07+12 | 07 | 70.3
Bottom-up fusion | 07+12 | 07 | 70.4
Top-down fusion | 07+12 | 07 | 73.2
The invention | 07+12 | 07 | 74.8
Table 1 reports precision on the PASCAL VOC2007 test set for object detection using the different image features extracted by the convolutional neural network, with the same detection framework at the back end; precision is mean average precision (mAP). The forward and backward feature fusion methods proposed by the invention effectively improve the expressiveness of the initial features, and combining them in the multi-path dense feature fusion method further improves detection performance.
Table 2. Detection performance comparison of different object detectors
Table 2 compares the detection performance of popular object detectors on the PASCAL VOC dataset; the invention outperforms the other detection algorithms in mAP. Faster R-CNN is the typical representative of region-based detection algorithms: the invention's mAP of 74.8% is 1.6% higher than that of Faster R-CNN, and its detection speed of 20 FPS, close to real time, is twice that of Faster R-CNN. SSD is the typical regression-based detector, and the invention's detection precision is higher than SSD's as well. These results show that the detection results produced by the invented algorithm have higher precision, and that it better solves the problem of multi-scale target detection.
It should be emphasized that the embodiments described here are illustrative rather than restrictive; accordingly, the invention is not limited to the embodiments described in the detailed description, and other embodiments derived by those skilled in the art from the technical solution of the invention also belong to the protection scope of the invention.

Claims (6)

1. An object detection method based on a multi-path dense feature fusion fully convolutional network, characterized by comprising the following steps:
Step 1: extract hierarchical multi-scale feature maps carrying different feature information using a deep convolutional neural network;
Step 2: perform bottom-up feature fusion on the hierarchical multi-scale features generated in step 1, using bottom-up bypass connections based on pooling;
Step 3: perform top-down dense feature fusion on the hierarchical multi-scale features generated in step 2, using top-down dense bypass connections based on deconvolution;
Step 4: construct target candidate boxes of different sizes and aspect ratios on the multi-scale feature maps generated in step 3;
Step 5: reduce the easy background samples among the target candidate boxes using a binary classifier, and jointly optimize the binary classifier, the multi-class classifier, and the bounding-box regressor with a multi-task loss function, realizing image classification and target localization.
2. The object detection method based on a multi-path dense feature fusion fully convolutional network according to claim 1, characterized in that the concrete implementation of step 1 comprises the following steps:
(1) Construct a fully convolutional network for feature extraction: remove the fully connected layers from a convolutional neural network originally used for image classification, and add two new convolutional layers;
(2) Input pictures with ground-truth target boxes into the convolutional neural network, generating the corresponding hierarchical multi-scale feature maps carrying different feature information.
3. The object detection method based on a multi-path dense feature fusion fully convolutional network according to claim 1, characterized in that the concrete implementation of step 2 comprises the following steps:
(1) Add a 3×3×512 convolutional layer on top of each initial layered feature, so that the channel dimensions of the layered features are consistent;
(2) Add batch normalization layers to weaken the influence of differing layer distributions and accelerate network training;
(3) First add a max-pooling layer to the shallowest feature so that its spatial dimension is halved, then superimpose it element-wise on the next-higher-level feature via a bypass connection to realize feature fusion;
(4) Iterate step (3) bottom-up to realize bottom-up feature fusion.
4. The object detection method based on a multi-path dense feature fusion fully convolutional network according to claim 1, characterized in that the concrete implementation of step 3 comprises the following steps:
(1) Add a deconvolution layer to each higher-level feature so that its spatial dimension increases to match that of the adjacent lower layer;
(2) Superimpose the deconvolved feature map element-wise on the adjacent lower-layer feature;
(3) Fuse all higher-level features using the dense bypass connection scheme.
5. The object detection method based on a multi-path dense feature fusion fully convolutional network according to claim 1, characterized in that step 4 is implemented according to the following principles:
(1) Construct smaller target candidate boxes on shallow feature maps and larger target candidate boxes on high-level feature maps;
(2) Construct target candidate boxes with a variety of different aspect ratios.
6. The object detection method based on a multi-path dense feature fusion fully convolutional network according to claim 1, characterized in that the concrete implementation of step 5 comprises the following steps:
(1) Construct a binary classifier that scores whether a candidate box contains a target, for hard-sample mining;
(2) Jointly optimize and train the binary classifier, the multi-class classifier, and the bounding-box regressor with a multi-task loss function, realizing image classification and target localization.
CN201810721733.2A 2018-07-04 2018-07-04 Target detection method based on multi-path dense feature fusion fully convolutional network Active CN108846446B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810721733.2A CN108846446B (en) 2018-07-04 2018-07-04 Target detection method based on multi-path dense feature fusion fully convolutional network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810721733.2A CN108846446B (en) 2018-07-04 2018-07-04 Target detection method based on multi-path dense feature fusion fully convolutional network

Publications (2)

Publication Number Publication Date
CN108846446A true CN108846446A (en) 2018-11-20
CN108846446B CN108846446B (en) 2021-10-12

Family

ID=64200566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810721733.2A Active CN108846446B (en) 2018-07-04 2018-07-04 Target detection method based on multi-path dense feature fusion fully convolutional network

Country Status (1)

Country Link
CN (1) CN108846446B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109522966A (en) * 2018-11-28 2019-03-26 中山大学 Object detection method based on densely connected convolutional neural networks
CN109766920A (en) * 2018-12-18 2019-05-17 任飞翔 Deep-learning-based method and device for computing article feature models
CN109903339A (en) * 2019-03-26 2019-06-18 南京邮电大学 Method for locating and detecting persons in video groups based on multi-dimensional fused features
CN109919934A (en) * 2019-03-11 2019-06-21 重庆邮电大学 Liquid crystal display panel defect detection method based on multi-source-domain deep transfer learning
CN109978014A (en) * 2019-03-06 2019-07-05 华南理工大学 Flexible substrate defect detection method incorporating a dense connection structure
CN110009679A (en) * 2019-02-28 2019-07-12 江南大学 Object localization method based on a multi-scale feature convolutional neural network
CN110110722A (en) * 2019-04-30 2019-08-09 广州华工邦元信息技术有限公司 Region detection correction method based on deep-learning model recognition results
CN110245706A (en) * 2019-06-14 2019-09-17 西安邮电大学 Lightweight object detection network for embedded applications
CN110490242A (en) * 2019-08-12 2019-11-22 腾讯医疗健康(深圳)有限公司 Training method for an image classification network, fundus image classification method, and related device
CN110516605A (en) * 2019-08-28 2019-11-29 北京观微科技有限公司 Arbitrary-direction ship target detection method based on cascaded neural networks
CN110765886A (en) * 2019-09-29 2020-02-07 深圳大学 Road target detection method and device based on convolutional neural networks
CN110852330A (en) * 2019-10-23 2020-02-28 天津大学 Single-stage behavior recognition method
CN111079683A (en) * 2019-12-24 2020-04-28 天津大学 Remote-sensing image cloud and snow detection method based on convolutional neural networks
CN111401290A (en) * 2020-03-24 2020-07-10 杭州博雅鸿图视频技术有限公司 Face detection method and system and computer readable storage medium
CN111462050A (en) * 2020-03-12 2020-07-28 上海理工大学 Improved YOLOv3 small-target remote-sensing image detection method, device, and storage medium
CN111898615A (en) * 2020-06-16 2020-11-06 济南浪潮高新科技投资发展有限公司 Feature extraction method, device, equipment, and medium for an object detection model
CN112926681A (en) * 2021-03-29 2021-06-08 复旦大学 Target detection method and device based on deep convolutional neural networks
CN117593516A (en) * 2024-01-18 2024-02-23 苏州元脑智能科技有限公司 Target detection method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160171341A1 (en) * 2014-12-15 2016-06-16 Samsung Electronics Co., Ltd. Apparatus and method for detecting object in image, and apparatus and method for computer-aided diagnosis
CN107230351A (en) * 2017-07-18 2017-10-03 福州大学 Short-term traffic flow prediction method based on deep learning
CN107292271A (en) * 2017-06-23 2017-10-24 北京易真学思教育科技有限公司 Learning and memory behavior method, device and electronic equipment
CN107563381A (en) * 2017-09-12 2018-01-09 国家新闻出版广电总局广播科学研究院 Multi-feature fusion object detection method based on a fully convolutional network

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109522966B (en) * 2018-11-28 2022-09-27 中山大学 Target detection method based on dense connection convolutional neural network
CN109522966A (en) * 2018-11-28 2019-03-26 中山大学 Target detection method based on a densely connected convolutional neural network
CN109766920A (en) * 2018-12-18 2019-05-17 任飞翔 Item feature model calculation method and device based on deep learning
CN110009679A (en) * 2019-02-28 2019-07-12 江南大学 Target localization method based on a multi-scale feature convolutional neural network
CN110009679B (en) * 2019-02-28 2022-01-04 江南大学 Target positioning method based on multi-scale feature convolutional neural network
CN109978014A (en) * 2019-03-06 2019-07-05 华南理工大学 Flexible circuit board defect detection method incorporating a densely connected structure
CN109919934A (en) * 2019-03-11 2019-06-21 重庆邮电大学 Liquid crystal panel defect detection method based on multi-source-domain deep transfer learning
CN109919934B (en) * 2019-03-11 2021-01-29 重庆邮电大学 Liquid crystal panel defect detection method based on multi-source domain deep transfer learning
CN109903339A (en) * 2019-03-26 2019-06-18 南京邮电大学 Video group person positioning and detection method based on multi-dimensional fusion features
CN109903339B (en) * 2019-03-26 2021-03-05 南京邮电大学 Video group figure positioning detection method based on multi-dimensional fusion features
CN110110722A (en) * 2019-04-30 2019-08-09 广州华工邦元信息技术有限公司 Region detection correction method based on deep learning model recognition results
CN110245706A (en) * 2019-06-14 2019-09-17 西安邮电大学 Lightweight target detection network for embedded applications
CN110490242A (en) * 2019-08-12 2019-11-22 腾讯医疗健康(深圳)有限公司 Training method of an image classification network, fundus image classification method, and related equipment
CN110490242B (en) * 2019-08-12 2024-03-29 腾讯医疗健康(深圳)有限公司 Training method of image classification network, fundus image classification method and related equipment
CN110516605A (en) * 2019-08-28 2019-11-29 北京观微科技有限公司 Arbitrary-direction ship target detection method based on cascaded neural networks
CN110765886A (en) * 2019-09-29 2020-02-07 深圳大学 Road target detection method and device based on convolutional neural network
CN110765886B (en) * 2019-09-29 2022-05-03 深圳大学 Road target detection method and device based on convolutional neural network
CN110852330A (en) * 2019-10-23 2020-02-28 天津大学 Behavior identification method based on single stage
CN111079683A (en) * 2019-12-24 2020-04-28 天津大学 Remote sensing image cloud and snow detection method based on convolutional neural network
CN111079683B (en) * 2019-12-24 2023-12-12 天津大学 Remote sensing image cloud and snow detection method based on convolutional neural network
CN111462050A (en) * 2020-03-12 2020-07-28 上海理工大学 Improved-YOLOv3 small-target remote sensing image detection method, device and storage medium
CN111462050B (en) * 2020-03-12 2022-10-11 上海理工大学 Improved-YOLOv3 small-target remote sensing image detection method, device and storage medium
CN111401290A (en) * 2020-03-24 2020-07-10 杭州博雅鸿图视频技术有限公司 Face detection method and system and computer readable storage medium
CN111898615A (en) * 2020-06-16 2020-11-06 济南浪潮高新科技投资发展有限公司 Feature extraction method, device, equipment and medium of object detection model
CN112926681A (en) * 2021-03-29 2021-06-08 复旦大学 Target detection method and device based on deep convolutional neural network
CN117593516A (en) * 2024-01-18 2024-02-23 苏州元脑智能科技有限公司 Target detection method, device, equipment and storage medium
CN117593516B (en) * 2024-01-18 2024-03-22 苏州元脑智能科技有限公司 Target detection method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN108846446B (en) 2021-10-12

Similar Documents

Publication Publication Date Title
CN108846446A (en) Object detection method based on multipath dense feature fusion with a fully convolutional network
CN109034210B (en) Target detection method based on super-feature fusion and multi-scale pyramid network
CN109389055B (en) Video classification method based on mixed convolution and attention mechanism
CN107563381B (en) Multi-feature fusion object detection method based on a fully convolutional network
Kong et al. Hypernet: Towards accurate region proposal generation and joint object detection
CN106022300B (en) Traffic sign recognition method and system based on cascaded deep learning
CN107316058A (en) Method for improving target detection performance by improving target classification and localization accuracy
CN107590489A (en) Target detection method based on a cascaded convolutional neural network
CN110516536A (en) Weakly supervised video action detection method based on complementary temporal class activation maps
CN108647665A (en) Real-time aerial vehicle detection method based on deep learning
Tang et al. View-independent facial action unit detection
CN111523462A (en) Video-sequence facial expression recognition system and method based on self-attention-enhanced CNN
Shang et al. Using lightweight deep learning algorithm for real-time detection of apple flowers in natural environments
CN101667245A (en) Face detection method cascading novel detection classifiers based on support vectors
CN109508675A (en) Pedestrian detection method for complex scenes
CN105138975B (en) Human skin-color region segmentation method based on a deep belief network
Baojun et al. Multi-scale object detection by top-down and bottom-up feature pyramid network
Jiang et al. Social behavioral phenotyping of Drosophila with a 2D–3D hybrid CNN framework
Quintino Ferreira et al. Pose guided attention for multi-label fashion image classification
Mo et al. Background noise filtering and distribution dividing for crowd counting
CN107609509A (en) Action recognition method based on motion salient region detection
Zhou et al. Sampling-attention deep learning network with transfer learning for large-scale urban point cloud semantic segmentation
Tang et al. Pest-YOLO: Deep image mining and multi-feature fusion for real-time agriculture pest detection
Liu et al. Analysis of anchor-based and anchor-free object detection methods based on deep learning
Xu et al. Occlusion problem-oriented adversarial faster-RCNN scheme

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant