CN111950551A - Target detection method based on convolutional neural network - Google Patents

Target detection method based on convolutional neural network

Info

Publication number
CN111950551A
CN111950551A (application CN202010816397.7A; granted as CN111950551B)
Authority
CN
China
Prior art keywords
feature map
convolution
region
neural network
convolutional neural
Prior art date
Legal status
Granted
Application number
CN202010816397.7A
Other languages
Chinese (zh)
Other versions
CN111950551B (en)
Inventor
李松江
吴宁
王鹏
Current Assignee
Changchun University of Science and Technology
Original Assignee
Changchun University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Changchun University of Science and Technology
Priority to CN202010816397.7A
Publication of CN111950551A
Application granted
Publication of CN111950551B
Legal status: Active
Anticipated expiration

Classifications

    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI] (image preprocessing for image or video recognition or understanding)
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches (pattern recognition)
    • G06F 18/253 — Fusion techniques of extracted features (pattern recognition)
    • G06N 3/045 — Combinations of networks (neural network architectures)
    • G06N 3/08 — Learning methods (neural networks)
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a target detection method based on a convolutional neural network, comprising the following steps: extracting features with a residual convolutional neural network to obtain layer-by-layer base feature maps; fusing the base feature maps sequentially from shallow to deep to obtain a fused feature map; extracting candidate boxes from the fused feature map with a region proposal network to obtain a candidate target region feature map; obtaining a region-of-interest feature map from the fused feature map and the candidate target region feature map; and obtaining classification scores and bounding-box regressions from the region-of-interest feature map through fully convolutional layers. The invention achieves higher detection accuracy for small targets and occluded targets.

Description

Target detection method based on convolutional neural network
Technical Field
The invention relates to the technical field of image information processing, in particular to a target detection method based on a convolutional neural network.
Background
With road traffic pressure steadily increasing, intelligent management and control of road vehicles through computer technology has become a popular research topic. Detecting vehicle targets with road monitoring equipment captures road-network vehicle data and driving trajectories, a prerequisite for optimizing traffic and relieving congestion; vehicle target detection is also the research foundation for unmanned driving, vehicle tracking and vehicle feature recognition.
At present, convolutional neural networks are widely applied to vehicle target detection and are generally divided into single-stage and two-stage detection algorithms: single-stage algorithms treat detection as a regression problem, while two-stage algorithms first generate candidate regions and then classify and refine them. Owing to this structural difference, two-stage algorithms achieve higher detection accuracy but lower detection speed than single-stage algorithms, making them suitable for scenarios with strict accuracy requirements.
The existing two-stage target detection algorithms have the following problem: occluded targets and small targets offer few features, and existing algorithms make insufficient use of shallow positional information and context information, so detection accuracy for small and occluded targets is low.
Disclosure of Invention
The invention aims to provide a target detection method based on a convolutional neural network that achieves higher detection accuracy for small targets and occluded targets.
To achieve this aim, the invention provides the following scheme:
a target detection method based on a convolutional neural network comprises the following steps:
extracting features based on a residual volume neural network to obtain a layer-by-layer basic feature map;
sequentially fusing the basic feature maps from shallow to deep to obtain fused feature maps;
extracting candidate frames of the fusion characteristic graph based on a region generation network to obtain a candidate target region characteristic graph;
obtaining a region-of-interest feature map according to the fusion feature map and the candidate target region feature map;
and obtaining classification scores and frame regression based on the full convolution layer according to the interesting region feature graph.
Preferably, the base feature maps include a first feature map, a second feature map, a third feature map and a fourth feature map.
Preferably, fusing the base feature maps sequentially from shallow to deep to obtain the fused feature map includes:
down-sampling the first feature map to obtain a down-sampled feature map;
applying convolutional dimensionality reduction to the second feature map to obtain a reduced feature map with the same number of channels as the down-sampled feature map;
fusing the down-sampled feature map and the reduced feature map to obtain an initial fused feature map; the final fused feature map is obtained by repeating this procedure.
Preferably, down-sampling the first feature map to obtain the down-sampled feature map includes:
down-sampling the first feature map with n parallel branches of dilated (hole) convolution, where n is a positive integer greater than 1;
and fusing the n down-sampled outputs of the dilated-convolution branches to obtain the down-sampled feature map.
Preferably, n is 3 and the dilation rates of the 3 branches are 1, 2 and 3, respectively.
Preferably, extracting candidate boxes from the fused feature map with the region proposal network to obtain the candidate target region feature map includes:
convolving the fused feature map with a first set convolution kernel to obtain a first convolution feature map;
convolving the first convolution feature map with a second set convolution kernel to obtain a second convolution feature map;
convolving the second convolution feature map with the second set convolution kernel to obtain a third convolution feature map;
and feeding the second and third convolution feature maps into two parallel fully connected layers, respectively, and processing them with a set of anchor boxes to obtain the candidate target region feature map.
Preferably, obtaining the classification scores and bounding-box regressions from the region-of-interest feature map through fully convolutional layers includes:
obtaining an initial classification score and an initial bounding-box regression from the region-of-interest feature map through fully convolutional layers;
replacing the set anchor boxes with the initially regressed boxes, executing the subsequent steps in sequence, and repeating this process m times with m set thresholds to obtain the classification scores and bounding-box regressions; m is a positive integer greater than or equal to 1.
Preferably, the first set convolution kernel is 3 × 3; the second set convolution kernel is 1 × 1.
Preferably, obtaining the region-of-interest feature map from the fused feature map and the candidate target region feature map includes:
fusing the fused feature map and the candidate target region feature map with ROI Align to obtain an initial region-of-interest feature map;
enlarging the initial region-of-interest feature map by a set factor to obtain an enlarged region-of-interest feature map;
extracting global context from the initial region-of-interest feature map on the basis of the enlarged region-of-interest feature map to obtain context information;
and fusing the initial region-of-interest feature map with the context information using ROI Align to obtain the region-of-interest feature map.
Preferably, the residual convolutional neural network is a ResNet-101 network.
According to the specific embodiments provided herein, the invention discloses the following technical effects:
The target detection method based on a convolutional neural network comprises: extracting features with a residual convolutional neural network to obtain layer-by-layer base feature maps; fusing the base feature maps sequentially from shallow to deep to obtain a fused feature map; extracting candidate boxes from the fused feature map with a region proposal network to obtain a candidate target region feature map; obtaining a region-of-interest feature map from the fused feature map and the candidate target region feature map; and obtaining classification scores and bounding-box regressions from the region-of-interest feature map through fully convolutional layers. The invention achieves higher detection accuracy for small targets and occluded targets.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flowchart of a target detection method based on a convolutional neural network according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art from the given embodiments without creative effort fall within the protection scope of the present invention.
The invention aims to provide a target detection method based on a convolutional neural network that achieves higher detection accuracy for small targets and occluded targets.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a flowchart of the target detection method based on a convolutional neural network. As shown in Fig. 1, the present invention provides a target detection method based on a convolutional neural network, comprising:
Step S1: extract features with the residual convolutional neural network ResNet-101 to obtain layer-by-layer base feature maps, specifically a first feature map, a second feature map, a third feature map and a fourth feature map. In this embodiment, the configuration of each convolutional layer of ResNet-101 is shown in Table 1.
TABLE 1: Convolutional layers of ResNet-101
[Table 1 is supplied as images in the original publication and is not reproduced here.]
Here w is the width of the region of interest and h is its height.
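As an illustration of step S1, the following is a minimal sketch of extracting the four base feature maps from a standard torchvision ResNet-101. Because Table 1 is not reproduced, the correspondence of the four maps to the outputs of stages layer1 through layer4, and the input size used, are assumptions made for illustration only.

    # Minimal sketch of step S1 (assumption: the four base feature maps are
    # the outputs of ResNet-101 stages layer1..layer4).
    import torch
    from torchvision import models

    backbone = models.resnet101(weights=None)  # untrained backbone, for illustration

    def extract_base_features(x):
        """Return the layer-by-layer base feature maps C1..C4."""
        x = backbone.conv1(x)
        x = backbone.bn1(x)
        x = backbone.relu(x)
        x = backbone.maxpool(x)
        c1 = backbone.layer1(x)   # first feature map,  stride 4,  256 channels
        c2 = backbone.layer2(c1)  # second feature map, stride 8,  512 channels
        c3 = backbone.layer3(c2)  # third feature map,  stride 16, 1024 channels
        c4 = backbone.layer4(c3)  # fourth feature map, stride 32, 2048 channels
        return c1, c2, c3, c4

    feats = extract_base_features(torch.randn(1, 3, 800, 800))
    print([tuple(f.shape) for f in feats])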
Step S2: fuse the base feature maps sequentially from shallow to deep to obtain the fused feature map.
Taking the fusion of the first and second feature maps as an example, the process is as follows:
Down-sample the first feature map with n parallel dilated-convolution branches, where n is a positive integer greater than 1. In this embodiment n is 3, the convolution kernel size is 3 × 3, the convolution stride is 2, and the dilation rates of the three branches are 1, 2 and 3, respectively.
Fuse the branch outputs to obtain the down-sampled feature map, computed as:
F = H_{3,1}(x) + H_{3,2}(x) + H_{3,3}(x)
where F denotes the fused down-sampled feature map, H_{k,r}(x) denotes a dilated convolution with kernel size k and dilation rate r, and x is the first feature map.
Apply convolutional dimensionality reduction to the second feature map with a 1 × 1 convolution kernel to obtain a reduced feature map with the same number of channels as the down-sampled feature map.
Fuse the down-sampled feature map and the reduced feature map to obtain an initial fused feature map.
Repeat these steps for the remaining feature maps in sequence to obtain the final fused feature map, as sketched below.
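The following is a minimal PyTorch sketch of one fusion stage, implementing the formula above: three parallel 3 × 3 dilated convolutions with stride 2 and dilation rates 1, 2 and 3 down-sample the shallower map and are summed, a 1 × 1 convolution reduces the deeper map to the same channel count, and the two results are fused. Element-wise addition as the final fusion operation and the channel sizes are assumptions, as the text does not fix them.

    import torch
    import torch.nn as nn

    class ShallowDeepFusion(nn.Module):
        """One stage of step S2: down-sample the shallow map, reduce the deep
        map with a 1x1 convolution, and fuse the two."""
        def __init__(self, shallow_ch, deep_ch, out_ch):
            super().__init__()
            # Three dilated (hole) branches; padding=dilation keeps the three
            # stride-2 outputs at identical spatial size so they can be summed.
            self.branches = nn.ModuleList([
                nn.Conv2d(shallow_ch, out_ch, 3, stride=2, padding=r, dilation=r)
                for r in (1, 2, 3)
            ])
            self.reduce = nn.Conv2d(deep_ch, out_ch, 1)  # 1x1 dimensionality reduction

        def forward(self, shallow, deep):
            down = sum(b(shallow) for b in self.branches)  # F = H_{3,1}+H_{3,2}+H_{3,3}
            return down + self.reduce(deep)                # assumed fusion: addition

    fuse = ShallowDeepFusion(shallow_ch=256, deep_ch=512, out_ch=512)
    c1, c2 = torch.randn(1, 256, 200, 200), torch.randn(1, 512, 100, 100)
    print(fuse(c1, c2).shape)  # torch.Size([1, 512, 100, 100])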
Step S3: extract candidate boxes from the fused feature map with the region proposal network to obtain the candidate target region feature map.
As an alternative embodiment, step S3 of the present invention includes:
step S31, performing convolution processing on the fusion feature map based on the first set convolution kernel to obtain a first convolution feature map. In this embodiment, the first set convolution kernel size is 3 × 3.
Step S32, performing convolution processing on the first convolution feature map based on a second set convolution kernel to obtain a second convolution feature map. In this embodiment, the second set convolution kernel size is 1 × 1.
Step S33, performing convolution processing on the second convolution feature map based on the second set convolution kernel to obtain a third convolution feature map.
And step S34, inputting the second convolution feature map and the third convolution feature map into two parallel full-connection layers respectively, and processing based on a set anchor frame to obtain the candidate target area feature map.
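The convolution stack of steps S31 to S34 could look like the sketch below. The two parallel heads are modelled as 1 × 1 convolutions, the usual fully convolutional equivalent of the parallel layers in a region proposal network; the channel counts and the anchor number k = 9 are assumptions.

    import torch
    import torch.nn as nn

    class ProposalConvStack(nn.Module):
        """Steps S31-S34: 3x3 convolution, then two successive 1x1 convolutions;
        the second map feeds the objectness head and the third the box head."""
        def __init__(self, in_ch=512, mid_ch=512, num_anchors=9):
            super().__init__()
            self.conv3x3 = nn.Conv2d(in_ch, mid_ch, 3, padding=1)  # first conv feature map
            self.conv1x1_a = nn.Conv2d(mid_ch, mid_ch, 1)          # second conv feature map
            self.conv1x1_b = nn.Conv2d(mid_ch, mid_ch, 1)          # third conv feature map
            self.cls_head = nn.Conv2d(mid_ch, num_anchors * 2, 1)  # per-anchor scores
            self.reg_head = nn.Conv2d(mid_ch, num_anchors * 4, 1)  # per-anchor box offsets

        def forward(self, fused):
            f1 = torch.relu(self.conv3x3(fused))
            f2 = torch.relu(self.conv1x1_a(f1))
            f3 = torch.relu(self.conv1x1_b(f2))
            return self.cls_head(f2), self.reg_head(f3)

    rpn = ProposalConvStack()
    scores, boxes = rpn(torch.randn(1, 512, 50, 50))
    print(scores.shape, boxes.shape)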
Step S4: obtain the region-of-interest feature map from the fused feature map and the candidate target region feature map.
Specifically, the step S4 includes:
and step S41, fusing the fusion characteristic map and the candidate target region characteristic map based on ROI Align to obtain an initial region-of-interest characteristic map.
And step S42, carrying out amplification processing on the initial region-of-interest feature map according to a set multiple to obtain an amplified region-of-interest feature map. In this embodiment, the set multiple is 1.5.
And step S43, performing global context extraction in four directions of up, down, left and right on the initial region-of-interest feature map based on the amplified region-of-interest feature map to obtain context information.
And step S44, mapping the initial region-of-interest feature map and the context information into rectangular boxes with the same size based on ROIAlign, and fusing to obtain the region-of-interest feature map.
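A sketch of steps S41 to S44 using torchvision's roi_align follows: each candidate box is pooled once directly and once after being enlarged by the set factor of 1.5, and the two equally sized outputs are fused. Pooling one enlarged box as the global context (rather than separate up/down/left/right extraction), element-wise addition as the fusion, and a feature stride of 16 are simplifying assumptions.

    import torch
    from torchvision.ops import roi_align

    def roi_with_context(feat, boxes, out_size=7, stride=16, scale_up=1.5):
        """boxes: (N, 5) rows of (batch_index, x1, y1, x2, y2) in image coordinates."""
        roi = roi_align(feat, boxes, out_size, spatial_scale=1.0 / stride)

        # Enlarge each box around its centre by the set factor (1.5 here).
        cx = (boxes[:, 1] + boxes[:, 3]) / 2
        cy = (boxes[:, 2] + boxes[:, 4]) / 2
        hw = (boxes[:, 3] - boxes[:, 1]) * scale_up / 2
        hh = (boxes[:, 4] - boxes[:, 2]) * scale_up / 2
        big = torch.stack([boxes[:, 0], cx - hw, cy - hh, cx + hw, cy + hh], dim=1)
        context = roi_align(feat, big, out_size, spatial_scale=1.0 / stride)

        # ROI Align maps both regions to the same out_size x out_size grid,
        # so they can be fused directly (assumed fusion: addition).
        return roi + context

    feat = torch.randn(1, 512, 50, 50)
    boxes = torch.tensor([[0.0, 100.0, 100.0, 300.0, 260.0]])
    print(roi_with_context(feat, boxes).shape)  # torch.Size([1, 512, 7, 7])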
Step S5: obtain the classification scores and bounding-box regressions from the region-of-interest feature map through fully convolutional layers.
Specifically, an initial classification score and an initial bounding-box regression are obtained from the region-of-interest feature map through fully convolutional layers.
The initially regressed boxes then replace the set anchor boxes, the subsequent steps are executed in sequence, and the process is repeated m times with m set thresholds to obtain the final classification scores and bounding-box regressions, where m is a positive integer greater than or equal to 1. In this embodiment, m is 3 and the three thresholds are 0.5, 0.6 and 0.7.
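This iterative refinement behaves like a cascade of detection heads. The schematic sketch below assumes hypothetical callables: roi_features_fn wraps the ROI-plus-context pooling of step S4, and head wraps the fully convolutional classifier/regressor, returning (scores, refined_boxes) for a given IoU threshold.

    def cascade_refine(head, roi_features_fn, proposals, thresholds=(0.5, 0.6, 0.7)):
        """Run m = len(thresholds) refinement rounds; the boxes regressed in one
        round replace the anchors/proposals of the next, each round using its
        own IoU threshold (0.5, 0.6, 0.7 in this embodiment)."""
        scores, boxes = None, proposals
        for iou_threshold in thresholds:
            feats = roi_features_fn(boxes)                      # step S4 pooling
            scores, boxes = head(feats, boxes, iou_threshold)   # step S5 heads
        return scores, boxes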
The embodiments in this specification are described progressively; each embodiment focuses on its differences from the others, and for the parts that are the same or similar, the embodiments may be referred to one another.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help in understanding the method and its core concept. Meanwhile, those skilled in the art may, following the idea of the present invention, make changes to the specific embodiments and the application scope. In view of the above, the contents of this specification should not be construed as limiting the invention.

Claims (10)

1. A target detection method based on a convolutional neural network, characterized by comprising the following steps:
extracting features with a residual convolutional neural network to obtain layer-by-layer base feature maps;
fusing the base feature maps sequentially from shallow to deep to obtain a fused feature map;
extracting candidate boxes from the fused feature map with a region proposal network to obtain a candidate target region feature map;
obtaining a region-of-interest feature map from the fused feature map and the candidate target region feature map;
and obtaining classification scores and bounding-box regressions from the region-of-interest feature map through fully convolutional layers.
2. The convolutional neural network-based target detection method of claim 1, wherein the base feature maps comprise a first feature map, a second feature map, a third feature map and a fourth feature map.
3. The convolutional neural network-based target detection method of claim 2, wherein fusing the base feature maps sequentially from shallow to deep to obtain the fused feature map comprises:
down-sampling the first feature map to obtain a down-sampled feature map;
applying convolutional dimensionality reduction to the second feature map to obtain a reduced feature map with the same number of channels as the down-sampled feature map;
fusing the down-sampled feature map and the reduced feature map to obtain an initial fused feature map; and obtaining the final fused feature map by repeating this procedure.
4. The convolutional neural network-based target detection method of claim 3, wherein down-sampling the first feature map to obtain the down-sampled feature map comprises:
down-sampling the first feature map with n parallel branches of dilated (hole) convolution, n being a positive integer greater than 1;
and fusing the n down-sampled outputs of the dilated-convolution branches to obtain the down-sampled feature map.
5. The convolutional neural network-based target detection method of claim 4, wherein n is 3 and the dilation rates of the 3 branches are 1, 2 and 3, respectively.
6. The convolutional neural network-based target detection method of claim 1, wherein extracting candidate boxes from the fused feature map with the region proposal network to obtain the candidate target region feature map comprises:
convolving the fused feature map with a first set convolution kernel to obtain a first convolution feature map;
convolving the first convolution feature map with a second set convolution kernel to obtain a second convolution feature map;
convolving the second convolution feature map with the second set convolution kernel to obtain a third convolution feature map;
and feeding the second and third convolution feature maps into two parallel fully connected layers, respectively, and processing them with a set of anchor boxes to obtain the candidate target region feature map.
7. The convolutional neural network-based target detection method of claim 6, wherein obtaining the classification scores and bounding-box regressions from the region-of-interest feature map through fully convolutional layers comprises:
obtaining an initial classification score and an initial bounding-box regression from the region-of-interest feature map through fully convolutional layers;
replacing the set anchor boxes with the initially regressed boxes, executing the subsequent steps in sequence, and repeating this process m times with m set thresholds to obtain the classification scores and bounding-box regressions, m being a positive integer greater than or equal to 1.
8. The convolutional neural network-based target detection method of claim 6, wherein the first set convolution kernel is 3 × 3 and the second set convolution kernel is 1 × 1.
9. The convolutional neural network-based target detection method of claim 1, wherein obtaining the region-of-interest feature map from the fused feature map and the candidate target region feature map comprises:
fusing the fused feature map and the candidate target region feature map with ROI Align to obtain an initial region-of-interest feature map;
enlarging the initial region-of-interest feature map by a set factor to obtain an enlarged region-of-interest feature map;
extracting global context from the initial region-of-interest feature map on the basis of the enlarged region-of-interest feature map to obtain context information;
and fusing the initial region-of-interest feature map with the context information using ROI Align to obtain the region-of-interest feature map.
10. The convolutional neural network-based target detection method of claim 1, wherein the residual convolutional neural network is a ResNet-101 network.
CN202010816397.7A, filed 2020-08-14 (priority date 2020-08-14) — Target detection method based on convolutional neural network — Active; granted as CN111950551B (en)

Priority Applications (1)

Application Number: CN202010816397.7A (granted as CN111950551B) · Priority date: 2020-08-14 · Filing date: 2020-08-14 · Title: Target detection method based on convolutional neural network


Publications (2)

Publication Number — Publication Date
CN111950551A (en) — 2020-11-17
CN111950551B (en) — 2024-03-08

Family

ID: 73342163

Family Applications (1)

Application Number: CN202010816397.7A (Active; granted as CN111950551B) · Title: Target detection method based on convolutional neural network · Priority date: 2020-08-14 · Filing date: 2020-08-14

Country Status (1)

Country: CN — CN111950551B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180068198A1 (en) * 2016-09-06 2018-03-08 Carnegie Mellon University Methods and Software for Detecting Objects in an Image Using Contextual Multiscale Fast Region-Based Convolutional Neural Network
CN109165644A (en) * 2018-07-13 2019-01-08 北京市商汤科技开发有限公司 Object detection method and device, electronic equipment, storage medium, program product
CN110348384A (en) * 2019-07-12 2019-10-18 沈阳理工大学 A kind of Small object vehicle attribute recognition methods based on Fusion Features
CN111461145A (en) * 2020-03-31 2020-07-28 中国科学院计算技术研究所 Method for detecting target based on convolutional neural network
CN111507998A (en) * 2020-04-20 2020-08-07 南京航空航天大学 Depth cascade-based multi-scale excitation mechanism tunnel surface defect segmentation method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CHANG TANG et al., "DeFusionNET: Defocus Blur Detection via Recurrently Fusing and Refining Multi-Scale Deep Features," Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 2700-2709. *
吕俊奇, 邱卫根, 张立臣, 李雪武, "Pedestrian detection with multi-layer convolutional feature fusion" (多层卷积特征融合的行人检测), Computer Engineering and Design (计算机工程与设计), no. 11.
裴伟, 许晏铭, 朱永英, 王鹏乾, 鲁明羽, 李飞, "Improved SSD method for aerial object detection" (改进的SSD航拍目标检测方法), Journal of Software (软件学报), no. 3, 2019, pp. 738-758. *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419292A (en) * 2020-11-30 2021-02-26 深圳云天励飞技术股份有限公司 Pathological image processing method and device, electronic equipment and storage medium
CN112419292B (en) * 2020-11-30 2024-03-26 深圳云天励飞技术股份有限公司 Pathological image processing method and device, electronic equipment and storage medium
CN114782676A (en) * 2022-04-02 2022-07-22 北京广播电视台 Method and system for extracting region of interest of video
CN114782676B (en) * 2022-04-02 2023-01-06 北京广播电视台 Method and system for extracting region of interest of video

Also Published As

Publication number Publication date
CN111950551B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
CN111460926B (en) Video pedestrian detection method fusing multi-target tracking clues
US11763485B1 (en) Deep learning based robot target recognition and motion detection method, storage medium and apparatus
US11062123B2 (en) Method, terminal, and storage medium for tracking facial critical area
CN112396027B (en) Vehicle re-identification method based on graph convolution neural network
CN101996410B (en) Method and system of detecting moving object under dynamic background
CN109461172A (en) Manually with the united correlation filtering video adaptive tracking method of depth characteristic
CN106557579B (en) Vehicle model retrieval system and method based on convolutional neural network
CN106203423B (en) Weak structure perception visual target tracking method fusing context detection
CN110738690A (en) unmanned aerial vehicle video middle vehicle speed correction method based on multi-target tracking framework
CN101216941A (en) Motion estimation method under violent illumination variation based on corner matching and optic flow method
CN111950551A (en) Target detection method based on convolutional neural network
CN109087337B (en) Long-time target tracking method and system based on hierarchical convolution characteristics
CN112487915B (en) Pedestrian detection method based on Embedded YOLO algorithm
Zhang et al. Coarse-to-fine object detection in unmanned aerial vehicle imagery using lightweight convolutional neural network and deep motion saliency
CN115240130A (en) Pedestrian multi-target tracking method and device and computer readable storage medium
Jia et al. Real-time obstacle detection with motion features using monocular vision
CN110992424B (en) Positioning method and system based on binocular vision
CN114627441A (en) Unstructured road recognition network training method, application method and storage medium
CN113850136A (en) Yolov5 and BCNN-based vehicle orientation identification method and system
CN114757977A (en) Moving object track extraction method fusing improved optical flow and target detection network
Zhu et al. Boosting RGB-D salient object detection with adaptively cooperative dynamic fusion network
Gong et al. Multi-target trajectory tracking in multi-frame video images of basketball sports based on deep learning
Fang et al. A visual tracking algorithm via confidence-based multi-feature correlation filtering
CN112818771A (en) Multi-target tracking algorithm based on feature aggregation
CN116129386A (en) Method, system and computer readable medium for detecting a travelable region

Legal Events

Code — Description
PB01 — Publication
SE01 — Entry into force of request for substantive examination
GR01 — Patent grant