CN112561969B - Mobile robot infrared target tracking method and system based on unsupervised optical flow network


Info

Publication number
CN112561969B
CN112561969B (application CN202011564796.5A)
Authority
CN
China
Prior art keywords
frame
feature map
previous
weight
optical flow
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011564796.5A
Other languages
Chinese (zh)
Other versions
CN112561969A (en)
Inventor
何震宇 (He Zhenyu)
刘乔 (Liu Qiao)
白扬 (Bai Yang)
杨超 (Yang Chao)
万玉东 (Wan Yudong)
孙旭岩 (Sun Xuyan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology
Priority to CN202011564796.5A
Publication of CN112561969A
Application granted
Publication of CN112561969B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/269 Analysis of motion using gradient-based methods
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/02 Affine transformations
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention provides a mobile robot infrared target tracking method and system based on an unsupervised optical flow network. The feature maps of the previous T frames are extracted, and an unsupervised, end-to-end trainable optical flow network computes the optical flow from each earlier frame to the previous frame, after which the earlier feature maps are warped into alignment with the previous frame. The aligned features of the previous frames are fused through spatial and temporal attention mechanisms to obtain a feature map of the target, and a correlation filter applied to this feature map and the feature map of the frame to be predicted finally yields the tracking result. The beneficial effects of the invention are as follows: the invention uses an unsupervised optical flow network capable of end-to-end training to extract optical flow features and fuse the features of the previous frames, thereby improving the tracking effect. In particular, fast-moving targets frequently arise during the tracking process of a mobile robot, and such targets can be tracked effectively by using the invention.

Description

Mobile robot infrared target tracking method and system based on unsupervised optical flow network
Technical Field
The invention relates to the technical field of visual target tracking, in particular to a mobile robot infrared target tracking method and system based on an unsupervised optical flow network.
Background
Visual perception is an important component of intelligent robotic perception systems, and visual target tracking is one of its supporting techniques: a robot must first locate and track a target before it can interact with it. Visual target tracking is a research hotspot in the field of intelligent robotics and is widely applied to robot visual tracking, navigation, intelligent monitoring, and related directions. The visual target tracking task is, given the position and size of the target to be tracked in the initial frame of a video, to predict the position and size of that target in subsequent video frames. Because infrared imaging does not depend on the intensity of ambient light but only on the thermal radiation of objects, infrared target tracking can follow a target under low visibility and even in complete darkness, which makes it very suitable for robot visual tracking tasks.
A core issue in tracking is how to accurately detect and localize a target through scene changes such as occlusion and shape deformation. In recent years, visual tracking methods based on the Discriminative Correlation Filter (DCF) have received much attention. Following the great success of convolutional neural networks (CNNs) in target recognition, researchers introduced CNNs into target tracking algorithms, which significantly improves the accuracy and robustness of the tracking algorithms. Still other target tracking algorithms use optical flow to further improve performance.
However, most existing trackers consider only the appearance features of the target in the current frame and make little use of inter-frame information, which wastes video feature information and degrades tracker performance. Mobile robots frequently must track fast-moving targets, and these trackers perform poorly in such situations. Although some trackers use optical flow to improve performance, the optical flow features they use are off-the-shelf and are not trained for the tracking problem, so they do not fully exploit optical flow information.
Disclosure of Invention
The invention provides a mobile robot infrared target tracking method based on an unsupervised optical flow network, comprising the following steps:
step S1: extracting the feature maps of the previous T frames, and computing, with an unsupervised optical flow network, the optical flow from each of the second-previous through T-th-previous frames to the previous frame;
step S2: warping the feature maps of the second-previous through T-th-previous frames into alignment with the previous frame by affine transformation according to the corresponding optical flows;
step S3: feeding the feature map of the previous frame and the other, aligned feature maps into a spatial attention network to obtain a weight map for each frame;
step S4: further weighting the weight maps using a temporal attention network;
step S5: weighting the feature map of the previous frame and the other aligned feature maps with the resulting weight maps to obtain the target feature map;
step S6: extracting the feature map of the frame to be predicted, and obtaining the tracking result with a filter from that feature map and the feature map obtained in step S5.
As a further improvement of the invention, in said step S1, the feature maps of the previous T frames are computed with a feature extraction network.
As a further development of the invention, in said step S1, an unsupervised, end-to-end trainable optical flow network is used to compute the optical flow from each of the second-previous through T-th-previous frames to the previous frame.
As a further improvement of the present invention, in the step S2, the feature map is aligned according to the optical flow using the affine transformation shown in formula (1):

$$f^{m}_{i\to t-1}(p)=\sum_{q}K(q,\,p+\delta p)\,f^{m}_{i}(q) \tag{1}$$

where p is a coordinate on the original feature map, f^m_{i→t−1}(p) is the value of the m-th channel at point p after alignment by the optical flow, δp is the optical flow at p, q ranges over the coordinates of the points of the feature map, K is the bilinear interpolation kernel, and f^m_i(q) is the value of the m-th channel at point q on the original feature map.
As a further development of the invention, in said step S3, a bottleneck network is first used to process each feature map f into an embedding f̂, and the weight of each frame is then computed from f̂ according to formula (2):

$$\omega_{i\to t-1}(p)=\exp\!\left(\frac{\hat f_{i\to t-1}(p)\cdot \hat f_{t-1}(p)}{\lvert \hat f_{i\to t-1}(p)\rvert\,\lvert \hat f_{t-1}(p)\rvert}\right) \tag{2}$$

where f̂_{i→t−1}(p) is the value at point p of the embedded feature map obtained by affine transformation with the optical flow toward the (t−1)-th frame, and f̂_{t−1}(p) is the value at point p of the embedded feature map of the (t−1)-th frame.
As a further improvement of the invention, in the step S4, the weight map of each frame is processed sequentially by a global pooling layer and three fully connected layers to obtain a scalar weight for that frame's weight map, and the weight maps are then rescaled by these weights to obtain the final weight map of each frame's feature map.
As a further improvement of the present invention, in the step S5, the aligned feature maps of the several frames are weighted using formula (3) to obtain the fused feature map:

$$\bar f_{t-1}(p)=\sum_{i=t-T}^{t-1}\omega_{i\to t-1}(p)\,f_{i\to t-1}(p) \tag{3}$$

where f_{i→t−1} is the aligned feature map of the i-th frame and ω_{i→t−1} is its corresponding weight map.
The invention also provides a mobile robot infrared target tracking system based on the unsupervised optical flow network, comprising:
an extraction module: used for extracting the feature maps of the previous T frames, and for computing, with an unsupervised optical flow network, the optical flow from each of the second-previous through T-th-previous frames to the previous frame;
an alignment module: used for warping the feature maps of the second-previous through T-th-previous frames into alignment with the previous frame by affine transformation according to the corresponding optical flows;
a calculation module: used for feeding the feature map of the previous frame and the other, aligned feature maps into a spatial attention network to obtain a weight map for each frame;
a weighting module: used for further weighting the weight maps with a temporal attention network;
a first processing module: used for weighting the feature map of the previous frame and the other aligned feature maps with the resulting weight maps to obtain the target feature map;
a second processing module: used for extracting the feature map of the frame to be predicted, and for obtaining the tracking result with a filter from that feature map and the feature map obtained by the first processing module.
As a further improvement of the present invention, in the alignment module, the feature map is aligned according to the optical flow using the affine transformation shown in formula (1):

$$f^{m}_{i\to t-1}(p)=\sum_{q}K(q,\,p+\delta p)\,f^{m}_{i}(q) \tag{1}$$

where p is a coordinate on the original feature map, f^m_{i→t−1}(p) is the value of the m-th channel at point p after alignment by the optical flow, δp is the optical flow at p, q ranges over the coordinates of the points of the feature map, K is the bilinear interpolation kernel, and f^m_i(q) is the value of the m-th channel at point q on the original feature map.
As a further development of the invention, in the calculation module, a bottleneck network is first used to process each feature map f into an embedding f̂, and the weight of each frame is then computed from f̂ according to formula (2):

$$\omega_{i\to t-1}(p)=\exp\!\left(\frac{\hat f_{i\to t-1}(p)\cdot \hat f_{t-1}(p)}{\lvert \hat f_{i\to t-1}(p)\rvert\,\lvert \hat f_{t-1}(p)\rvert}\right) \tag{2}$$

where f̂_{i→t−1}(p) is the value at point p of the embedded feature map obtained by affine transformation with the optical flow toward the (t−1)-th frame, and f̂_{t−1}(p) is the value at point p of the embedded feature map of the (t−1)-th frame.
As a further improvement of the invention, in the weighting module, the weight map of each frame is processed by a global pooling layer and three fully connected layers in sequence to obtain a scalar weight for that frame's weight map, and the weight maps are then rescaled by these weights to obtain the final weight map of each frame's feature map.
As a further improvement of the invention, in the first processing module, the aligned feature maps of the several frames are weighted using formula (3) to obtain the fused feature map:

$$\bar f_{t-1}(p)=\sum_{i=t-T}^{t-1}\omega_{i\to t-1}(p)\,f_{i\to t-1}(p) \tag{3}$$

where f_{i→t−1} is the aligned feature map of the i-th frame and ω_{i→t−1} is its corresponding weight map.
The beneficial effects of the invention are as follows: the invention uses an unsupervised optical flow network capable of end-to-end training to extract optical flow features and fuse the features of the previous frames, thereby improving the tracking effect. In particular, fast-moving targets frequently arise during the tracking process of a mobile robot, and such targets can be tracked effectively by using the invention. In addition, the end-to-end trainable optical flow network learns optical flow information better suited to the infrared tracking task, and the algorithm runs in real time.
Drawings
Fig. 1 is a flow chart of the method of the present invention.
Detailed Description
As shown in Fig. 1, the invention discloses a mobile robot infrared target tracking method based on an unsupervised optical flow network, comprising the following steps:
step S1: extracting the feature maps of the previous T frames, and computing, with an unsupervised optical flow network, the optical flow from each of the second-previous through T-th-previous frames to the previous frame;
step S2: warping the feature maps of the second-previous through T-th-previous frames into alignment with the previous frame by affine transformation according to the corresponding optical flows;
step S3: feeding the feature map of the previous frame and the other, aligned feature maps into a spatial attention network to obtain a weight map for each frame;
step S4: further weighting the weight maps using a temporal attention network;
step S5: weighting the feature map of the previous frame and the other aligned feature maps with the resulting weight maps to obtain the target feature map;
step S6: extracting the feature map of the frame to be predicted, and obtaining the tracking result with a filter from that feature map and the feature map obtained in step S5.
Aiming at the problem of robot thermal infrared tracking, the invention provides a mobile robot infrared target tracking method based on an unsupervised optical flow network, which can improve the performance of a tracker in infrared tracking.
The principle of the invention is described as follows:
First, the feature maps of the previous frames are extracted using a feature extraction network, and an unsupervised, end-to-end trainable optical flow network is used to compute the optical flow from each of the second-previous through T-th-previous frames to the previous frame.
The feature maps are then aligned according to the optical flow using an affine transformation. Here p denotes a coordinate on the feature map, δp the optical flow at p, q the coordinates of each point of the feature map, K the bilinear interpolation kernel, and f^m_i(q) the value of the m-th channel at point q of the original feature map. The value of the m-th channel at point p of the aligned feature map is computed as in formula (1):

$$f^{m}_{i\to t-1}(p)=\sum_{q}K(q,\,p+\delta p)\,f^{m}_{i}(q) \tag{1}$$
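For concreteness, the following is a direct, unoptimised transcription of formula (1); the function name, the (dy, dx) flow layout, and the tensor shapes are assumptions of this sketch, and in practice the same bilinear warp is what torch.nn.functional.grid_sample computes far more efficiently:

```python
# Literal transcription of formula (1): each output value is a sum over
# source positions q, weighted by the bilinear kernel K(q, p + delta_p).
import math
import torch

def warp_literal(feat, flow):
    """feat: (C, H, W) source feature map; flow: (H, W, 2) displacements (dy, dx).
    Returns f_aligned with f_aligned[m, p] = sum_q K(q, p + delta_p) * feat[m, q]."""
    C, H, W = feat.shape
    out = torch.zeros_like(feat)
    for py in range(H):
        for px in range(W):
            sy = py + float(flow[py, px, 0])     # sampling point p + delta_p (y)
            sx = px + float(flow[py, px, 1])     # sampling point p + delta_p (x)
            y0, x0 = math.floor(sy), math.floor(sx)
            for qy in (y0, y0 + 1):              # K vanishes beyond the 4 nearest q
                for qx in (x0, x0 + 1):
                    if 0 <= qy < H and 0 <= qx < W:
                        k = max(0.0, 1 - abs(qy - sy)) * max(0.0, 1 - abs(qx - sx))
                        out[:, py, px] += k * feat[:, qy, qx]
    return out

feat = torch.rand(2, 5, 5)
flow = 0.5 * torch.ones(5, 5, 2)                 # uniform half-pixel shift
print(warp_literal(feat, flow).shape)            # torch.Size([2, 5, 5])
```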
On the basis of the previous step, the weight of each frame is computed with a spatial attention mechanism. First, a bottleneck network processes each feature map f to obtain an embedding f̂, and the weight of each frame is then computed from f̂. Here f̂_{i→t−1}(p) denotes the value at point p of the embedded feature map that was aligned by the optical flow toward the (t−1)-th frame, and f̂_{t−1}(p) the value at point p of the embedded feature map of the (t−1)-th frame. The weight map of each frame is computed as in formula (2):

$$\omega_{i\to t-1}(p)=\exp\!\left(\frac{\hat f_{i\to t-1}(p)\cdot \hat f_{t-1}(p)}{\lvert \hat f_{i\to t-1}(p)\rvert\,\lvert \hat f_{t-1}(p)\rvert}\right) \tag{2}$$
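A minimal sketch of this spatial-attention weight follows; the 1x1-convolution bottleneck is an assumption (the text does not specify the bottleneck architecture), while the exp-of-cosine-similarity form matches formula (2):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical bottleneck embedding network; its architecture is an assumption.
bottleneck = nn.Sequential(nn.Conv2d(8, 4, 1), nn.ReLU(), nn.Conv2d(4, 8, 1))

def spatial_weight(aligned_feat, prev_feat):
    """Formula (2): per-pixel exp-cosine weight; returns a (B, 1, H, W) map."""
    e_a, e_p = bottleneck(aligned_feat), bottleneck(prev_feat)
    cos = F.cosine_similarity(e_a, e_p, dim=1, eps=1e-8)   # (B, H, W)
    return torch.exp(cos).unsqueeze(1)

aligned = torch.rand(1, 8, 16, 16)
prev = torch.rand(1, 8, 16, 16)
print(spatial_weight(aligned, prev).shape)                 # torch.Size([1, 1, 16, 16])
```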
Each frame's weight map obtained in the previous step is then processed by a global pooling layer and three fully connected layers in sequence to obtain a scalar weight for that frame's weight map. The weight maps are then rescaled by these scalars to obtain the final weight map of each frame's feature map.
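A sketch of this temporal-attention step under stated assumptions: the text fixes only a global pooling layer followed by three fully connected layers, so the hidden sizes, the sigmoid output, and the class name below are illustrative choices:

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Global pooling + three FC layers -> one scalar per frame's weight map."""
    def __init__(self, hidden=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, weight_maps):
        """weight_maps: list of (B, 1, H, W) spatial weight maps, one per frame."""
        out = []
        for w in weight_maps:
            pooled = w.mean(dim=(2, 3))              # global average pooling -> (B, 1)
            scale = self.fc(pooled)                  # per-frame scalar weight
            out.append(w * scale.view(-1, 1, 1, 1))  # rescale that frame's weight map
        return out

maps = [torch.rand(1, 1, 16, 16) for _ in range(3)]
print(TemporalAttention()(maps)[0].shape)            # torch.Size([1, 1, 16, 16])
```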
And weighting the aligned frames of feature images by using the final weight image obtained in the last step to obtain a fused feature image. By usingRepresenting the aligned ith frame feature map, using omega i→t-1 Represented as a corresponding weight graph. The calculation method of the fused feature map is as shown in formula (3):
Finally, the feature map of the current frame is extracted, and the tracking result is obtained with a correlation filter method from that feature map and the fused target feature map.
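The description names a correlation filter method without fixing a particular variant; below is a minimal single-channel, MOSSE-style sketch in the Fourier domain, purely as an illustration of how a fused template map and a search-frame map could produce a response whose peak gives the tracking result:

```python
import torch

def train_filter(feat, label, lam=1e-2):
    """feat, label: (H, W). Returns the MOSSE-style filter in the Fourier domain."""
    F_ = torch.fft.fft2(feat)
    G = torch.fft.fft2(label)
    return (G * torch.conj(F_)) / (F_ * torch.conj(F_) + lam)

def respond(h_star, feat):
    """Correlation response map; its peak is the predicted target position."""
    return torch.fft.ifft2(torch.fft.fft2(feat) * h_star).real

h = w = 32
ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
label = torch.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * 2.0 ** 2))
template = torch.rand(h, w)               # stands in for one channel of the fused map
h_star = train_filter(template, label)
resp = respond(h_star, template)          # search over the template itself
print(divmod(int(resp.argmax()), w))      # peak at the label centre: (16, 16)
```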
In summary, extensive study of the tracking process shows that feature extraction is a critical step. The proposed tracking method fuses the features of the previous frames and thus fully exploits historical target features. Fusing the feature maps of multiple frames raises two problems: first, the target position differs across the per-frame feature maps, so the maps must be aligned; second, the per-frame feature maps differ in importance, so each frame must be weighted.
The present invention uses an unsupervised, end-to-end trainable optical flow network to compute the optical flow from each earlier frame to the previous frame and uses these flows to align the feature maps. Spatial and temporal attention mechanisms then weight the per-frame feature maps, yielding the fused feature map.
Finally, the present invention uses a correlation filter approach to achieve tracking of the target.
The invention extracts the feature maps of the previous frames with a feature extraction network, computes the optical flow between the previous frame and the earlier frames with an unsupervised, end-to-end trainable optical flow network, aligns the earlier frames using these flows, weights the feature maps with a spatial attention network and a temporal attention network in turn to obtain the fused feature map, and obtains the tracking result with a correlation filter method. Unlike existing tracking algorithms that use off-the-shelf optical flow networks, the invention uses an unsupervised optical flow network capable of end-to-end training and can therefore learn optical flow information better suited to the infrared tracking task; the algorithm runs in real time. By exploiting fused multi-frame features, the invention effectively improves the tracking effect: even when the image quality of the previous frame is poor, it can be compensated by the features of earlier frames, which makes the method very suitable for a robot tracking a fast-moving target.
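To make the data flow concrete, the following self-contained sketch strings steps S1 to S5 together, with a toy one-layer feature extractor and a zero-motion stub standing in for the learned flow network; every name and shape is an assumption of this illustration, the attention steps are reduced to plain exp-cosine weighting, and the correlation-filter stage (S6) is omitted since it was sketched above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T, H, W = 4, 64, 64
feature_net = nn.Conv2d(1, 8, 3, padding=1)       # S1: toy stand-in feature extractor

def flow_net(src, ref):
    """S1: stub for the unsupervised optical flow network (predicts zero motion)."""
    return torch.zeros(src.shape[0], 2, H, W)     # channels: (dx, dy)

def warp(feat, flow):
    """S2: bilinear warp of feat by flow, the grid_sample form of formula (1)."""
    b, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).float().unsqueeze(0)   # (1, H, W, 2)
    grid = base + flow.permute(0, 2, 3, 1)
    grid = 2 * grid / torch.tensor([w - 1.0, h - 1.0]) - 1      # normalise to [-1, 1]
    return F.grid_sample(feat, grid, mode="bilinear", align_corners=True)

frames = [torch.rand(1, 1, H, W) for _ in range(T)]   # previous T frames; last is t-1
feats = [feature_net(f) for f in frames]                                        # S1
aligned = [warp(feats[i], flow_net(frames[i], frames[-1])) for i in range(T - 1)]  # S2
aligned.append(feats[-1])                             # the previous frame is not warped
w_maps = [torch.exp(F.cosine_similarity(a, feats[-1], dim=1)).unsqueeze(1)      # S3
          for a in aligned]
total = sum(w_maps)                                   # S4 simplified away; normalise
fused = sum((w / total) * a for w, a in zip(w_maps, aligned))                    # S5
print(fused.shape)   # torch.Size([1, 8, 64, 64]); S6 runs the correlation filter on this
```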
The foregoing is a further detailed description of the invention in connection with the preferred embodiments, and it is not intended that the invention be limited to the specific embodiments described. It will be apparent to those skilled in the art that several simple deductions or substitutions may be made without departing from the spirit of the invention, and these should be considered to be within the scope of the invention.

Claims (6)

1. A mobile robot infrared target tracking method based on an unsupervised optical flow network, characterized by comprising the following steps:
step S1: extracting the feature maps of the previous T frames, and computing, with an unsupervised optical flow network, the optical flow from each of the second-previous through T-th-previous frames to the previous frame;
step S2: warping the feature maps of the second-previous through T-th-previous frames into alignment with the previous frame by affine transformation according to the corresponding optical flows;
step S3: feeding the feature map of the previous frame and the other, aligned feature maps into a spatial attention network to obtain a weight map for each frame;
step S4: further weighting the weight maps using a temporal attention network;
step S5: weighting the feature map of the previous frame and the other aligned feature maps with the resulting weight maps to obtain the target feature map;
step S6: extracting the feature map of the frame to be predicted, and obtaining the tracking result with a filter from that feature map and the feature map obtained in step S5;
in the step S2, the feature map is aligned according to the optical flow using the affine transformation shown in formula (1):

$$f^{m}_{i\to t-1}(p)=\sum_{q}K(q,\,p+\delta p)\,f^{m}_{i}(q) \tag{1}$$

where p is a coordinate on the original feature map, f^m_{i→t−1}(p) is the value of the m-th channel at point p after alignment by the optical flow, δp is the optical flow at p, q ranges over the coordinates of the points of the feature map, K is the bilinear interpolation kernel, and f^m_i(q) is the value of the m-th channel at point q on the original feature map;
in said step S3, a bottleneck network is first used to process each feature map f into an embedding f̂, and the weight of each frame is then computed from f̂ according to formula (2):

$$\omega_{i\to t-1}(p)=\exp\!\left(\frac{\hat f_{i\to t-1}(p)\cdot \hat f_{t-1}(p)}{\lvert \hat f_{i\to t-1}(p)\rvert\,\lvert \hat f_{t-1}(p)\rvert}\right) \tag{2}$$

where f̂_{i→t−1}(p) is the value at point p of the embedded feature map obtained by affine transformation with the optical flow toward the (t−1)-th frame, and f̂_{t−1}(p) is the value at point p of the embedded feature map of the (t−1)-th frame.
2. The method according to claim 1, wherein in the step S4, the weight map of each frame is sequentially processed by a global pooling layer and three fully connected layers to obtain a scalar weight for that frame's weight map, and the weight maps are then rescaled by these weights to obtain the final weight map of each frame's feature map.
3. The method according to claim 2, wherein in the step S5, the aligned feature maps of the several frames are weighted using formula (3) to obtain the fused feature map:

$$\bar f_{t-1}(p)=\sum_{i=t-T}^{t-1}\omega_{i\to t-1}(p)\,f_{i\to t-1}(p) \tag{3}$$

where f_{i→t−1} is the aligned feature map of the i-th frame and ω_{i→t−1} is its corresponding weight map.
4. A mobile robot infrared target tracking system based on an unsupervised optical flow network, characterized by comprising:
an extraction module, configured to extract the feature maps of the previous T frames and to compute, with an unsupervised optical flow network, the optical flow from each of the second-previous through T-th-previous frames to the previous frame;
an alignment module, configured to warp the feature maps of the second-previous through T-th-previous frames into alignment with the previous frame by affine transformation according to the corresponding optical flows;
a calculation module, configured to feed the feature map of the previous frame and the other, aligned feature maps into a spatial attention network to obtain a weight map for each frame;
a weighting module, configured to further weight the weight maps with a temporal attention network;
a first processing module, configured to weight the feature map of the previous frame and the other aligned feature maps with the resulting weight maps to obtain the target feature map;
a second processing module, configured to extract the feature map of the frame to be predicted and to obtain the tracking result with a filter from that feature map and the feature map obtained by the first processing module;
in the alignment module, the feature map is aligned according to the optical flow using the affine transformation shown in formula (1):

$$f^{m}_{i\to t-1}(p)=\sum_{q}K(q,\,p+\delta p)\,f^{m}_{i}(q) \tag{1}$$

where p is a coordinate on the original feature map, f^m_{i→t−1}(p) is the value of the m-th channel at point p after alignment by the optical flow, δp is the optical flow at p, q ranges over the coordinates of the points of the feature map, K is the bilinear interpolation kernel, and f^m_i(q) is the value of the m-th channel at point q on the original feature map;
in the calculation module, a bottleneck network is first used to process each feature map f into an embedding f̂, and the weight of each frame is then computed from f̂ according to formula (2):

$$\omega_{i\to t-1}(p)=\exp\!\left(\frac{\hat f_{i\to t-1}(p)\cdot \hat f_{t-1}(p)}{\lvert \hat f_{i\to t-1}(p)\rvert\,\lvert \hat f_{t-1}(p)\rvert}\right) \tag{2}$$

where f̂_{i→t−1}(p) is the value at point p of the embedded feature map obtained by affine transformation with the optical flow toward the (t−1)-th frame, and f̂_{t−1}(p) is the value at point p of the embedded feature map of the (t−1)-th frame.
5. The infrared target tracking system of claim 4, wherein in the weighting module, the weight map of each frame is sequentially processed by a global pooling layer and three fully connected layers to obtain a scalar weight for that frame's weight map, and the weight maps are then rescaled by these weights to obtain the final weight map of each frame's feature map.
6. The mobile robot infrared target tracking system of claim 5, wherein in the first processing module, the aligned feature maps of the several frames are weighted using formula (3) to obtain the fused feature map:

$$\bar f_{t-1}(p)=\sum_{i=t-T}^{t-1}\omega_{i\to t-1}(p)\,f_{i\to t-1}(p) \tag{3}$$

where f_{i→t−1} is the aligned feature map of the i-th frame and ω_{i→t−1} is its corresponding weight map.
CN202011564796.5A 2020-12-25 2020-12-25 Mobile robot infrared target tracking method and system based on unsupervised optical flow network Active CN112561969B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011564796.5A CN112561969B (en) 2020-12-25 2020-12-25 Mobile robot infrared target tracking method and system based on unsupervised optical flow network


Publications (2)

Publication Number Publication Date
CN112561969A (en) 2021-03-26
CN112561969B (en) 2023-07-25

Family

ID=75032500

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011564796.5A Active CN112561969B (en) 2020-12-25 2020-12-25 Mobile robot infrared target tracking method and system based on unsupervised optical flow network

Country Status (1)

Country Link
CN (1) CN112561969B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619655A * 2019-08-23 2019-12-27 深圳大学 (Shenzhen University) Target tracking method and device integrating optical flow information and Siamese framework
CN111476822A * 2020-04-08 2020-07-31 浙江大学 (Zhejiang University) Laser radar target detection and motion tracking method based on scene flow

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9373174B2 (en) * 2014-10-21 2016-06-21 The United States Of America As Represented By The Secretary Of The Air Force Cloud based video detection and tracking system
US10547871B2 (en) * 2017-05-05 2020-01-28 Disney Enterprises, Inc. Edge-aware spatio-temporal filtering and optical flow estimation in real time

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110619655A * 2019-08-23 2019-12-27 深圳大学 (Shenzhen University) Target tracking method and device integrating optical flow information and Siamese framework
CN111476822A * 2020-04-08 2020-07-31 浙江大学 (Zhejiang University) Laser radar target detection and motion tracking method based on scene flow

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
Deep Convolutional Neural Networks for Thermal Infrared Object Tracking; Qiao Liu et al.; Knowledge-Based Systems; pp. 189-198 *
Hierarchical Spatial-aware Siamese Network for Thermal Infrared Object Tracking; Xin Li et al.; Knowledge-Based Systems; pp. 71-81 *
Learning Deep Multi-level Similarity for Thermal Infrared Object Tracking; Qiao Liu et al.; IEEE Transactions on Multimedia, 2020; pp. 1-10 *
LSOTB-TIR: A Large Scale High Diversity Thermal Infrared Object Tracking Benchmark; Qiao Liu et al.; Proceedings of the 28th ACM International Conference on Multimedia; pp. 3847-3856 *
MSST-ResNet: Deep Multi-scale Spatiotemporal Features for Robust Visual Object Tracking; Bin Liu et al.; Knowledge-Based Systems; pp. 235-252 *
MSSTResNet-TLD: A Robust Tracking Method Based on Tracking-Learning-Detection Framework by Using Multi-scale Spatio-temporal Residual Network Feature Model; Nana Fan et al.; Neurocomputing; pp. 1-20 *
Multi-Task Driven Feature Models for Thermal Infrared Tracking; Qiao Liu et al.; Proceedings of the 34th AAAI Conference on Artificial Intelligence, 2020; pp. 11604-11611 *
PTB-TIR: A Thermal Infrared Pedestrian Tracking Benchmark; Qiao Liu et al.; IEEE Transactions on Multimedia; pp. 666-675 *
Visual Object Tracking via Coefficients Constrained Exclusive Group LASSO; Xiao Ma et al.; Machine Vision and Applications; pp. 749-763 *

Also Published As

Publication number Publication date
CN112561969A (en) 2021-03-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant