CN115439771A - Improved DSST infrared laser spot tracking method - Google Patents

Improved DSST infrared laser spot tracking method Download PDF

Info

Publication number
CN115439771A
CN115439771A
Authority
CN
China
Prior art keywords
laser spot
tracking
dsst
value
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210866567.1A
Other languages
Chinese (zh)
Inventor
王峰
裴林聪
吴国瑞
康智强
李�杰
赵伟
周平华
马晨
孔维立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taiyuan University of Technology
Original Assignee
Taiyuan University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taiyuan University of Technology filed Critical Taiyuan University of Technology
Priority to CN202210866567.1A priority Critical patent/CN115439771A/en
Publication of CN115439771A publication Critical patent/CN115439771A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/23Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on positionally close patterns or neighbourhood relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/34Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Automation & Control Theory (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of spot tracking, and in particular relates to an improved DSST infrared laser spot tracking method comprising the following steps: collect a laser spot motion video, establish a target detection database, preprocess the infrared images with a bilateral filtering algorithm, and train a YOLOv5 network model offline; read the first frame of the video and perform spot recognition with the YOLOv5 network model to obtain the laser spot position; invoke the DSST algorithm to track the moving laser spot and determine the target position and scale information; compute the output responses of the previous n consecutive frames of the current frame, calculate their mean and variance, and judge whether the current frame is abnormal. The invention introduces a bilateral filtering algorithm to correct uneven illumination while preserving edges and removing noise. It detects outliers from the current frame's output response, so that loss of the laser spot is accurately judged and the spot is re-identified. Long-term tracking of the laser spot is thereby achieved, and the anti-interference performance and success rate of the tracking algorithm are improved.

Description

Improved DSST infrared laser spot tracking method
Technical Field
The invention belongs to the technical field of spot tracking methods, and in particular relates to an improved DSST (Discriminative Scale Space Tracker) infrared laser spot tracking method.
Background
With the development of military science and technology, the laser weapon, as a novel weapon, has become an important means of reconnaissance and precision guidance thanks to the strong concealment, high precision and high moving speed of its spot. Meanwhile, detecting and tracking the laser spot makes it possible to acquire the real-time working state of the laser weapon accurately, which is of great significance both for developing laser weapons and for countering them. However, laser spots, especially in infrared images, are highly mobile, have blurred edges, and are easily occluded and affected by illumination, which makes them difficult to capture. How to detect and track laser spots accurately has therefore become a key research topic at home and abroad.
At present, target tracking algorithms fall mainly into correlation filtering methods and deep learning methods. Since Bolme et al. first applied the correlation filtering concept to target tracking in 2010, a large number of correlation-filter-based trackers have emerged. Their main idea is to compare the similarity of target regions between two frames; the region most similar to the target region of the previous frame is taken as the target region of the new frame. The MOSSE algorithm proposed by Bolme et al. was the pioneering work on correlation filter tracking, followed by the KCF algorithm, which extracts multi-channel features, the DSST algorithm, which adds multi-scale estimation, and others. Deep learning methods, whose tracking models are trained on large amounts of data, are more accurate than correlation filtering, but their real-time performance is weaker and cannot adapt to rapidly changing military scenarios.
Disclosure of Invention
Aiming at the technical problem that laser spots cannot be tracked for a long time owing to rapid movement, scale variation, uneven illumination and severe occlusion, the invention provides an improved DSST infrared laser spot tracking method with a high recognition rate, small error and accurate outlier detection.
In order to solve the technical problems, the invention adopts the technical scheme that:
an improved DSST infrared laser spot tracking method, comprising the steps of:
s1, collecting a laser spot moving video, establishing a target detection database, preprocessing an infrared image by using a bilateral filtering algorithm, and training a YOLOv5 network model in an off-line manner;
s2, preprocessing the first frame of the infrared video by using a bilateral filtering algorithm;
s3, reading a first frame of the video, and performing light spot identification by using a YOLOv5 network model to obtain the position of a laser light spot;
s4, calling a DSST algorithm to perform laser spot movement tracking, and determining target position information and scale information;
s5, calculating output responses of the first n continuous frames of the current frame, calculating the mean value and the variance of the output responses, and judging whether abnormality occurs or not; if an abnormal value occurs, entering S5, otherwise, entering S6;
s6, starting a YOLOv5 target detection algorithm to detect the laser spot position again, returning to the current frame spot position, adjusting the tracker parameters, and entering S4;
s7, selecting the target position in a frame mode, and entering S4 until the tracking is finished.
The method for preprocessing the infrared image by using the bilateral filtering algorithm in S1 is as follows: bilateral filtering is a nonlinear filtering method that combines spatial proximity and grey-level similarity to smooth the image while preserving edges. Its expression is:

\hat{I}(x,y) = \frac{\sum_{(i,j)\in M_{x,y}} G_s(i,j)\,G_r(i,j)\,I(i,j)}{\sum_{(i,j)\in M_{x,y}} G_s(i,j)\,G_r(i,j)}

G_s(i,j) = \exp\left(-\frac{(i-x)^2+(j-y)^2}{2\sigma_s^2}\right)

G_r(i,j) = \exp\left(-\frac{(I(i,j)-I(x,y))^2}{2\sigma_r^2}\right)

where \hat{I}(x,y) is the processed image; M_{x,y} denotes the set of spatial-neighbourhood pixels centred at (x, y); I(x,y) is the centre pixel value; I(i,j) is the pixel value at (i,j) in the spatial neighbourhood; G_s(i,j) and G_r(i,j) are the spatial-proximity and grey-level-similarity weights; and \sigma_s and \sigma_r are the filter parameters.
The method for offline training of the YOLOv5 network model in S1 is as follows: the YOLOv5 network model consists of CSPDarknet, FPN and Yolo Head. The CSPDarknet module serves as the backbone feature extraction network and extracts features from the image; the FPN module performs enhanced feature extraction; and the Yolo Head module produces the prediction result.
The method for determining the target position information in S4 is as follows:
First, a group of greyscale image patches f_1, f_2, f_3, …, f_t is extracted for training, and the corresponding desired outputs g_1, g_2, g_3, …, g_t are obtained by filtering. From these, an optimal filter h_t satisfying minimum mean squared error at time t is constructed, where h_t minimizes:

\varepsilon = \sum_{j=1}^{t} \left\| h_t \star f_j - g_j \right\|^2 + \lambda \left\| h_t \right\|^2

The minimum is solved in the Fourier domain by:

H_t = \frac{\sum_{j=1}^{t} \bar{G}_j F_j}{\sum_{j=1}^{t} \bar{F}_j F_j + \lambda}

After the correlation filter H_t is computed, the response y for a new frame is calculated as:

y = F^{-1}(H_t Z)

where F_j, G_j and Z are the Fourier transforms of f_j, g_j and the new image patch, and the bar denotes complex conjugation. The position where y attains its maximum is taken as the estimated target position in the new frame.
The method for determining the target scale information in S4 is as follows: assuming the target size in the current frame is P × R and the number of scales is S, candidate patches of size

a^{n} P \times a^{n} R, \quad n \in \left\{ -\frac{S-1}{2}, \dots, \frac{S-1}{2} \right\}

are extracted around the target centre, where a is the scale step factor; the scale whose filter response is largest is taken as the target scale.
the method for judging whether the abnormality occurs in the step S5 comprises the following steps: v. the c Represents the current frame output response when v c When the absolute value of the difference with the mean value mu is smaller than lambda sigma, the target is judged to be lost or about to be lost, and the abnormal value judgment formula is as follows:
Figure BDA0003759431590000041
the n represents the mean value and the variance of the output response of the first n-1 frames, and the set value n =40; λ =3 is set.
Compared with the prior art, the invention has the following beneficial effects:
the invention provides a fusion target detection algorithm and a discrimination scale space target tracking algorithm, and establishes an infrared laser spot tracking data set. Firstly, preprocessing an infrared video frame, introducing a bilateral filtering algorithm to correct uneven illumination, and achieving the purpose of edge protection and denoising; then, carrying out target detection on the first frame by using a YOLOv5 algorithm model, framing the light spot position, and calling a DSST algorithm to track the laser light spot; a loss re-detection module is introduced, abnormal value detection is carried out according to the current frame output response, and accurate judgment and re-identification are carried out on the loss of the laser facula; finally, long-time tracking of laser spots is realized, and the anti-interference performance and the success rate of a tracking algorithm are improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It should be apparent that the drawings in the following description are merely exemplary, and that other embodiments can be derived from the drawings provided by those of ordinary skill in the art without inventive effort.
The structures, proportions and sizes shown in this specification are intended only to accompany the disclosed content so that those skilled in the art can understand and read the invention; they do not limit the conditions under which the invention can be implemented. Any modification of structure, change of proportion or adjustment of size that does not affect the efficacy or achievable purpose of the invention still falls within the scope of the technical content disclosed herein.
FIG. 1 is a flow chart of an embodiment of the present invention.
FIG. 2 is a schematic diagram of the infrared image bilateral filtering preprocessing result of the present invention.
Fig. 3 is a diagram of the YOLOv5 network architecture of the present invention.
FIG. 4 is a graph showing the results of the experiment according to the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below. Obviously, the described embodiments are only some, not all, of the embodiments of the present application; the description serves to further explain the features and advantages of the invention, not to limit its claims. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without creative effort belong to the protection scope of the present application.
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
In this embodiment, as shown in fig. 1, an infrared laser spot tracking method for improving DSST includes the following specific steps:
step 101: the laser emitting device is used for collecting the light spot pictures and videos under the uneven illumination condition and the complex environment, and therefore an image resolution ratio of 640 x 640 light spot identification database and a laser light spot tracking database are established. The infrared image is preprocessed using a bilateral filtering algorithm as shown in fig. 2.
Step 101-1: and (3) carrying out recognition training on the light spot target recognition database by utilizing a YOLOv5 network model to obtain an offline model. The YOLOv5 network model consists of CSPDarknet, FPN and Yolo Head. The CSPDarknet layer is used as a main feature extraction network and is used for feature extraction of the image; and (3) performing reinforced feature extraction on the FPN layer, and obtaining a prediction result by utilizing a Yolo Head layer. The structure of the YOLOv5 network model is shown in fig. 3.
Step 102: and preprocessing the first frame of the infrared video by using a bilateral filtering algorithm.
Step 102-1: bilateral filtering is a nonlinear filtering method, and simultaneously extracts spatial proximity and gray proximity to achieve the purpose of smoothing images. The expression is as follows:
Figure BDA0003759431590000061
Figure BDA0003759431590000062
Figure BDA0003759431590000063
wherein,
Figure BDA0003759431590000064
is a processed image; m is a group of x,y Representing a set of spatial neighborhood pixels centered at (x, y); i (x, y) represents the center point pixel value; i (I, j) represents the pixel value at (I, j) in the spatial neighborhood pixel set; g s (i, j) and G r (i, j) represents spatial proximity and grayscale similarity; sigma s And σ r Are filter parameters. The bilateral filtering enables the light spot edge to be smoother while removing image noise and correcting uneven illumination.
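As an illustration of the bilateral filter above, the following is a minimal brute-force NumPy sketch. The kernel radius and the parameter values are arbitrary choices for demonstration, not values taken from the patent; a production implementation would typically use an optimized library routine instead.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Brute-force bilateral filter on a 2-D greyscale image.

    For each pixel (x, y), the output is a normalized sum over the
    neighbourhood M_{x,y} of I(i, j), weighted by a spatial Gaussian G_s
    and a grey-level (range) Gaussian G_r, as in the expressions above.
    """
    img = img.astype(np.float64)
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    out = np.empty_like(img)
    # Spatial weights depend only on the offset (di, dj), so precompute them.
    di, dj = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g_s = np.exp(-(di**2 + dj**2) / (2.0 * sigma_s**2))
    for x in range(h):
        for y in range(w):
            patch = pad[x:x + 2 * radius + 1, y:y + 2 * radius + 1]
            # Range weight: penalize grey-level difference from the centre pixel.
            g_r = np.exp(-(patch - img[x, y])**2 / (2.0 * sigma_r**2))
            weights = g_s * g_r
            out[x, y] = (weights * patch).sum() / weights.sum()
    return out
```

Because the range weight collapses across strong grey-level jumps, a sharp spot edge is smoothed far less than by a plain Gaussian blur, which is the edge-preserving behaviour the method relies on.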
Step 103: and reading the processed first frame of the video, and performing light spot identification by using an offline trained YOLOv5 network model to obtain the position of a laser light spot.
Step 104: and calling a DSST algorithm to perform laser spot movement tracking.
Step 104-1: determining the target position information specifically as follows:
firstly, a group of gray image blocks f is extracted by utilizing convolution filtering 1 ,f 2 ,f 3 ,…,f t For training, obtaining an output response g by filtering 1 ,g 2 ,g 3 ,…,g t Constructing an optimal filter h satisfying minimum mean square error at time t in these outputs t Wherein h is t Satisfies the following formula:
Figure BDA0003759431590000065
the minimum value can be solved by:
Figure BDA0003759431590000066
calculate the correlation filter H t Then, the response value y is calculated for a new frame as follows:
y=F -1 (H t Z)
and when the y is the maximum value, estimating the target position of the new frame of image.
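The translation filter of step 104-1 can be sketched as a single-channel MOSSE-style correlation filter in NumPy. The Gaussian desired output and the regularization value below are illustrative assumptions (the patent does not specify them), and the actual DSST operates on multi-channel HOG features rather than raw grey values; this sketch only shows the closed-form training and the response-map localization.

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired output g: a Gaussian peak centred in the patch."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - h // 2)**2 + (xs - w // 2)**2) / (2 * sigma**2))

def train_filter(patches, targets, lam=1e-2):
    """Closed-form minimizer of sum_j |H~ F_j - G_j|^2 + lam |H|^2.

    Returns H_t = (sum_j conj(F_j) G_j) / (sum_j conj(F_j) F_j + lam),
    so that the response is y = IFFT(H_t * Z) for a new patch spectrum Z.
    """
    A = np.zeros(patches[0].shape, dtype=complex)  # numerator
    B = np.zeros(patches[0].shape, dtype=complex)  # denominator
    for f, g in zip(patches, targets):
        F = np.fft.fft2(f)
        G = np.fft.fft2(g)
        A += G * np.conj(F)
        B += F * np.conj(F)
    return A / (B + lam)

def locate(H, patch):
    """Response map y = F^{-1}(H_t Z); the target sits at the argmax of y."""
    y = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    idx = np.unravel_index(np.argmax(y), y.shape)
    return (int(idx[0]), int(idx[1]))
```

Training on a patch with a centred blob and then evaluating on a circularly shifted copy moves the response peak by exactly the shift, which is what lets the tracker read off the target displacement frame to frame.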
Step 104-2: determining target scale information, utilizing a three-dimensional scale filter, assuming that f represents the target position center determined in the last step, and intercepting S image blocks with different scales on the basis of the target position center, wherein S =33; and establishing three-dimensional filtering through a Gaussian function to obtain response output g, and determining scale information according to the maximum value in g. Assuming that the size of the current frame is P x R and the scale is S, the scale estimation principle is as follows:
Figure BDA0003759431590000071
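Enumerating the scale pyramid of step 104-2 is a one-liner. The patent fixes S = 33 but does not state the scale step a, so a = 1.02 (the usual DSST setting) is assumed here for illustration.

```python
def scale_levels(P, R, S=33, a=1.02):
    """Candidate patch sizes a^n * P x a^n * R for n in {-(S-1)/2, ..., (S-1)/2}.

    The middle level (n = 0) is the current target size; levels below and
    above it probe shrinking and growing targets respectively.
    """
    half = (S - 1) // 2
    return [(a**n * P, a**n * R) for n in range(-half, half + 1)]
```

Each candidate patch is resized to a common template size before being fed to the scale filter, so the response over these S levels directly indexes the best scale.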
step 105: when the loss of the target occurs, the output response becomes sharply small, and then when the output response is large, it does not represent that the tracking state is good. Therefore, the output response of the previous n continuous frames of the current frame is calculated, the mean value and the variance of the output response are calculated, and whether the abnormality occurs or not is judged. And if an abnormal value occurs, entering the fifth step, otherwise, entering the sixth step.
Step 105-1: let v_c denote the output response of the current frame. When v_c deviates from the mean μ by more than λσ, the target can be judged to be lost or about to be lost. The abnormality criterion is:

\left| v_c - \mu \right| > \lambda \sigma

where μ and σ are the mean and standard deviation of the output responses of the previous n − 1 frames; the settings are n = 40 and λ = 3.
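The 3-sigma style check of step 105-1 amounts to a few lines. This sketch assumes the recent responses are held in a plain sequence; the threshold multiplier defaults to the patent's λ = 3.

```python
import numpy as np

def is_abnormal(responses, v_c, lam=3.0):
    """Flag the current response v_c as abnormal when it deviates from the
    mean of the previous responses by more than lam * their std deviation."""
    mu = np.mean(responses)
    sigma = np.std(responses)
    return abs(v_c - mu) > lam * sigma
```

In the tracker this fires when the response collapses on target loss, triggering the YOLOv5 re-detection of step 106.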
Step 106: and starting a YOLOv5 target detection algorithm to detect the position of the laser spot again, returning the position of the current frame spot, adjusting the parameters of the tracker, and entering step 104.
Step 107: and (5) selecting the target position, and entering the step 104 until the tracking is finished.
To verify the effectiveness of the algorithm of this embodiment, the simulation experiment was implemented in Python. The computer configuration was: Windows 10 operating system, Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz; PyCharm was used as the IDE.
As shown in fig. 4, the present embodiment is experimentally demonstrated that long-term tracking of laser spots can be effectively achieved.
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art, and all changes are included in the scope of the present invention.

Claims (6)

1. An improved DSST infrared laser spot tracking method is characterized in that: comprises the following steps:
s1, collecting a laser spot moving video, establishing a target detection database, preprocessing an infrared image by using a bilateral filtering algorithm, and training a YOLOv5 network model in an off-line manner;
s2, preprocessing the first frame of the infrared video by using a bilateral filtering algorithm;
s3, reading a first frame of the video, and performing light spot identification by using a YOLOv5 network model to obtain the position of a laser light spot;
s4, calling a DSST algorithm to perform laser spot movement tracking, and determining target position information and scale information;
S5, calculating the output responses of the previous n consecutive frames of the current frame, computing their mean and variance, and judging whether an abnormality occurs; if an abnormal value occurs, proceeding to S6, otherwise proceeding to S7;
S6, starting the YOLOv5 target detection algorithm to detect the laser spot position again, returning the current-frame spot position, adjusting the tracker parameters, and proceeding to S4;
S7, frame-selecting the target position and proceeding to S4, until the tracking is finished.
2. The improved DSST infrared laser spot tracking method according to claim 1, characterized in that the method for preprocessing the infrared image by using the bilateral filtering algorithm in S1 is as follows: bilateral filtering is a nonlinear filtering method that combines spatial proximity and grey-level similarity to smooth the image while preserving edges, with the expression:

\hat{I}(x,y) = \frac{\sum_{(i,j)\in M_{x,y}} G_s(i,j)\,G_r(i,j)\,I(i,j)}{\sum_{(i,j)\in M_{x,y}} G_s(i,j)\,G_r(i,j)}

G_s(i,j) = \exp\left(-\frac{(i-x)^2+(j-y)^2}{2\sigma_s^2}\right)

G_r(i,j) = \exp\left(-\frac{(I(i,j)-I(x,y))^2}{2\sigma_r^2}\right)

where \hat{I}(x,y) is the processed image; M_{x,y} denotes the set of spatial-neighbourhood pixels centred at (x, y); I(x,y) is the centre pixel value; I(i,j) is the pixel value at (i,j) in the spatial neighbourhood; G_s(i,j) and G_r(i,j) are the spatial-proximity and grey-level-similarity weights; and \sigma_s and \sigma_r are the filter parameters.
3. The improved DSST infrared laser spot tracking method according to claim 1, wherein: the method for off-line training of the YOLOv5 network model in the S1 comprises the following steps: the YOLOv5 network model consists of CSPDarknet, FPN and Yolo Head, and the CSPDarknet module is used as a main feature extraction network for extracting the features of the images; the FPN module enhances feature extraction, and a Yolo Head module is used for obtaining a prediction result.
4. The improved DSST infrared laser spot tracking method according to claim 1, characterized in that the method for determining the target position information in S4 is as follows:
First, a group of greyscale image patches f_1, f_2, f_3, …, f_t is extracted for training, and the corresponding desired outputs g_1, g_2, g_3, …, g_t are obtained by filtering. From these, an optimal filter h_t satisfying minimum mean squared error at time t is constructed, where h_t minimizes:

\varepsilon = \sum_{j=1}^{t} \left\| h_t \star f_j - g_j \right\|^2 + \lambda \left\| h_t \right\|^2

The minimum is solved in the Fourier domain by:

H_t = \frac{\sum_{j=1}^{t} \bar{G}_j F_j}{\sum_{j=1}^{t} \bar{F}_j F_j + \lambda}

After the correlation filter H_t is computed, the response y for a new frame is calculated as:

y = F^{-1}(H_t Z)

where F_j, G_j and Z are the Fourier transforms of f_j, g_j and the new image patch, and the bar denotes complex conjugation. The position where y attains its maximum is taken as the estimated target position in the new frame.
5. The improved DSST infrared laser spot tracking method according to claim 1, characterized in that the method for determining the target scale information in S4 is as follows: assuming the target size in the current frame is P × R and the number of scales is S, candidate patches of size

a^{n} P \times a^{n} R, \quad n \in \left\{ -\frac{S-1}{2}, \dots, \frac{S-1}{2} \right\}

are extracted around the target centre, where a is the scale step factor; the scale whose filter response is largest is taken as the target scale.
6. The improved DSST infrared laser spot tracking method according to claim 1, characterized in that the method for judging whether an abnormality occurs in S5 is as follows: let v_c denote the output response of the current frame; when v_c deviates from the mean μ by more than λσ, the target is judged to be lost or about to be lost, with the abnormality criterion:

\left| v_c - \mu \right| > \lambda \sigma

where μ and σ are the mean and standard deviation of the output responses of the previous n − 1 frames; the settings are n = 40 and λ = 3.
CN202210866567.1A 2022-07-22 2022-07-22 Improved DSST infrared laser spot tracking method Pending CN115439771A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210866567.1A CN115439771A (en) 2022-07-22 2022-07-22 Improved DSST infrared laser spot tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210866567.1A CN115439771A (en) 2022-07-22 2022-07-22 Improved DSST infrared laser spot tracking method

Publications (1)

Publication Number Publication Date
CN115439771A true CN115439771A (en) 2022-12-06

Family

ID=84241236

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210866567.1A Pending CN115439771A (en) 2022-07-22 2022-07-22 Improved DSST infrared laser spot tracking method

Country Status (1)

Country Link
CN (1) CN115439771A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117522926A (en) * 2024-01-08 2024-02-06 四川迪晟新达类脑智能技术有限公司 Infrared light spot target identification and tracking method based on FPGA hardware platform
CN117522926B (en) * 2024-01-08 2024-04-02 四川迪晟新达类脑智能技术有限公司 Infrared light spot target identification and tracking method based on FPGA hardware platform

Similar Documents

Publication Publication Date Title
CN107452015B (en) Target tracking system with re-detection mechanism
CN107564034A (en) The pedestrian detection and tracking of multiple target in a kind of monitor video
CN107133969B (en) A kind of mobile platform moving target detecting method based on background back projection
CN112883819A (en) Multi-target tracking method, device, system and computer readable storage medium
CN109685045B (en) Moving target video tracking method and system
CN110084830B (en) Video moving object detection and tracking method
CN111784737B (en) Automatic target tracking method and system based on unmanned aerial vehicle platform
CN109165602B (en) Black smoke vehicle detection method based on video analysis
CN113724379B (en) Three-dimensional reconstruction method and device for fusing image and laser point cloud
CN115131420A (en) Visual SLAM method and device based on key frame optimization
CN111680713A (en) Unmanned aerial vehicle ground target tracking and approaching method based on visual detection
CN112308883A (en) Multi-ship fusion tracking method based on visible light and infrared images
CN112329784A (en) Correlation filtering tracking method based on space-time perception and multimodal response
CN115375733A (en) Snow vehicle sled three-dimensional sliding track extraction method based on videos and point cloud data
CN111914627A (en) Vehicle identification and tracking method and device
CN115439771A (en) Improved DSST infrared laser spot tracking method
CN113255549B (en) Intelligent recognition method and system for behavior state of wolf-swarm hunting
CN112613565B (en) Anti-occlusion tracking method based on multi-feature fusion and adaptive learning rate updating
CN117315547A (en) Visual SLAM method for solving large duty ratio of dynamic object
CN110335308B (en) Binocular vision odometer calculation method based on parallax constraint and bidirectional annular inspection
CN116091405B (en) Image processing method and device, computer equipment and storage medium
CN115100565B (en) Multi-target tracking method based on spatial correlation and optical flow registration
CN111127355A (en) Method for finely complementing defective light flow graph and application thereof
CN116152758A (en) Intelligent real-time accident detection and vehicle tracking method
CN113591705B (en) Inspection robot instrument identification system and method and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination