CN111968154A - HOG-LBP and KCF fused pedestrian tracking method - Google Patents


Info

Publication number
CN111968154A
Authority
CN
China
Prior art keywords
lbp
hog
pedestrian
kcf
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010705263.8A
Other languages
Chinese (zh)
Inventor
陈宁
李梦璐
刘志坚
杨迪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Polytechnic University
Original Assignee
Xian Polytechnic University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Polytechnic University
Priority to CN202010705263.8A
Publication of CN111968154A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467 Encoded features or binary features, e.g. local binary patterns [LBP]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pedestrian tracking method fusing HOG-LBP and KCF, which comprises: acquiring a pedestrian video, setting a tracking target, and determining a candidate region; extracting HOG-LBP features from the candidate-region image; calculating a kernel correlation filter from the extracted HOG-LBP features; updating the pedestrian position through the resulting kernel correlation detection; updating the filter; and outputting the tracking result. The method addresses the problem in the KCF pedestrian tracking algorithm that HOG features cannot cope with backgrounds containing cluttered noise edges, which leads to tracking drift and failure and reduces the robustness of the algorithm.

Description

HOG-LBP and KCF fused pedestrian tracking method
Technical Field
The invention belongs to the technical field of machine vision, and relates to a pedestrian tracking method fusing HOG-LBP and KCF.
Background
With the national promotion of the "smart city" concept, research and development in artificial intelligence have attracted wide attention. Pedestrian tracking is one of the principal research tasks in the computer vision field of artificial intelligence and has broad application value in automatic driving, intelligent security, intelligent robots and the like. It has drawn the attention of many researchers, and numerous pedestrian tracking algorithms have emerged, of which the KCF (Kernel Correlation Filter) algorithm is representative. However, the conventional KCF tracking algorithm characterizes the pedestrian with the Histogram of Oriented Gradients (HOG) feature, which describes image edge information and local shape information well, but cannot cope with backgrounds containing cluttered noise edges. This leads to tracking drift and failure, and the robustness of the algorithm is reduced.
To address this issue, the HOG feature is combined herein with the LBP (Local Binary Pattern) feature to form the HOG-LBP feature. The LBP feature has gray-scale invariance and rotation invariance, among other properties, and can cope with backgrounds containing cluttered noise edges. The combination of the two features characterizes pedestrians better and improves tracking accuracy.
Disclosure of Invention
The invention aims to provide a pedestrian tracking method fusing HOG-LBP and KCF, which solves the problem of drift in a KCF algorithm and improves the robustness of the tracking algorithm on the premise of ensuring the speed of the algorithm.
The invention adopts the technical scheme that a pedestrian tracking method fused with HOG-LBP and KCF is implemented according to the following steps:
step 1, acquiring a pedestrian video, setting a tracking target and determining a candidate area;
step 2, extracting HOG-LBP characteristics of the candidate region image;
step 3, calculating a kernel correlation filter according to the HOG-LBP characteristics of the image extracted in the step 2;
step 4, updating the pedestrian position through the kernel correlation detection obtained by the calculation in the step 3;
step 5, updating the filter;
and 6, outputting a tracking result.
The invention is also characterized in that:
the specific content of the step 1 is as follows:
acquiring a pedestrian video, reading in a t-th frame Image1, setting a tracking target, and taking a region where the tracking target is located as a candidate region of a first frame;
wherein the HOG-LBP feature extraction process in the step 2 is divided into three steps: HOG feature extraction, LBP feature extraction and HOG-LBP feature fusion;
the HOG feature extraction step mainly comprises the following parts of image normalization, gradient calculation, direction weight projection based on gradient amplitude and feature vector normalization, and the specific calculation process is as follows:
setting the size of the candidate region to be 80 × 64 and the size of the block to be 8 × 8, so that the candidate region comprises 80 non-overlapping blocks in total;
calculating the gradient direction and amplitude value of each block, and calculating the gradient by using a simple central symmetry operator [ -1, 0, 1], as shown in the following formula:
θ(x, y) = arctan[(I(x, y+1) − I(x, y−1)) / (I(x+1, y) − I(x−1, y))]   (1)
m(x, y) = √[(I(x+1, y) − I(x−1, y))² + (I(x, y+1) − I(x, y−1))²]   (2)
where I (x, y) is the pixel value of the image at the point (x, y), θ (x, y) is the gradient direction of the point, and m (x, y) corresponds to the amplitude value of the point;
setting the size of a cell to be 4 × 4, counting a gradient histogram in each block according to the cell size, and projecting the specified weight using the gradient amplitude;
carrying out contrast normalization on the cells in each overlapped block;
combining the histogram vectors in all blocks to obtain a final HOG characteristic vector;
the specific content of LBP feature extraction is as follows:
the LBP operator is represented by (P, R), where P represents the number of pixels contained within the neighbourhood and R denotes the neighbourhood radius; the basic LBP operator uses the (8, 1) neighbourhood:
firstly, each of the 3 × 3 neighbourhood pixel values p_i (i = 1, 2, …, 8) is compared with the central pixel value p_0 and thresholded, the calculation formula being:
b_i = 1 if p_i ≥ p_0;  b_i = 0 if p_i < p_0   (3)
arranging b_i (i = 1, 2, …, 8) in a clockwise direction yields an 8-bit binary code, and converting the binary code into a decimal number gives the result of the LBP operator at the central pixel;
then, after the pedestrian image has been processed by the LBP operator, histogram statistics are performed on the image to obtain a histogram feature vector, which may be defined as:
H_i = Σ_{x,y} I{f(x, y) = i},  i = 0, 1, …, n − 1   (4)
where n is the number of distinct labels produced by the LBP operator; with the 3 × 3 operator adopted here, n = 256; I(x) = 1 when x is true, and I(x) = 0 when x is false;
dividing the image into regions R_0, R_1, …, R_(m−1), the histogram of each region may be defined as:
H_(i,j) = Σ_{(x,y)∈R_j} I{f(x, y) = i}   (5)
finally, the sub-region histograms are concatenated to form the final pedestrian feature vector, and the χ² statistic is adopted to measure the distance between LBP features;
the specific content of HOG-LBP feature fusion is as follows: feature fusion is performed in a weighted manner, with the fusion formula:
f(C) = Σ_{i=1}^{m} w_i c_i   (6)
where m represents the number of classifiers, w_i and c_i respectively represent the weight and the output score of the i-th classifier, and f(C) is the score output after feature fusion; the weight calculation formula is as follows:
w_i = (1/E_i) / Σ_{j=1}^{m} (1/E_j)   (7)
in the formula, E_i is the equal error rate of the i-th classifier;
suppose there are m different classifiers and the pedestrian image feature is x; when estimating the true classification discriminant function, there are m different discriminant functions:
g_i(x) = h(x) + ε_i(x),  i = 1, 2, …, m   (8)
where h(x) represents the true classification discriminant function, g_i(x) represents the discriminant function of the i-th classifier, and ε_i(x) represents the error function between g_i(x) and the true function;
after feature fusion, the mean square error of the whole feature fusion system can be expressed as:
e = E[(Σ_{i=1}^{m} a_i ε_i(x))²]   (9)
where the weighting coefficients satisfy a_i > 0 and Σ_{i=1}^{m} a_i = 1;
the specific contents of calculating the kernel correlation filter in the step 3 are as follows:
let x represent the HOG-LBP features extracted from the input image, h represent the correlation filter, and x̂ represent the Fourier transform of x; according to the convolution theorem, convolution in the spatial domain is equivalent to element-wise multiplication in the frequency domain, so that:
x ⊛ h = F⁻¹(x̂ · ĥ*)   (10)
where ⊛ represents convolution, F⁻¹ represents the inverse Fourier transform, · represents element-wise multiplication, and * represents the complex conjugate; to train the filter, the desired correlation output y is defined, and for a new sample x′ of the target the correlation filter h satisfies the condition:
F⁻¹(x̂′ · ĥ*) = y   (11)
thus, the following is obtained:
ĥ* = ŷ / x̂′   (12)
where the division is performed element-wise;
the specific contents of detecting and updating the pedestrian position in the step 4 are as follows: after the kernel correlation filter h is obtained in the step 3, for the (t+1)-th frame, detection is performed on an image block z of size M × N centred on the target position of that frame, and the correlation response map is:
f(z) = F⁻¹(ẑ · ĥ*)
the new target position is then found at the position of the maximum of f(z);
wherein, the specific updating content of the filter in the step 5 is as follows:
the filter coefficient α and the target appearance model x are updated by linear interpolation, namely:
α_t = (1 − γ) α_(t−1) + γ α
x_t = (1 − γ) x_(t−1) + γ x
in the formula, γ represents the learning rate and t represents the frame number.
The invention has the beneficial effects that:
in the KCF pedestrian tracking algorithm, HOG features cannot cope with backgrounds containing cluttered noise edges, which causes tracking drift and failure and reduces the robustness of the algorithm. To address this, the invention combines the HOG feature with the LBP feature to form the HOG-LBP feature. The LBP feature has gray-scale invariance and rotation invariance, among other properties, and can cope with backgrounds containing cluttered noise edges; the combination of the two features characterizes pedestrians better and improves pedestrian tracking accuracy.
Drawings
FIG. 1 is a diagram of a HOG-LBP fusion framework in a HOG-LBP and KCF fusion pedestrian tracking method of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The invention provides a pedestrian tracking method fused with HOG-LBP and KCF, which is implemented by the following steps:
step 1, acquiring a pedestrian video, reading in a t-th frame Image1, setting a tracking target, and taking an area where the tracking target is located as a candidate area of a first frame;
step 2, the HOG-LBP feature extraction process is divided into three steps: HOG feature extraction, LBP feature extraction and HOG-LBP feature fusion; the fusion framework is shown in FIG. 1:
firstly, obtaining HOG feature extraction of an image, wherein the steps mainly comprise image normalization, gradient calculation, direction weight projection based on gradient amplitude and feature vector normalization, and the specific calculation process comprises the following steps:
assuming that the size of the candidate region is 80 × 64, and setting the size of the block to be 8 × 8, the candidate region contains 80 non-overlapping blocks in total;
the gradient direction and magnitude are first calculated on each block, here using a simple central symmetry operator [ -1, 0, 1] to calculate the gradient as shown in the following equation:
θ(x, y) = arctan[(I(x, y+1) − I(x, y−1)) / (I(x+1, y) − I(x−1, y))]   (1)
m(x, y) = √[(I(x+1, y) − I(x−1, y))² + (I(x, y+1) − I(x, y−1))²]   (2)
where I (x, y) is the pixel value of the image point (x, y), θ (x, y) is the gradient direction of the point, and m (x, y) corresponds to the amplitude value of the point;
then setting the size of the cell to be 4 × 4, counting a gradient histogram in each block according to the cell size, and projecting the specified weight using the gradient amplitude;
then, carrying out contrast normalization on the cells in each overlapped block to eliminate the influence of illumination;
and finally, combining the histogram vectors in all blocks to obtain a final HOG characteristic vector.
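The gradient computation and magnitude-weighted orientation voting described above can be sketched in Python (an illustrative sketch, not part of the original disclosure; the function names and the 9-bin histogram are assumptions):

```python
import numpy as np

def hog_gradients(image):
    """Per-pixel gradient direction and magnitude using the simple
    central-difference operator [-1, 0, 1] in each axis."""
    image = image.astype(np.float64)
    gx = np.zeros_like(image)
    gy = np.zeros_like(image)
    gx[:, 1:-1] = image[:, 2:] - image[:, :-2]   # horizontal difference
    gy[1:-1, :] = image[2:, :] - image[:-2, :]   # vertical difference
    magnitude = np.sqrt(gx ** 2 + gy ** 2)       # m(x, y)
    direction = np.arctan2(gy, gx)               # theta(x, y)
    return direction, magnitude

def cell_histogram(direction, magnitude, bins=9):
    """Orientation histogram for one cell, each pixel voting with a
    weight equal to its gradient magnitude."""
    # Map direction from [-pi, pi] into bin indices 0..bins-1.
    idx = ((direction + np.pi) / (2 * np.pi) * bins).astype(int) % bins
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), magnitude.ravel())
    return hist
```

In a full HOG pipeline, such per-cell histograms would then be contrast-normalised within each overlapping block and concatenated into the final feature vector, as the steps above describe.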
Acquiring LBP characteristics of an image:
the basic idea of LBP is to describe local texture features by the binary code obtained from comparing a central pixel with the pixels in its circular neighbourhood; the LBP operator is usually represented by (P, R), where P represents the number of pixels contained in the neighbourhood and R represents the neighbourhood radius, and the basic LBP operator uses the (8, 1) neighbourhood;
firstly, each of the 3 × 3 neighbourhood pixel values p_i (i = 1, 2, …, 8) is compared with the central pixel value p_0 and thresholded, the calculation formula being:
b_i = 1 if p_i ≥ p_0;  b_i = 0 if p_i < p_0   (3)
arranging b_i (i = 1, 2, …, 8) in a clockwise direction yields an 8-bit binary code, which is converted into a decimal number to give the result of the LBP operator at the central pixel;
after the pedestrian image has been processed by the LBP operator, histogram statistics are performed on the image to obtain a histogram feature vector, which may be defined as:
H_i = Σ_{x,y} I{f(x, y) = i},  i = 0, 1, …, n − 1   (4)
where n is the number of distinct labels produced by the LBP operator; the 3 × 3 operator is used herein, i.e. n = 256; I(x) = 1 when x is true, and I(x) = 0 when x is false;
to better represent pedestrian features, the image is divided into regions R_0, R_1, …, R_(m−1), and the histogram of each region may be defined as:
H_(i,j) = Σ_{(x,y)∈R_j} I{f(x, y) = i}   (5)
finally, the sub-region histograms are concatenated to form the final pedestrian feature vector, and the χ² statistic is used to measure the distance between LBP features;
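The basic (8, 1) LBP operator and the χ² histogram distance described above can be sketched as follows (an illustrative Python sketch, not from the patent; function names and the small regularising epsilon in the distance are assumptions):

```python
import numpy as np

def lbp_8_1(image):
    """Basic (8, 1)-neighbourhood LBP: threshold the 8 neighbours of each
    interior pixel against the centre p_0 and pack the resulting bits
    b_1..b_8, taken clockwise, into one byte."""
    img = image.astype(np.int32)
    center = img[1:-1, 1:-1]
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << (7 - bit)
    return codes

def lbp_histogram(codes, n=256):
    """Histogram of the n possible LBP labels, normalised to sum to 1."""
    hist = np.bincount(codes.ravel(), minlength=n).astype(float)
    return hist / hist.sum()

def chi2_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two LBP histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

Per-region histograms would be computed by applying `lbp_histogram` to each sub-region of `codes` and concatenating the results.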
as shown in fig. 1, the HOG-LBP features were fused:
feature fusion is performed in a weighted manner, with the fusion formula:
f(C) = Σ_{i=1}^{m} w_i c_i   (6)
where m represents the number of classifiers, w_i and c_i respectively represent the weight and the output score of the i-th classifier, and f(C) is the score output after feature fusion; the weight calculation formula is as follows:
w_i = (1/E_i) / Σ_{j=1}^{m} (1/E_j)   (7)
in the formula, E_i is the equal error rate of the i-th classifier;
assuming that there are m different classifiers with a pedestrian image feature of x, when estimating the true classification discriminant function, there are m different discriminant functions:
g_i(x) = h(x) + ε_i(x),  i = 1, 2, …, m   (8)
where h(x) represents the true classification discriminant function, g_i(x) represents the discriminant function of the i-th classifier, and ε_i(x) represents the error function between g_i(x) and the true function;
after feature fusion, the mean square error of the whole feature fusion system can be expressed as:
e = E[(Σ_{i=1}^{m} a_i ε_i(x))²]   (9)
where the weighting coefficients satisfy a_i > 0 and Σ_{i=1}^{m} a_i = 1;
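The weighted score-level fusion can be sketched as follows (illustrative Python, not part of the original disclosure; since the patent's weight formula is given only as an image, the normalised inverse-equal-error-rate weighting shown here is an assumed common choice):

```python
import numpy as np

def fusion_weights(equal_error_rates):
    """Per-classifier weights derived from equal error rates E_i.
    Assumption: w_i is inversely proportional to E_i, normalised so
    that the weights are positive and sum to 1."""
    inv = 1.0 / np.asarray(equal_error_rates, dtype=float)
    return inv / inv.sum()

def fuse_scores(weights, scores):
    """Weighted score-level fusion: f(C) = sum_i w_i * c_i."""
    return float(np.dot(weights, scores))
```

With this choice, a classifier with a lower equal error rate (e.g. the HOG-based one on clean edges, or the LBP-based one on cluttered backgrounds) contributes more to the fused score.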
step 3, calculating a kernel correlation filter, wherein the specific contents are as follows:
let x represent the HOG-LBP features extracted from the input image, h represent the correlation filter, and x̂ represent the Fourier transform of x; according to the convolution theorem, convolution in the spatial domain is equivalent to element-wise multiplication in the frequency domain, giving:
x ⊛ h = F⁻¹(x̂ · ĥ*)   (10)
where ⊛ represents convolution, F⁻¹ represents the inverse Fourier transform, · represents element-wise multiplication, and * represents the complex conjugate; to train the filter, the desired correlation output y is defined, and for a new sample x′ of the target the correlation filter h satisfies the condition:
F⁻¹(x̂′ · ĥ*) = y   (11)
thus, the following is obtained:
ĥ* = ŷ / x̂′   (12)
where the division is performed element-wise;
when solving for the correlation filter h, the problem is converted into a ridge regression problem, and a circulant matrix is used to densely sample the training samples, so that the nonlinear classification problem can also be handled;
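A single-channel, frequency-domain sketch of training and applying such a correlation filter follows (illustrative Python in the spirit of MOSSE-style filters; the regulariser `lam` and the impulse-shaped desired response are assumptions, and the patent's full KCF formulation additionally uses a kernel and circulant ridge regression, which this sketch omits):

```python
import numpy as np

def train_filter(x, y, lam=1e-2):
    """Closed-form frequency-domain filter: given a feature patch x and a
    desired response y, solve for H so that correlating x with the filter
    reproduces y.  lam regularises the element-wise division."""
    X = np.fft.fft2(x)
    Y = np.fft.fft2(y)
    return (Y * np.conj(X)) / (X * np.conj(X) + lam)

def detect(H, z):
    """Correlation response map for a new image block z: multiply in the
    frequency domain and transform back."""
    Z = np.fft.fft2(z)
    return np.real(np.fft.ifft2(Z * H))
```

The new target position is read off as the location of the maximum of the response map, matching the detection step described below.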
and 4, detecting and updating the pedestrian position, wherein the specific implementation mode is as follows:
after the kernel correlation filter h is obtained in the step 3, for the (t+1)-th frame, detection is performed on an image block z of size M × N centred on the target position of that frame, and the correlation response map is:
f(z) = F⁻¹(ẑ · ĥ*)
the new target position is then found at the position of the maximum of f(z);
step 5, updating the filter, wherein the specific implementation mode is as follows:
in order to better adapt to changes in the target appearance, the filter coefficient α and the target appearance model x are updated by linear interpolation, that is:
α_t = (1 − γ) α_(t−1) + γ α
x_t = (1 − γ) x_(t−1) + γ x
wherein γ represents a learning rate, and t represents a frame number;
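The linear-interpolation update of step 5 can be sketched as follows (illustrative Python, not from the patent; the default learning-rate value is an assumption), applied identically to the filter coefficients and the appearance model:

```python
import numpy as np

def linear_update(prev, new, gamma=0.02):
    """Linear-interpolation model update: blend the previous model with
    the newly estimated one using learning rate gamma, i.e.
    model_t = (1 - gamma) * model_{t-1} + gamma * model_new."""
    return (1.0 - gamma) * prev + gamma * new
```

A small gamma keeps the model stable against occlusion and noise; a larger gamma adapts faster to appearance changes.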
and 6, outputting a tracking result.

Claims (9)

1. A pedestrian tracking method fused with HOG-LBP and KCF is characterized by comprising the following steps:
step 1, acquiring a pedestrian video, setting a tracking target and determining a candidate area;
step 2, extracting HOG-LBP characteristics of the candidate region image;
step 3, calculating a kernel correlation filter according to the HOG-LBP characteristics of the image extracted in the step 2;
step 4, updating the pedestrian position through the kernel correlation detection obtained by the calculation in the step 3;
step 5, updating the filter;
and 6, outputting a tracking result.
2. The pedestrian tracking method fused with HOG-LBP and KCF as claimed in claim 1, wherein the detailed contents of step 1 are as follows:
the pedestrian video is acquired, the Image1 of the t-th frame is read, the tracking target is set, and the area where the tracking target is located is used as the candidate area of the first frame.
3. The pedestrian tracking method fused with HOG-LBP and KCF as claimed in claim 1, wherein the HOG-LBP feature extraction process in step 2 is divided into three steps: HOG feature extraction, LBP feature extraction and HOG-LBP feature fusion.
4. The pedestrian tracking method fused with HOG-LBP and KCF as claimed in claim 3, wherein the HOG feature extraction step mainly comprises image normalization, gradient calculation, gradient magnitude-based direction weight projection and feature vector normalization, and the specific calculation process is as follows:
setting the size of the candidate region to be 80 × 64 and the size of the block to be 8 × 8, so that the candidate region comprises 80 non-overlapping blocks in total;
calculating the gradient direction and amplitude of each block, using the simple central symmetry operator [-1, 0, 1] to calculate the gradient, as shown in the following formulas:
θ(x, y) = arctan[(I(x, y+1) − I(x, y−1)) / (I(x+1, y) − I(x−1, y))]   (1)
m(x, y) = √[(I(x+1, y) − I(x−1, y))² + (I(x, y+1) − I(x, y−1))²]   (2)
where I (x, y) is the pixel value of the image point (x, y), θ (x, y) is the gradient direction of the point, and m (x, y) corresponds to the amplitude value of the point;
setting the size of a cell to be 4 × 4, counting a gradient histogram in each block according to the cell size, and projecting the specified weight using the gradient amplitude;
carrying out contrast normalization on the cells in each overlapped block;
and combining the histogram vectors in all blocks to obtain a final HOG characteristic vector.
5. The pedestrian tracking method fused with HOG-LBP and KCF as claimed in claim 3, wherein the specific content of LBP feature extraction is as follows:
the LBP operator is represented by (P, R), where P represents the number of pixels contained within the neighbourhood and R denotes the neighbourhood radius; the basic LBP operator uses the (8, 1) neighbourhood:
firstly, each of the 3 × 3 neighbourhood pixel values p_i (i = 1, 2, …, 8) is compared with the central pixel value p_0 and thresholded, the calculation formula being:
b_i = 1 if p_i ≥ p_0;  b_i = 0 if p_i < p_0   (3)
arranging b_i (i = 1, 2, …, 8) in a clockwise direction yields an 8-bit binary code, which is converted into a decimal number to give the result of the LBP operator at the central pixel;
then, after the pedestrian image has been processed by the LBP operator, histogram statistics are performed on the image to obtain a histogram feature vector, which may be defined as:
H_i = Σ_{x,y} I{f(x, y) = i},  i = 0, 1, …, n − 1   (4)
where n is the number of distinct labels produced by the LBP operator; the 3 × 3 operator is adopted, i.e. n = 256; I(x) = 1 when x is true, and I(x) = 0 when x is false;
dividing the image into regions R_0, R_1, …, R_(m−1), the histogram of each region may be defined as:
H_(i,j) = Σ_{(x,y)∈R_j} I{f(x, y) = i}   (5)
finally, the sub-region histograms are concatenated to form the final pedestrian feature vector, and the χ² statistic is adopted to measure the distance between LBP features.
6. The pedestrian tracking method fused with HOG-LBP and KCF as claimed in claim 3, wherein the HOG-LBP feature fusion is specifically as follows: and performing feature fusion by adopting a weighting mode, wherein a fusion formula is as follows:
f(C) = Σ_{i=1}^{m} w_i c_i   (6)
wherein m represents the number of classifiers, w_i and c_i respectively represent the weight and the output score of the i-th classifier, and f(C) is the score output after feature fusion; the weight calculation formula is as follows:
w_i = (1/E_i) / Σ_{j=1}^{m} (1/E_j)   (7)
in the formula, E_i is the equal error rate of the i-th classifier;
setting m different classifiers with the pedestrian image characteristic of x, and when estimating the real classification discriminant function, having m different discriminant functions:
g_i(x) = h(x) + ε_i(x),  i = 1, 2, …, m   (8)
where h(x) represents the true classification discriminant function, g_i(x) represents the discriminant function of the i-th classifier, and ε_i(x) represents the error function between g_i(x) and the true function;
after feature fusion, the mean square error of the whole feature fusion system can be expressed as:
e = E[(Σ_{i=1}^{m} a_i ε_i(x))²]   (9)
where the weighting coefficients satisfy a_i > 0 and Σ_{i=1}^{m} a_i = 1.
7. the pedestrian tracking method fused with HOG-LBP and KCF as claimed in claim 1, wherein the specific contents of the kernel correlation filter in step 3 are:
setting x to represent the HOG-LBP features extracted from the input image, h to represent the correlation filter, and x̂ to represent the Fourier transform of x; according to the convolution theorem, convolution in the spatial domain is equivalent to element-wise multiplication in the frequency domain, so that:
x ⊛ h = F⁻¹(x̂ · ĥ*)   (10)
where ⊛ represents convolution, F⁻¹ represents the inverse Fourier transform, · represents element-wise multiplication, and * represents the complex conjugate; to train the filter, the desired correlation output y is defined, and for a new sample x′ of the target the correlation filter h satisfies the condition:
F⁻¹(x̂′ · ĥ*) = y   (11)
thus, the following is obtained:
ĥ* = ŷ / x̂′   (12)
where the division is performed element-wise.
8. The pedestrian tracking method fused with HOG-LBP and KCF as claimed in claim 1, wherein the specific content of detecting and updating the pedestrian position in step 4 is: after the kernel correlation filter h is obtained in the step 3, for the (t+1)-th frame, detection is performed on an image block z of size M × N centred on the target position of that frame, and the correlation response map is:
f(z) = F⁻¹(ẑ · ĥ*)
therefore, the new target position is found at the position of the maximum of f(z).
9. The pedestrian tracking method fused with HOG-LBP and KCF as claimed in claim 1, wherein the specific contents of the filter update in step 5 are:
updating the filter coefficient alpha and the target appearance model x by adopting a linear interpolation mode, namely:
α_t = (1 − γ) α_(t−1) + γ α
x_t = (1 − γ) x_(t−1) + γ x
in the formula, γ represents a learning rate, and t represents a frame number.
CN202010705263.8A 2020-07-21 2020-07-21 HOG-LBP and KCF fused pedestrian tracking method Pending CN111968154A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010705263.8A CN111968154A (en) 2020-07-21 2020-07-21 HOG-LBP and KCF fused pedestrian tracking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010705263.8A CN111968154A (en) 2020-07-21 2020-07-21 HOG-LBP and KCF fused pedestrian tracking method

Publications (1)

Publication Number Publication Date
CN111968154A (en) 2020-11-20

Family

ID=73362762

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010705263.8A Pending CN111968154A (en) 2020-07-21 2020-07-21 HOG-LBP and KCF fused pedestrian tracking method

Country Status (1)

Country Link
CN (1) CN111968154A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284253A (en) * 2021-05-11 2021-08-20 西安邮电大学 AR target tracking method for improving Kernel Correlation Filtering (KCF) algorithm
CN116740135A (en) * 2023-05-18 2023-09-12 中国科学院空天信息创新研究院 Infrared dim target tracking method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018048353A1 (en) * 2016-09-09 2018-03-15 Nanyang Technological University Simultaneous localization and mapping methods and apparatus
CN109685073A (en) * 2018-12-28 2019-04-26 南京工程学院 A kind of dimension self-adaption target tracking algorism based on core correlation filtering
CN110223323A (en) * 2019-06-02 2019-09-10 西安电子科技大学 Method for tracking target based on the adaptive correlation filtering of depth characteristic


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SUN YU: "Face Recognition Method Based on HOG and LBP Features", Computer Engineering, pages 1-5 *
YUAN KANG: "Research on Moving Object Tracking Algorithms in Video Sequences", China Master's Theses Full-text Database, pages 13-18 *


Similar Documents

Publication Publication Date Title
Zhu et al. Method of plant leaf recognition based on improved deep convolutional neural network
CN110929593B (en) Real-time significance pedestrian detection method based on detail discrimination
CN110334762B (en) Feature matching method based on quad tree combined with ORB and SIFT
CN106599836A (en) Multi-face tracking method and tracking system
Martinović et al. Real-time detection and recognition of traffic signs
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111968154A (en) HOG-LBP and KCF fused pedestrian tracking method
Drożdż et al. FPGA implementation of multi-scale face detection using HOG features and SVM classifier
Zhao et al. Real-time moving pedestrian detection using contour features
Alsanad et al. Real-time fuel truck detection algorithm based on deep convolutional neural network
CN109726621B (en) Pedestrian detection method, device and equipment
Yuan et al. Fast QR code detection based on BING and AdaBoost-SVM
CN110910497B (en) Method and system for realizing augmented reality map
CN117437691A (en) Real-time multi-person abnormal behavior identification method and system based on lightweight network
CN110866435B (en) Far infrared pedestrian training method for self-similarity gradient orientation histogram
CN107886060A (en) Pedestrian's automatic detection and tracking based on video
CN110334703B (en) Ship detection and identification method in day and night image
Ganapathi et al. Design and implementation of an automatic traffic sign recognition system on TI OMAP-L138
CN112232162B (en) Pedestrian detection method and device based on multi-feature fusion cascade classifier
Dilawari et al. Toward generating human-centered video annotations
CN108154107A (en) A kind of method of the scene type of determining remote sensing images ownership
Maharani et al. Deep features fusion for KCF-based moving object tracking
CN114792374A (en) Image recognition method based on texture classification, electronic device and storage medium
CN113361422A (en) Face recognition method based on angle space loss bearing
CN112651996A (en) Target detection tracking method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination