CN104156987A - Multi-target tracking method for video contents - Google Patents
Multi-target tracking method for video contents
- Publication number
- CN104156987A (application CN201410458136.7A)
- Authority
- CN
- China
- Prior art keywords
- target
- frame
- point
- coordinate
- reference frame
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a multi-target tracking method for video contents. The method comprises the following steps. S1: video image acquisition, in which the sampling frequency and sampling time are set according to the image resolution and moving images are collected from the video. S2: segmentation and recognition of targets. S3: motion-trajectory tracking, in which the small differences between adjacent video frames are exploited to search the reference frames for a matching point for each target; the correlation between two points is judged with a preset cost function, the optimal matching point of each target is found, and the centroids of the corresponding matching points in each frame are connected to obtain the motion path curve, or trajectory, of the target. S4: calculation of the target motion parameters, in which, on the basis of the segmentation and trajectory tracking completed in steps S2 and S3, the coordinates and morphological data of each target's trajectory points are analyzed and the relevant activity parameters are computed with the corresponding formulas.
Description
Technical field
The present invention relates to a multi-target tracking method for video content.
Background technology
To study the number and motion state of targets inside a video, the video content generally needs to be segmented, recognized, and tracked, and the resulting data then analyzed. Traditional methods use adjacent-frame matching; their computational accuracy is not high and the algorithms are complex. The present invention tracks targets with a multi-reference-frame algorithm and multi-condition filtering.
Thresholding is a simple and effective image segmentation method. Its basic idea is to divide the gray levels of an image into several parts with one or more thresholds; pixels whose gray values fall into the same class belong to the same target.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art by providing a multi-target tracking method for video content whose algorithm is simple and whose computational accuracy is high, which intuitively reflects the motion trajectory of each target, and which can track multiple targets.
This object is achieved through the following technical solution. A multi-target tracking method for video content comprises the following steps:
S1: video image acquisition. Set the sampling frequency and sampling time according to the image resolution and collect moving images from the video;
S2: segmentation and recognition of targets, comprising the following sub-steps:
S201: target segmentation. Separate the targets from the image background;
S202: impurity removal. The initial segmentation result may contain impurities, which must be removed;
S3: motion-trajectory tracking. Exploiting the small differences between adjacent video frames, search the reference frames for a matching point for each target, judge the correlation between two points with the established cost function, find the optimal matching point of each target, and connect the centroids of the corresponding matching points in each frame to obtain the motion path curve, or trajectory, of the target;
S4: calculation of the target motion parameters. On the basis of the target segmentation and trajectory tracking completed in steps S2 and S3, analyze the coordinates and morphological data of each target's trajectory points and compute the relevant parameters of the target's activity with the corresponding calculation formulas.
The sampling frequency is 24-35 frames per second.
The sampling time is one continuous period.
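The frame selection of step S1 can be sketched as follows; a minimal sketch assuming the source video's native frame rate is known, computing which frame indices to grab over a continuous sampling window (the function name and signature are illustrative, not from the patent):

```python
def sample_frame_indices(video_fps, sample_fps, start_s, duration_s):
    """Indices of the frames to grab when resampling a video_fps stream
    at sample_fps over the continuous window [start_s, start_s + duration_s)."""
    step = video_fps / sample_fps              # source frames per sample
    count = int(duration_s * sample_fps)       # total samples in the window
    return [round(start_s * video_fps + k * step) for k in range(count)]
```

For example, a 50 fps source sampled at 25 fps for one second yields every second frame index.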
In step S3, the specific steps for obtaining the motion path curve, or trajectory, of each target are:
S301: let the number of reference frames available for computation be N and the current frame number be i. To find, in the reference frames, the optimal matching target point for every target point in the current frame, read the parameter tables of frames i-N through i into the computer;
S302: choose a target m in current frame i;
S303: choose the nearest reference frame k that has not yet been computed;
S304: choose an uncomputed target n from the reference frame and predict the position range of target n when it moves to frame i:
If target n is in the first frame or is a newly appearing target, it has no reference frame, and its predicted position coordinate is:
(C_x, C_y) = (x_n, y_n)    (0-1)
If target n has a reference frame, its predicted position is:
(C_x, C_y) = (x_n + vx_n × (i − k), y_n + vy_n × (i − k))    (0-2)
where
vx_n = (x_n − x'_n)/df,  vy_n = (y_n − y'_n)/df    (0-3)
Here (C_x, C_y) is the predicted position of the target, (x_n, y_n) is the centroid coordinate of target n, df is the frame difference between target n and its reference frame, (x'_n, y'_n) is the centroid coordinate of the optimal matching target of target n in its reference frame, vx_n is the velocity of target n in the x direction, and vy_n is the velocity of target n in the y direction;
The search region is predicted as in formula (0-4):
th = maxMov + avgMov × (i − k)    (0-4)
where maxMov is the maximum velocity of all moving targets computed so far, avgMov is the average velocity of all moving targets computed so far, i is the current frame number, and k is the reference frame number.
Compute whether the distance between the predicted coordinate and the actual coordinate falls within the search region:
dx = |C_x − x_m|,  dy = |C_y − y_m|    (0-5)
d ≈ (dx + dy)/2    (0-6)
When the following conditions are all met, target m falls within the search region:
d ≤ th and dx ≤ λ·th and dy ≤ λ·th
where λ is a constant;
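The position prediction and search-region test of step S304 can be sketched as follows. The dictionary layout and function names are assumptions; the distance is taken as the half-sum of the coordinate differences, as described in the text:

```python
def predict_position(target, frame_gap):
    """Predict where a reference-frame target will be in the current frame.
    target: {"x", "y"} plus optional "ref" = (x_ref, y_ref, df), its own best
    match df frames earlier. A new target (no "ref") keeps its position (0-1)."""
    if target.get("ref") is None:
        return target["x"], target["y"]                     # (0-1)
    x_ref, y_ref, df = target["ref"]
    vx = (target["x"] - x_ref) / df                         # per-frame velocity (0-3)
    vy = (target["y"] - y_ref) / df
    return target["x"] + vx * frame_gap, target["y"] + vy * frame_gap  # (0-2)

def in_search_region(pred, actual, max_mov, avg_mov, frame_gap, lam=1.0):
    """Search-region test: th = maxMov + avgMov*(i-k) (0-4); d ~ (dx+dy)/2 (0-6)."""
    th = max_mov + avg_mov * frame_gap
    dx, dy = abs(pred[0] - actual[0]), abs(pred[1] - actual[1])
    d = (dx + dy) / 2
    return d <= th and dx <= lam * th and dy <= lam * th
```

A target two frames back moving at (2, 1) per frame is thus predicted two steps ahead of its last centroid before gating.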
S305: when target m falls within the search region of target n, compute its cost function.
The cost function is given by formula (0-7), where α, β, γ are weighting coefficients, dsize is the perimeter difference between target m and target n, avgMov is the average velocity of all moving targets computed so far, dbrgt is the mean brightness difference between the two targets, d is the distance between the two targets, i − k is the frame difference between the two frames, (x_m, y_m) is the centroid coordinate of target m, (x_n, y_n) is the centroid coordinate of target n, vx_n is the velocity of target n in the x direction, and vy_n is the velocity of target n in the y direction;
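The image of formula (0-7) is not reproduced in this text, so the sketch below is a hypothetical stand-in only: a weighted combination of the listed quantities (distance normalized by the expected travel, perimeter difference, brightness difference), not the patent's actual formula:

```python
def match_cost(d, dsize, dbrgt, frame_gap, avg_mov,
               alpha=1.0, beta=0.5, gamma=0.5):
    """Hypothetical stand-in for cost function (0-7): smaller is a better match.
    The exact form used in the patent is not reproduced in this text."""
    motion_term = d / (avg_mov * frame_gap)   # distance relative to expected travel
    return alpha * motion_term + beta * dsize + gamma * dbrgt
```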
S306: repeat steps S304 and S305 to find the optimal matching target point in the reference frames, then proceed according to the result:
A. if no optimal matching point is found in the current reference frame, return to S303;
B. if no matching target point is found in any reference frame, target point m is a newly appearing point in the current frame;
C. if an optimal matching point is found in the reference frame, return to S302 and choose a new target to match;
D. if all points in the current frame have been computed, proceed to S307;
S307: steps S301 to S306 yield the matching points of each target across the frames; connecting the centroid coordinates of the matching points gives the target trajectory, from which its motion parameters are computed.
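The control flow of S303-S306 for one current-frame target can be sketched as a nearest-reference-first greedy search; `gate` and `cost` stand for the S304 test and S305 cost function, and the data layout is an assumption:

```python
def best_match(current, ref_frames, gate, cost):
    """ref_frames: reference-frame target lists, nearest frame first (S303).
    Returns the minimum-cost gated candidate from the nearest reference frame
    that has one, or None if `current` is a newly appearing target (S306-B)."""
    for refs in ref_frames:                                  # S303: nearest first
        candidates = [t for t in refs if gate(t, current)]   # S304 gating
        if candidates:
            return min(candidates, key=lambda t: cost(t, current))  # S305
    return None
```

Chaining each frame's best matches and connecting their centroids then yields the trajectory of S307.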
The target segmentation in step S201 is realized by threshold-based segmentation. The basic strategy for choosing the segmentation threshold is: compute the gray-level histogram of the whole image and, from these statistics, automatically select the globally optimal segmentation threshold with the maximum between-class variance (Otsu) method.
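The Otsu selection named above works from the gray-level histogram alone. A minimal pure-Python sketch (a real pipeline would typically call an optimized library routine such as OpenCV's Otsu flag):

```python
def otsu_threshold(hist):
    """Threshold maximizing between-class variance, from a 256-bin histogram.
    Pixels with gray value <= threshold form one class, the rest the other."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]                      # class-0 pixel count
        if w0 == 0:
            continue
        w1 = total - w0                    # class-1 pixel count
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                     # class means
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```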
In step S202, the specific steps of impurity removal are: compute the relevant morphological parameters of each target, establish judgment criteria from these parameters, and then remove impurities according to the criteria.
The morphological parameters include area, major-axis length, minor-axis length, mean brightness, and perimeter.
The relevant parameters include the velocity of motion, the head-swing frequency and amplitude, and the ratio of static to moving targets.
The parameter calculation formulas include:
(1) Straight-line velocity VSL, computed as in formula (0-8):
VSL = √((x_N − x_0)² + (y_N − y_0)²) / (N × ΔT)    (0-8)
where N is the total number of video frames, ΔT is the sampling interval, (x_0, y_0) is the starting coordinate of the target trajectory, and (x_N, y_N) is the end coordinate of the target trajectory.
(2) Curvilinear velocity VCL, computed as in formula (0-9):
VCL = (1/(N × ΔT)) × Σ_{i=1..N} √((x_i − x_{i−1})² + (y_i − y_{i−1})²)    (0-9)
where N is the number of video frames, ΔT is the sampling interval, and (x_i, y_i) is the coordinate of the target trajectory in frame i.
(3) Average path velocity VAP: the average path point coordinates are generally obtained by 3-point or 5-point smoothing, as in formula (0-10), and the average path velocity is computed as in formula (0-11):
(x̄_i, ȳ_i) = (1/5) × Σ_{j=i−2..i+2} (x_j, y_j)    (0-10)
VAP = (1/(N × ΔT)) × Σ_{i=1..N} √((x̄_i − x̄_{i−1})² + (ȳ_i − ȳ_{i−1})²)    (0-11)
where N is the number of video frames, ΔT is the sampling interval, and (x̄_i, ȳ_i) is the average path point coordinate of the target trajectory in frame i.
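Following these definitions, VSL, VCL, and VAP can be sketched directly; a trajectory here is a list of (x, y) centroids, one per frame, and `dt` is the sampling interval (the shrinking edge window in `smooth` is an assumption):

```python
import math

def vsl(track, dt):
    """Straight-line velocity (0-8): start-to-end distance over total time."""
    (x0, y0), (xn, yn) = track[0], track[-1]
    return math.hypot(xn - x0, yn - y0) / ((len(track) - 1) * dt)

def vcl(track, dt):
    """Curvilinear velocity (0-9): summed point-to-point length over total time."""
    dist = sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(track, track[1:]))
    return dist / ((len(track) - 1) * dt)

def smooth(track, w=5):
    """Moving average of the raw path (0-10); the window shrinks at the ends."""
    half = w // 2
    out = []
    for i in range(len(track)):
        win = track[max(0, i - half):i + half + 1]
        out.append((sum(x for x, _ in win) / len(win),
                    sum(y for _, y in win) / len(win)))
    return out

def vap(track, dt, w=5):
    """Average path velocity (0-11): VCL of the smoothed path."""
    return vcl(smooth(track, w), dt)
```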
(4) Linearity LIN: the value lies in [0, 1] and is computed as in formula (0-12):
LIN = VSL / VCL    (0-12)
(5) Straightness STR, computed as in formula (0-13):
STR = VSL / VAP    (0-13)
(6) Wobble WOB, computed as in formula (0-14):
WOB = VAP / VCL    (0-14)
(7) Amplitude of lateral head displacement ALH, computed as in formula (0-15):
ALH = (2/N) × Σ_{i=1..N} √((x_i − x̄_i)² + (y_i − ȳ_i)²)    (0-15)
where (x_i, y_i) is the i-th point coordinate of the target's actual motion path and (x̄_i, ȳ_i) is the i-th point coordinate of the target's average path.
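The derived ratios and the lateral displacement follow directly from the velocities above; ALH here uses a mean-deviation form, which is an assumption since the image of formula (0-15) is not reproduced:

```python
import math

def lin(vsl_v, vcl_v):
    """Linearity (0-12), in [0, 1]."""
    return vsl_v / vcl_v

def straightness(vsl_v, vap_v):
    """Straightness STR (0-13)."""
    return vsl_v / vap_v

def wob(vap_v, vcl_v):
    """Wobble WOB (0-14)."""
    return vap_v / vcl_v

def alh(track, avg_track):
    """Lateral head displacement (0-15): twice the mean deviation of the
    actual path from the average path (mean form assumed)."""
    devs = [math.hypot(x - ax, y - ay)
            for (x, y), (ax, ay) in zip(track, avg_track)]
    return 2 * sum(devs) / len(devs)
```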
The beneficial effects of the invention are: the algorithm is simple, the computational accuracy is high, the motion trajectory of each target is intuitively reflected, and multiple targets can be tracked.
Brief description of the drawings
Fig. 1 is a block diagram of the multi-target tracking method for video content;
Fig. 2 is a detailed flowchart of obtaining the motion path curve or trajectory of each target in step S3;
Fig. 3 is sperm video image sample 1;
Fig. 4 is sperm video image sample 2.
Embodiment
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings, but the scope of protection of the present invention is not limited to what follows.
As shown in Fig. 1, a multi-target tracking method for video content comprises the following steps:
S1: video image acquisition. Set the sampling frequency and sampling time according to the image resolution and collect moving images from the video;
S2: segmentation and recognition of targets, comprising the following sub-steps:
S201: target segmentation. Separate the targets from the image background;
S202: impurity removal. The initial segmentation result may contain impurities, which must be removed;
S3: motion-trajectory tracking. Exploiting the small differences between adjacent video frames, search the reference frames for a matching point for each target, judge the correlation between two points with the established cost function, find the optimal matching point of each target, and connect the centroids of the corresponding matching points in each frame to obtain the motion path curve, or trajectory, of the target;
S4: calculation of the target motion parameters. On the basis of the target segmentation and trajectory tracking completed in steps S2 and S3, analyze the coordinates and morphological data of each target's trajectory points and compute the relevant parameters of the target's activity with the corresponding calculation formulas.
The sampling frequency is 24-35 frames per second.
The sampling time is one continuous period.
As shown in Fig. 2, in step S3 the specific steps for obtaining the motion path curve, or trajectory, of each target are:
S301: let the number of reference frames available for computation be N and the current frame number be i. To find, in the reference frames, the optimal matching target point for every target point in the current frame, read the parameter tables of frames i-N through i into the computer;
S302: choose a target m in current frame i;
S303: choose the nearest reference frame k that has not yet been computed;
S304: choose an uncomputed target n from the reference frame and predict the position range of target n when it moves to frame i:
If target n is in the first frame or is a newly appearing target, it has no reference frame, and its predicted position coordinate is:
(C_x, C_y) = (x_n, y_n)    (0-1)
If target n has a reference frame, its predicted position is:
(C_x, C_y) = (x_n + vx_n × (i − k), y_n + vy_n × (i − k))    (0-2)
where
vx_n = (x_n − x'_n)/df,  vy_n = (y_n − y'_n)/df    (0-3)
Here (C_x, C_y) is the predicted position of the target, (x_n, y_n) is the centroid coordinate of target n, df is the frame difference between target n and its reference frame, (x'_n, y'_n) is the centroid coordinate of the optimal matching target of target n in its reference frame, vx_n is the velocity of target n in the x direction, and vy_n is the velocity of target n in the y direction;
The search region is predicted as in formula (0-4):
th = maxMov + avgMov × (i − k)    (0-4)
where maxMov is the maximum velocity of all moving targets computed so far, avgMov is the average velocity of all moving targets computed so far, i is the current frame number, and k is the reference frame number.
Compute whether the distance between the predicted coordinate and the actual coordinate falls within the search region:
dx = |C_x − x_m|,  dy = |C_y − y_m|    (0-5)
d ≈ (dx + dy)/2    (0-6)
When the following conditions are all met, target m falls within the search region:
d ≤ th and dx ≤ λ·th and dy ≤ λ·th
where λ is a constant;
S305: when target m falls within the search region of target n, compute its cost function.
The cost function is given by formula (0-7), where α, β, γ are weighting coefficients, dsize is the perimeter difference between target m and target n, avgMov is the average velocity of all moving targets computed so far, dbrgt is the mean brightness difference between the two targets, d is the distance between the two targets, i − k is the frame difference between the two frames, (x_m, y_m) is the centroid coordinate of target m, (x_n, y_n) is the centroid coordinate of target n, vx_n is the velocity of target n in the x direction, and vy_n is the velocity of target n in the y direction;
S306: repeat steps S304 and S305 to find the optimal matching target point in the reference frames, then proceed according to the result:
A. if no optimal matching point is found in the current reference frame, return to S303;
B. if no matching target point is found in any reference frame, target point m is a newly appearing point in the current frame;
C. if an optimal matching point is found in the reference frame, return to S302 and choose a new target to match;
D. if all points in the current frame have been computed, proceed to S307;
S307: steps S301 to S306 yield the matching points of each target across the frames; connecting the centroid coordinates of the matching points gives the target trajectory, from which its motion parameters are computed.
The target segmentation in step S201 is realized by threshold-based segmentation. The basic strategy for choosing the segmentation threshold is: compute the gray-level histogram of the whole image and, from these statistics, automatically select the globally optimal segmentation threshold with the maximum between-class variance (Otsu) method.
In step S202, the specific steps of impurity removal are: compute the relevant morphological parameters of each target, establish judgment criteria from these parameters, and then remove impurities according to the criteria.
The morphological parameters include area, major-axis length, minor-axis length, mean brightness, and perimeter.
The relevant parameters include the velocity of motion, the head-swing frequency and amplitude, and the ratio of static to moving targets.
The parameter calculation formulas include:
(1) Straight-line velocity VSL, computed as in formula (0-8):
VSL = √((x_N − x_0)² + (y_N − y_0)²) / (N × ΔT)    (0-8)
where N is the total number of video frames, ΔT is the sampling interval, (x_0, y_0) is the starting coordinate of the target trajectory, and (x_N, y_N) is the end coordinate of the target trajectory.
(2) Curvilinear velocity VCL, computed as in formula (0-9):
VCL = (1/(N × ΔT)) × Σ_{i=1..N} √((x_i − x_{i−1})² + (y_i − y_{i−1})²)    (0-9)
where N is the number of video frames, ΔT is the sampling interval, and (x_i, y_i) is the coordinate of the target trajectory in frame i.
(3) Average path velocity VAP: the average path point coordinates are generally obtained by 3-point or 5-point smoothing, as in formula (0-10), and the average path velocity is computed as in formula (0-11):
(x̄_i, ȳ_i) = (1/5) × Σ_{j=i−2..i+2} (x_j, y_j)    (0-10)
VAP = (1/(N × ΔT)) × Σ_{i=1..N} √((x̄_i − x̄_{i−1})² + (ȳ_i − ȳ_{i−1})²)    (0-11)
where N is the number of video frames, ΔT is the sampling interval, and (x̄_i, ȳ_i) is the average path point coordinate of the target trajectory in frame i.
(4) Linearity LIN: the value lies in [0, 1] and is computed as in formula (0-12):
LIN = VSL / VCL    (0-12)
(5) Straightness STR, computed as in formula (0-13):
STR = VSL / VAP    (0-13)
(6) Wobble WOB, computed as in formula (0-14):
WOB = VAP / VCL    (0-14)
(7) Amplitude of lateral head displacement ALH, computed as in formula (0-15):
ALH = (2/N) × Σ_{i=1..N} √((x_i − x̄_i)² + (y_i − ȳ_i)²)    (0-15)
where (x_i, y_i) is the i-th point coordinate of the target's actual motion path and (x̄_i, ȳ_i) is the i-th point coordinate of the target's average path.
In a specific embodiment of the present invention, sperm are tracked by the method of the invention. Because the sperm are numerous and their motion is irregular, the data from multiple fields of view must be averaged in the statistics. Applying the tracking method of the invention not only yields parameters such as the velocity of motion, the head-swing frequency and amplitude, and the ratio of static to moving sperm, but also the following parameters:
A. total sperm count: the sum of all distinct sperm in each field of view;
B. sperm concentration (M/ml): the number of sperm per unit volume, where M/ml denotes 10^6/ml;
C. motile sperm count: the sum of all distinct motile sperm in each field of view;
D. motile sperm density (M/ml), computed as in formula (0-16);
E. sperm motility rate, computed as in formula (0-17);
F. motile sperm density (M/ml), computed as in formula (0-18):
motile sperm density = sperm concentration × sperm motility rate    (0-18)
G. abnormal sperm rate, computed as in formula (0-19);
H. rectilinear-motion sperm motility rate, computed as in formula (0-20);
I. rectilinear-motion sperm concentration (M/ml), computed as in formula (0-21):
rectilinear-motion sperm concentration = sperm concentration × rectilinear-motion sperm motility rate    (0-21)
J. curvilinear-motion sperm concentration (M/ml), computed as in formula (0-22);
K. sperm rectilinear-motion rate, computed as in formula (0-23).
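The images of formulas (0-16) to (0-23) are not reproduced in this text. Assuming the standard definitions implied by the parameter names (in particular, which count each rate is divided by is an assumption), the derived statistics can be sketched as:

```python
def semen_stats(total, motile, straight, abnormal, volume_ml):
    """Derived semen statistics from per-sample counts across fields of view.
    The denominator of each rate is an assumption, not taken from the patent."""
    conc = total / volume_ml                   # sperm concentration, per ml
    motility = motile / total                  # sperm motility rate (0-17)
    return {
        "concentration": conc,
        "motility_rate": motility,
        "motile_density": conc * motility,     # (0-18)
        "abnormal_rate": abnormal / total,     # (0-19), assumed definition
        "straight_rate": straight / total,     # (0-23), assumed definition
        "straight_concentration": conc * straight / total,  # (0-21)-style
    }
```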
The World Health Organization's sperm quality evaluation handbook specifies that sperm are divided into four grades according to their motility characteristics. Grade A sperm move forward rapidly in a straight line with a velocity greater than 25 μm/s; grade B sperm move forward slowly along a curved path; grade C sperm swing their tails but make no forward progress, with a velocity less than 5 μm/s; grade D sperm show no motion at all, i.e. are dead. In a normal male, grade A sperm should generally reach 32%, grades A and B together should exceed 40%, the sperm concentration should exceed 15 × 10^6/ml, and the total sperm count should exceed 39 × 10^6 per ejaculation. A man's health status can be judged from the data calculated above.
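The four-grade classification described above can be sketched as a simple rule; the velocity thresholds come from the text, while the progressive/moving flags are assumed inputs derived from the tracked parameters:

```python
def who_grade(velocity_um_s, progressive, moving):
    """Grade a sperm A-D from its tracked motion, per the grading described
    in the text (input flags are assumptions about the upstream pipeline)."""
    if not moving:
        return "D"                        # no motion at all (dead)
    if not progressive:
        return "C"                        # tail swings, no forward progress
    return "A" if velocity_um_s > 25 else "B"   # fast straight vs slow curved
```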
Fig. 3 and Fig. 4 are two sample frames of the sperm video. This method was applied to track the sperm; the following table gives the tracking and parameter-calculation data obtained by this method on the video sample, together with the data obtained by manual counting. As the table shows, the sperm count obtained with the present method is more accurate, and the motion parameters are computed more simply and quickly, which is more conducive to the observation and analysis of the sperm.
Claims (9)
1. A multi-target tracking method for video content, characterized in that it comprises the following steps:
S1: video image acquisition. Set the sampling frequency and sampling time according to the image resolution and collect moving images from the video;
S2: segmentation and recognition of targets, comprising the following sub-steps:
S201: target segmentation. Separate the targets from the image background;
S202: impurity removal. The initial segmentation result may contain impurities, which must be removed;
S3: motion-trajectory tracking. Exploiting the small differences between adjacent video frames, search the reference frames for a matching point for each target, judge the correlation between two points with the established cost function, find the optimal matching point of each target, and connect the centroids of the corresponding matching points in each frame to obtain the motion path curve, or trajectory, of the target;
S4: calculation of the target motion parameters. On the basis of the target segmentation and trajectory tracking completed in steps S2 and S3, analyze the coordinates and morphological data of each target's trajectory points and compute the relevant parameters of the target's activity with the corresponding calculation formulas.
2. The multi-target tracking method for video content according to claim 1, characterized in that the sampling frequency is 24-35 frames per second.
3. The multi-target tracking method for video content according to claim 1, characterized in that the sampling time is one continuous period.
4. The multi-target tracking method for video content according to claim 1, characterized in that in step S3 the specific steps for obtaining the motion path curve, or trajectory, of each target are:
S301: let the number of reference frames available for computation be N and the current frame number be i. To find, in the reference frames, the optimal matching target point for every target point in the current frame, read the parameter tables of frames i-N through i into the computer;
S302: choose a target m in current frame i;
S303: choose the nearest reference frame k that has not yet been computed;
S304: choose an uncomputed target n from the reference frame and predict the position range of target n when it moves to frame i:
If target n is in the first frame or is a newly appearing target, it has no reference frame, and its predicted position coordinate is:
(C_x, C_y) = (x_n, y_n)    (0-1)
If target n has a reference frame, its predicted position is:
(C_x, C_y) = (x_n + vx_n × (i − k), y_n + vy_n × (i − k))    (0-2)
where
vx_n = (x_n − x'_n)/df,  vy_n = (y_n − y'_n)/df    (0-3)
Here (C_x, C_y) is the predicted position of the target, (x_n, y_n) is the centroid coordinate of target n, df is the frame difference between target n and its reference frame, (x'_n, y'_n) is the centroid coordinate of the optimal matching target of target n in its reference frame, vx_n is the velocity of target n in the x direction, and vy_n is the velocity of target n in the y direction;
The search region is predicted as in formula (0-4):
th = maxMov + avgMov × (i − k)    (0-4)
where maxMov is the maximum velocity of all moving targets computed so far, avgMov is the average velocity of all moving targets computed so far, i is the current frame number, and k is the reference frame number.
Compute whether the distance between the predicted coordinate and the actual coordinate falls within the search region:
dx = |C_x − x_m|,  dy = |C_y − y_m|    (0-5)
d ≈ (dx + dy)/2    (0-6)
When the following conditions are all met, target m falls within the search region:
d ≤ th and dx ≤ λ·th and dy ≤ λ·th
where λ is a constant;
S305: when target m falls within the search region of target n, compute its cost function.
The cost function is given by formula (0-7), where α, β, γ are weighting coefficients, dsize is the perimeter difference between target m and target n, avgMov is the average velocity of all moving targets computed so far, dbrgt is the mean brightness difference between the two targets, d is the distance between the two targets, i − k is the frame difference between the two frames, (x_m, y_m) is the centroid coordinate of target m, (x_n, y_n) is the centroid coordinate of target n, vx_n is the velocity of target n in the x direction, and vy_n is the velocity of target n in the y direction;
S306: repeat steps S304 and S305 to find the optimal matching target point in the reference frames, then proceed according to the result:
A. if no optimal matching point is found in the current reference frame, return to S303;
B. if no matching target point is found in any reference frame, target point m is a newly appearing point in the current frame;
C. if an optimal matching point is found in the reference frame, return to S302 and choose a new target to match;
D. if all points in the current frame have been computed, proceed to S307;
S307: steps S301 to S306 yield the matching points of each target across the frames; connecting the centroid coordinates of the matching points gives the target trajectory, from which its motion parameters are computed.
5. The multi-target tracking method for video content according to claim 1, characterized in that the target segmentation in step S201 is realized by threshold-based segmentation, wherein the basic strategy for choosing the segmentation threshold is: compute the gray-level histogram of the whole image and, from these statistics, automatically select the globally optimal segmentation threshold with the maximum between-class variance (Otsu) method.
6. The multi-target tracking method for video content according to claim 1, characterized in that in step S202 the specific steps of impurity removal are: compute the relevant morphological parameters of each target, establish judgment criteria from these parameters, and then remove impurities according to the criteria.
7. The multi-target tracking method for video content according to claim 1 or 6, characterized in that the morphological parameters include area, major-axis length, minor-axis length, mean brightness, and perimeter.
8. The multi-target tracking method for video content according to claim 1, characterized in that the relevant parameters include the velocity of motion, the head-swing frequency and amplitude, and the ratio of static to moving targets.
9. The multi-target tracking method for video content according to claim 1, characterized in that the parameter calculation formulas comprise:
(1) Straight-line velocity VSL, computed as in formula (0-8):
VSL = √((x_N − x_0)² + (y_N − y_0)²) / (N × ΔT)    (0-8)
where N is the total number of video frames, ΔT is the sampling interval, (x_0, y_0) is the starting coordinate of the target trajectory, and (x_N, y_N) is the end coordinate of the target trajectory.
(2) Curvilinear velocity VCL, computed as in formula (0-9):
VCL = (1/(N × ΔT)) × Σ_{i=1..N} √((x_i − x_{i−1})² + (y_i − y_{i−1})²)    (0-9)
where N is the number of video frames, ΔT is the sampling interval, and (x_i, y_i) is the coordinate of the target trajectory in frame i.
(3) Average path velocity VAP: the average path point coordinates are generally obtained by 3-point or 5-point smoothing, as in formula (0-10), and the average path velocity is computed as in formula (0-11):
(x̄_i, ȳ_i) = (1/5) × Σ_{j=i−2..i+2} (x_j, y_j)    (0-10)
VAP = (1/(N × ΔT)) × Σ_{i=1..N} √((x̄_i − x̄_{i−1})² + (ȳ_i − ȳ_{i−1})²)    (0-11)
where N is the number of video frames, ΔT is the sampling interval, and (x̄_i, ȳ_i) is the average path point coordinate of the target trajectory in frame i.
(4) Linearity LIN: the value lies in [0, 1] and is computed as in formula (0-12):
LIN = VSL / VCL    (0-12)
(5) Straightness STR, computed as in formula (0-13):
STR = VSL / VAP    (0-13)
(6) Wobble WOB, computed as in formula (0-14):
WOB = VAP / VCL    (0-14)
(7) Amplitude of lateral head displacement ALH, computed as in formula (0-15):
ALH = (2/N) × Σ_{i=1..N} √((x_i − x̄_i)² + (y_i − ȳ_i)²)    (0-15)
where (x_i, y_i) is the i-th point coordinate of the target's actual motion path and (x̄_i, ȳ_i) is the i-th point coordinate of the target's average path.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410458136.7A CN104156987B (en) | 2014-09-10 | 2014-09-10 | Multi-target tracking method for video contents |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410458136.7A CN104156987B (en) | 2014-09-10 | 2014-09-10 | Multi-target tracking method for video contents |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104156987A true CN104156987A (en) | 2014-11-19 |
CN104156987B CN104156987B (en) | 2017-05-10 |
Family
ID=51882475
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410458136.7A Active CN104156987B (en) | 2014-09-10 | 2014-09-10 | Multi-target tracking method for video contents |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104156987B (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106204658A (en) * | 2016-07-21 | 2016-12-07 | 北京邮电大学 | Moving image tracking and device |
CN107193032A (en) * | 2017-03-31 | 2017-09-22 | 长光卫星技术有限公司 | Multiple mobile object based on satellite video quickly tracks speed-measuring method |
CN107490377A (en) * | 2017-07-17 | 2017-12-19 | 五邑大学 | Indoor map-free navigation system and navigation method |
WO2018086360A1 (en) * | 2016-11-08 | 2018-05-17 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for data visualization |
CN108303420A (en) * | 2017-12-30 | 2018-07-20 | 上饶市中科院云计算中心大数据研究院 | A kind of domestic type sperm quality detection method based on big data and mobile Internet |
CN110796686A (en) * | 2019-10-29 | 2020-02-14 | 浙江大华技术股份有限公司 | Target tracking method and device and storage device |
WO2020056913A1 (en) * | 2018-09-18 | 2020-03-26 | 图普科技(广州)有限公司 | Pedestrian trajectory acquisition method and apparatus, electronic device, and readable storage medium |
CN111161313A (en) * | 2019-12-16 | 2020-05-15 | 华中科技大学鄂州工业技术研究院 | Multi-target tracking method and device in video stream |
CN111639600A (en) * | 2020-05-31 | 2020-09-08 | 石家庄铁道大学 | Video key frame extraction method based on center offset |
CN112150415A (en) * | 2020-09-04 | 2020-12-29 | 清华大学 | Multi-target sperm real-time monitoring method based on deep learning |
CN113160273A (en) * | 2021-03-25 | 2021-07-23 | 常州工学院 | Intelligent monitoring video segmentation method based on multi-target tracking |
CN116740099A (en) * | 2023-08-15 | 2023-09-12 | 南京博视医疗科技有限公司 | OCT image segmentation method and device |
CN117078722A (en) * | 2023-10-17 | 2023-11-17 | 四川迪晟新达类脑智能技术有限公司 | Target tracking method and device for extracting small target based on gray level histogram |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102902965A (en) * | 2012-10-09 | 2013-01-30 | 公安部第三研究所 | Method for processing structured description of video image data and capable of implementing multi-target tracking |
- 2014-09-10: Application CN201410458136.7A filed in China (CN); granted as CN104156987B (status: Active)
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102902965A (en) * | 2012-10-09 | 2013-01-30 | 公安部第三研究所 | Method for processing structured descriptions of video image data, capable of multi-target tracking |
Non-Patent Citations (1)
Title |
---|
柳建武: "Computer-Aided Analysis of *** Quality", China Master's Theses Full-Text Database * |
Cited By (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106204658A (en) * | 2016-07-21 | 2016-12-07 | 北京邮电大学 | Moving-image tracking method and device |
US11049262B2 (en) | 2016-11-08 | 2021-06-29 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for data visualization |
WO2018086360A1 (en) * | 2016-11-08 | 2018-05-17 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for data visualization |
US11568548B2 (en) | 2016-11-08 | 2023-01-31 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for data visualization |
CN107193032A (en) * | 2017-03-31 | 2017-09-22 | 长光卫星技术有限公司 | Fast multi-target tracking and speed-measurement method based on satellite video |
CN107193032B (en) * | 2017-03-31 | 2019-11-15 | 长光卫星技术有限公司 | Fast multi-target tracking and speed-measurement method based on satellite video |
CN107490377A (en) * | 2017-07-17 | 2017-12-19 | 五邑大学 | Indoor map-free navigation system and navigation method |
CN108303420A (en) * | 2017-12-30 | 2018-07-20 | 上饶市中科院云计算中心大数据研究院 | Household sperm quality detection method based on big data and the mobile Internet |
WO2020056913A1 (en) * | 2018-09-18 | 2020-03-26 | 图普科技(广州)有限公司 | Pedestrian trajectory acquisition method and apparatus, electronic device, and readable storage medium |
CN110796686A (en) * | 2019-10-29 | 2020-02-14 | 浙江大华技术股份有限公司 | Target tracking method and device and storage device |
CN110796686B (en) * | 2019-10-29 | 2022-08-09 | 浙江大华技术股份有限公司 | Target tracking method and device and storage device |
CN111161313B (en) * | 2019-12-16 | 2023-03-14 | 华中科技大学鄂州工业技术研究院 | Multi-target tracking method and device in video stream |
CN111161313A (en) * | 2019-12-16 | 2020-05-15 | 华中科技大学鄂州工业技术研究院 | Multi-target tracking method and device in video stream |
CN111639600A (en) * | 2020-05-31 | 2020-09-08 | 石家庄铁道大学 | Video key frame extraction method based on center offset |
CN111639600B (en) * | 2020-05-31 | 2023-07-28 | 石家庄铁道大学 | Video key frame extraction method based on center offset |
CN112150415A (en) * | 2020-09-04 | 2020-12-29 | 清华大学 | Multi-target sperm real-time monitoring method based on deep learning |
CN113160273A (en) * | 2021-03-25 | 2021-07-23 | 常州工学院 | Intelligent monitoring video segmentation method based on multi-target tracking |
CN116740099A (en) * | 2023-08-15 | 2023-09-12 | 南京博视医疗科技有限公司 | OCT image segmentation method and device |
CN116740099B (en) * | 2023-08-15 | 2023-11-14 | 南京博视医疗科技有限公司 | OCT image segmentation method and device |
CN117078722A (en) * | 2023-10-17 | 2023-11-17 | 四川迪晟新达类脑智能技术有限公司 | Target tracking method and device for extracting small targets based on gray-level histograms |
CN117078722B (en) * | 2023-10-17 | 2023-12-22 | 四川迪晟新达类脑智能技术有限公司 | Target tracking method and device for extracting small targets based on gray-level histograms |
Also Published As
Publication number | Publication date |
---|---|
CN104156987B (en) | 2017-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104156987A (en) | | Multi-target tracking method for video contents |
CN102542289A (en) | | Pedestrian volume statistics method based on multiple Gaussian counting models |
CN107330372A (en) | | Video-based crowd density and anomaly detection analysis method |
CN105427626B (en) | | Traffic flow statistics method based on video analysis |
CN103530893B (en) | | Foreground detection method based on background subtraction and motion information under camera-shake scenes |
CN104931934B (en) | | Radar plot condensation method based on PAM cluster analysis |
CN103279737B (en) | | Fight behavior detection method based on spatio-temporal interest points |
CN104408707B (en) | | Rapid digital-imaging blur identification and restored-image quality assessment method |
CN104992451A (en) | | Improved target tracking method |
CN103164858A (en) | | Adhered-crowd segmentation and tracking methods based on superpixels and graph models |
CN102521565A (en) | | Garment identification method and system for low-resolution video |
CN101576952B (en) | | Method and device for detecting static targets |
CN104680556A (en) | | Parallax-based three-dimensional trajectory tracking method for fish movement |
CN104574439A (en) | | Target tracking method integrating Kalman filtering and the TLD (tracking-learning-detection) algorithm |
CN104965199B (en) | | Feature-fusion decision method for radar video moving targets |
CN104616006B (en) | | Bearded face detection method for surveillance video |
CN103336947A (en) | | Method for identifying small infrared moving targets based on saliency and structure |
CN102722702B (en) | | Particle-filter video object tracking method based on multi-feature fusion |
CN111862145A (en) | | Target tracking method based on multi-scale pedestrian detection |
CN110210452A (en) | | Object detection method for mine-truck environments based on improved tiny-YOLOv3 |
CN104331886A (en) | | Ship and warship detection method for port regions based on high-resolution SAR images |
CN104391294A (en) | | Radar plot correlation method based on connected-domain features and template matching |
CN113763427B (en) | | Multi-target tracking method based on coarse-to-fine occlusion handling |
CN104599291B (en) | | Infrared moving-target detection method based on structural similarity and saliency analysis |
CN103679745A (en) | | Moving target detection method and device |
Legal Events
Code | Title |
---|---|
C06 | Publication |
PB01 | Publication |
C10 | Entry into substantive examination |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |