CN108133172A - Method for classifying moving objects in video, traffic-flow analysis method, and devices - Google Patents
- Publication number
- CN108133172A CN108133172A CN201711138992.4A CN201711138992A CN108133172A CN 108133172 A CN108133172 A CN 108133172A CN 201711138992 A CN201711138992 A CN 201711138992A CN 108133172 A CN108133172 A CN 108133172A
- Authority
- CN
- China
- Prior art keywords
- movement locus
- space
- target object
- similarity
- track
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/231—Hierarchical techniques, i.e. dividing or merging pattern sets so as to obtain a dendrogram
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
Abstract
The invention discloses a method for classifying moving objects in video, a traffic-flow analysis method, and corresponding devices. The method comprises: extracting the movement trajectory of each target object in a video; modelling the similarity of the spatio-temporal relationships between the trajectories of the target objects, to determine the spatio-temporal similarity between each pair of trajectories; and clustering the trajectories using those spatio-temporal similarities, to obtain groups of target objects that are close in both time and space. This ensures fast and accurate monitoring of video data and effective detection of anomalous events.
Description
Technical field
The present invention relates to the fields of pattern recognition, machine learning, and computer vision, and in particular to a method for classifying moving objects in video, a traffic-flow analysis method, and corresponding devices.
Background technology
With the rapid development of digital network technology, video has become an important carrier of information. By the end of 2011, Guangdong Province alone had more than 1.1 million surveillance cameras, and the volume of monitoring data those cameras produce keeps growing. The rich motion information contained in video sequences has attracted great interest.

Traditional monitoring relies on people watching the footage, which is increasingly expensive. A single operator can only watch a limited number of feeds, so large systems need several operators working simultaneously. More critically, watching surveillance video for long periods fatigues the operator's vision and slackens attention, so much of the visual information in the picture is simply overlooked; the efficiency and accuracy of monitoring suffer, and events are missed or falsely reported. Moreover, for a large camera network the monitoring data are massive while the useful portion is tiny, and finding anomalies in such data by hand is close to impossible. So although the human eye can directly recognise moving objects and extract motion information from a video sequence, relying on human perception alone to obtain and use that information can no longer meet the needs of social development.

For these reasons, reducing the labour cost of video monitoring by replacing human vision with computer vision that extracts, analyses, and understands motion information from video sequences, and thereby improving the efficiency, accuracy, and effectiveness of monitoring, has become an urgent technical problem.
Summary of the invention
In view of the above problems, the present invention is proposed to provide a method and device for classifying moving objects in video that overcome, or at least partly solve, those problems.
In a first aspect, an embodiment of the present invention provides a method for classifying moving objects in video, comprising:
extracting the movement trajectory of each target object in a video;
modelling the similarity of the spatio-temporal relationships between the trajectories of the target objects, to determine the spatio-temporal similarity between each pair of trajectories;
clustering the trajectories using those spatio-temporal similarities, to obtain groups of target objects that are close in both time and space.
In some optional embodiments, extracting the movement trajectory of each target object in the video specifically comprises:
detecting each target object in the video sequence, tracking each target object over time, and recording its spatial position at each time step, thereby obtaining the movement trajectories of all target objects.
In some optional embodiments, after the movement trajectories of all target objects are obtained, the method further comprises:
preprocessing the trajectories to remove noise and any trajectory that does not meet a preset requirement.
In some optional embodiments, modelling the similarity of the spatio-temporal relationships between the trajectories specifically comprises:
for the trajectories of the target objects, analysing the pairwise spatial similarity and the pairwise temporal-relationship information;
fusing the pairwise temporal-relationship information into the pairwise spatial similarity, to establish a pairwise spatio-temporal similarity model of the trajectories.
In some optional embodiments, analysing the pairwise spatial similarity of the trajectories specifically comprises:
for a given pair of trajectories A and B, computing the spatial distance f(A, B) between them and normalising it to obtain the final spatial similarity between A and B:
F(A, B) = exp(−f(A, B)/σ)    (1)
where σ in formula (1) is a normalisation scale parameter.
In some optional embodiments, analysing the pairwise temporal-relationship information of the trajectories specifically comprises:
computing the temporal weight W between a given pair of trajectories A and B:
W = 1/(1 + exp(−C))    (2)
where the parameter C in formula (2) is computed by formula (3), in which Δd is the temporal overlap between trajectories A and B, η is the ratio of the temporal length of the shorter trajectory to that of the longer one, η_t is a threshold on that ratio, the next two quantities are the temporal lengths of trajectories A and B respectively, and K is an exponent parameter.
In some optional embodiments, establishing the pairwise spatio-temporal similarity model of the trajectories specifically comprises:
weighting the pairwise spatial similarity by the pairwise temporal-relationship information, giving the spatio-temporal similarity model between two trajectories shown in formula (4), in which F and w denote the spatial similarity and the temporal weight between the trajectories respectively, and λ is a scale factor.
In a second aspect, an embodiment of the present invention provides a traffic-flow analysis method, comprising:
extracting the movement trajectory of each vehicle in a video;
modelling the similarity of the spatio-temporal relationships between the vehicle trajectories, to determine the spatio-temporal similarity between each pair of trajectories;
clustering the vehicle trajectories using those similarities, to obtain vehicle groups that are close in both time and space;
statistically analysing the vehicles in each group, to obtain traffic-flow information for a preset area.
In a third aspect, an embodiment of the present invention provides a device for classifying moving objects in video, comprising:
an acquisition module, for extracting the movement trajectory of each target object in a video;
a modelling module, for modelling the similarity of the spatio-temporal relationships between the trajectories and determining the spatio-temporal similarity between each pair of trajectories;
a clustering module, for clustering the trajectories using those similarities, to obtain groups of target objects that are close in both time and space.
In some optional embodiments, the acquisition module is specifically configured to:
detect each target object in the video sequence, track each target object over time, and record its spatial position at each time step, thereby obtaining the movement trajectories of all target objects.
In some optional embodiments, the acquisition module is further configured to:
preprocess the trajectories to remove noise and any trajectory that does not meet a preset requirement.
In some optional embodiments, the modelling module comprises:
a spatial-similarity analysis submodule, for analysing the pairwise spatial similarity of the trajectories;
a temporal-relationship analysis submodule, for analysing the pairwise temporal-relationship information of the trajectories;
a modelling submodule, for fusing the pairwise temporal-relationship information into the pairwise spatial similarity, to establish the pairwise spatio-temporal similarity model of the trajectories.
In some optional embodiments, the spatial-similarity analysis submodule is specifically configured to:
for a given pair of trajectories A and B, compute the spatial distance f(A, B) between them and normalise it to obtain the final spatial similarity between A and B:
F(A, B) = exp(−f(A, B)/σ)    (1)
where σ in formula (1) is a normalisation scale parameter.
In some optional embodiments, the temporal-relationship analysis submodule is specifically configured to:
compute the temporal weight W between a given pair of trajectories A and B:
W = 1/(1 + exp(−C))    (2)
where the parameter C in formula (2) is computed by formula (3), in which Δd is the temporal overlap between trajectories A and B, η is the ratio of the temporal length of the shorter trajectory to that of the longer one, η_t is a threshold on that ratio, the next two quantities are the temporal lengths of trajectories A and B respectively, and K is an exponent parameter.
In some optional embodiments, the modelling submodule is specifically configured to:
weight the pairwise spatial similarity by the pairwise temporal-relationship information, giving the spatio-temporal similarity model between two trajectories shown in formula (4), in which F and w denote the spatial similarity and the temporal weight between the trajectories respectively, and λ is a scale factor.
In a fourth aspect, an embodiment of the present invention provides a traffic-flow analysis device, comprising:
an acquisition module, for extracting the movement trajectory of each vehicle in a video;
a modelling module, for modelling the similarity of the spatio-temporal relationships between the vehicle trajectories and determining the spatio-temporal similarity between each pair of trajectories;
a clustering module, for clustering the vehicle trajectories using those similarities, to obtain vehicle groups that are close in both time and space;
an analysis module, for statistically analysing the vehicles in each group, to obtain traffic-flow information for a preset area.
In a fifth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer instructions which, when executed by a processor, implement the following steps:
extracting the movement trajectory of each target object in a video;
modelling the similarity of the spatio-temporal relationships between the trajectories, to determine the spatio-temporal similarity between each pair of trajectories;
clustering the trajectories using those similarities, to obtain groups of target objects that are close in both time and space.
The beneficial effects of the technical solutions provided by the embodiments of the present invention include at least the following:

The spatio-temporal relationships between movement trajectories are analysed to obtain a spatio-temporal similarity model of the trajectories; the trajectories are then clustered, and moving groups are detected and analysed from the resulting clusters. This saves labour cost while better ensuring the speed, accuracy, and effectiveness of the analysis.

Fusing the temporal information between trajectories into their spatial similarity yields a spatio-temporal similarity model that can measure both trajectories occurring in the same time interval and trajectories occurring in different time intervals. Spatio-temporal dynamic relationships between trajectories can therefore be mined over longer time spans, supporting a more global dynamic analysis of moving groups and giving more robust, more precise group-detection performance.
Other features and advantages of the present invention will be set forth in the following description and will in part become apparent from it, or be understood by practising the invention. The objectives and other advantages of the invention can be realised and obtained by the structures particularly pointed out in the written description, the claims, and the accompanying drawings.

The technical solutions of the present invention are described in further detail below with reference to the drawings and embodiments.
Description of the drawings
The accompanying drawings are provided to aid understanding of the present invention and constitute a part of the specification; together with the embodiments, they serve to explain the invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flowchart of a method for classifying moving objects in video according to embodiment one of the present invention;
Fig. 2 is a schematic diagram of a movement trajectory in embodiment one;
Fig. 3 is a flowchart of the method for modelling the similarity of the spatio-temporal relationships between trajectories in embodiment one;
Fig. 4 is an example of the temporal relationships between trajectories in embodiment one;
Fig. 5 is a flowchart of a specific implementation of the traffic-flow analysis method in embodiment two;
Fig. 6 is a flowchart of the spatial-similarity analysis of trajectories in embodiment two;
Fig. 7 is an example of the trajectory clustering process in embodiment two;
Fig. 8 is a structural diagram of a device for classifying moving objects in video according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of the submodules of the modelling module in an embodiment of the present invention;
Fig. 10 is a structural diagram of a traffic-flow analysis device according to an embodiment of the present invention.
Specific embodiment
Exemplary embodiments of the present disclosure are described more fully below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be embodied in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present invention will be understood more thoroughly and its scope conveyed fully to those skilled in the art.
Manual detection of moving groups in the prior art wastes manpower, is slow and inaccurate, misses or falsely reports events, and leaves some tasks practically impossible to complete by hand. To solve these problems, embodiments of the present invention provide a method for analysing moving groups that can monitor and analyse them quickly and accurately, ensuring efficient, reliable, and effective analysis.
Embodiment one
Embodiment one of the present invention provides a method for classifying moving objects in video. As shown in Fig. 1, its flow comprises the following steps:
Step S101: extract the movement trajectory of each target object in the video.

Each target object is detected in the video sequence and tracked over time; its spatial position at each time step is recorded, yielding the movement trajectories of all target objects.

A target object may be a vehicle, a pedestrian in a crowd, or any other specific object of interest that appears in the video to be analysed.
The key of step S101 is detecting all moving target objects in the video sequence. Factors to consider when choosing a moving-object detection method include:

1. The motion characteristics of the target objects, including their trajectory and speed;
2. The characteristics of the background, including how fast it changes, the lighting, and whether there are occlusions;
3. The requirements of the detection system itself, including real-time constraints, detection precision, and computational cost.
A detection method is chosen in view of these factors. Common moving-object detection methods include the frame-difference method, background subtraction, and optical flow. For example, when the objects move at a moderate speed (neither too fast nor too slow), the background changes quickly, and real-time performance matters, the simple frame-difference method can be chosen; when the background changes little, background subtraction can be chosen; when high precision is required and a larger computational load is acceptable, optical flow can be chosen.
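The frame-difference method mentioned above can be sketched in a few lines: subtract consecutive grayscale frames and threshold the absolute difference. This is a minimal illustration, not the patent's implementation; the frame representation (numpy arrays) and the threshold value are assumptions.

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, threshold=25):
    """Binary foreground mask via the frame-difference method.

    Pixels whose grayscale change between consecutive frames exceeds
    `threshold` are marked as moving. The threshold is an illustrative
    choice, not a value specified by the patent.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Toy example: a bright 2x2 "object" moves one pixel to the right.
prev = np.zeros((6, 6), dtype=np.uint8)
curr = np.zeros((6, 6), dtype=np.uint8)
prev[2:4, 1:3] = 200
curr[2:4, 2:4] = 200
mask = frame_difference_mask(prev, curr)
# Moving pixels appear where the object left and where it arrived.
```

As the text notes, this method is cheap and fast but only suitable when the background changes quickly relative to the objects and precision demands are modest.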
The embodiments of the present invention may also detect target objects with other detection methods; this is not limited here.

With a detection method selected by the above analysis, each target object is detected in the video sequence as a set of key points; the key points are tracked over time and their spatial positions recorded, yielding the movement trajectories of all target objects.
Fig. 2 is an example of a movement trajectory. A trajectory A can be regarded as a series of observation points A = {p_1, p_2, …, p_n}, where p_i = (x_i, y_i, t_i) is the spatio-temporal position of the i-th observation point, (x_i, y_i) are its spatial coordinates, and t_i is its time coordinate.
Because of illumination changes, occlusions, and the like, long trajectories are hard to obtain and some noise is unavoidable. Therefore, after the trajectories of all target objects are obtained, they are preferably preprocessed to remove noise and any trajectory that does not meet the preset requirement, making the detection results more accurate.
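The preprocessing described above can be sketched as a simple filter over the extracted trajectories. The two criteria below (minimum number of observations, minimum spatial extent) and their threshold values are illustrative assumptions; the patent only states that noisy tracks and tracks not meeting a preset requirement are removed.

```python
def preprocess_tracks(tracks, min_points=10, min_span=5.0):
    """Drop trajectories that are too short or too static to be reliable.

    `tracks` is a list of trajectories, each a list of (x, y, t) points,
    matching the observation-point representation of Fig. 2.
    """
    kept = []
    for tr in tracks:
        if len(tr) < min_points:          # too few observations -> likely noise
            continue
        xs = [p[0] for p in tr]
        ys = [p[1] for p in tr]
        span = max(max(xs) - min(xs), max(ys) - min(ys))
        if span < min_span:               # barely moves -> likely jitter
            continue
        kept.append(tr)
    return kept

noisy = [(0.0, 0.0, 0)] * 3
good = [(float(i), 0.0, i) for i in range(12)]
result = preprocess_tracks([noisy, good])   # only the long track survives
```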
Step S102: model the similarity of the spatio-temporal relationships between the trajectories of the target objects, to determine the spatio-temporal similarity between each pair of trajectories.

Step S102 may be implemented as follows: for the trajectories of the target objects, analyse the pairwise spatial similarity and the pairwise temporal-relationship information; then fuse the pairwise temporal-relationship information into the pairwise spatial similarity, to establish a pairwise spatio-temporal similarity model of the trajectories.
The modelling method is shown in Fig. 3 and comprises the following steps:

Step S1021: analyse the pairwise spatial similarity of the trajectories.

For a given pair of trajectories A and B, compute the spatial distance f(A, B) between them and normalise it to obtain the final spatial similarity between A and B:

F(A, B) = exp(−f(A, B)/σ)    (1)

where σ in formula (1) is a normalisation scale parameter.
The spatial similarity between two trajectories is measured through their spatial distance. Common choices include the Euclidean distance, Mahalanobis distance, Minkowski distance, Hausdorff distance, and cosine distance. A suitable similarity measure is selected according to the type and characteristics of the trajectories and the specific needs of the analysis.
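As one concrete instance of formula (1), the sketch below uses the symmetric Hausdorff distance (one of the listed options) as f(A, B) and applies the exponential normalisation. Representing trajectories as arrays of 2D points and the value of σ are assumptions for illustration.

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (n,2) and b (m,2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def space_similarity(a, b, sigma=10.0):
    """Formula (1): F(A, B) = exp(-f(A, B) / sigma)."""
    return np.exp(-hausdorff(a, b) / sigma)

a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
# Every point of A is 1 unit from its nearest point of B, so f(A, B) = 1.
sim = space_similarity(a, b, sigma=1.0)   # exp(-1), roughly 0.368
```

The normalisation maps any non-negative distance into (0, 1], with σ controlling how quickly similarity decays with distance.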
Step S1022: analyse the pairwise temporal-relationship information of the trajectories.

In step S1022, the temporal-relationship information can be characterised by the temporal weight between trajectories. Specifically, the temporal weight W between a given pair of trajectories A and B can be computed as:

W = 1/(1 + exp(−C))    (2)

where the parameter C in formula (2) is computed by formula (3), in which Δd is the temporal overlap between trajectories A and B, η is the ratio of the temporal length of the shorter trajectory to that of the longer one, η_t is a threshold on that ratio, the next two quantities are the temporal lengths of trajectories A and B respectively, and K is an exponent parameter.
The temporal relationship between trajectories has two parts: the temporally overlapping part and the non-overlapping part. As shown in Fig. 4 for the three trajectories A, B, and C: trajectories A and B have both overlapping and non-overlapping parts in time, trajectories A and C do not overlap in time, and trajectories B and C again have both overlapping and non-overlapping parts.

Here Δd is defined as the temporal overlap between two trajectories. For the computation, the trajectory that starts earlier is denoted A and the one that appears later is denoted B; then

Δd = t_A_end − t_B_start    (4)

where t_A_end and t_B_start in formula (4) are the end time of trajectory A and the start time of trajectory B respectively, measured in frames; for example, t_A_end = 27 means trajectory A ends at frame 27.

Δd > 0 indicates that the two trajectories overlap in time: in Fig. 4, Δd = 27 − 9 = 18 > 0 between trajectories A and B, and Δd = 42 − 34 = 8 > 0 between trajectories B and C. Δd < 0 indicates no temporal overlap: between trajectories A and C in Fig. 4, Δd = 27 − 34 = −7 < 0.
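The Δd computation and the sigmoid of formula (2) can be sketched as follows. Only trajectory A's end frame (27), B's start (9) and end (42), and C's start (34) appear in the text; the other start/end frames below are illustrative, and because formula (3) for C is not reproduced in this text, `temporal_weight` simply accepts any real-valued C.

```python
import math

def temporal_overlap(t_a_start, t_a_end, t_b_start, t_b_end):
    """Delta d of formula (4): end frame of the earlier-starting track
    minus start frame of the later-starting track. Positive means the
    two tracks overlap in time."""
    if t_a_start <= t_b_start:
        return t_a_end - t_b_start
    return t_b_end - t_a_start

def temporal_weight(c):
    """Formula (2): W = 1 / (1 + exp(-C)). The patent derives C from
    Delta d, the length ratio eta, the threshold eta_t and the exponent
    K via its formula (3), which is not legible here."""
    return 1.0 / (1.0 + math.exp(-c))

# Fig. 4 examples (frame ranges partly assumed: A 1..27, B 9..42, C 34..50).
d_ab = temporal_overlap(1, 27, 9, 42)    # 18: A and B overlap
d_bc = temporal_overlap(9, 42, 34, 50)   # 8: B and C overlap
d_ac = temporal_overlap(1, 27, 34, 50)   # -7: A and C do not overlap
```

The sigmoid squashes C into (0, 1), so strongly overlapping pairs approach weight 1 while temporally distant pairs approach 0.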
Note that steps S1021 and S1022 are independent of each other and have no strict ordering.
Step S1023: establish the pairwise spatio-temporal similarity model of the trajectories.

In step S1023, the pairwise spatial similarity can be weighted by the pairwise temporal-relationship information, giving the spatio-temporal similarity model between two trajectories shown in formula (5), in which F and w denote the spatial similarity and the temporal weight between the trajectories respectively, and λ is a scale factor.

In this way the pairwise spatio-temporal similarity model is established for the trajectories of all target objects.
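Formula (5) itself is not legible in this text, so the sketch below uses one plausible reading of "the temporal weight w weights the spatial similarity F, with scale factor λ", namely S = F · w^λ. This is an assumption for illustration, not the patent's exact model.

```python
def spacetime_similarity(f, w, lam=1.0):
    """Temporal weight w modulating spatial similarity F.

    S = F * w ** lam is one plausible form of the weighting described
    in the text; the patent's formula (5) is not reproduced here.
    """
    return f * w ** lam

# Overlapping tracks (w near 1) keep most of their spatial similarity;
# temporally distant tracks (w near 0) are suppressed.
close_in_time = spacetime_similarity(0.8, 0.9)
far_in_time = spacetime_similarity(0.8, 0.1)
```

Whatever the exact form, the effect described in the text is the same: two spatially close trajectories score high only when their temporal relationship also supports it.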
Step S103: cluster the trajectories of the target objects using the spatio-temporal similarities between them, to obtain groups of target objects that are close in both time and space.

Common trajectory-clustering methods include partitional clustering, hierarchical clustering, density-based clustering, neural-network clustering, and statistical clustering; a suitable method (or another appropriate one) is chosen according to the actual characteristics of the target objects to be analysed.

With a clustering method selected, trajectories with similar behaviour are grouped together: trajectories that are close in space and have short temporal gaps between them are clustered into one class, while trajectories far apart in space or with long temporal gaps are separated. This yields groups of target objects that are close in both time and space, to be used for group analysis of the moving objects.
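The patent leaves the clustering algorithm open, so as one simple choice the sketch below links every pair of trajectories whose spatio-temporal similarity exceeds a fixed threshold (via union-find), which is equivalent to cutting a single-linkage hierarchy at that level. The similarity values and threshold are illustrative.

```python
def cluster_by_similarity(sim, threshold=0.5):
    """Group track indices whose pairwise similarity exceeds `threshold`.

    `sim` is a symmetric n x n similarity matrix (list of lists). This is
    only one of the clustering families the text lists, chosen for brevity.
    """
    n = len(sim)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if sim[i][j] > threshold:
                parent[find(i)] = find(j)   # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

sim = [[1.0, 0.9, 0.1],
       [0.9, 1.0, 0.2],
       [0.3, 0.2, 1.0]]
# Tracks 0 and 1 are close in space and time; track 2 is separate.
clusters = cluster_by_similarity(sim)
```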
Step S104: analyse the target objects by group.

With the groups obtained by the above method, a global per-group analysis of the target objects can be carried out, e.g. of group density, movement speed, and movement patterns. The analysis can also detect anomalous events, including abnormal position, abnormal orientation, and abnormal speed.
In the above method of this embodiment, the spatio-temporal relationships between trajectories are analysed to obtain a spatio-temporal similarity model; the trajectories are then clustered, and moving groups are detected and analysed from the resulting clusters. This saves labour cost while better ensuring the speed, accuracy, and effectiveness of the analysis.

Fusing the temporal information between trajectories into their spatial similarity yields a spatio-temporal similarity model that can measure both trajectories occurring in the same time interval and trajectories occurring in different time intervals. Spatio-temporal dynamic relationships between trajectories can therefore be mined over longer time spans, supporting a more global dynamic analysis of moving groups and giving more robust, more precise group-detection performance.
Embodiment two
Embodiment two of the present invention applies the above method for classifying moving objects in video to the scenario of traffic-flow analysis. As shown in Fig. 5, the traffic-flow analysis method specifically comprises the following steps:
Step S501: obtain the input video sequence.

A video sequence of traffic captured by a road camera is input, either in real time or as a whole after being assembled. The traffic-flow analysis device obtains this video sequence and then determines the temporal length of the video sequence analysed in each pass, which can be set in any of the following ways:

Mode 1: according to the real-time requirements of the analysis. If the requirements are high, for example when vehicles approaching a violation state must be detected in real time so that violating or accident vehicles can be handled promptly, a shorter per-pass length is set, provided the number of trajectories remains sufficiently representative. If only features such as vehicle density, speed, and movement patterns need to be analysed and the real-time requirements are low, a longer per-pass length can be set.

Mode 2: according to how fast the actual video sequence changes. If it changes quickly, the trajectories in a short window are already representative and a shorter per-pass length can be set; if it changes slowly, a longer per-pass length is needed for the trajectories to be representative.

Mode 3: the per-pass length is determined by jointly considering the real-time requirements and how fast the actual video sequence changes.
In the present embodiment, the temporal length of the traffic video analyzed in each pass can be set to 200 frames. The first 200 frames are extracted first; the motion trajectories of the vehicles are extracted from them and preprocessed, pairwise spatio-temporal similarity analysis is performed on the preprocessed trajectories, and cluster analysis is then carried out on all trajectories. The same operations are subsequently applied to the next 200 frames, and so on, until all video sequences have been analyzed.
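The fixed-length windowed processing described above can be sketched as a simple generator. This is a minimal illustration; the 200-frame window size comes from the embodiment, everything else (names, the per-window pipeline described in the docstring) is hypothetical:

```python
def windows(frames, size=200):
    """Yield successive fixed-length analysis windows over a frame sequence.

    Each window is analyzed independently: trajectory extraction,
    preprocessing, pairwise spatio-temporal similarity, then clustering.
    """
    for start in range(0, len(frames), size):
        yield frames[start:start + size]
```

The final window may be shorter than 200 frames when the sequence length is not a multiple of the window size.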
Step S502: Extract the motion trajectory of each vehicle object in the video.
A vehicle detection method is selected: because the motion of vehicles may be fast, slow, or even momentarily static, and because the background also varies considerably owing to illumination changes, occlusion and other causes, an optical flow method can be selected to detect the vehicles in the video sequence.
The grey-level change of each image pixel is observed over time; the rate of change of the grey level is then a velocity vector, and all velocity vectors together constitute an optical flow field. If there is no moving target in the image, the optical flow vectors of the entire image vary uniformly. If there is a moving target in the image, the optical flow vectors of the moving target necessarily differ from the optical flow vectors of the neighbouring background, so the position of the moving target can be detected. The optical flow method can detect slow-moving and even static vehicles, and is relatively insensitive to environmental changes.
The vehicles in the video sequence are detected by the optical flow method and tracked over time; their spatial positions over time are obtained, yielding the motion trajectories of all vehicles.
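The patent does not specify a particular optical flow algorithm. As one hedged illustration of the brightness-constancy principle described above, a single global flow vector can be estimated in the Lucas–Kanade style by least squares; this is a sketch on synthetic data, not the embodiment's implementation:

```python
import numpy as np

def global_flow(prev, curr):
    """Estimate one translation (vx, vy) between two frames via least squares
    on the brightness-constancy equation Ix*vx + Iy*vy + It = 0."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    Iy, Ix = np.gradient(prev)          # spatial gradients (rows = y, cols = x)
    It = curr - prev                    # temporal gradient between the frames
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v                            # array([vx, vy])
```

Applying such an estimate per detected vehicle region over consecutive frames yields its spatial positions over time, i.e. its motion trajectory.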
Step S503: Preprocess the motion trajectory of each vehicle.
The motion trajectories of the vehicles are preprocessed to remove noise and trajectories that do not meet preset requirements. For example, a trajectory that loiters back and forth does not match actual vehicle behaviour, and such trajectories are preset as non-conforming motion trajectories.
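One way to realize this preprocessing is to drop very short tracks and tracks whose net displacement is small relative to their path length (the back-and-forth loitering case). The thresholds and the straightness criterion below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def preprocess(tracks, min_len=10, min_straightness=0.2):
    """Keep tracks that are long enough and are not loitering back and forth."""
    kept = []
    for t in tracks:
        t = np.asarray(t, dtype=float)
        if len(t) < min_len:                      # too few observations: noise
            continue
        path = np.linalg.norm(np.diff(t, axis=0), axis=1).sum()
        disp = np.linalg.norm(t[-1] - t[0])       # net start-to-end displacement
        if path == 0 or disp / path < min_straightness:
            continue                              # loitering trajectory
        kept.append(t)
    return kept
```

A straight track has displacement/path ratio near 1 and is kept; an oscillating track has a ratio near 0 and is discarded.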
Step S504: Analyze the pairwise spatial similarity of the vehicles' motion trajectories.
Since the motion trajectories of the target vehicles differ in length and shape, a modified Hausdorff distance can be used to measure the spatial distance between vehicle motion trajectories. The spatial similarity of motion trajectories is expressed mainly by their spatial neighbourhood relationship and their velocity relationship; therefore the spatial Euclidean distance (capturing the spatial neighbourhood relationship) and the cosine distance (capturing velocity) are introduced into the calculation of the modified Hausdorff distance.
For given trajectories A and B, the spatial similarity analysis method is shown in Fig. 6 and comprises the following steps:
Step S5041: For each observation point on trajectory A, search for the nearest neighbouring point on trajectory B.
A trajectory is regarded as the combination of a series of observation points: A = {a_1, a_2, ..., a_{N_A}}, where a_i represents the space-time position of the i-th observation point. For given trajectories A = {a_1, ..., a_{N_A}} and B = {b_1, ..., b_{N_B}}, for each observation point a_i on motion trajectory A, the point ε(i) on motion trajectory B nearest to it is searched as follows:

ε(i) = argmin_j ||a_i − b_j|| (6)

In the above formula (6), a_i denotes the coordinates of the i-th observation point on motion trajectory A, and j indexes the j-th point on motion trajectory B.
Step S5042: Calculate the modified Hausdorff distance from trajectory A to trajectory B.
After the nearest point ε(i) on trajectory B has been found for every observation point a_i of trajectory A using formula (6), the modified Hausdorff distance from trajectory A to trajectory B is obtained:

d(A, B) = (1/N_A) Σ_{i=1..N_A} [ ||a_i − b_ε(i)|| + β(1 − cos(v_i, v_ε(i))) ] (7)

In the above formula (7), N_A is the number of observation points in trajectory A, β is a balance coefficient between the spatial Euclidean distance and the cosine distance, and v_i and v_ε(i) are the movement velocities of a_i and b_ε(i) respectively.
Step S5043: For each observation point on trajectory B, search for the nearest neighbouring point on trajectory A.
Since the modified Hausdorff distance d(A, B) from trajectory A to trajectory B and the modified Hausdorff distance d(B, A) from trajectory B to trajectory A are asymmetric, both d(A, B) and d(B, A) must be calculated when determining the modified Hausdorff distance between trajectories A and B.
Using the method of step S5041, the nearest point on trajectory A is searched for each observation point on trajectory B.
Step S5044: Calculate the modified Hausdorff distance from trajectory B to trajectory A.
Using the method of step S5042, the modified Hausdorff distance d(B, A) from trajectory B to trajectory A is calculated.
Steps S5041~S5042 and steps S5043~S5044 can be performed in either order: steps S5041~S5042 may come first, steps S5043~S5044 may come first, or the two pairs of steps may be carried out simultaneously.
Step S5045: Calculate the spatial distance between trajectory A and trajectory B.
The minimum of the modified Hausdorff distance d(A, B) from trajectory A to trajectory B and the modified Hausdorff distance d(B, A) from trajectory B to trajectory A is taken as the spatial distance between trajectories A and B:

f(A, B) = min(d(A, B), d(B, A)) (8)
Step S5046: Calculate the spatial similarity between trajectory A and trajectory B.
The spatial distance f(A, B) between trajectories A and B is exponentially normalized according to formula (1), giving the spatial similarity F(A, B) between trajectories A and B.
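Steps S5041–S5046 can be sketched as follows. The exact combination of the Euclidean and cosine terms inside formula (7) is not reproduced in this text, so the additive form with balance coefficient β below is an assumed reading:

```python
import numpy as np

def directed_mhd(A, B, vA, vB, beta=0.5):
    """Modified Hausdorff distance from trajectory A to trajectory B,
    mixing spatial Euclidean distance with the velocity cosine distance."""
    A, B, vA, vB = (np.asarray(x, dtype=float) for x in (A, B, vA, vB))
    dists = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    eps = dists.argmin(axis=1)                    # formula (6): nearest point on B
    spatial = dists[np.arange(len(A)), eps]
    cos = (vA * vB[eps]).sum(axis=1) / (
        np.linalg.norm(vA, axis=1) * np.linalg.norm(vB[eps], axis=1) + 1e-12)
    return float(np.mean(spatial + beta * (1.0 - cos)))  # formula (7), assumed form

def space_similarity(A, B, vA, vB, sigma=1.0, beta=0.5):
    """Symmetrize with the minimum (formula (8)), then normalize (formula (1))."""
    f = min(directed_mhd(A, B, vA, vB, beta), directed_mhd(B, A, vB, vA, beta))
    return float(np.exp(-f / sigma))
```

Identical trajectories with identical velocities give f = 0 and hence similarity 1; the similarity decays exponentially as the trajectories separate.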
Step S505: Analyze the pairwise temporal relationship information of the vehicles' motion trajectories.
Steps S504 and S505 can be performed in either order: either step may come first, or they may be carried out simultaneously.
The temporal weight influence parameter C between two trajectories is calculated according to formula (3), and the temporal weight between the two trajectories is then calculated using formula (2).
Step S506: Similarity modeling of the spatio-temporal relationship is performed on the vehicles' motion trajectories, and the spatio-temporal similarity between the motion trajectories of the vehicles is determined.
The motion-trajectory spatio-temporal similarity model is established according to formula (5).
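Since the images of formulas (3) and (5) are not reproduced in this text, the sketch below only assumes their shape: a parameter C that grows with the temporal overlap and with the length-ratio agreement (fed through the sigmoid of formula (2)), and a multiplicative fusion of spatial similarity F and temporal weight w with a scale factor λ. All specific forms here are assumptions:

```python
import math

def temporal_weight(T_A, T_B, overlap, eta_t=0.5, K=1.0):
    """Temporal weight W = 1 / (1 + exp(-C)) (formula (2)).

    C is an assumed stand-in for formula (3): it increases with the
    temporal overlap and with the shorter-to-longer length ratio eta
    relative to the threshold eta_t; K is the exponent parameter.
    """
    eta = min(T_A, T_B) / max(T_A, T_B)
    C = K * (overlap / min(T_A, T_B)) * (eta - eta_t)   # hypothetical form
    return 1.0 / (1.0 + math.exp(-C))

def spatiotemporal_similarity(F, w, lam=1.0):
    """Fuse spatial similarity F and temporal weight w (assumed form of (5))."""
    return w * F ** lam
```

Under these assumptions, fully overlapping trajectories of equal length receive a higher temporal weight than temporally disjoint ones, as the embodiment intends.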
Step S507: Using the spatio-temporal similarities between the vehicles' motion trajectories, the motion trajectories of the vehicles are clustered to obtain vehicle groups that are close in time and space.
Traffic-flow motion-trajectory data are the result of small groups slowly merging into larger groups; this embodiment therefore obtains the traffic-flow motion-trajectory cluster groups by a bottom-up hierarchical clustering method.
Embodiments of the present invention may also cluster the motion trajectories of the vehicles by other clustering methods, which is not limited here.
The specific hierarchical clustering process is shown in Fig. 7a and Fig. 7b: at the start of clustering, each single motion trajectory is taken as one class; then, according to the spatio-temporal closeness between trajectories, the corresponding trajectories are gradually clustered into the same class; and as the cluster level increases, smaller groups are fused into larger groups.
Clustering can be stopped automatically in different scenes according to the conditions of each scene, i.e. the closeness between motion trajectories is used to automatically determine the number of clusters of the hierarchical clustering. The ratio between the between-class difference and the within-class difference is introduced: the clustering effect is optimal when the between-class difference is maximal and the within-class difference is minimal. The optimal cluster number is calculated as follows:

C_N = S_B / S_W (9)

In the above formula (9), S_B and S_W respectively represent the between-class difference and the within-class difference of the cluster groups under the current hierarchical clustering state. As the cluster level increases, both the between-class difference and the within-class difference grow; C_N is calculated at every clustering level, the clustering effect is optimal when C_N is maximal, and clustering stops automatically at that maximum.
Fig. 7a shows the low-level clustering process of the motion trajectories, and Fig. 7b the high-level clustering process:
First-level hierarchical clustering: each single motion trajectory t1, t2, t3, ... is first taken as its own class.
Second-level hierarchical clustering: according to the pairwise closeness of the motion trajectories, two motion trajectories with high spatio-temporal similarity are fused into the same class. According to the pairwise spatio-temporal similarity model of the motion trajectories obtained in step S506, i.e. formula (5), two trajectories whose computed spatio-temporal similarity is greater than or equal to the spatio-temporal similarity threshold set for the current level are classified as one class. For example, the spatio-temporal similarity between t1 and t2 reaches the threshold, so they are classified as one class C1,2; t3 finds no trajectory whose spatio-temporal similarity with it reaches the threshold, so t3 alone is classified as one class.
Third-level hierarchical clustering: the groups obtained by the second-level clustering are again compared pairwise. If the pairwise spatio-temporal similarities between all trajectories contained in one group and all trajectories contained in another group are all greater than or equal to the spatio-temporal similarity threshold set for the current level, the two groups are merged into one class; if the spatio-temporal similarity of any pair of trajectories falls below the current level's threshold, the two groups cannot be merged. For example, the spatio-temporal similarities between the trajectories t1, t2 contained in group C1,2 and t3 are analyzed; the similarity of t1 with t3 and the similarity of t2 with t3 both reach the spatio-temporal similarity threshold of the current level, so group C1,2 and t3 are merged into one class C1,2,3.
And so on for higher levels of hierarchical clustering; during the clustering at each level, the ratio of the between-class difference to the within-class difference of the cluster classification under the current cluster state is calculated.
At the (N−2)-th level of hierarchical clustering, groups D1, D2, D3, ... have been obtained by the method of the third-level hierarchical clustering described above.
At the (N−1)-th level of hierarchical clustering, by the method described above, the pairwise spatio-temporal similarities between all trajectories contained in group D1 and all trajectories contained in group D2 are all greater than or equal to the threshold of the current level, so D1 and D2 are merged into one class D1,2; group D3 finds no group for which the pairwise spatio-temporal similarities between all its trajectories and all of D3's trajectories reach the current level's threshold, so D3 alone remains one class.
At the N-th level of hierarchical clustering, the pairwise spatio-temporal similarities between all trajectories contained in group D1,2 and group D3 are all found to be greater than or equal to the threshold of the current level, so D1,2 and D3 are merged into one class, yielding the trajectories of a single cluster group, path1. The ratio C_N of the between-class difference to the within-class difference calculated for the cluster groups of this level is maximal, i.e. the clustering effect is now optimal; clustering stops, and the optimal cluster groups are obtained.
In the above method, the spatio-temporal similarity threshold of the current level is set to decrease gradually as the cluster level increases.
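The bottom-up clustering with automatic selection of the cluster number can be sketched as follows. The scatter-based score stands in for formula (9); its Calinski–Harabasz-style normalization by cluster count is an added assumption needed to make the ratio comparable across levels, and the centroid-distance merge rule replaces the patent's per-level similarity thresholds:

```python
import numpy as np

def auto_hierarchical_cluster(points):
    """Agglomerative clustering; the partition maximizing the normalized
    between-class / within-class scatter ratio (the C_N criterion) is kept."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    clusters = [[i] for i in range(n)]
    best_score, best = -np.inf, None

    def score(cl):
        k = len(cl)
        if k < 2 or k >= n:
            return -np.inf
        cents = np.array([points[c].mean(axis=0) for c in cl])
        overall = points.mean(axis=0)
        s_b = sum(len(c) * ((cent - overall) ** 2).sum()   # between-class scatter
                  for c, cent in zip(cl, cents))
        s_w = sum(((points[c] - cent) ** 2).sum()          # within-class scatter
                  for c, cent in zip(cl, cents))
        return (s_b / (k - 1)) / (s_w / (n - k) + 1e-12)

    while len(clusters) > 1:
        cents = np.array([points[c].mean(axis=0) for c in clusters])
        d = np.linalg.norm(cents[:, None] - cents[None, :], axis=2)
        np.fill_diagonal(d, np.inf)
        i, j = np.unravel_index(d.argmin(), d.shape)       # closest pair of groups
        clusters[i] += clusters[j]
        del clusters[j]
        s = score(clusters)
        if s > best_score:
            best_score, best = s, [sorted(c) for c in clusters]
    return best
```

In the trajectory setting, each point would be replaced by a trajectory and the centroid distance by a dissimilarity such as 1 minus the spatio-temporal similarity.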
Step S508: Statistical analysis is performed on the vehicles in each vehicle group to obtain the traffic flow information in a preset area.
Based on the traffic-flow motion-trajectory cluster groups obtained by the above method, a group-level global analysis of the traffic flow is performed to learn traffic information such as vehicle density, speed, direction of motion and queuing degree; at the same time, abnormal behaviour can be detected through the analysis and an alarm raised promptly, so that violations are effectively monitored and traffic accidents are prevented.
For traffic flow analysis tasks, traffic problems are growing ever more serious as urbanization accelerates, and manually watching recorded video alone cannot cope with ultra-large-scale traffic flow data. In the present embodiment, spatio-temporal similarity modeling is performed on the vehicles moving on the road according to the spatio-temporal relationships of their motion trajectories; based on the spatio-temporal similarity model, the motion trajectories are clustered with a hierarchical clustering method, and the number of clusters is determined automatically in each scene according to the closeness between motion trajectories. Intelligent analysis of massive traffic flow data is thereby achieved: the global information of the traffic flow can be better obtained, the detailed traffic information under the surveillance video is learned in a faster and more accurate manner, and abnormal events are detected through the analysis, ensuring the efficiency, accuracy and validity of vehicle flow detection.
Based on the same inventive concept, an embodiment of the present invention also provides a device for classifying moving objects in a video. The device can be applied in various fields such as human behaviour patterns, communication and logistics, emergency evacuation management, animal behaviour analysis, marketing, computational geometry and simulation.
The structure of the device, as shown in Fig. 8, includes:
an acquisition module 801 for extracting the motion trajectory of each target object in a video;
a modeling module 802 for performing similarity modeling of the spatio-temporal relationships on the motion trajectories of the target objects and determining the spatio-temporal similarities between the motion trajectories of the target objects; and
a cluster module 803 for clustering the motion trajectories of the target objects using the spatio-temporal similarities between the motion trajectories, to obtain groups of target objects that are close in time and space.
Preferably, the above acquisition module 801 is specifically configured to detect each target object from a video sequence, track each target object over time, obtain its spatial positions over time, and thus obtain the motion trajectories of all target objects.
Preferably, as shown in Fig. 9, the above modeling module 802 includes:
a spatial similarity analysis submodule 8021 for analyzing the pairwise spatial similarities of the motion trajectories of the target objects;
a temporal relationship information analysis submodule 8022 for analyzing the pairwise temporal relationship information of the motion trajectories of the target objects; and
a modeling submodule 8023 for fusing the pairwise temporal relationship information of the motion trajectories of the target objects into their pairwise spatial similarities, establishing the pairwise spatio-temporal relationship similarity model of the motion trajectories of the target objects.
Preferably, the above spatial similarity analysis submodule 8021 is specifically configured to: for given trajectories A and B, calculate the spatial distance f(A, B) between trajectories A and B, and normalize the spatial distance f(A, B), obtaining the final spatial similarity between trajectories A and B:

F(A, B) = exp(−f(A, B)/σ) (1)

In the above formula (1), σ is a normalization scale parameter.
Preferably, the above temporal relationship information analysis submodule 8022 is specifically configured to:
calculate the temporal weight W between given trajectories A and B:

W = 1/(1 + exp(−C)) (2)

In the above formula (2), the parameter C is calculated by formula (3).
In formula (3), Δd is the temporal overlap between trajectories A and B, η is the ratio of the temporally shorter trajectory to the temporally longer trajectory among A and B, η_t is the temporal length ratio threshold between trajectories A and B, T_A and T_B are respectively the temporal lengths of motion trajectories A and B, and K is an exponent parameter.
Preferably, the above modeling submodule 8023 is specifically configured to:
weight the pairwise spatial similarities of the motion trajectories of the target objects with the pairwise temporal relationship information, obtaining the spatio-temporal similarity model between two motion trajectories as formula (5).
In formula (5), F and w respectively represent the spatial similarity and the temporal weight between trajectories, and λ is a scale factor.
Based on the same inventive concept, an embodiment of the present invention also provides an analytical device of vehicle flow, the structure of which, with reference to Fig. 10, includes:
an acquisition module 1001 for extracting the motion trajectory of each vehicle object in a video;
a modeling module 1002 for performing similarity modeling of the spatio-temporal relationships on the motion trajectories of the vehicles and determining the spatio-temporal similarities between the motion trajectories of the vehicles;
a cluster module 1003 for clustering the motion trajectories of the vehicles using the spatio-temporal similarities between the motion trajectories, to obtain vehicle groups that are close in time and space; and
an analysis module 1004 for performing statistical analysis on the vehicles in each vehicle group to obtain the traffic flow information in a preset area.
With regard to the device for classifying moving objects in a video and the analytical device of vehicle flow in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the related methods and will not be elaborated here.
Based on the same inventive concept, an embodiment of the present invention further provides a non-transitory computer-readable storage medium; when instructions in the storage medium are executed by a processor, the processor is enabled to perform the above-described method for classifying moving objects in a video, including:
extracting the motion trajectory of each target object in a video;
performing similarity modeling of the spatio-temporal relationships on the motion trajectories of the target objects, and determining the spatio-temporal similarities between the motion trajectories of the target objects; and
clustering the motion trajectories of the target objects using the spatio-temporal similarities between the motion trajectories, to obtain groups of target objects that are close in time and space.
Unless specifically stated otherwise, terms such as processing, computing, calculating, determining and displaying may refer to actions and/or processes of one or more processing or computing systems or similar devices that manipulate data represented as physical (e.g. electronic) quantities within the registers or memories of the processing system and transform them into other data similarly represented as physical quantities within the memories, registers, or other such information storage, transmission or display devices of the processing system. Information and signals may be represented using any of a variety of different technologies and techniques. For example, the data, instructions, commands, information, signals, bits, symbols and chips referred to in the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
It should be understood that the specific order or hierarchy of steps in the disclosed processes is an example of exemplary approaches. Based on design preferences, it should be appreciated that the specific order or hierarchy of steps in the processes may be rearranged without departing from the protection scope of the present disclosure. The appended method claims present the elements of the various steps in a sample order and are not meant to be limited to the specific order or hierarchy presented.
In the above detailed description, various features are combined together in a single embodiment to simplify the disclosure. This method of disclosure should not be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the appended claims reflect, the present invention lies in less than all features of a single disclosed embodiment. Therefore, the appended claims are hereby expressly incorporated into the detailed description, with each claim standing on its own as a separate preferred embodiment of the present invention.
Those skilled in the art will further appreciate that the various illustrative blocks, modules, circuits and algorithm steps described in connection with the embodiments herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the protection scope of the present disclosure.
The steps of a method or algorithm described in connection with the embodiments herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. Alternatively, the processor and the storage medium may reside as discrete components in a user terminal.
For software implementations, the techniques described in this application may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in memory units and executed by processors. A memory unit may be implemented within the processor or external to the processor, in which latter case it can be communicatively coupled to the processor via various means, as is known in the art.
What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the aforementioned embodiments, but one of ordinary skill in the art will recognize that many further combinations and permutations of the embodiments are possible. Accordingly, the embodiments described herein are intended to embrace all such alterations, modifications and variations that fall within the protection scope of the appended claims. Furthermore, to the extent that the term "comprising" is used in either the specification or the claims, it is intended to be inclusive in a manner similar to the term "including", as "including" is interpreted when employed as a transitional word in a claim. Furthermore, any use of the term "or" in the specification or the claims is intended to mean a "non-exclusive or".
Claims (10)
1. A method for classifying moving objects in a video, characterized by comprising:
extracting a motion trajectory of each target object in the video;
performing similarity modeling of spatio-temporal relationships on the motion trajectories of the target objects, and determining spatio-temporal similarities between the motion trajectories of the target objects; and
clustering the motion trajectories of the target objects using the spatio-temporal similarities between the motion trajectories, to obtain groups of target objects that are close in time and space.
2. The method according to claim 1, characterized in that extracting the motion trajectory of each target object in the video specifically comprises:
detecting each target object from a video sequence, tracking each target object over time, obtaining its spatial positions over time, and obtaining the motion trajectories of all target objects.
3. The method according to claim 1, characterized in that performing similarity modeling of spatio-temporal relationships on the motion trajectories of the target objects specifically comprises:
for the motion trajectories of the target objects, respectively analyzing their pairwise spatial similarities and their pairwise temporal relationship information; and
fusing the pairwise temporal relationship information of the motion trajectories of the target objects into their pairwise spatial similarities, establishing the pairwise spatio-temporal relationship similarity model of the motion trajectories of the target objects.
4. The method according to claim 3, characterized in that analyzing the pairwise spatial similarities of the motion trajectories of the target objects specifically comprises:
for given trajectories A and B, calculating the spatial distance f(A, B) between trajectories A and B, and normalizing the spatial distance f(A, B) to obtain the final spatial similarity between trajectories A and B:

F(A, B) = exp(−f(A, B)/σ) (1)

In the above formula (1), σ is a normalization scale parameter.
5. The method according to claim 3, characterized in that analyzing the pairwise temporal relationship information of the motion trajectories of the target objects specifically comprises:
calculating the temporal weight W between given trajectories A and B:

W = 1/(1 + exp(−C)) (2)

In the above formula (2), the parameter C is calculated by formula (3).
In formula (3), Δd is the temporal overlap between trajectories A and B, η is the ratio of the temporally shorter trajectory to the temporally longer trajectory among A and B, η_t is the temporal length ratio threshold between trajectories A and B, T_A and T_B are respectively the temporal lengths of motion trajectories A and B, and K is an exponent parameter.
6. The method according to any one of claims 3 to 5, characterized in that establishing the pairwise spatio-temporal relationship similarity model of the motion trajectories of the target objects specifically comprises:
weighting the pairwise spatial similarities of the motion trajectories of the target objects with the pairwise temporal relationship information, obtaining the spatio-temporal similarity model between two motion trajectories as formula (4).
In formula (4), F and w respectively represent the spatial similarity and the temporal weight between trajectories, and λ is a scale factor.
7. An analysis method of vehicle flow, characterized by comprising:
extracting a motion trajectory of each vehicle object in a video;
performing similarity modeling of spatio-temporal relationships on the motion trajectories of the vehicles, and determining spatio-temporal similarities between the motion trajectories of the vehicles;
clustering the motion trajectories of the vehicles using the spatio-temporal similarities between the motion trajectories, to obtain vehicle groups that are close in time and space; and
performing statistical analysis on the vehicles in each vehicle group to obtain traffic flow information in a preset area.
8. A device for classifying moving objects in a video, characterized by comprising:
an acquisition module for extracting a motion trajectory of each target object in the video;
a modeling module for performing similarity modeling of spatio-temporal relationships on the motion trajectories of the target objects and determining spatio-temporal similarities between the motion trajectories of the target objects; and
a cluster module for clustering the motion trajectories of the target objects using the spatio-temporal similarities between the motion trajectories, to obtain groups of target objects that are close in time and space.
9. The device according to claim 8, characterized in that the modeling module includes:
a spatial similarity analysis submodule for analyzing the pairwise spatial similarities of the motion trajectories of the target objects;
a temporal relationship information analysis submodule for analyzing the pairwise temporal relationship information of the motion trajectories of the target objects; and
a modeling submodule for fusing the pairwise temporal relationship information of the motion trajectories of the target objects into their pairwise spatial similarities, establishing the pairwise spatio-temporal relationship similarity model of the motion trajectories of the target objects.
10. An analytical device of vehicle flow, characterized by comprising:
an acquisition module for extracting a motion trajectory of each vehicle object in a video;
a modeling module for performing similarity modeling of spatio-temporal relationships on the motion trajectories of the vehicles and determining spatio-temporal similarities between the motion trajectories of the vehicles;
a cluster module for clustering the motion trajectories of the vehicles using the spatio-temporal similarities between the motion trajectories, to obtain vehicle groups that are close in time and space; and
an analysis module for performing statistical analysis on the vehicles in each vehicle group to obtain traffic flow information in a preset area.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711138992.4A CN108133172B (en) | 2017-11-16 | 2017-11-16 | Method for classifying moving objects in video and method and device for analyzing traffic flow |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108133172A true CN108133172A (en) | 2018-06-08 |
CN108133172B CN108133172B (en) | 2022-04-05 |
Family
ID=62389151
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711138992.4A Active CN108133172B (en) | 2017-11-16 | 2017-11-16 | Method for classifying moving objects in video and method and device for analyzing traffic flow |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108133172B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120170802A1 (en) * | 2010-12-30 | 2012-07-05 | Pelco Inc. (Clovis, CA) | Scene activity analysis using statistical and semantic features learnt from object trajectory data |
CN104134222A (en) * | 2014-07-09 | 2014-11-05 | 郑州大学 | Traffic flow monitoring image detecting and tracking system and method based on multi-feature fusion |
CN104657424A (en) * | 2015-01-21 | 2015-05-27 | 段炼 | Clustering method for point-of-interest trajectories based on fusion of multiple spatio-temporal features |
CN106383868A (en) * | 2016-09-05 | 2017-02-08 | 电子科技大学 | Road network-based spatio-temporal trajectory clustering method |
CN107301254A (en) * | 2017-08-24 | 2017-10-27 | 电子科技大学 | Road-network hotspot region mining method |
2017-11-16: Application CN201711138992.4A filed; granted as patent CN108133172B (status: Active)
Non-Patent Citations (1)
Title |
---|
Pan Qiming (潘奇明), "Research on Classification and Recognition of Moving Object Trajectories", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series * |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109461106A (en) * | 2018-10-11 | 2019-03-12 | 浙江公共安全技术研究院有限公司 | Multidimensional information perception and processing method |
US11889382B2 (en) | 2018-10-16 | 2024-01-30 | Huawei Technologies Co., Ltd. | Trajectory matching based on use of quality indicators empowered by weighted confidence values |
WO2020078540A1 (en) * | 2018-10-16 | 2020-04-23 | Huawei Technologies Co., Ltd. | Improved trajectory matching based on use of quality indicators empowered by weighted confidence values |
CN111328403A (en) * | 2018-10-16 | 2020-06-23 | 华为技术有限公司 | Improved trajectory matching based on use of quality indicators empowered by weighted confidence values |
CN111328403B (en) * | 2018-10-16 | 2023-09-29 | 华为技术有限公司 | Improved trajectory matching based on use of quality indicators empowered by weighted confidence values |
CN111209769B (en) * | 2018-11-06 | 2024-03-08 | 深圳市商汤科技有限公司 | Authentication system and method, electronic device and storage medium |
CN111209769A (en) * | 2018-11-06 | 2020-05-29 | 深圳市商汤科技有限公司 | Identity authentication system and method, electronic device, and storage medium |
CN109684916A (en) * | 2018-11-13 | 2019-04-26 | 恒睿(重庆)人工智能技术研究院有限公司 | Method, system, device and storage medium for detecting data anomalies based on path trajectories |
CN109684916B (en) * | 2018-11-13 | 2020-01-07 | 恒睿(重庆)人工智能技术研究院有限公司 | Method, system, device and storage medium for detecting data anomalies based on path trajectories |
CN109784260A (en) * | 2019-01-08 | 2019-05-21 | 深圳英飞拓科技股份有限公司 | Real-time regional traffic statistics method and system based on video structuring |
CN110751164B (en) * | 2019-03-01 | 2022-04-12 | 西安电子科技大学 | Method for detecting abnormal travel of elderly people based on location services |
CN110751164A (en) * | 2019-03-01 | 2020-02-04 | 西安电子科技大学 | Method for detecting abnormal travel of elderly people based on location services |
CN110727756A (en) * | 2019-10-18 | 2020-01-24 | 北京明略软件***有限公司 | Management method and device of space-time trajectory data |
CN112037245B (en) * | 2020-07-22 | 2023-09-01 | 杭州海康威视数字技术股份有限公司 | Method and system for determining similarity of tracked targets |
CN112037245A (en) * | 2020-07-22 | 2020-12-04 | 杭州海康威视数字技术股份有限公司 | Method and system for determining similarity of tracked target |
CN111898592B (en) * | 2020-09-29 | 2020-12-29 | 腾讯科技(深圳)有限公司 | Track data processing method and device and computer readable storage medium |
CN111898592A (en) * | 2020-09-29 | 2020-11-06 | 腾讯科技(深圳)有限公司 | Track data processing method and device and computer readable storage medium |
CN112562315B (en) * | 2020-11-02 | 2022-04-01 | 鹏城实验室 | Method, terminal and storage medium for acquiring traffic flow information |
CN112562315A (en) * | 2020-11-02 | 2021-03-26 | 鹏城实验室 | Method, terminal and storage medium for acquiring traffic flow information |
CN112365712A (en) * | 2020-11-05 | 2021-02-12 | 包赛花 | AI-based intelligent parking lot parking guidance method and artificial intelligence server |
CN112925948A (en) * | 2021-02-05 | 2021-06-08 | 上海依图网络科技有限公司 | Video processing method and apparatus, medium, chip and electronic device |
CN113255518A (en) * | 2021-05-25 | 2021-08-13 | 神威超算(北京)科技有限公司 | Video abnormal event detection method and chip |
CN113971782A (en) * | 2021-12-21 | 2022-01-25 | 云丁网络技术(北京)有限公司 | Comprehensive monitoring information management method and system |
CN113971782B (en) * | 2021-12-21 | 2022-04-19 | 云丁网络技术(北京)有限公司 | Comprehensive monitoring information management method and system |
Also Published As
Publication number | Publication date |
---|---|
CN108133172B (en) | 2022-04-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108133172A (en) | Method for classifying moving objects in video, and method and device for analyzing traffic flow | |
Kim et al. | Deep-hurricane-tracker: Tracking and forecasting extreme climate events | |
Ma et al. | A hybrid CNN-LSTM model for aircraft 4D trajectory prediction | |
Sun et al. | RSOD: Real-time small object detection algorithm in UAV-based traffic monitoring | |
CN108216252B (en) | Subway driver vehicle-mounted driving behavior analysis method, vehicle-mounted terminal and system | |
Zhang et al. | Deep convolutional neural networks for forest fire detection | |
CN108399745B (en) | Unmanned aerial vehicle-based time-interval urban road network state prediction method | |
CN114970321A (en) | Scene flow digital twinning method and system based on dynamic trajectory flow | |
CN110084165A (en) | The intelligent recognition and method for early warning of anomalous event under the open scene of power domain based on edge calculations | |
Jain et al. | Performance analysis of object detection and tracking algorithms for traffic surveillance applications using neural networks | |
CN106815563B (en) | Human body apparent structure-based crowd quantity prediction method | |
CN113610069B (en) | Knowledge distillation-based target detection model training method | |
CN113239914B (en) | Classroom student expression recognition and classroom state evaluation method and device | |
CN113642474A (en) | Hazardous area personnel monitoring method based on YOLOV5 | |
Khosravi et al. | Crowd emotion prediction for human-vehicle interaction through modified transfer learning and fuzzy logic ranking | |
CN109376736A (en) | A kind of small video target detection method based on depth convolutional neural networks | |
CN113297972A (en) | Transformer substation equipment defect intelligent analysis method based on data fusion deep learning | |
Xie et al. | Research of PM2.5 prediction system based on CNNs-GRU in Wuxi urban area | |
CN110674887A (en) | End-to-end road congestion detection algorithm based on video classification | |
CN114566052B (en) | Method for judging rotation of highway traffic flow monitoring equipment based on traffic flow direction | |
CN114202803A (en) | Multi-stage human body abnormal action detection method based on residual error network | |
CN106384359A (en) | Moving target tracking method and television set | |
Pudasaini et al. | Scalable object detection, tracking and pattern recognition model using edge computing | |
Li et al. | Lightweight convolutional neural network for aircraft small target real-time detection in Airport videos in complex scenes | |
Billones et al. | Vehicle-pedestrian classification with road context recognition using convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||