CN105354542A - Method for detecting abnormal video event in crowded scene - Google Patents

Method for detecting abnormal video event in crowded scene

Info

Publication number
CN105354542A
Authority
CN
China
Prior art keywords: loci, block, atom, class, sigma
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510710563.4A
Other languages
Chinese (zh)
Other versions
CN105354542B (en)
Inventor
陈华华
周灵娟
郭春生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhu Qibo Intellectual Property Operation Co.,Ltd.
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN201510710563.4A
Publication of CN105354542A
Application granted
Publication of CN105354542B
Legal status: Active

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/23 - Clustering techniques
    • G06F18/232 - Non-hierarchical techniques
    • G06F18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23211 - Non-hierarchical techniques using statistics or function optimisation with adaptive number of clusters
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a method for detecting abnormal video events in a crowded scene. In the training phase, the video is first divided into blocks and the optical-flow information of the blocks is extracted to represent their local features; these local features are then taken as elements to construct a graph, the dimensionality is reduced by Laplacian Eigenmaps, the local features are grouped with an adaptive clustering method, the class centres are taken as codewords, and the codewords finally form a codebook. In the test phase, the local features of the video blocks are extracted, the feature-distance similarity between the local features and the codebook is computed, and events are detected in conjunction with 8-neighborhood information from the previous moment. With this method, a normal-event model can be built simply by clustering the atom sets composed of optical-flow features, and the neighborhood information of historical moments further improves the accuracy of abnormal-event detection.

Description

Method for detecting abnormal video events in a crowded scene
Technical field
The invention belongs to the field of intelligent video surveillance and relates to a method for detecting abnormal video events in a crowded scene.
Background art
Video abnormal-event detection refers to automatically analysing the events in a video-surveillance scene and immediately raising an alarm when an abnormal event occurs, thereby improving the response and rescue efficiency of the relevant departments. For example, abnormal events such as cycling, skateboarding or vehicles passing through a shopping mall, or a crowd panicking and stampeding in a square, can be detected and reported promptly. Such detection has a wide range of applications in the field of video surveillance.
Existing abnormal-event detection methods fall roughly into two classes: 1) methods that track a target object, analyse its trajectory, and judge it accordingly; 2) methods that do not track a target object but instead build a model of normal events and analyse optical-flow and texture features. Methods of the first class track the target object and derive its motion direction and speed, together with object appearance and size features; they work well when only a few moving targets are present in the scene, but in a crowded environment the targets overlap one another and are difficult to track, so the detection performance of such methods is poor. The present invention adopts the second class of method.
Summary of the invention
The object of the invention is to provide a method for detecting abnormal video events in a crowded scene, so as to improve the abnormal-event detection rate.
To this end, the technical scheme provided by the invention is as follows:
Step (1): feature extraction, as follows:
Each frame of the video is divided into non-overlapping blocks of size N × N, and M consecutive frames are taken, yielding spatio-temporal blocks of size N × N × M; a video of M frames is thus composed of several such blocks, each of which is called an atom. Let the frame resolution be W × H; each frame then yields f_block = ⌊W/N⌋ × ⌊H/N⌋ blocks, where ⌊·⌋ denotes rounding down. The motion information of the block at location loci at time t is represented by a histogram h_loci^t = [h_1, h_2, h_3, h_4], 1 ≤ loci ≤ f_block with loci an integer, where h_i (1 ≤ i ≤ 4) is the sum of the optical-flow magnitudes in the i-th of the 4 directions obtained by quantizing the optical-flow orientation at 90-degree intervals. With the current time t, combining the histogram information of the (M-1)/2 frames before and after t, the atom at location loci at time t is expressed as x_loci = [h_loci^{t-(M-1)/2}, …, h_loci^{t+(M-1)/2}], where M is odd and 1 ≤ loci ≤ f_block. A video is divided into P segments of M frames each, giving P × f_block atoms, which together constitute the atom set of the video.
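As an illustration of the block histograms above (this is not code from the patent: the function name, and the assumption that a dense optical-flow field has already been computed by some estimator, are ours), a minimal NumPy sketch; an atom is then the concatenation of M consecutive per-block histograms:

```python
import numpy as np

def block_flow_histograms(flow, N=10):
    """Quantize a dense optical-flow field into per-block 4-bin orientation
    histograms: directions are binned at 90-degree intervals and the flow
    magnitudes falling in each bin are summed (the h_1..h_4 of the text).

    flow: (H, W, 2) array of per-pixel (dx, dy); returns (H//N, W//N, 4)."""
    H, W, _ = flow.shape
    mag = np.hypot(flow[..., 0], flow[..., 1])
    ang = np.mod(np.arctan2(flow[..., 1], flow[..., 0]), 2 * np.pi)
    bins = (ang // (np.pi / 2)).astype(int) % 4    # 4 directions, 90 deg apart
    hists = np.zeros((H // N, W // N, 4))
    for by in range(H // N):
        for bx in range(W // N):
            b = bins[by * N:(by + 1) * N, bx * N:(bx + 1) * N]
            m = mag[by * N:(by + 1) * N, bx * N:(bx + 1) * N]
            for d in range(4):
                hists[by, bx, d] = m[b == d].sum()
    return hists
```

An atom for block loci at time t would then concatenate the M per-frame histograms of that block over frames t-(M-1)/2 … t+(M-1)/2.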
Step (2): feature learning, as follows:
2-1. The Laplacian Eigenmaps method is used to map the atom set into a low-dimensional space, after which the atoms are clustered. First a graph G = (V, E) is built over the atom set: the vertex set V contains one vertex per atom, and the weighted edges E encode the similarity between atoms. The weight of the edge between the i-th and j-th atoms, 1 ≤ i ≤ P × f_block, 1 ≤ j ≤ P × f_block, is computed by formula (1):
w_ij = exp(-(1 - df_ij)/(σ_i·σ_j)) · exp(-ds_ij/σ_s)    (1)
In the first factor on the right-hand side of formula (1), df_ij is given by
df_ij = α_ij × df(x_i, x_j)    (2)
where α_ij = ⟨x_i, x_j⟩/(‖x_i‖·‖x_j‖) is the cosine similarity, ⟨x_i, x_j⟩ denoting the inner product of x_i and x_j, and
df(x_i, x_j) = Σ_{k=0..4M-1} min[x_i(k), x_j(k)] / max[Σ_{k=0..4M-1} x_i(k), Σ_{k=0..4M-1} x_j(k)].
The scale factor σ_r = 1 - df_{r,G}, i.e. formula (2) evaluated at i = r, j = G, where x_G is the G-th nearest neighbour of x_r and the neighbourhood distance metric is Euclidean, r = i or j.
In the second factor of formula (1), ds_ij is the Euclidean distance between the spatial positions of the i-th and j-th atoms, and σ_s is the spatial scale factor. Once the graph is built, spectral clustering is performed on the atoms: in graph-theoretic terms, clustering becomes a graph-cut problem whose principle is to maximize the edge weights within each subgraph while minimizing the edge weights between subgraphs, i.e. to minimize the total weight of the cut edges, which amounts to minimizing the objective function shown in formula (3).
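Formulas (1)-(2) combine a histogram-intersection similarity, a cosine weighting, and a spatial-proximity factor. A minimal sketch of the edge weight follows; the division by the scale factors is our reconstruction of the garbled original (in the style of self-tuning spectral clustering), and the function names are illustrative:

```python
import numpy as np

def df(xi, xj):
    # Histogram-intersection similarity between two atoms (the df term of
    # eq. (2)): sum of element-wise minima over the larger total mass.
    return np.minimum(xi, xj).sum() / max(xi.sum(), xj.sum(), 1e-12)

def edge_weight(xi, xj, pi, pj, sigma_i, sigma_j, sigma_s):
    """Edge weight of eq. (1): a feature-similarity factor scaled by the
    per-atom factors sigma_i, sigma_j, times a spatial-proximity factor.
    pi, pj are the (row, col) block positions of the two atoms."""
    # alpha is the cosine similarity of the two atoms (the eq. (2) weighting)
    alpha = xi @ xj / (np.linalg.norm(xi) * np.linalg.norm(xj) + 1e-12)
    df_ij = alpha * df(xi, xj)                                   # eq. (2)
    ds = np.linalg.norm(np.asarray(pi, float) - np.asarray(pj, float))
    return np.exp(-(1.0 - df_ij) / (sigma_i * sigma_j)) * np.exp(-ds / sigma_s)
```

Identical atoms at the same position get weight 1; the weight decays with feature dissimilarity and with spatial distance.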
E(Y) = Σ_ij w_ij·‖y_i - y_j‖²    (3)
where w_ij is given by formula (1); y_i and y_j are the coordinate vectors of x_i and x_j mapped into the target space, and Y is composed of the vectors y_i, 1 ≤ i ≤ P × f_block. Minimizing the objective function is equivalent to solving for the optimal Y:
Y_opt = argmin_Y tr(Yᵀ·L·Y)  s.t.  Yᵀ·D·Y = I    (4)
In formula (4), the Laplacian matrix is L = D - W, where D is the diagonal matrix with diagonal entries d_ii = Σ_j w_ij and W is the matrix formed by the w_ij. The generalized eigenvalues and eigenvectors of L with respect to D are computed, and the eigenvectors corresponding to the l smallest non-zero eigenvalues are chosen. These l eigenvectors form a (P × f_block) × l feature matrix, each row of which gives the coordinates of one atom in the l-dimensional space. Finally, according to these coordinates, the atoms are clustered in the l-dimensional space with the self-tuning clustering method of Lihi Zelnik-Manor (Lihi Zelnik-Manor and Pietro Perona, "Self-Tuning Spectral Clustering", in Proceedings of the 18th Annual Conference on Neural Information Processing Systems, 2004), which determines the number of clusters adaptively. The clustering yields Num0 classes; each class centre represents the local feature of one class of event and is computed by formula (5):
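The generalized eigenproblem of formula (4) can be solved directly; a small sketch, assuming SciPy is available (the function name and the zero-eigenvalue tolerance are illustrative):

```python
import numpy as np
from scipy.linalg import eigh  # assumes SciPy is available

def laplacian_embedding(W, l):
    """Solve the generalized eigenproblem L y = lambda D y of eq. (4),
    with L = D - W, and keep the eigenvectors of the l smallest non-zero
    eigenvalues; row i of the result is atom i's coordinate vector y_i in
    the l-dimensional target space."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = eigh(L, D)      # generalized eigenvalues, ascending
    keep = vals > 1e-10          # drop the trivial (near-)zero eigenvalue
    return vecs[:, keep][:, :l]
```

With the embedded coordinates in hand, any clustering step (the patent uses Zelnik-Manor's self-tuning clustering) can be run in the l-dimensional space.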
μ_k = (1/N_k)·Σ_{i=1..N_k} x_i    (5)
where N_k is the number of atoms belonging to the k-th class of event. The class centres serve as codewords, and all the codewords at location loci form the codebook of that location.
2-2. The codebook at block location loci is built as follows:
(a) Build the initial codebook. The training data are passed through the feature learning of step 2-1, yielding Num classes of events, with Num = Num0. Compute w_{k,loci} by formula (6); if w_{k,loci} > 0, add the centre of class k to the codebook at location loci as a codeword and at the same time keep the training data of the k-th class of local feature, k ∈ {1, 2, …, Num0}:
w_{k,loci} = n_{k,loci} / Σ_k n_{k,loci},  k ∈ {1, 2, …, Num0}    (6)
where n_{k,loci} is the number of times the k-th class of event occurs at location loci, 1 ≤ loci ≤ f_block.
(b) Input a new training atom x_loci^new and compare it with the codebook at block location loci. If the feature-distance similarity satisfies d_max(x_loci^new) < th, where th is a preset threshold, add x_loci^new to a new set U; otherwise add x_loci^new to the training data of the most similar class of local feature at location loci, recompute the centre of that class, and update the corresponding codeword in the codebook. Here d_max(x_loci^new) is the feature-distance similarity computed by formula (7):
d_max(x_loci^new) = max_k df(x_loci^new, c_loci^k)    (7)
where c_loci^k is the k-th codeword in the codebook at location loci, 1 ≤ k ≤ Num.
(c) While the number of atoms in the set U has not reached Q, return to step (b). When it reaches Q, apply the step 2-1 procedure to U again, clustering it into Num1 classes, and update Num0 = Num1; if w_{k,loci} > 0, add the centre of event class k to the codebook at location loci as a codeword and at the same time keep the training data of the k-th class of local feature, k ∈ {1, 2, …, Num0}. Then empty U, update the total number of classes as Num = Num + Num1, and check whether all the input training data have been processed; if not, return to step (b).
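Steps (b)-(c) amount to an online buffering scheme. A hedged sketch with simplified bookkeeping (one flat codebook rather than one per location, and the re-clustering of the buffer left to the caller; all names are ours):

```python
import numpy as np

def similarity(x, c):
    # histogram-intersection similarity, standing in for df of eq. (7)
    return np.minimum(x, c).sum() / max(x.sum(), c.sum(), 1e-12)

def update_codebook(x_new, codebook, members, U, th=0.85, Q=50):
    """One pass of steps (b)/(c): an atom unlike every codeword (eq. (7)
    similarity below th) is buffered in U; otherwise it joins the most
    similar class, whose centre (codeword) is recomputed as the class mean.
    Returns True when U has reached Q atoms, signalling the caller to
    re-run the step 2-1 clustering on U and extend the codebook."""
    sims = [similarity(x_new, c) for c in codebook]
    k = int(np.argmax(sims))
    if sims[k] < th:              # novel pattern: buffer for re-clustering
        U.append(x_new)
        return len(U) >= Q
    members[k].append(x_new)      # assign to the most similar class
    codebook[k] = np.mean(members[k], axis=0)   # update codeword, cf. eq. (5)
    return False
```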
Step (3): video abnormal-event detection, as follows:
Compare the test atom x_loci^test with the codebook of block location loci built in the training phase, 1 ≤ loci ≤ f_block. If d_max(x_loci^test) < th, the block at location loci is preliminarily judged to contain an abnormal event; otherwise it is not.
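The test-phase decision of step (3) is then a small scan over the location's codebook (illustrative names; the histogram-intersection similarity again stands in for df):

```python
import numpy as np

def is_abnormal(x_test, codebook, th=0.85):
    """Preliminary per-block decision of step (3): a test atom whose
    eq. (7) similarity to every codeword of its location's codebook stays
    below the threshold th is flagged as a candidate abnormal event."""
    sims = [np.minimum(x_test, c).sum() / max(x_test.sum(), c.sum(), 1e-12)
            for c in codebook]
    return bool(max(sims) < th)
```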
Step (4): spatio-temporal post-processing, as follows:
At time t, if the block at location loci has been preliminarily judged to contain an abnormal event, consider the 8-neighborhood of that block at time t-1: if at least two positions in the neighborhood were abnormal, the block at location loci is confirmed as containing an abnormal event; otherwise no abnormal event is reported.
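The 8-neighborhood vote of step (4) over boolean per-block decision maps can be sketched as follows (grid and function names are illustrative):

```python
import numpy as np

def spatiotemporal_filter(candidates_t, abnormal_prev):
    """Step (4): confirm a candidate abnormal block at time t only if at
    least two of its 8-neighbours were abnormal at time t-1.
    Both arguments are boolean grids of per-block decisions, one cell per
    N x N block."""
    out = np.zeros_like(candidates_t)
    rows, cols = candidates_t.shape
    for r in range(rows):
        for c in range(cols):
            if not candidates_t[r, c]:
                continue
            window = abnormal_prev[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            n_abnormal = window.sum() - abnormal_prev[r, c]  # exclude centre
            out[r, c] = n_abnormal >= 2
    return out
```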
The invention needs only to spectrally cluster the atom set formed from optical-flow features in order to build the normal-event model. The spatio-temporal post-processing takes into account the neighborhood information of the judged position at the previous moment, which further improves the accuracy of abnormal-event detection.
Brief description of the drawings
Fig. 1 is a flow chart of the training of the abnormal-event detection model;
Fig. 2 is a flow chart of detection with the abnormal-event detection model.
Embodiment
The invention is described in detail below with reference to the accompanying drawings and an embodiment.
The training and detection flows of the abnormality-detection model of the invention are shown in Fig. 1 and Fig. 2 respectively; the concrete steps are as follows:
Step (1): feature extraction, as follows:
Each frame of the video is divided into non-overlapping blocks of size N × N, and M consecutive frames are taken, yielding spatio-temporal blocks of size N × N × M, so that a video of M frames is composed of several such blocks, each called an atom; take N = 10 and M = 3. With a frame resolution of 320 × 240, each frame yields f_block = 32 × 24 = 768 blocks. The motion information of the block at location loci at time t is represented by a histogram h_loci^t = [h_1, h_2, h_3, h_4], 1 ≤ loci ≤ 768, where h_i (1 ≤ i ≤ 4) is the sum of the optical-flow magnitudes in the i-th of the 4 directions obtained by quantizing the optical-flow orientation at 90-degree intervals. Combining the histograms of the frame before and the frame after the current time t, the atom at location loci is expressed as x_loci = [h_loci^{t-1}, h_loci^t, h_loci^{t+1}]. With P = 10 segments, the number of atoms is P × f_block = 7680; these atoms constitute the atom set of the video.
Step (2): feature learning, as follows:
2-1. The Laplacian Eigenmaps method is used to map the atom set into a low-dimensional space, after which the atoms are clustered. First a graph G = (V, E) is built over the atom set: the vertex set V contains one vertex per atom, and the weighted edges E encode the similarity between atoms; the weight of the edge between the i-th and j-th atoms, 1 ≤ i ≤ P × f_block, 1 ≤ j ≤ P × f_block, is computed by formula (1). Take G = 7 and σ_s = 0.4.
After the graph is built, spectral clustering is performed on the atoms by minimizing the objective function of formula (3), which is equivalent to solving the optimum of formula (4). The generalized eigenvalues and eigenvectors of L with respect to D are computed, and the eigenvectors of the l smallest non-zero eigenvalues are chosen, with l = 10. These l eigenvectors form a (P × f_block) × l feature matrix, each row of which gives the coordinates of one atom in the l-dimensional space. Finally, according to these coordinates, the atoms are clustered in the l-dimensional space with the self-tuning clustering method of Lihi Zelnik-Manor, which determines the number of clusters adaptively. The clustering yields Num0 classes; each class centre represents the local feature of one class of event and is computed by formula (5).
2-2. The codebook at block location loci (1 ≤ loci ≤ f_block) is built as follows:
(a) Build the initial codebook. The training data are passed through the feature learning of step 2-1, yielding Num classes of events, with Num = Num0. Compute w_{k,loci} by formula (6); if w_{k,loci} > 0, add the centre of class k to the codebook at location loci as a codeword and at the same time keep the training data of the k-th class of local feature, k ∈ {1, 2, …, Num0}:
w_{k,loci} = n_{k,loci} / Σ_k n_{k,loci},  k ∈ {1, 2, …, Num0}    (6)
where n_{k,loci} is the number of times the k-th class of event occurs at location loci.
(b) Input a new training atom x_loci^new and compare it with the codebook at block location loci. If the feature-distance similarity satisfies d_max(x_loci^new) < th, with the threshold set to th = 0.85, add x_loci^new to a new set U; otherwise add x_loci^new to the training data of the most similar class of local feature at location loci, recompute the centre of that class, and update the corresponding codeword in the codebook. Here d_max(x_loci^new) is the feature-distance similarity computed by formula (7):
d_max(x_loci^new) = max_k df(x_loci^new, c_loci^k)    (7)
where c_loci^k is the k-th codeword in the codebook at location loci, 1 ≤ k ≤ Num.
(c) While the number of atoms in the set U has not reached Q, return to step (b). When it reaches Q, apply the step 2-1 procedure to U again, clustering it into Num1 classes, and update Num0 = Num1; if w_{k,loci} > 0, add the centre of event class k to the codebook at location loci as a codeword and at the same time keep the training data of the k-th class of local feature, k ∈ {1, 2, …, Num0}. Then empty U, update the total number of classes as Num = Num + Num1, and check whether all the input training data have been processed; if not, return to step (b).
Step (3): video abnormal-event detection, as follows:
Compare the test atom x_loci^test with the codebook of block location loci built in the training phase, 1 ≤ loci ≤ f_block. If d_max(x_loci^test) < th, the block at location loci is preliminarily judged to contain an abnormal event; otherwise it is not.
Step (4): spatio-temporal post-processing, as follows:
At time t, if the block at location loci has been preliminarily judged to contain an abnormal event, consider the 8-neighborhood of that block at time t-1: if at least two positions in the neighborhood were abnormal, the block at location loci is confirmed as containing an abnormal event; otherwise no abnormal event is reported.

Claims (1)

1. A method for detecting abnormal video events in a crowded scene, characterized by comprising the following steps:
Step (1): feature extraction, as follows:
Each frame of the video is divided into non-overlapping blocks of size N × N, and M consecutive frames are taken, yielding spatio-temporal blocks of size N × N × M; a video of M frames is thus composed of several such blocks, each called an atom; let the frame resolution be W × H, so that each frame yields f_block = ⌊W/N⌋ × ⌊H/N⌋ blocks, ⌊·⌋ denoting rounding down; the motion information of the block at location loci at time t is represented by a histogram h_loci^t = [h_1, h_2, h_3, h_4], 1 ≤ loci ≤ f_block with loci an integer, where h_i (1 ≤ i ≤ 4) is the sum of the optical-flow magnitudes in the i-th of the 4 directions obtained by quantizing the optical-flow orientation at 90-degree intervals; with the current time t, combining the histogram information of the (M-1)/2 frames before and after t, the atom at location loci at time t is expressed as x_loci = [h_loci^{t-(M-1)/2}, …, h_loci^{t+(M-1)/2}], where M is odd and 1 ≤ loci ≤ f_block; a video is divided into P segments of M frames each, giving P × f_block atoms, which together constitute the atom set of the video;
Step (2): feature learning, as follows:
2-1. The Laplacian Eigenmaps method is used to map the atom set into a low-dimensional space, after which the atoms are clustered; first a graph G = (V, E) is built over the atom set: the vertex set V contains one vertex per atom, and the weighted edges E encode the similarity between atoms; the weight of the edge between the i-th and j-th atoms, 1 ≤ i ≤ P × f_block, 1 ≤ j ≤ P × f_block, is computed by formula (1):
w_ij = exp(-(1 - df_ij)/(σ_i·σ_j)) · exp(-ds_ij/σ_s)    (1)
In the first factor on the right-hand side of formula (1), df_ij is given by
df_ij = α_ij × df(x_i, x_j)    (2)
where α_ij = ⟨x_i, x_j⟩/(‖x_i‖·‖x_j‖) is the cosine similarity, ⟨x_i, x_j⟩ denoting the inner product of x_i and x_j, and
df(x_i, x_j) = Σ_{k=0..4M-1} min[x_i(k), x_j(k)] / max[Σ_{k=0..4M-1} x_i(k), Σ_{k=0..4M-1} x_j(k)];
the scale factor σ_r = 1 - df_{r,G}, i.e. formula (2) evaluated at i = r, j = G, where x_G is the G-th nearest neighbour of x_r and the neighbourhood distance metric is Euclidean, r = i or j;
in the second factor of formula (1), ds_ij is the Euclidean distance between the spatial positions of the i-th and j-th atoms, and σ_s is the spatial scale factor; after the graph is built, spectral clustering is performed on the atoms: in graph-theoretic terms, clustering becomes a graph-cut problem whose principle is to maximize the edge weights within each subgraph while minimizing the edge weights between subgraphs, i.e. to minimize the total weight of the cut edges, which amounts to minimizing the objective function shown in formula (3):
E(Y) = Σ_ij w_ij·‖y_i - y_j‖²    (3)
where w_ij is given by formula (1), y_i and y_j are the coordinate vectors of x_i and x_j mapped into the target space, and Y is composed of the vectors y_i, 1 ≤ i ≤ P × f_block; minimizing the objective function is equivalent to solving for the optimal Y:
Y_opt = argmin_Y tr(Yᵀ·L·Y)  s.t.  Yᵀ·D·Y = I    (4)
in formula (4) the Laplacian matrix is L = D - W, where D is the diagonal matrix with diagonal entries d_ii = Σ_j w_ij and W is the matrix formed by the w_ij; the generalized eigenvalues and eigenvectors of L with respect to D are computed, and the eigenvectors of the l smallest non-zero eigenvalues are chosen; these l eigenvectors form a (P × f_block) × l feature matrix, each row of which gives the coordinates of one atom in the l-dimensional space; finally, according to these coordinates, the atoms are clustered in the l-dimensional space with the self-tuning clustering method of Lihi Zelnik-Manor, which determines the number of clusters adaptively; the clustering yields Num0 classes, each class centre representing the local feature of one class of event and being computed by formula (5):
μ_k = (1/N_k)·Σ_{i=1..N_k} x_i    (5)
where N_k is the number of atoms belonging to the k-th class of event; the class centres serve as codewords, and all the codewords at location loci form the codebook of that location;
2-2. The codebook at block location loci is built as follows:
(a) build the initial codebook: the training data are passed through the feature learning of step 2-1, yielding Num classes of events, with Num = Num0; compute w_{k,loci} by formula (6); if w_{k,loci} > 0, add the centre of class k to the codebook at location loci as a codeword and at the same time keep the training data of the k-th class of local feature, k ∈ {1, 2, …, Num0}:
w_{k,loci} = n_{k,loci} / Σ_k n_{k,loci},  k ∈ {1, 2, …, Num0}    (6)
where n_{k,loci} is the number of times the k-th class of event occurs at location loci, 1 ≤ loci ≤ f_block;
(b) input a new training atom x_loci^new and compare it with the codebook at block location loci; if the feature-distance similarity satisfies d_max(x_loci^new) < th, th being a preset threshold, add x_loci^new to a new set U; otherwise add x_loci^new to the training data of the most similar class of local feature at location loci, recompute the centre of that class, and update the corresponding codeword in the codebook; here d_max(x_loci^new) is the feature-distance similarity computed by formula (7):
d_max(x_loci^new) = max_k df(x_loci^new, c_loci^k)    (7)
where c_loci^k is the k-th codeword in the codebook at location loci, 1 ≤ k ≤ Num;
(c) while the number of atoms in the set U has not reached Q, return to step (b); when it reaches Q, apply the step 2-1 procedure to U again, clustering it into Num1 classes, and update Num0 = Num1; if w_{k,loci} > 0, add the centre of event class k to the codebook at location loci as a codeword and at the same time keep the training data of the k-th class of local feature, k ∈ {1, 2, …, Num0}; empty U, update the total number of classes as Num = Num + Num1, and check whether all the input training data have been processed; if not, return to step (b);
Step (3): video abnormal-event detection, as follows:
compare the test atom x_loci^test with the codebook of block location loci built in the training phase, 1 ≤ loci ≤ f_block; if d_max(x_loci^test) < th, the block at location loci is preliminarily judged to contain an abnormal event, otherwise not;
Step (4): spatio-temporal post-processing, as follows:
at time t, if the block at location loci has been preliminarily judged to contain an abnormal event, consider the 8-neighborhood of that block at time t-1; if at least two positions in the neighborhood were abnormal, the block at location loci is confirmed as containing an abnormal event; otherwise no abnormal event is reported.
CN201510710563.4A 2015-10-27 2015-10-27 Method for detecting abnormal video events in a crowded scene Active CN105354542B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510710563.4A CN105354542B (en) 2015-10-27 2015-10-27 Method for detecting abnormal video events in a crowded scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510710563.4A CN105354542B (en) 2015-10-27 2015-10-27 Method for detecting abnormal video events in a crowded scene

Publications (2)

Publication Number Publication Date
CN105354542A true CN105354542A (en) 2016-02-24
CN105354542B CN105354542B (en) 2018-09-25

Family

ID=55330510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510710563.4A Active CN105354542B (en) Method for detecting abnormal video events in a crowded scene

Country Status (1)

Country Link
CN (1) CN105354542B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787472A (en) * 2016-03-28 2016-07-20 电子科技大学 Abnormal behavior detection method based on time-space Laplacian Eigenmaps learning
CN106250859A (en) * 2016-08-04 2016-12-21 杭州电子科技大学 The video flame detecting method that feature based vector motion is spent in a jumble
CN107590427A (en) * 2017-05-25 2018-01-16 杭州电子科技大学 Monitor video accident detection method based on space-time interest points noise reduction
CN107958260A (en) * 2017-10-27 2018-04-24 四川大学 A kind of group behavior analysis method based on multi-feature fusion
CN108304802A (en) * 2018-01-30 2018-07-20 华中科技大学 A kind of Quick filter system towards extensive video analysis
CN108805002A (en) * 2018-04-11 2018-11-13 杭州电子科技大学 Monitor video accident detection method based on deep learning and dynamic clustering
CN109359519A (en) * 2018-09-04 2019-02-19 杭州电子科技大学 A kind of video anomaly detection method based on deep learning
CN114519101A (en) * 2020-11-18 2022-05-20 易保网络技术(上海)有限公司 Data clustering method and system, data storage method and system and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130197370A1 (en) * 2012-01-30 2013-08-01 The Johns Hopkins University Automated Pneumothorax Detection
CN104091169A (en) * 2013-12-12 2014-10-08 华南理工大学 Behavior identification method based on multi feature fusion
CN104239897A (en) * 2014-09-04 2014-12-24 天津大学 Visual feature representing method based on autoencoder word bag
CN104978561A (en) * 2015-03-25 2015-10-14 浙江理工大学 Gradient and light stream characteristics-fused video motion behavior identification method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130197370A1 (en) * 2012-01-30 2013-08-01 The Johns Hopkins University Automated Pneumothorax Detection
CN104091169A (en) * 2013-12-12 2014-10-08 华南理工大学 Behavior identification method based on multi feature fusion
CN104239897A (en) * 2014-09-04 2014-12-24 天津大学 Visual feature representing method based on autoencoder word bag
CN104978561A (en) * 2015-03-25 2015-10-14 浙江理工大学 Gradient and light stream characteristics-fused video motion behavior identification method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BIN ZHAO et al.: "Online Detection of Unusual Events in Videos via Dynamic Sparse Coding", 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *
MYO THIDA et al.: "Laplacian Eigenmap With Temporal Constraints for Local Abnormality Detection in Crowded Scenes", IEEE Transactions on Cybernetics *
独大为: "Research on Video Abnormal Event Monitoring Technology in Crowded Scenes", China Master's Theses Full-text Database, Information Science and Technology *
谢锦生 et al.: "A Video Anomaly Detection Method Based on a Sparse Coding Model", Journal of Chinese Computer Systems *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105787472B (en) * 2016-03-28 2019-02-15 电子科技大学 A kind of anomaly detection method based on the study of space-time laplacian eigenmaps
CN105787472A (en) * 2016-03-28 2016-07-20 电子科技大学 Abnormal behavior detection method based on time-space Laplacian Eigenmaps learning
CN106250859A (en) * 2016-08-04 2016-12-21 杭州电子科技大学 The video flame detecting method that feature based vector motion is spent in a jumble
CN106250859B (en) * 2016-08-04 2019-09-17 杭州电子科技大学 The video flame detecting method spent in a jumble is moved based on characteristic vector
CN107590427A (en) * 2017-05-25 2018-01-16 杭州电子科技大学 Monitor video accident detection method based on space-time interest points noise reduction
CN107590427B (en) * 2017-05-25 2020-11-24 杭州电子科技大学 Method for detecting abnormal events of surveillance video based on space-time interest point noise reduction
CN107958260A (en) * 2017-10-27 2018-04-24 四川大学 A kind of group behavior analysis method based on multi-feature fusion
CN107958260B (en) * 2017-10-27 2021-07-16 四川大学 Group behavior analysis method based on multi-feature fusion
CN108304802B (en) * 2018-01-30 2020-05-19 华中科技大学 Rapid filtering system for large-scale video analysis
CN108304802A (en) * 2018-01-30 2018-07-20 华中科技大学 A kind of Quick filter system towards extensive video analysis
CN108805002A (en) * 2018-04-11 2018-11-13 杭州电子科技大学 Monitor video accident detection method based on deep learning and dynamic clustering
CN108805002B (en) * 2018-04-11 2022-03-01 杭州电子科技大学 Monitoring video abnormal event detection method based on deep learning and dynamic clustering
CN109359519A (en) * 2018-09-04 2019-02-19 杭州电子科技大学 Video anomaly detection method based on deep learning
CN114519101A (en) * 2020-11-18 2022-05-20 易保网络技术(上海)有限公司 Data clustering method and system, data storage method and system and storage medium
CN114519101B (en) * 2020-11-18 2023-06-06 易保网络技术(上海)有限公司 Data clustering method and system, data storage method and system and storage medium

Also Published As

Publication number Publication date
CN105354542B (en) 2018-09-25

Similar Documents

Publication Publication Date Title
CN105354542A (en) Method for detecting abnormal video event in crowded scene
Huttunen et al. Car type recognition with deep neural networks
Essa et al. Simulated traffic conflicts: do they accurately represent field-measured conflicts?
Atev et al. A vision-based approach to collision prediction at traffic intersections
CN109886085A (en) People counting method based on deep learning target detection
CN103871077B (en) Key frame extraction method for road vehicle monitoring videos
CN103942533A (en) Urban traffic illegal behavior detection method based on video monitoring system
CN102592138B (en) Object tracking method for dense scenes based on multi-module sparse projection
CN105513349A (en) Double-perspective learning-based mountainous area highway vehicle event detection method
CN114023062B (en) Traffic flow information monitoring method based on deep learning and edge calculation
CN103902976A (en) Pedestrian detection method based on infrared image
Kim et al. Structural recurrent neural network for traffic speed prediction
Jain et al. Performance analysis of object detection and tracking algorithms for traffic surveillance applications using neural networks
CN109214253B (en) Video frame detection method and device
CN105488519A (en) Video classification method based on video scale information
CN105426813A (en) Video abnormal behavior detection method
CN107730889B (en) Target vehicle retrieval method based on traffic video
Marcomini et al. A comparison between background modelling methods for vehicle segmentation in highway traffic videos
CN105809954A (en) Traffic event detection method and system
CN114372503A (en) Cluster vehicle motion trajectory prediction method
CN106204650A (en) Vehicle target tracking method based on air-ground video correspondence
CN105654516A (en) Method for detecting small ground moving targets based on satellite images with target saliency
Shirazi et al. A typical video-based framework for counting, behavior and safety analysis at intersections
Meng et al. Video‐Based Vehicle Counting for Expressway: A Novel Approach Based on Vehicle Detection and Correlation‐Matched Tracking Using Image Data from PTZ Cameras
CN108734109A (en) Visual target tracking method and system for image sequences

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201105

Address after: 310016 room 3003-1, building 1, Gaode land center, Jianggan District, Hangzhou City, Zhejiang Province

Patentee after: Zhejiang Zhiduo Network Technology Co.,Ltd.

Address before: Hangzhou City, Zhejiang province 310018 Xiasha Higher Education Park No. 2 street

Patentee before: HANGZHOU DIANZI University

TR01 Transfer of patent right

Effective date of registration: 20201202

Address after: Room 806, building 5, Wuhu navigation Innovation Park, Wanbi Town, Wanbi District, Wuhu City, Anhui Province

Patentee after: Wuhu Qibo Intellectual Property Operation Co.,Ltd.

Address before: 310016 room 3003-1, building 1, Gaode land center, Jianggan District, Hangzhou City, Zhejiang Province

Patentee before: Zhejiang Zhiduo Network Technology Co.,Ltd.

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20160224

Assignee: Hangzhou Elice Chemical Co.,Ltd.

Assignor: Wuhu Qibo Intellectual Property Operation Co.,Ltd.

Contract record no.: X2021330000464

Denomination of invention: Method for detecting abnormal video event in crowded scene

Granted publication date: 20180925

License type: Common License

Record date: 20211018

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20160224

Assignee: Hangzhou Qihu Information Technology Co.,Ltd.

Assignor: Wuhu Qibo Intellectual Property Operation Co.,Ltd.

Contract record no.: X2021330000547

Denomination of invention: Method for detecting abnormal video event in crowded scene

Granted publication date: 20180925

License type: Common License

Record date: 20211028

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20160224

Assignee: Hangzhou Julu enterprise management consulting partnership (L.P.)

Assignor: Wuhu Qibo Intellectual Property Operation Co.,Ltd.

Contract record no.: X2021330000726

Denomination of invention: Method for detecting abnormal video event in crowded scene

Granted publication date: 20180925

License type: Common License

Record date: 20211109

EC01 Cancellation of recordation of patent licensing contract

Assignee: Hangzhou Julu enterprise management consulting partnership (L.P.)

Assignor: Wuhu Qibo Intellectual Property Operation Co.,Ltd.

Contract record no.: X2021330000726

Date of cancellation: 20221103

Assignee: Hangzhou Qihu Information Technology Co.,Ltd.

Assignor: Wuhu Qibo Intellectual Property Operation Co.,Ltd.

Contract record no.: X2021330000547

Date of cancellation: 20221103

EC01 Cancellation of recordation of patent licensing contract

Assignee: Hangzhou Elice Chemical Co.,Ltd.

Assignor: Wuhu Qibo Intellectual Property Operation Co.,Ltd.

Contract record no.: X2021330000464

Date of cancellation: 20240429
