CN111753651A - Subway group abnormal behavior detection method based on station two-dimensional crowd density analysis - Google Patents
Subway group abnormal behavior detection method based on station two-dimensional crowd density analysis Download PDFInfo
- Publication number
- CN111753651A (application number CN202010405340.8A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
The invention discloses a subway group abnormal behavior detection method based on station two-dimensional crowd density analysis, which comprises the following steps: preprocessing the original video data; extracting target feature points with a fully convolutional U-Net neural network; tracking the target feature points with the pyramid Lucas-Kanade optical flow method; converting the feature points into the station two-dimensional spatial coordinate system; and training a support vector machine and a random forest on the extracted group motion features to complete group abnormal behavior detection.
Description
Technical Field
The invention relates to a subway group abnormal behavior detection method, in particular to a subway group abnormal behavior detection method based on station two-dimensional crowd density analysis, and belongs to the field of intelligent monitoring technology for urban rail transit.
Background
With its dedicated right-of-way, large transport capacity and stable running times, urban rail transit is an important means of relieving urban public traffic pressure and improving a city's attractiveness, and has become an indispensable mode of daily travel for citizens. The accompanying safety problems, however, cannot be ignored. The complex scenes, large passenger flows and high crowd densities of subway stations make safety supervision very difficult, so stations have become areas where group safety incidents occur frequently and easily. Such incidents often cause large-scale harm, affecting passenger safety and bringing huge economic and property losses. Stampede accidents and similar harms caused by group activity can be prevented or mitigated by sound monitoring and management measures.
As a convenient and effective monitoring means, video surveillance is widely used in urban rail transit systems and basically covers the public areas of stations, providing real-time video information and material for subsequent retrieval. Video surveillance systems are currently in common use in road traffic, public security, banking and other fields. Meanwhile, the rapid development of disciplines such as machine vision and artificial intelligence has given video surveillance technology broad applicability, practicality and excellent research and commercial value.
In a traditional video surveillance system, operation management personnel manually watch massive amounts of monitoring data and manually judge whether abnormal events occur in the video, which has great limitations. On the one hand, human resources and personal energy are limited, and long periods of highly concentrated monitoring work are tiring; on the other hand, the number of video channels that monitoring personnel can effectively watch and analyze on a television wall is physically limited, so much important and useful information is missed. As a result, the video surveillance systems in practical use today mostly serve only as "after-the-fact query" tools, and can hardly detect group abnormal behavior in time or trigger an appropriate emergency response. There is therefore a need to improve existing video surveillance methods.
Because a monocular camera lacks posture and depth information, existing common abnormal behavior detection algorithms suffer from poor adaptability across scenes and overly simple recognition conditions; problems such as the inability to achieve real-time performance and robustness at the same time, and high false alarm and missed detection rates, urgently need to be solved.
Disclosure of Invention
Purpose of the invention: the invention aims to provide a subway group abnormal behavior detection method based on station two-dimensional crowd density analysis that achieves a processing speed above 10 frames per second, with simple computation, fast judgment and a high recognition rate; it reduces misjudgment in low-density scenes while ensuring real-time performance, lowers the missed detection and false detection rates, and ensures the robustness of the method.
The technical scheme is as follows: the invention discloses a subway group abnormal behavior detection method based on station two-dimensional crowd density analysis, which comprises the following steps:
(1) Preprocessing: each frame of image data captured by the monitoring camera is preprocessed with standard algorithms: the image is enhanced and sharpened, image noise is reduced, shadows are removed, and the background and region of interest are set (based on the HSV color space model, to reduce the influence of illumination changes), alleviating image deformation, blurring and similar problems caused by the environment or the shooting azimuth. A grid method is introduced that divides the video image into grids of different sizes, with the grid size inversely proportional to the distance between the pixel region and the monitoring camera;
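The grid division in step (1) can be sketched as follows. The linear size rule, the function name and the minimum cell size are illustrative assumptions, not specified by the patent; the only constraint taken from the text is that cells far from the camera (top of the image) are smaller than cells near it (bottom of the image), so each cell covers a roughly equal floor area.

```python
# Sketch of the distance-weighted grid division of step (1).
# Assumption: cell height grows linearly with row index, so upper
# (far) rows get small cells and lower (near) rows get large ones.

def grid_row_heights(image_height, n_rows, min_cell=8):
    """Return per-row cell heights that grow toward the bottom of the image."""
    weights = [i + 1 for i in range(n_rows)]      # row 0 (far) smallest
    total = sum(weights)
    heights = [max(min_cell, round(image_height * w / total)) for w in weights]
    # Adjust the last row so the heights sum exactly to image_height.
    heights[-1] += image_height - sum(heights)
    return heights

rows = grid_row_heights(480, 6)
```

With a 480-pixel-high frame and six rows, the far row spans about 23 pixels and the near row about 137, so the perspective foreshortening of the floor is roughly compensated.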
(2) Distance estimation: from the known average passenger height H_avg, the camera pitch angle θ_tilt, the camera installation height H_cam, the camera horizontal view angle θ_w, the camera vertical view angle θ_h and the camera focal length f, a mapping is formed between the station two-dimensional plane (the plane parallel to the ground at height H_avg) and the surveillance video image. The lateral component of the mapping is:
P_x = P_y · (x − W/2) · Pix_w / f
where (P_x, P_y) is the mapping of the pixel point (x, y) into the station two-dimensional space; P_y is the distance from the monitoring camera; the P_x axis is perpendicular to the optical axis and represents the offset from the optical axis of the monitoring camera at the current distance; Pix_h is the physical height of a pixel; Pix_w is the physical width of a pixel; W is the image width (number of horizontal pixels); H is the image height (number of vertical pixels);
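The step (2) mapping can be sketched as a standard pinhole ground-plane reconstruction. Reading the lateral formula as P_x = P_y · (x − W/2) · Pix_w / f, the depth P_y follows from the camera height, the pitch angle and the feature plane at H_avg; this depth formula is standard projective geometry, not quoted from the patent, and all parameter values below are illustrative.

```python
import math

# Sketch of the pixel -> station-plane mapping of step (2), assuming a
# pinhole camera at height h_cam, pitched down by theta_tilt, observing
# a feature plane at the average passenger height h_avg.

def pixel_to_station(x, y, W, H, f, pix_w, pix_h, h_cam, h_avg, theta_tilt):
    # Angle of the viewing ray below the horizontal for image row y
    # (y measured from the top of the image).
    ray_angle = theta_tilt + math.atan((y - H / 2) * pix_h / f)
    # Depth along the ground from the camera to the feature plane.
    p_y = (h_cam - h_avg) / math.tan(ray_angle)
    # Lateral offset from the optical axis: P_x = P_y (x - W/2) Pix_w / f.
    p_x = p_y * (x - W / 2) * pix_w / f
    return p_x, p_y
```

A pixel on the vertical centerline (x = W/2) maps to P_x = 0, i.e. directly onto the optical axis, which is a quick sanity check for the implementation.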
(3) Feature point extraction: feature points are extracted from the preprocessed image data with a fully convolutional U-Net neural network; the nose tip, which is rarely occluded and lies close to passenger height, is selected as the target feature point. The convolution layers of the network use a MobileNet neural network with Depth-Wise convolution, which improves operating efficiency and allows the network to run on mobile devices. The ReLU function, which converges quickly, is chosen as the activation function. Max pooling is used for both the pooling and unpooling layers of the network;
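The Depth-Wise separable convolution that the step (3) MobileNet-style layers rely on can be illustrated in plain numpy: a per-channel 3×3 spatial filter followed by a 1×1 pointwise channel mix, then ReLU. This is a generic sketch of the technique, not the patent's actual network.

```python
import numpy as np

# Illustrative Depth-Wise separable convolution (MobileNet building block).
# x: (H, W, C_in); dw_kernels: (3, 3, C_in); pw_weights: (C_in, C_out).

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    H, W, C = x.shape
    out = np.zeros((H - 2, W - 2, C))
    # Depthwise stage: each input channel is filtered independently.
    for c in range(C):
        for i in range(H - 2):
            for j in range(W - 2):
                out[i, j, c] = np.sum(x[i:i+3, j:j+3, c] * dw_kernels[:, :, c])
    # Pointwise 1x1 stage mixes channels, then ReLU (the activation
    # chosen in step (3)).
    mixed = out @ pw_weights
    return np.maximum(mixed, 0.0)
```

Compared with a full 3×3 convolution over all channel pairs, the factorized form does far fewer multiplications, which is the efficiency argument for running the network on mobile terminal devices.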
(4) Target feature point tracking: for the input video sequence, the feature points of the current frame are tracked with the pyramid Lucas-Kanade optical flow method, estimating the position of each feature point of the current frame within a fixed-size window in the next frame. If a feature point satisfies the allowed error it is successfully tracked and retained; feature points that first appear in the next frame are retained as new passengers, and feature points of the previous frame that are not successfully tracked are discarded. The displacement of each matched pair of feature points between consecutive frames is the optical flow of the corresponding feature point; this process continues until the last frame. The optical flow features are then mapped from the two-dimensional image space to the station two-dimensional space;
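The per-frame bookkeeping of step (4) can be sketched as follows. In practice the predicted positions and tracking errors would come from a pyramid Lucas-Kanade tracker (e.g. OpenCV's `calcOpticalFlowPyrLK`); here they are supplied as plain data so the retention logic itself is visible, and the function name and error threshold are illustrative.

```python
# Sketch of the track-retention rule of step (4): keep a point when its
# tracking error is within tolerance, record its flow vector, drop lost
# points, and admit points first seen in the new frame as new passengers.

def update_tracks(prev_pts, predicted_pts, errors, new_pts, max_err=1.0):
    kept, flows = [], []
    for p, q, e in zip(prev_pts, predicted_pts, errors):
        if e <= max_err:                              # tracking succeeded
            kept.append(q)
            flows.append((q[0] - p[0], q[1] - p[1]))  # per-point optical flow
        # else: the previous-frame point is discarded (lost track)
    kept.extend(new_pts)                              # new passengers enter
    return kept, flows
```

The returned flow vectors are exactly the per-point displacements that the later motion features (kinetic energy, direction entropy) are computed from, after mapping into station coordinates.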
(5) Crowd density analysis: the feature points falling in each divided grid cell are counted, and the count represents the local crowd density in that part of the space; the density serves as the basis for selecting the subsequent group abnormal behavior recognition and classification method;
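Counting feature points per grid cell, as step (5) describes, amounts to a 2-D histogram over station-plane coordinates. The cell edges and sample points below are illustrative.

```python
import numpy as np

# Sketch of the step (5) density map: count tracked feature points in
# each grid cell of the station plane.

def density_map(points, x_edges, y_edges):
    """points: (N, 2) station-plane coordinates; returns counts per cell."""
    pts = np.asarray(points, dtype=float)
    counts, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                                  bins=[x_edges, y_edges])
    return counts
```

The resulting count matrix is what later stages threshold to decide whether a region is low- or high-density and which classification branch to apply.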
(6) Crowd motion feature extraction: crowd motion features are expressed from four aspects: crowd average kinetic energy, crowd motion direction entropy, inter-individual distance potential energy and individual average acceleration. The total crowd motion energy E_k is obtained by accumulating the optical flow energy in the region of interest; since E_k depends on the crowd density N, the crowd average kinetic energy E_avg is defined to represent the intensity of crowd motion, where:
E_avg=E_k/N
The crowd motion direction entropy, built from the optical flow direction histogram, the direction probability distribution and the direction entropy, represents the disorder of the group's motion directions. The distance potential energy between individuals represents the spatial distribution of the crowd. The individual average acceleration describes the behavioral changes caused by people suddenly running when an abnormal event occurs and the forces acting on the crowd increase;
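Two of the step (6) features can be sketched directly from the per-point flow vectors: the average kinetic energy E_avg = E_k / N, and the Shannon entropy of a quantized flow-direction histogram. The unit point mass and the 8-bin quantization are illustrative choices; the patent does not fix them.

```python
import math

# Sketch of two step (6) motion features, computed from optical-flow
# vectors (vx, vy) of the tracked feature points.

def average_kinetic_energy(flows):
    # E_k: sum of 0.5 * |v|^2 over all tracked points (unit mass),
    # normalized by the crowd density N = number of points.
    e_k = sum(0.5 * (vx * vx + vy * vy) for vx, vy in flows)
    return e_k / len(flows)

def direction_entropy(flows, bins=8):
    # Histogram of flow directions, then Shannon entropy of the
    # direction probability distribution.
    hist = [0] * bins
    for vx, vy in flows:
        angle = math.atan2(vy, vx) % (2 * math.pi)
        hist[min(int(angle / (2 * math.pi) * bins), bins - 1)] += 1
    n = sum(hist)
    return -sum((h / n) * math.log(h / n) for h in hist if h > 0)
```

A crowd moving uniformly in one direction gives entropy near zero; opposing or scattered motion spreads the histogram and raises the entropy, which is the signal the classifier uses for "chaotic" motion.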
(7) Abnormal behavior detection: the five crowd features extracted above (crowd count change rate, crowd average kinetic energy, crowd motion direction entropy, inter-individual distance potential energy and individual average acceleration) change in characteristic ways under abnormal conditions; a random forest or a support vector machine (SVM) is trained on labeled samples to learn these patterns and classify group behavior.
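Step (7) trains a classifier on five-dimensional feature vectors. As a dependency-free stand-in for the random forest or SVM named in the text, the sketch below uses a nearest-centroid rule over the same feature vectors; in practice one would substitute scikit-learn's `RandomForestClassifier` or `SVC` with an identical fit/predict interface. The class name and data are illustrative.

```python
import numpy as np

# Minimal stand-in classifier for step (7): each behavior class is
# represented by the centroid of its training feature vectors
# (count change rate, avg. kinetic energy, direction entropy,
#  distance potential energy, avg. acceleration).

class CentroidClassifier:
    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0)
                                    for c in self.labels_])
        return self

    def predict(self, X):
        # Distance of every sample to every class centroid.
        d = np.linalg.norm(np.asarray(X, float)[:, None, :]
                           - self.centroids_[None, :, :], axis=2)
        return self.labels_[d.argmin(axis=1)]
```

The point of the sketch is the data flow, not the decision rule: the same five-dimensional vectors feed whichever classifier is chosen, and the random forest or SVM simply learns a richer boundary than the centroid rule.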
Preferably, in step (1), the method used by the invention is a supervised model; preprocessing of the video data set requires sufficient training samples that reflect group behavior patterns as fully as possible, reducing random errors caused by insufficient samples. The original training data sets selected by the invention include the UMN data set, the PETS2009 data set and the like.
In step (3), the feature extraction stage likewise uses a supervised model, requiring sufficient samples for training and learning that reflect group behavior patterns as fully as possible, reducing random errors caused by insufficient samples. The original training data sets selected by the invention include the COCO data set, the MPII data set and the like.
In step (4), the pyramid Lucas-Kanade optical flow method computes the optical flow at the feature points, yielding basic information such as the position, speed and direction of each moving target. These data describe the target feature points in the station two-dimensional plane coordinate system (rather than the image coordinate system used in conventional methods).
In step (6), the four aspects (crowd average kinetic energy, crowd motion direction entropy, inter-individual distance potential energy and individual average acceleration) characterize the crowd motion features, all expressed in the station two-dimensional plane coordinate system.
Further, in step (7), the principle of learning and classifying on the training data set by using a random forest or a support vector machine is as follows:
if the crowd exhibits no abnormal behavior, the rate of change of the number of people in the camera's field of view is relatively stable; when certain abnormal behaviors occur, the rate of change of the number of people increases suddenly;
the crowd average kinetic energy represents the speed and intensity of crowd motion, and can be used to judge whether the crowd is walking normally or running;
the crowd motion direction entropy represents the disorder of the crowd's motion directions: under normal conditions in a subway station the motion directions are largely uniform, whereas under abnormal conditions the directions diverge and appear chaotic, and the direction entropy increases;
the inter-individual distance potential energy characterizes the pairwise distances between individuals and describes the degree of dispersion of the crowd; a sudden increase or decrease in the distance potential energy indicates a probability of abnormal behavior;
the individual average acceleration represents the intensity of crowd motion: when the crowd encounters danger, people try to escape, the forces on the crowd increase and people are driven to run, so the intensity of motion increases markedly.
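The last two features in the list above can be sketched directly from tracked positions and speeds. The inverse-distance potential and the finite-difference acceleration are illustrative formulations: the patent names the features but does not fix their exact expressions.

```python
import math

# Illustrative formulations of "inter-individual distance potential
# energy" and "individual average acceleration", assuming tracked
# station-plane positions and per-frame mean speeds.

def distance_potential(points, eps=1e-6):
    # Sum of pairwise inverse distances: large when the crowd bunches up,
    # small when it disperses, so sudden jumps flag abnormal spreading.
    total = 0.0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            total += 1.0 / (math.hypot(dx, dy) + eps)
    return total

def average_acceleration(velocities):
    # velocities: per-frame mean speeds; acceleration by finite difference.
    diffs = [abs(b - a) for a, b in zip(velocities, velocities[1:])]
    return sum(diffs) / len(diffs)
```

A crowd that halves its pairwise spacing roughly doubles the potential, and a sudden jump in frame-to-frame speed shows up directly in the acceleration term, matching the escape behavior described above.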
The invention mainly classifies crowd behavior into normal, crowd panic, sudden same-direction movement, fast walking or running, and crowd conflict, thereby realizing group abnormal behavior detection.
Further, in step (7), the method can also provide standardized individual position and velocity (and acceleration) data to upper-layer high-precision specialized abnormal behavior detection and recognition models, reducing the computational load of feature extraction and analysis in those models.
Beneficial effects: compared with the prior art, the invention has the following notable advantages. To meet the strict real-time requirements of subway station security work, a fully convolutional U-Net neural network is introduced to extract the passenger nose-tip features, achieving a processing speed above 10 frames per second and meeting practical application requirements. Tracking uses the pyramid Lucas-Kanade optical flow method, and abnormal behaviors are classified with a random forest; the algorithm is computationally simple, judges quickly and has a high recognition rate, reducing misjudgment in low-density scenes while ensuring real-time performance and raising the attention paid to high-density scenes. Meanwhile, mapping pixels into the station two-dimensional plane coordinate system standardizes and spatially partitions the feature data, reduces inconsistency in feature distributions, improves the applicability of the model across scenes, lowers the missed detection and false detection rates, and ensures the robustness of the method.
Drawings
FIG. 1 is a schematic flow chart of the implementation of the present invention;
FIGS. 2(a) - (b) are schematic diagrams of two-dimensional space mapping of a station according to the present invention;
FIG. 3 is a schematic diagram of a neural network structure for feature extraction according to the present invention.
Detailed Description
The technical solution of the present invention is further illustrated by the following examples.
Fig. 1 shows the flow of the subway group abnormal behavior detection method based on station two-dimensional crowd density analysis; the invention is further described below with reference to specific embodiments.
(1) Preprocessing: each frame captured by the monitoring camera is preprocessed with standard algorithms: image noise is reduced, shadows are removed, the region of interest is set, and so on. This reduces random errors, partly alleviates perspective distortion, and mitigates image blurring and deformation caused by the environment or the shooting azimuth. The preprocessing mainly comprises color space conversion, morphological processing, image denoising, shadow processing and image enhancement, improving the accuracy of subsequent group behavior detection. A grid method is introduced that divides the video image into grids of different sizes according to the distance from the monitoring camera.
(2) Distance estimation: from the known average passenger height H_avg, the camera pitch angle θ_tilt, the camera installation height H_cam, the camera horizontal view angle θ_w, the camera vertical view angle θ_h and the camera focal length f, a mapping is formed between the station two-dimensional plane and the surveillance video image; the mapping is stored in a local file during installation and deployment of the monitoring camera for convenient lookup.
(3) Target feature point extraction: feature points are extracted from the preprocessed image data with a fully convolutional U-Net neural network; the nose tip, which is rarely occluded and lies close to passenger height, is selected as the target feature point. The convolution layers of the network use a MobileNet neural network with Depth-Wise convolution, which improves operating efficiency and allows the network to run on mobile devices. The ReLU function, which converges quickly, is chosen as the activation function. Max pooling is used for both the pooling and unpooling layers of the network.
(4) Target feature point tracking: for the input video sequence, the feature points of the current frame are tracked with the pyramid Lucas-Kanade optical flow method, estimating the position of each feature point of the current frame within a fixed-size window in the next frame. If a feature point satisfies the allowed error it is successfully tracked and retained; feature points that first appear in the next frame are retained as new passengers, and feature points of the previous frame that are not successfully tracked are discarded. The displacement of each matched pair of feature points between consecutive frames is the optical flow of the corresponding feature point; this process continues until the last frame. The optical flow features are then mapped from the two-dimensional image space to the station two-dimensional space.
(5) Crowd density estimation: the feature points in each divided grid cell are counted to represent the crowd density in that grid's space, serving as the basis for selecting the subsequent group abnormal behavior recognition and classification method.
(6) Crowd motion feature extraction: crowd motion features are characterized from four aspects: crowd average kinetic energy, crowd motion direction entropy, inter-individual distance potential energy and individual average acceleration. The total crowd motion energy E_k is obtained by accumulating the optical flow energy in the region of interest; since E_k depends on the crowd density N, the crowd average kinetic energy E_avg is defined to represent the intensity of crowd motion.
The crowd motion direction entropy, built from the optical flow direction histogram, the direction probability distribution and the direction entropy, represents the disorder of the group's motion directions. The distance potential energy between individuals represents the spatial distribution of the crowd. The individual average acceleration describes the behavioral changes caused by people suddenly running when an abnormal event occurs and the forces acting on the crowd increase.
As shown in FIGS. 2(a)-(b): FIG. 2(a) is a side view showing the mapping between passenger feature points at different distances and the vertical image coordinate, enabling the passenger's distance in space to be inferred from the two-dimensional image; FIG. 2(b) is a top view showing the mapping between passenger feature points at different positions and the horizontal image coordinate, together with the corresponding grid division method, ensuring that each grid cell covers a uniform area of actual space.
As shown in FIG. 3, the schematic structure of the fully convolutional U-Net neural network model used for feature extraction from the input image data: both the 3×3 Conv2d convolution layers and the 3×3 ConvTranspose2d deconvolution layers adopt the MobileNet lightweight neural network design.
Claims (5)
1. A subway group abnormal behavior detection method based on station two-dimensional crowd density analysis is characterized by comprising the following steps:
(1) preprocessing each frame of image shot by a monitoring camera, introducing a grid method, and dividing a video image into grids with different sizes according to the distance between the video image and the monitoring camera;
(2) forming mapping between a station two-dimensional plane and a monitoring video image according to the average height of passengers, the pitch angle of a camera, the installation height of the camera, the horizontal visual angle of the camera, the vertical visual angle of the camera and the focal length of the camera, wherein the mapping relation is stored in a local storage in the installation and deployment processes of the monitoring camera so as to be convenient for query;
(3) extracting feature points from the preprocessed image data with a fully convolutional U-Net neural network, selecting the nose tip as the target feature point, wherein the convolution layers of the network use a MobileNet neural network with Depth-Wise convolution, the activation function of the network is the ReLU function, and both the pooling and unpooling layers of the network use max pooling;
(4) for the input video sequence, tracking the feature points of the current frame with the pyramid Lucas-Kanade optical flow method, estimating the position of each feature point of the current frame within a fixed-size window in the next frame; if a feature point satisfies the allowed error, it is successfully tracked and retained; feature points that first appear in the next frame are retained as new passengers, and feature points of the previous frame that are not successfully tracked are discarded; the displacement of each matched pair of feature points between consecutive frames is the optical flow of the corresponding feature point, and the process continues until the last frame; the optical flow features are mapped from the two-dimensional image space to the station two-dimensional space;
(5) counting the characteristic points in the divided grids, representing the population density of the group in the grid space as a basis for selecting a subsequent group abnormal behavior identification and classification method;
(6) representing the motion characteristics of the crowd from four aspects of the average kinetic energy of the crowd, the entropy of the motion direction of the crowd, the distance potential energy between individuals in the crowd and the average acceleration of the individuals;
(7) classifying the crowd behavior with a support vector machine and a random forest to judge whether the crowd behavior is abnormal.
2. The method for detecting abnormal behavior of a subway group based on station two-dimensional crowd density analysis according to claim 1, wherein in step (1) the preprocessing comprises color space conversion, morphological processing, image denoising, image shadow processing and image enhancement.
3. The method for detecting subway group abnormal behavior based on station two-dimensional crowd density analysis as claimed in claim 1, wherein in step (4) the optical flow at each feature point is calculated by the pyramid Lucas-Kanade optical flow method to obtain the position, speed and direction of the moving target, and these position, speed and direction data describe the target feature points in the two-dimensional plane coordinate system of the station.
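In the usual monocular surveillance setup, mapping feature-point position, speed and direction into the station's two-dimensional plane coordinate system is a planar homography applied to the image points. A sketch under that assumption (the 3x3 matrix `H` in the test is a made-up example; in practice it would come from calibrating the camera against known floor landmarks):

```python
import numpy as np

def to_station_plane(pts_img, H):
    """Map an Nx2 array of image-plane points into station floor
    coordinates via a 3x3 homography H (with homogeneous divide)."""
    pts = np.hstack([pts_img, np.ones((len(pts_img), 1))])  # homogeneous
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def motion_state(p0, p1, dt):
    """Per-point speed and heading (radians) on the station plane,
    from positions in consecutive frames separated by dt seconds."""
    v = (p1 - p0) / dt
    speed = np.linalg.norm(v, axis=1)
    heading = np.arctan2(v[:, 1], v[:, 0])
    return speed, heading
```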
4. The method for detecting subway group abnormal behavior based on station two-dimensional crowd density analysis as claimed in claim 1, wherein in step (6) the crowd motion characteristics are represented by the four aspects of crowd average kinetic energy, crowd motion direction entropy, inter-individual distance potential energy and individual average acceleration, all expressed in the two-dimensional plane coordinate system of the station.
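The four crowd descriptors of claim 4 can be written out directly. A NumPy sketch under common textbook definitions (unit masses, Shannon entropy over binned headings, inverse-distance pair potential — the patent text does not fix the exact formulas, so these choices are assumptions):

```python
import numpy as np

def crowd_features(pos, vel, acc, n_bins=8):
    """pos, vel, acc: Nx2 arrays in station-plane coordinates."""
    # (a) average kinetic energy, unit mass assumed: mean of |v|^2 / 2
    kinetic = 0.5 * np.sum(vel ** 2, axis=1).mean()
    # (b) entropy of motion direction over n_bins heading bins
    heading = np.arctan2(vel[:, 1], vel[:, 0])
    hist, _ = np.histogram(heading, bins=n_bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    # (c) distance potential energy: sum of 1/d over individual pairs
    diff = pos[:, None, :] - pos[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(pos), k=1)
    potential = np.sum(1.0 / d[iu])
    # (d) average magnitude of individual acceleration
    mean_acc = np.linalg.norm(acc, axis=1).mean()
    return kinetic, entropy, potential, mean_acc
```

A coherent crowd walking in one direction yields low heading entropy; a panic dispersal raises both the entropy and the average kinetic energy.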
5. The method for detecting subway group abnormal behavior based on station two-dimensional crowd density analysis as claimed in claim 1, wherein step (7) provides standardized individual position and speed or acceleration data for the upper-layer high-precision special abnormal behavior detection and recognition model.
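The "standardized" individual data handed upward in claim 5 is most naturally a per-feature z-score; the patent only says "standardized", so the exact scheme below is an assumption:

```python
import numpy as np

def standardize(features):
    """Column-wise z-score: zero mean, unit variance per feature.
    features: N x D array of per-individual position / speed /
    acceleration values in station-plane coordinates."""
    mu = features.mean(axis=0)
    sigma = features.std(axis=0)
    sigma[sigma == 0] = 1.0          # guard constant columns
    return (features - mu) / sigma
```

Downstream, the claim-1 classifiers (a support vector machine and a random forest, e.g. scikit-learn's `SVC` and `RandomForestClassifier`) would consume these standardized rows.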
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010405340.8A CN111753651A (en) | 2020-05-14 | 2020-05-14 | Subway group abnormal behavior detection method based on station two-dimensional crowd density analysis |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010405340.8A CN111753651A (en) | 2020-05-14 | 2020-05-14 | Subway group abnormal behavior detection method based on station two-dimensional crowd density analysis |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111753651A true CN111753651A (en) | 2020-10-09 |
Family
ID=72674308
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010405340.8A Pending CN111753651A (en) | 2020-05-14 | 2020-05-14 | Subway group abnormal behavior detection method based on station two-dimensional crowd density analysis |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111753651A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112434564A (en) * | 2020-11-04 | 2021-03-02 | 北方工业大学 | Detection system for abnormal aggregation behaviors in bus |
CN112598384A (en) * | 2020-12-24 | 2021-04-02 | 卡斯柯信号有限公司 | Station passenger flow monitoring method and system based on building information model |
CN112767451A (en) * | 2021-02-01 | 2021-05-07 | 福州大学 | Crowd distribution prediction method and system based on double-current convolutional neural network |
CN113033382A (en) * | 2021-03-23 | 2021-06-25 | 哈尔滨市科佳通用机电股份有限公司 | Method, system and device for identifying large-area damage fault of wagon floor |
CN113743184A (en) * | 2021-06-08 | 2021-12-03 | 中国人民公安大学 | Abnormal behavior crowd detection method and device based on element mining and video analysis |
CN115223102A (en) * | 2022-09-08 | 2022-10-21 | 枫树谷(成都)科技有限责任公司 | Real-time crowd density fusion sensing method and model based on camera cluster |
CN115240142A (en) * | 2022-07-28 | 2022-10-25 | 杭州海宴科技有限公司 | Cross-media-based abnormal behavior early warning system and method for crowd in outdoor key places |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130103213A (en) * | 2012-03-09 | 2013-09-23 | 고려대학교 산학협력단 | Detection and analysis of abnormal crowd behavior in h.264 compression domain |
CN103413321A (en) * | 2013-07-16 | 2013-11-27 | 南京师范大学 | Crowd behavior model analysis and abnormal behavior detection method under geographical environment |
CN105427345A (en) * | 2015-11-30 | 2016-03-23 | 北京正安维视科技股份有限公司 | Three-dimensional people stream movement analysis method based on camera projection matrix |
CN106326937A (en) * | 2016-08-31 | 2017-01-11 | 郑州金惠计算机***工程有限公司 | Convolutional neural network based crowd density distribution estimation method |
CN109299700A (en) * | 2018-10-15 | 2019-02-01 | 南京地铁集团有限公司 | Subway group abnormal behavior detection method based on crowd density analysis |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130103213A (en) * | 2012-03-09 | 2013-09-23 | 고려대학교 산학협력단 | Detection and analysis of abnormal crowd behavior in h.264 compression domain |
CN103413321A (en) * | 2013-07-16 | 2013-11-27 | 南京师范大学 | Crowd behavior model analysis and abnormal behavior detection method under geographical environment |
CN105427345A (en) * | 2015-11-30 | 2016-03-23 | 北京正安维视科技股份有限公司 | Three-dimensional people stream movement analysis method based on camera projection matrix |
CN106326937A (en) * | 2016-08-31 | 2017-01-11 | 郑州金惠计算机***工程有限公司 | Convolutional neural network based crowd density distribution estimation method |
CN109299700A (en) * | 2018-10-15 | 2019-02-01 | 南京地铁集团有限公司 | Subway group abnormal behavior detection method based on crowd density analysis |
Non-Patent Citations (6)
Title |
---|
HE JIN et al. (eds.): "Anti-Counterfeiting Technology in Daily Life", 31 January 2017, Tianjin Science & Technology Translation & Publishing Co., pages: 124 - 125 *
FENG YINGYING et al.: "Research on Moving Target Tracking Methods in Intelligent Surveillance Video", 31 October 2017, Changchun: Jilin University Press, pages: 29 - 33 *
LU YUSHENG: "Deep Neural Networks in Practice on Mobile Platforms: Principles, Architecture and Optimization", 30 November 2019, Beijing: China Machine Press, pages: 153 - 154 *
SONG DANNI et al.: "Abnormal behavior detection of small and medium-sized crowds based on video surveillance", Computer Engineering and Design, vol. 37, no. 09 *
LIN QIANG et al.: "Behavior Recognition and Intelligent Computing", 30 November 2016, Xi'an: Xidian University Press, pages: 47 - 49 *
TAN ZHIYONG et al.: "Crowd density estimation method based on deep convolutional neural networks", Computer Applications and Software, no. 07 *
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112434564A (en) * | 2020-11-04 | 2021-03-02 | 北方工业大学 | Detection system for abnormal aggregation behaviors in bus |
CN112434564B (en) * | 2020-11-04 | 2023-06-27 | 北方工业大学 | Detection system for abnormal aggregation behavior in bus |
CN112598384B (en) * | 2020-12-24 | 2022-07-26 | 卡斯柯信号有限公司 | Station passenger flow monitoring method and system based on building information model |
CN112598384A (en) * | 2020-12-24 | 2021-04-02 | 卡斯柯信号有限公司 | Station passenger flow monitoring method and system based on building information model |
CN112767451A (en) * | 2021-02-01 | 2021-05-07 | 福州大学 | Crowd distribution prediction method and system based on double-current convolutional neural network |
CN112767451B (en) * | 2021-02-01 | 2022-09-06 | 福州大学 | Crowd distribution prediction method and system based on double-current convolutional neural network |
CN113033382A (en) * | 2021-03-23 | 2021-06-25 | 哈尔滨市科佳通用机电股份有限公司 | Method, system and device for identifying large-area damage fault of wagon floor |
CN113033382B (en) * | 2021-03-23 | 2021-10-01 | 哈尔滨市科佳通用机电股份有限公司 | Method, system and device for identifying large-area damage fault of wagon floor |
CN113743184A (en) * | 2021-06-08 | 2021-12-03 | 中国人民公安大学 | Abnormal behavior crowd detection method and device based on element mining and video analysis |
CN113743184B (en) * | 2021-06-08 | 2023-08-29 | 中国人民公安大学 | Abnormal Behavior Crowd Detection Method and Device Based on Element Mining and Video Analysis |
CN115240142A (en) * | 2022-07-28 | 2022-10-25 | 杭州海宴科技有限公司 | Cross-media-based abnormal behavior early warning system and method for crowd in outdoor key places |
CN115240142B (en) * | 2022-07-28 | 2023-07-28 | 杭州海宴科技有限公司 | Outdoor key place crowd abnormal behavior early warning system and method based on cross media |
CN115223102A (en) * | 2022-09-08 | 2022-10-21 | 枫树谷(成都)科技有限责任公司 | Real-time crowd density fusion sensing method and model based on camera cluster |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111753651A (en) | Subway group abnormal behavior detection method based on station two-dimensional crowd density analysis | |
Venkateswari et al. | License Plate cognizance by Ocular Character Perception' | |
CN103824070B (en) | A kind of rapid pedestrian detection method based on computer vision | |
CN108053427B (en) | Improved multi-target tracking method, system and device based on KCF and Kalman | |
CN108062349B (en) | Video monitoring method and system based on video structured data and deep learning | |
CN106650620B (en) | A kind of target person identification method for tracing using unmanned plane monitoring | |
CN110942545B (en) | Dense person entrance guard control system and method based on face recognition and video fence | |
CN108052859B (en) | Abnormal behavior detection method, system and device based on clustering optical flow characteristics | |
CN109918971B (en) | Method and device for detecting number of people in monitoring video | |
CN110309718A (en) | A kind of electric network operation personnel safety cap wearing detection method | |
Saha et al. | License Plate localization from vehicle images: An edge based multi-stage approach | |
US20120207353A1 (en) | System And Method For Detecting And Tracking An Object Of Interest In Spatio-Temporal Space | |
CN102521565A (en) | Garment identification method and system for low-resolution video | |
CN111401311A (en) | High-altitude parabolic recognition method based on image detection | |
CN111091098A (en) | Training method and detection method of detection model and related device | |
KR20160109761A (en) | Method and System for Recognition/Tracking Construction Equipment and Workers Using Construction-Site-Customized Image Processing | |
CN115841649A (en) | Multi-scale people counting method for urban complex scene | |
CN109299700A (en) | Subway group abnormal behavior detection method based on crowd density analysis | |
CN111008574A (en) | Key person track analysis method based on body shape recognition technology | |
CN108921147B (en) | Black smoke vehicle identification method based on dynamic texture and transform domain space-time characteristics | |
CN109325426B (en) | Black smoke vehicle detection method based on three orthogonal planes time-space characteristics | |
WO2022134916A1 (en) | Identity feature generation method and device, and storage medium | |
CN112464765B (en) | Safety helmet detection method based on single-pixel characteristic amplification and application thereof | |
CN113963373A (en) | Video image dynamic detection and tracking algorithm based system and method | |
CN113920585A (en) | Behavior recognition method and device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||