CN114092851A - Monitoring video abnormal event detection method based on time sequence action detection - Google Patents

Monitoring video abnormal event detection method based on time sequence action detection

Info

Publication number
CN114092851A
CN114092851A
Authority
CN
China
Prior art keywords
event
abnormal
events
detection
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111185834.0A
Other languages
Chinese (zh)
Inventor
王平
安德智
田军
武光利
牛君会
曹启
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gansu Eurasia Information Technology Co ltd
Original Assignee
Gansu Eurasia Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gansu Eurasia Information Technology Co ltd filed Critical Gansu Eurasia Information Technology Co ltd
Priority to CN202111185834.0A
Publication of CN114092851A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)
  • Alarm Systems (AREA)

Abstract

The invention relates to the technical field of video detection, and in particular to a method for detecting abnormal events in surveillance video based on time sequence action detection, comprising the following steps: S1: feature extraction; S2: training sample selection; S3: image preprocessing; S4: basic event representation; S5: anomaly detection model construction, i.e. modeling basic events; S6: abnormal event judgment; S7: post-processing. Through an efficient abnormal-event detection algorithm, the invention generates a series of surveillance-video abnormal-event segments containing complete semantic information, which reduces the video database while retaining useful information and facilitates research on subsequent retrieval methods. Based on the occurrence duration of each type of abnormal event in the training samples, a window length close to that duration is selected, and finally the non-maximum suppression (NMS) method is used to remove redundant video segments and obtain detection results with high confidence scores, thereby improving detection accuracy for surveillance videos of different durations and achieving better detection performance than other algorithms.

Description

Monitoring video abnormal event detection method based on time sequence action detection
Technical Field
The invention relates to the technical field of video detection, and in particular to a method for detecting abnormal events in surveillance video based on time sequence action detection.
Background
With the development of the social economy and the falling price of hardware, video surveillance systems have been widely deployed in shopping malls, banks, prisons, traffic intersections and other places, and play a significant role in maintaining social security and stability. Current video surveillance systems still rely on traditional manual monitoring: cameras are usually installed at key positions, data are transmitted to a monitoring center through transmission equipment and displayed on monitoring screens in real time, and monitoring personnel judge and respond to abnormal events by watching the screens. This approach has the following limitations. First, it is labor-intensive: monitoring personnel must work in shifts to ensure round-the-clock coverage. Second, many alarms are missed: when facing many pictures on a monitoring screen for a long time, personnel easily become fatigued and overlook important information. Finally, it cannot predict abnormal events and more often serves only as an after-the-fact query tool. With the rapid development of computer technology, image processing, machine vision, pattern recognition and related technologies have matured, making it possible to break through the limitations of the traditional video surveillance system and to actively monitor abnormal events in surveillance video.
Although researchers at home and abroad have made some progress in abnormal event detection, the task still faces great challenges. Besides the difficulties caused by varied camera angles, scale changes of moving objects, occlusion and so on, the following problems remain. First, there is no unified view on the definition of basic events: researchers represent basic events from different subfields, and current methods include extracting low-level features, trajectory tracking, and constructing social force models. Second, most algorithms cannot detect in real time; because of the complexity of the algorithm models and the continuous nature of video data, most current algorithms struggle to achieve this. Third, existing detection algorithms lack scene generality: an algorithm that performs well in a pedestrian scene does not necessarily perform equally well in a vehicle scene. Fourth, most algorithms cannot update themselves in real time: they are trained once on fixed samples, and the model cannot be updated during the detection process.
Existing surveillance video abnormal event detection technology has the following problems:
(1) the recall rate of human time sequence action detection is low, and the localization accuracy of action start and end times needs to be improved;
(2) existing detection methods have different detection accuracy for different types of abnormal events; in particular, detection accuracy is low for abnormal events of longer duration.
Disclosure of Invention
The present invention provides a method for detecting abnormal events in surveillance video based on time sequence action detection, so as to solve the above problems in the background art.
The technical solution of the invention is as follows: a method for detecting abnormal events in surveillance video based on time sequence action detection comprises the following steps:
S1: feature extraction: extracting three-dimensional gradient features of each frame in the surveillance video based on the video images;
S2: training sample selection: extracting the UCF-Crime data set on an Ubuntu 16 operating system configured with an NVIDIA Titan GPU;
S3: image preprocessing: using Gaussian filtering to denoise the images, reducing interference from noise in the original surveillance video;
S4: basic event representation: selecting suitable feature descriptors to represent basic events, using an object tracking method;
S5: anomaly detection model construction: modeling the basic events;
S6: abnormal event judgment: judging whether an event is abnormal based on the training samples;
S7: post-processing: scoring based on the occurrence durations of different abnormal events in combination with the training samples, and obtaining non-overlapping detection results.
Preferably, the feature extraction comprises the following steps:
S11: frame capture: capturing each frame of the video and storing the captured frames in a database in temporal order;
S12: frame scaling: enlarging each captured frame in the database by a factor of 5 and reducing it by a factor of 5, and storing the results in the corresponding database categories;
S13: classification information display and extraction: displaying each original frame in the database as raw information, extracting each 5x-enlarged frame as key information, and extracting each 5x-reduced frame as overall information.
Preferably, the training sample extraction comprises the following steps:
S21: training sample extraction: extracting the content of the UCF-Crime data set using an Ubuntu 16 operating system configured with an NVIDIA Titan GPU;
S22: training sample classification: classifying the extracted UCF-Crime content based on time sequence actions and entering it into a database, with content of the same class placed in the same folder;
S23: training sample naming: naming the classified folders in the database: normal events, explosion, fighting, abuse, shooting, robbery, vandalism, assault, arrest, and arson.
Preferably, the image preprocessing comprises the following steps:
S31: substituting the distances from the other pixels in a neighborhood to the neighborhood center into a two-dimensional Gaussian function to compute a Gaussian template; templates of size 3×3 or 5×5 are common;
S32: normalizing the template: a decimal-form template is normalized so that its elements sum to 1, while an integer-form template is normalized so that its top-left value is 1;
S33: aligning the center of the Gaussian template with each element of the image matrix to be processed, multiplying corresponding elements and summing, with zero padding where elements are missing;
S34: performing this calculation for every element; the resulting output matrix is the result of the Gaussian filtering.
Preferably, the basic event representation comprises the following steps:
S41: classifying event representations according to whether they have real physical meaning: one class has no physical meaning and uses low-level visual features to represent basic events; the other class has physical meaning and uses high-level semantic features to represent basic events;
S42: event representation based on low-level visual features: collecting video blocks in an overlapping manner, treating them as basic events, and extracting low-level visual features from the UCF-Crime data set to represent them;
S43: event representation based on high-level visual features: collecting video blocks in a non-overlapping manner, treating them as abnormal events, and extracting high-level visual features from the UCF-Crime data set to represent them.
Preferably, the anomaly detection model construction comprises the following steps:
S51: feature fusion: first computing the feature vectors of abnormal events, and then fusing the feature vectors of normal and abnormal events;
S52: event detection module: comprising an event trigger generator and an event type classifier, wherein the event trigger generator is used to identify event trigger words from text and the event type classifier is used to classify events.
Preferably, the abnormal event judgment comprises the following steps:
S61: judging surveillance video that conforms to the normal event feature vector to be a normal event;
S62: judging surveillance video that conforms to the abnormal event feature vector to be an abnormal event.
Preferably, the post-processing comprises the following steps:
S71: based on the occurrence durations of the abnormal events, selecting window lengths close to the occurrence durations of the various abnormal events in the training samples;
S72: counting the frequency of occurrence of each window length to obtain weights;
S73: re-weighting the action scores in the surveillance video according to the weights;
S74: removing redundant video segments using the non-maximum suppression (NMS) method to obtain detection results with high confidence scores.
Through improvement, the invention provides a method for detecting abnormal events in surveillance video based on time sequence action detection. Compared with the prior art, it has the following improvements and advantages:
First: the invention generates a series of surveillance-video abnormal-event segments containing complete semantic information through an efficient abnormal-event detection algorithm, which retains useful information while reducing the video database and facilitates research on subsequent retrieval methods;
Second: based on the occurrence durations of the various abnormal events in the training samples, window lengths close to those durations are selected, and finally the non-maximum suppression (NMS) method is used to remove redundant video segments and obtain detection results with high confidence scores, improving the detection accuracy for surveillance videos of different durations;
Third: the invention adopts a unit-analysis anomaly detection method based on motion, size and texture features: the three low-level visual features of motion, size and texture are modeled independently and two classifiers are established to judge abnormal events, and an improved method is provided for the false detections and missed detections of abnormal events caused by coarse descriptors.
Detailed Description
The present invention is described in detail below, and the technical solutions in the embodiments of the present invention are described clearly and completely. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Through improvement, the invention provides a method for detecting abnormal events in surveillance video based on time sequence action detection. The technical solution of the invention is as follows:
the first embodiment is as follows:
a monitoring video abnormal event detection method based on time sequence action detection comprises the following steps:
s1: feature extraction: extracting three-dimensional gradient features of each frame of picture in the monitoring video based on the video image;
s2: training sample selection: extracting a UCF-Crime data set on the basis of an ubuntu16 operating system configured with an NVIDIA Titan GPU;
s3: image preprocessing: the Gaussian filtering is utilized to perform noise reduction processing on the image, so that the interference of the noise of the original monitoring video is reduced;
s4: the basic event representation: selecting proper feature descriptors to represent basic events by using a method for tracking objects;
s5: constructing an abnormality detection model: modeling a basic event;
s6: judging an abnormal event: judging whether the event is abnormal or not based on the training sample;
s7: and (3) post-treatment: and (4) based on the occurrence duration of different abnormal events, combining the training samples to score, and obtaining a detection result without overlapping.
Specifically, the feature extraction comprises the following steps:
S11: frame capture: capturing each frame of the video and storing the captured frames in a database in temporal order;
S12: frame scaling: enlarging each captured frame in the database by a factor of 5 and reducing it by a factor of 5, and storing the results in the corresponding database categories;
S13: classification information display and extraction: displaying each original frame in the database as raw information, extracting each 5x-enlarged frame as key information, and extracting each 5x-reduced frame as overall information.
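As an illustration of steps S11 to S13, the following is a minimal sketch of frame capture and scaling using OpenCV; the folder layout, the file naming and the use of image files as the "database" are assumptions made for illustration only, not part of the patent.

```python
import os
import cv2

def capture_and_scale(video_path, out_dir, scale=5):
    """Grab every frame in temporal order and store the original frame,
    a 5x-enlarged copy ("key information") and a 5x-reduced copy
    ("overall information") in separate folders."""
    for name in ("original", "enlarged", "reduced"):
        os.makedirs(os.path.join(out_dir, name), exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        big = cv2.resize(frame, None, fx=scale, fy=scale)
        small = cv2.resize(frame, None, fx=1.0 / scale, fy=1.0 / scale)
        cv2.imwrite(os.path.join(out_dir, "original", f"{idx:06d}.png"), frame)
        cv2.imwrite(os.path.join(out_dir, "enlarged", f"{idx:06d}.png"), big)
        cv2.imwrite(os.path.join(out_dir, "reduced", f"{idx:06d}.png"), small)
        idx += 1
    cap.release()
```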
Specifically, the training sample extraction comprises the following steps:
S21: training sample extraction: extracting the content of the UCF-Crime data set using an Ubuntu 16 operating system configured with an NVIDIA Titan GPU;
S22: training sample classification: classifying the extracted UCF-Crime content based on time sequence actions and entering it into a database, with content of the same class placed in the same folder;
S23: training sample naming: naming the classified folders in the database: normal events, explosion, fighting, abuse, shooting, robbery, vandalism, assault, arrest, and arson.
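The following is a minimal sketch of how the per-class folders of step S23 might be populated, assuming UCF-Crime clips carry their class name as a file-name prefix (e.g. Explosion001_x264.mp4); the class-name mapping and the naming convention are assumptions made for illustration.

```python
import os
import shutil

# Event categories from step S23 (assumed mapping to UCF-Crime labels)
CLASSES = ["Normal", "Explosion", "Fighting", "Abuse", "Shooting",
           "Robbery", "Vandalism", "Assault", "Arrest", "Arson"]

def organize_ucf_crime(src_dir, dst_dir):
    """Copy each clip into a per-class folder, inferring the class from the
    assumed file-name prefix; unmatched files default to the Normal folder."""
    for name in CLASSES:
        os.makedirs(os.path.join(dst_dir, name), exist_ok=True)
    for fname in os.listdir(src_dir):
        label = next((c for c in CLASSES if fname.lower().startswith(c.lower())), "Normal")
        shutil.copy(os.path.join(src_dir, fname), os.path.join(dst_dir, label, fname))
```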
Specifically, the image preprocessing comprises the following steps:
S31: substituting the distances from the other pixels in a neighborhood to the neighborhood center into a two-dimensional Gaussian function to compute a Gaussian template; templates of size 3×3 or 5×5 are common;
S32: normalizing the template: a decimal-form template is normalized so that its elements sum to 1, while an integer-form template is normalized so that its top-left value is 1;
S33: aligning the center of the Gaussian template with each element of the image matrix to be processed, multiplying corresponding elements and summing, with zero padding where elements are missing (for example, a 3×3 Gaussian template requires padding a ring of zeros around the outermost layer of the image to be processed);
S34: performing this calculation for every element; the resulting output matrix is the result of the Gaussian filtering. Gaussian filtering is a weighted-averaging process over the whole image: the value of each pixel is obtained as a weighted average of that pixel and the other pixel values in its neighborhood. The specific operation of Gaussian filtering is to scan every pixel in the image with a template (also called a convolution kernel or mask) and to replace the value of the pixel at the template center with the weighted average gray value of the pixels in the neighborhood determined by the template.
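As a concrete illustration of steps S31 to S34, the following is a minimal NumPy sketch of the zero-padded Gaussian filtering described above; the kernel size of 3 and the sigma value are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized 2-D Gaussian template from the distances of each
    element to the template centre (decimal form, elements sum to 1)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()

def gaussian_filter(image, size=3, sigma=1.0):
    """Zero-pad the image, then replace each pixel by the template-weighted
    average of its neighborhood."""
    kernel = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(image.astype(np.float64), pad, mode="constant")  # ring of zeros
    out = np.zeros(image.shape, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return out
```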
Specifically, the basic event representation comprises the following steps:
S41: classifying event representations according to whether they have real physical meaning: one class has no physical meaning and uses low-level visual features to represent basic events; the other class has physical meaning and uses high-level semantic features to represent basic events;
S42: event representation based on low-level visual features: collecting video blocks in an overlapping manner, treating them as basic events, and extracting low-level visual features from the UCF-Crime data set to represent them;
S43: event representation based on high-level visual features: collecting video blocks in a non-overlapping manner, treating them as abnormal events, and extracting high-level visual features from the UCF-Crime data set to represent them. The invention studies a unit-analysis anomaly detection method based on motion, size and texture features: the three low-level visual features of motion, size and texture are modeled independently, two classifiers are established to judge abnormal events, and an improved method is provided for the false detections and missed detections of abnormal events caused by coarse descriptors. How to represent basic events is a key problem in the anomaly detection process. Existing methods can be divided into two types according to whether tracking is used: tracking-based methods and non-tracking methods. Tracking-based methods record the trajectory of a moving object throughout and are suitable for scenes with few target objects and little occlusion; non-tracking methods apply features such as motion and texture to the given scene. The invention takes a different view and classifies event representations according to whether they have real physical meaning: one class has no physical meaning and uses low-level visual features to represent basic events; the other class has physical meaning and uses high-level semantic features to represent basic events.
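The following is a minimal sketch of the overlapping and non-overlapping video-block collection described in steps S42 and S43; the block size and stride choices are illustrative assumptions, and the subsequent feature extraction is not shown.

```python
import numpy as np

def extract_video_blocks(frames, block_size=(10, 32, 32), overlap=True):
    """Split a (T, H, W) grayscale frame volume into spatio-temporal blocks.
    Overlapping blocks (stride = half the block) serve as basic events;
    non-overlapping blocks (stride = block size) serve as candidate abnormal events."""
    t, h, w = block_size
    st, sh, sw = (max(t // 2, 1), max(h // 2, 1), max(w // 2, 1)) if overlap else (t, h, w)
    T, H, W = frames.shape
    blocks = []
    for i in range(0, T - t + 1, st):
        for j in range(0, H - h + 1, sh):
            for k in range(0, W - w + 1, sw):
                blocks.append(frames[i:i + t, j:j + h, k:k + w])
    return np.stack(blocks) if blocks else np.empty((0, t, h, w))
```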
Specifically, the anomaly detection model construction comprises the following steps:
S51: feature fusion: first computing the feature vectors of abnormal events, and then fusing the feature vectors of normal and abnormal events;
S52: event detection module: comprising an event trigger generator and an event type classifier, wherein the event trigger generator is used to identify event trigger words from text and the event type classifier is used to classify events. Through an efficient abnormal-event detection algorithm, a series of surveillance-video abnormal-event segments containing complete semantic information are generated, which retains useful information while reducing the video database and facilitates research on subsequent retrieval methods. The invention studies a unit-analysis anomaly detection technique based on the motion, size and texture features of foreground objects, and makes corresponding improvements to the problem of coarse feature descriptors on the original basis: first, for the problem that the motion feature descriptor is coarse, the invention proposes the HOG3D feature in polar coordinates as the motion feature, sampling and voting only at the positions of foreground pixels within a unit; second, for the problem that the texture feature descriptor is coarse, the invention proposes to describe texture with the uniform-pattern LBP feature.
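To illustrate the uniform-pattern LBP texture descriptor proposed above, the following is a minimal NumPy sketch; it follows the standard uniform LBP definition over an 8-neighborhood with a 59-bin histogram, while its combination with the motion and size features is not shown.

```python
import numpy as np

def uniform_lbp_histogram(gray):
    """Texture descriptor: uniform-pattern LBP over an 8-neighborhood.
    Codes with at most two 0/1 transitions keep their own bin; all other
    codes share one bin, giving a 59-bin normalized histogram."""
    gray = np.asarray(gray, dtype=np.float64)
    H, W = gray.shape
    centre = gray[1:H - 1, 1:W - 1]
    # the 8 neighbours, listed in circular order around the centre pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = np.stack([(gray[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx] >= centre).astype(np.int64)
                     for dy, dx in offsets])           # shape (8, H-2, W-2)
    codes = sum(bits[b] << b for b in range(8))        # LBP code per pixel, 0..255

    def transitions(code):
        b = [(code >> i) & 1 for i in range(8)]
        return sum(b[i] != b[(i + 1) % 8] for i in range(8))

    lut = np.full(256, 58, dtype=np.int64)             # non-uniform codes -> bin 58
    nxt = 0
    for c in range(256):
        if transitions(c) <= 2:                        # uniform pattern
            lut[c] = nxt
            nxt += 1
    hist = np.bincount(lut[codes].ravel(), minlength=59).astype(np.float64)
    return hist / max(hist.sum(), 1.0)
```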
Specifically, the abnormal event judgment comprises the following steps:
S61: judging surveillance video that conforms to the normal event feature vector to be a normal event;
S62: judging surveillance video that conforms to the abnormal event feature vector to be an abnormal event.
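The following is a minimal sketch of the decision in steps S61 and S62, assuming each event has been summarized as a feature vector and that one prototype (mean) vector per class is available; the nearest-prototype rule here is an illustrative stand-in for the classifiers described in the specification.

```python
import numpy as np

def judge_events(features, normal_prototype, abnormal_prototype):
    """Label each event feature vector by the nearer class prototype:
    0 = normal event, 1 = abnormal event."""
    features = np.atleast_2d(np.asarray(features, dtype=np.float64))
    d_normal = np.linalg.norm(features - normal_prototype, axis=1)
    d_abnormal = np.linalg.norm(features - abnormal_prototype, axis=1)
    return (d_abnormal < d_normal).astype(np.int64)
```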
Specifically, the post-processing comprises the following steps:
S71: based on the occurrence durations of the abnormal events, selecting window lengths close to the occurrence durations of the various abnormal events in the training samples;
S72: counting the frequency of occurrence of each window length to obtain weights;
S73: re-weighting the action scores in the surveillance video according to the weights;
S74: removing redundant video segments using the non-maximum suppression (NMS) method to obtain detection results with high confidence scores. Since one abnormal event should correspond to only one detected video segment, the NMS method is used to remove redundant video segments and obtain the detection result with the highest confidence score. Based on the occurrence durations of the various abnormal events in the training samples, window lengths close to those durations are selected, and finally the NMS method is used to remove redundant video segments and obtain detection results with high confidence scores, which improves the detection accuracy for surveillance videos of different durations and yields better detection performance than other time sequence action detection algorithms.
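The following sketch illustrates steps S71 to S74: candidate segment scores are re-weighted by how frequently windows of similar length occur among the training events, and temporal non-maximum suppression then keeps a single high-confidence segment per event; the duration binning and the IoU threshold are illustrative assumptions.

```python
import numpy as np

def rescore_and_nms(segments, scores, train_durations, iou_threshold=0.5):
    """Post-processing sketch: weight each candidate segment by how often
    windows of similar length occur among the training events, then apply
    1-D non-maximum suppression so each abnormal event keeps one segment."""
    segments = np.asarray(segments, dtype=np.float64)   # (N, 2): start, end in seconds
    scores = np.asarray(scores, dtype=np.float64)
    lengths = segments[:, 1] - segments[:, 0]

    # frequency of training-event durations in coarse bins -> window weights
    bins = np.histogram_bin_edges(train_durations, bins=10)
    freq, _ = np.histogram(train_durations, bins=bins)
    weights = freq[np.clip(np.digitize(lengths, bins) - 1, 0, len(freq) - 1)]
    scores = scores * (weights / max(freq.max(), 1))

    # temporal NMS: keep the best segment, drop heavily overlapping ones
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        inter = np.maximum(0.0, np.minimum(segments[i, 1], segments[rest, 1]) -
                           np.maximum(segments[i, 0], segments[rest, 0]))
        union = lengths[i] + lengths[rest] - inter
        order = rest[inter / np.maximum(union, 1e-8) <= iou_threshold]
    return keep, scores
```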
The previous description is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (8)

1. A method for detecting abnormal events in surveillance video based on time sequence action detection, characterized by comprising the following steps:
S1: feature extraction: extracting three-dimensional gradient features of each frame in the surveillance video based on the video images;
S2: training sample selection: extracting the UCF-Crime data set on an Ubuntu 16 operating system configured with an NVIDIA Titan GPU;
S3: image preprocessing: using Gaussian filtering to denoise the images, reducing interference from noise in the original surveillance video;
S4: basic event representation: selecting suitable feature descriptors to represent basic events, using an object tracking method;
S5: anomaly detection model construction: modeling the basic events;
S6: abnormal event judgment: judging whether an event is abnormal based on the training samples;
S7: post-processing: scoring based on the occurrence durations of different abnormal events in combination with the training samples, and obtaining non-overlapping detection results.
2. The method for detecting abnormal events in surveillance video based on time sequence action detection according to claim 1, characterized in that the feature extraction comprises the following steps:
S11: frame capture: capturing each frame of the video and storing the captured frames in a database in temporal order;
S12: frame scaling: enlarging each captured frame in the database by a factor of 5 and reducing it by a factor of 5, and storing the results in the corresponding database categories;
S13: classification information display and extraction: displaying each original frame in the database as raw information, extracting each 5x-enlarged frame as key information, and extracting each 5x-reduced frame as overall information.
3. The method for detecting abnormal events in surveillance video based on time sequence action detection according to claim 1, characterized in that the training sample extraction comprises the following steps:
S21: training sample extraction: extracting the content of the UCF-Crime data set using an Ubuntu 16 operating system configured with an NVIDIA Titan GPU;
S22: training sample classification: classifying the extracted UCF-Crime content based on time sequence actions and entering it into a database, with content of the same class placed in the same folder;
S23: training sample naming: naming the classified folders in the database: normal events, explosion, fighting, abuse, shooting, robbery, vandalism, assault, arrest, and arson.
4. The method for detecting abnormal events in surveillance video based on time sequence action detection according to claim 1, characterized in that the image preprocessing comprises the following steps:
S31: substituting the distances from the other pixels in a neighborhood to the neighborhood center into a two-dimensional Gaussian function to compute a Gaussian template; templates of size 3×3 or 5×5 are common;
S32: normalizing the template: a decimal-form template is normalized so that its elements sum to 1, while an integer-form template is normalized so that its top-left value is 1;
S33: aligning the center of the Gaussian template with each element of the image matrix to be processed, multiplying corresponding elements and summing, with zero padding where elements are missing;
S34: performing this calculation for every element; the resulting output matrix is the result of the Gaussian filtering.
5. The method for detecting abnormal events in surveillance video based on time sequence action detection according to claim 1, characterized in that the basic event representation comprises the following steps:
S41: classifying event representations according to whether they have real physical meaning: one class has no physical meaning and uses low-level visual features to represent basic events; the other class has physical meaning and uses high-level semantic features to represent basic events;
S42: event representation based on low-level visual features: collecting video blocks in an overlapping manner, treating them as basic events, and extracting low-level visual features from the UCF-Crime data set to represent them;
S43: event representation based on high-level visual features: collecting video blocks in a non-overlapping manner, treating them as abnormal events, and extracting high-level visual features from the UCF-Crime data set to represent them.
6. The method for detecting abnormal events in surveillance video based on time sequence action detection according to claim 1, characterized in that the anomaly detection model construction comprises the following steps:
S51: feature fusion: first computing the feature vectors of abnormal events, and then fusing the feature vectors of normal and abnormal events;
S52: event detection module: comprising an event trigger generator and an event type classifier, wherein the event trigger generator is used to identify event trigger words from text and the event type classifier is used to classify events.
7. The method for detecting abnormal events in surveillance video based on time sequence action detection according to claim 1, characterized in that the abnormal event judgment comprises the following steps:
S61: judging surveillance video that conforms to the normal event feature vector to be a normal event;
S62: judging surveillance video that conforms to the abnormal event feature vector to be an abnormal event.
8. The method for detecting abnormal events in surveillance video based on time sequence action detection according to claim 1, characterized in that the post-processing comprises the following steps:
S71: based on the occurrence durations of the abnormal events, selecting window lengths close to the occurrence durations of the various abnormal events in the training samples;
S72: counting the frequency of occurrence of each window length to obtain weights;
S73: re-weighting the action scores in the surveillance video according to the weights;
S74: removing redundant video segments using the non-maximum suppression (NMS) method to obtain detection results with high confidence scores.
CN202111185834.0A 2021-10-12 2021-10-12 Monitoring video abnormal event detection method based on time sequence action detection Pending CN114092851A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111185834.0A CN114092851A (en) 2021-10-12 2021-10-12 Monitoring video abnormal event detection method based on time sequence action detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111185834.0A CN114092851A (en) 2021-10-12 2021-10-12 Monitoring video abnormal event detection method based on time sequence action detection

Publications (1)

Publication Number Publication Date
CN114092851A true CN114092851A (en) 2022-02-25

Family

ID=80296705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111185834.0A Pending CN114092851A (en) 2021-10-12 2021-10-12 Monitoring video abnormal event detection method based on time sequence action detection

Country Status (1)

Country Link
CN (1) CN114092851A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116402136A (en) * 2023-03-22 2023-07-07 中航信移动科技有限公司 Rule extraction method based on offline data, storage medium and electronic equipment
CN116402136B (en) * 2023-03-22 2023-11-17 中航信移动科技有限公司 Rule extraction method based on offline data, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
CN107230267B (en) Intelligence In Baogang Kindergarten based on face recognition algorithms is registered method
Kumar et al. Study of robust and intelligent surveillance in visible and multi-modal framework
Khaire et al. A semi-supervised deep learning based video anomaly detection framework using RGB-D for surveillance of real-world critical environments
WO2006059419A1 (en) Tracing device, and tracing method
CN112434566B (en) Passenger flow statistics method and device, electronic equipment and storage medium
CN111738218A (en) Human body abnormal behavior recognition system and method
CN108596157A (en) A kind of crowd's agitation scene detection method and system based on motion detection
Martínez-Mascorro et al. Suspicious behavior detection on shoplifting cases for crime prevention by using 3D convolutional neural networks
CN114926781A (en) Multi-user time-space domain abnormal behavior positioning method and system supporting real-time monitoring scene
Jayaswal et al. A Framework for Anomaly Classification Using Deep Transfer Learning Approach.
He et al. Vehicle theft recognition from surveillance video based on spatiotemporal attention
CN114092851A (en) Monitoring video abnormal event detection method based on time sequence action detection
Zhou et al. A review of multiple-person abnormal activity recognition
CN113920585A (en) Behavior recognition method and device, equipment and storage medium
Seidenari et al. Dense spatio-temporal features for non-parametric anomaly detection and localization
Miao et al. Abnormal behavior learning based on edge computing toward a crowd monitoring system
Zhang et al. Key frame extraction based on quaternion Fourier transform with multiple features fusion
Rajpurkar et al. Alert generation on detection of suspicious activity using transfer learning
CN105095891A (en) Human face capturing method, device and system
Yu et al. Review of intelligent video surveillance technology research
Yadav et al. Human Illegal Activity Recognition Based on Deep Learning Techniques
CN111639600B (en) Video key frame extraction method based on center offset
ELBAŞI et al. Control charts approach for scenario recognition in video sequences
CN114782675A (en) Dynamic item pricing method and system in safety technical service field

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination