WO2020135392A1 - Abnormal behavior detection method and device - Google Patents
- Publication number
- WO2020135392A1 (PCT/CN2019/127797, CN2019127797W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- behavior data
- behavior
- video
- normal
- feature
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/48—Matching video sequences
Definitions
- the present disclosure relates to the field of video surveillance, and in particular to an abnormal behavior detection method and device.
- abnormal behavior detection means that, in a video surveillance scene, computer equipment replaces video surveillance personnel and automatically detects abnormal behavior in the scene, so that an alarm can be raised promptly.
- abnormal behavior generally refers to behaviors in the scene that are significantly different from other behaviors or have a low probability of occurring in the scene, such as behaviors that endanger others or harm the public interest.
- abnormal behavior detection frees video surveillance personnel from massive monitoring data and tedious manual operations, and has extremely broad applications in the field of video surveillance.
- One-Class SVM (One-Class Support Vector Machine)
- in the related art, a large number of surveillance videos in which normal behavior occurs are collected, image sequences are extracted from the surveillance videos as normal behavior data, and a one-class classifier is trained based on the normal behavior data.
- the behavior features included in this one-class classifier are obtained by performing feature extraction on the normal behavior data.
- the above technique determines whether abnormal behavior exists in an unknown video based only on behavior features extracted from normal behavior data. Since the difference between normal behavior data and abnormal behavior data is not learned, the detection result is prone to large deviations and the accuracy of abnormal behavior detection is poor.
- the embodiments of the present disclosure provide an abnormal behavior detection method and device, which can solve the problem of poor detection accuracy in the related art.
- the technical solution is as follows:
- a method for detecting abnormal behavior includes:
- inputting the behavior data into a feature extraction model and outputting the behavior features of the behavior data, where the feature extraction model is used to output behavior features within a feature space range for normal behavior data and behavior features outside the feature space range for abnormal behavior data, the distance between behavior features within the feature space range being less than a distance threshold;
- the detection result is used to indicate whether the behavior data is abnormal behavior data
- the normal behavior feature center is used to represent the behavior features within the feature space range.
- the training process of the feature extraction model includes:
- each first behavior data pair contains two normal behavior data in the normal behavior data set
- Each second behavior data pair includes one normal behavior data in the normal behavior data set and one abnormal behavior data in the abnormal behavior data set;
- each first behavior feature pair contains the behavior features of two normal behavior data
- each second behavior feature pair contains a behavior feature of normal behavior data and a behavior feature of abnormal behavior data
- before acquiring multiple first behavior data pairs and multiple second behavior data pairs according to the normal behavior data set and the abnormal behavior data set, the method further includes:
- based on the plurality of second videos, the abnormal behavior data set is acquired, where the plurality of second videos are videos of the target performing abnormal behavior.
- the acquiring the normal behavior data set based on multiple first videos includes:
- the spatial motion range is the spatial range covered by the target motion, and the duration of the first time period is less than the duration of the first video;
- the first image sequence of the plurality of first videos is used as the normal behavior data set.
- the acquiring the abnormal behavior data set based on multiple second videos includes:
- the spatial motion range is the spatial range covered by the target motion, and the duration of the second time period is less than the duration of the second video;
- the second image sequence of the plurality of second videos is used as the abnormal behavior data set.
- the behavior data to be detected is multiple behavior data
- the method further includes:
- the training process of the feature extraction model is performed to obtain the updated feature extraction model.
- the adding abnormal behavior data from the plurality of behavior data to the abnormal behavior data set includes:
- the behavior data to be detected is a plurality of behavior data
- the method further includes:
- performing, based on the updated abnormal behavior data set, the training process of the feature extraction model to obtain the updated feature extraction model includes:
- the training process of the feature extraction model is performed to obtain the updated feature extraction model.
- the method further includes:
- the target in the video is detected and tracked to obtain the spatial motion range of the target in the third time period, the spatial motion range being the spatial range covered by the target motion, and the duration of the third time period being less than the duration of the video;
- according to the spatial motion range and the video, image interception is performed in the video sequence corresponding to the third time period to obtain an image sequence of the video, the video sequence including multiple frames of video images of the video, and the image sequence including the area corresponding to the spatial motion range in the multi-frame video images;
- the image sequence of the multiple videos is used as the multiple behavior data.
- the method further includes:
- the image sequence of the video to which the abnormal behavior data belongs is displayed.
- the acquiring the detection result of the behavior data according to the distance between the behavior feature of the behavior data and the center of the normal behavior feature and the distance threshold includes:
- when the distance is greater than or equal to the distance threshold, the behavior data is abnormal behavior data
- when the distance is less than the distance threshold, the behavior data is normal behavior data.
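- the decision rule above can be sketched in Python as follows. This is an illustrative sketch, not the patented implementation: the comparison direction (distance at or above the threshold means abnormal) and all names are assumptions.

```python
import math

def detect(feature, center, distance_threshold):
    """Return True when the behavior data should be flagged as abnormal,
    based on how far its behavior feature lies from the normal behavior
    feature center."""
    distance = math.dist(feature, center)  # Euclidean distance in feature space
    return distance >= distance_threshold

# Example: normal behavior feature center at the origin, threshold 1.0
print(detect([0.2, 0.1], [0.0, 0.0], 1.0))  # False -> normal behavior data
print(detect([2.0, 2.0], [0.0, 0.0], 1.0))  # True  -> abnormal behavior data
```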
- the process of acquiring the normal behavior feature center includes:
- the behavior feature of each normal behavior data in the plurality of normal behavior data is represented by a feature vector
- an abnormal behavior detection device including:
- the acquisition module is used to acquire the behavior data to be detected
- an extraction module is used to input the behavior data into a feature extraction model and output the behavior features of the behavior data, the feature extraction model being used to output behavior features within a feature space range for normal behavior data and behavior features outside the feature space range for abnormal behavior data.
- the acquiring module is further configured to acquire the detection result of the behavior data according to the distance between the behavior feature of the behavior data and the normal behavior feature center and the distance threshold, the detection result being used to indicate whether the behavior data is abnormal behavior data, and the normal behavior feature center being used to represent the behavior features within the feature space range.
- the obtaining module is further used to:
- each first behavior data pair contains two normal behavior data in the normal behavior data set
- Each second behavior data pair includes one normal behavior data in the normal behavior data set and one abnormal behavior data in the abnormal behavior data set;
- each first behavior feature pair contains the behavior features of two normal behavior data
- each second behavior feature pair contains a behavior feature of normal behavior data and a behavior feature of abnormal behavior data
- the obtaining module is further used to:
- the abnormal behavior data set is acquired, and the plurality of second videos are videos of the target performing abnormal behavior.
- the acquisition module is used to:
- the spatial motion range is the spatial range covered by the target motion, and the duration of the first time period is less than the duration of the first video;
- the first image sequence of the plurality of first videos is used as the normal behavior data set.
- the acquisition module is used to:
- the spatial motion range is the spatial range covered by the target motion, and the duration of the second time period is less than the duration of the second video;
- the second image sequence of the plurality of second videos is used as the abnormal behavior data set.
- the behavior data to be detected is multiple behavior data
- the acquiring module is further configured to determine abnormal behavior data in the plurality of behavior data according to the detection results of the plurality of behavior data; add the abnormal behavior data in the plurality of behavior data to the abnormal behavior data set; and perform, based on the updated abnormal behavior data set, the training process of the feature extraction model to obtain the updated feature extraction model.
- the behavior data to be detected is multiple behavior data
- after acquiring the detection result of the behavior data according to the distance between the behavior feature of the behavior data and the normal behavior feature center, the acquisition module is further used to:
- the training process of the feature extraction model is performed to obtain the updated feature extraction model.
- the acquiring module is configured to acquire manual confirmation information of abnormal behavior data among the plurality of behavior data, and add the abnormal behavior data indicated by the manual confirmation information to the abnormal behavior data set.
- the obtaining module is further used to:
- the target in the video is detected and tracked to obtain the spatial motion range of the target in the third time period, the spatial motion range being the spatial range covered by the target motion, and the duration of the third time period being less than the duration of the video;
- according to the spatial motion range and the video, image interception is performed in the video sequence corresponding to the third time period to obtain an image sequence of the video, the video sequence including multiple frames of video images of the video, and the image sequence including the area corresponding to the spatial motion range in the multi-frame video images;
- the image sequence of the plurality of videos is used as the plurality of behavior data.
- the device further includes:
- the display module is configured to, for abnormal behavior data among the plurality of behavior data, display the image sequence of the video to which the abnormal behavior data belongs during playback of that video.
- the acquisition module is used to:
- when the distance is greater than or equal to the distance threshold, the behavior data is abnormal behavior data
- when the distance is less than the distance threshold, the behavior data is normal behavior data.
- the obtaining module is further used to:
- the behavior feature of each normal behavior data in the plurality of normal behavior data is represented by a feature vector
- the acquisition module is used to:
- a computer device including a processor and a memory; the memory is used to store at least one instruction; the processor executes the at least one instruction stored in the memory to implement the abnormal behavior detection method of the first aspect.
- a computer-readable storage medium in which a computer program is stored, and when the computer program is executed by a processor, the method for detecting abnormal behavior of the first aspect described above is implemented.
- the feature extraction model is used to extract the behavior features of the behavior data, and whether the behavior data is abnormal behavior data is determined according to the distance between the extracted behavior feature and the normal behavior feature center and the distance threshold. Since the feature extraction model is trained based on a distance constraint, the behavior features extracted from normal behavior data through the feature extraction model lie within a relatively small feature space, and the behavior features extracted from abnormal behavior data through the feature extraction model lie outside that feature space. This ensures that the normal behavior features are relatively compact and that there is a clear distance between abnormal behavior features and normal behavior features. Since the difference between normal behavior and abnormal behavior is learned, this method of abnormal behavior detection based on distance measurement has high accuracy.
- FIG. 1 is a flowchart of a method for detecting abnormal behavior provided by an embodiment of the present disclosure
- FIG. 2 is a flowchart of an abnormal behavior detection method provided by an embodiment of the present disclosure
- FIG. 3 is a training flowchart of a feature extraction model provided by an embodiment of the present disclosure
- FIG. 6 is a schematic structural diagram of an abnormal behavior detection device provided by an embodiment of the present disclosure.
- FIG. 7 is a schematic structural diagram of an abnormal behavior detection device provided by an embodiment of the present disclosure.
- FIG. 8 is a schematic structural diagram of a computer device 800 according to an embodiment of the present disclosure.
- FIG. 1 is a flowchart of an abnormal behavior detection method provided by an embodiment of the present disclosure. Referring to Figure 1, the method includes:
- the feature extraction model is used to output behavior features within a feature space range for normal behavior data and behavior features outside the feature space range for abnormal behavior data;
- the distance between behavior features within the feature space range is less than the distance threshold.
- the method provided in the embodiment of the present disclosure extracts the behavior features of the behavior data through the feature extraction model, and determines whether the behavior data is abnormal behavior data according to the distance between the extracted behavior features and the normal behavior feature center and the distance threshold. Because the feature extraction model is trained based on a distance constraint, the behavior features extracted from normal behavior data lie within a relatively small feature space, while the behavior features extracted from abnormal behavior data lie outside that feature space; it follows that the normal behavior features are relatively compact, and there is a clear distance between abnormal behavior features and normal behavior features.
- the abnormal behavior can be detected based on the difference (that is, the abnormal behavior is detected based on the distance between the abnormal behavior characteristic and the normal behavior characteristic), so that the accuracy of the abnormal behavior detection is high.
- the training process of the feature extraction model includes:
- each first behavior data pair includes two normal behavior data in the normal behavior data set
- each second behavior data pair includes one normal behavior data in the normal behavior data set and one abnormal behavior data in the abnormal behavior data set;
- each first behavior feature pair contains the behavior features of two normal behavior data
- each second behavior feature pair includes a behavior feature of normal behavior data and a behavior feature of abnormal behavior data
- the feature extraction model is obtained through supervised training with a loss function.
- before acquiring multiple first behavior data pairs and multiple second behavior data pairs according to the normal behavior data set and the abnormal behavior data set, the method further includes:
- the abnormal behavior data set is acquired, and the plurality of second videos are videos of the target performing abnormal behavior.
- the acquiring the normal behavior data set based on multiple first videos includes:
- the target in the first video is detected and tracked to obtain the spatial motion range of the target in the first time period, the spatial motion range being the spatial range covered by the target motion, and the duration of the first time period being less than the duration of the first video;
- according to the spatial motion range and the first video, image interception is performed in the first video sequence corresponding to the first time period to obtain a first image sequence of the first video, the first video sequence including multiple frames of video images of the first video, and the first image sequence including the area corresponding to the spatial motion range in the multi-frame video images;
- the first image sequence of the plurality of first videos is used as the normal behavior data set.
- the process of obtaining the abnormal behavior data set includes:
- the target in the second video is detected and tracked to obtain the spatial motion range of the target in the second time period, the spatial motion range being the spatial range covered by the target motion, and the duration of the second time period being less than the duration of the second video;
- according to the spatial motion range and the second video, image interception is performed in the second video sequence corresponding to the second time period to obtain a second image sequence of the second video, the second video sequence including multiple frames of video images of the second video, and the second image sequence including the area corresponding to the spatial motion range in the multi-frame video images;
- the second image sequence of the plurality of second videos is used as the abnormal behavior data set.
- the behavior data to be detected is multiple behavior data
- the method further includes:
- the training process of the feature extraction model is performed to obtain the updated feature extraction model.
- the addition of abnormal behavior data in the plurality of behavior data to the abnormal behavior data set includes:
- the abnormal behavior data indicated by the manual confirmation information is added to the abnormal behavior data set.
- the behavior data to be detected is multiple behavior data
- the method further includes:
- the training process of the feature extraction model is performed to obtain the updated feature extraction model, including:
- the training process of the feature extraction model is performed to obtain the updated feature extraction model.
- the method further includes:
- the spatial motion range is the spatial range covered by the target motion, and the duration of the third time period is less than the duration of the video;
- according to the spatial motion range and the video, image interception is performed in the video sequence corresponding to the third time period to obtain an image sequence of the video, the video sequence includes multiple frames of video images of the video, and the image sequence includes the area corresponding to the spatial motion range in the multi-frame video images;
- the image sequence of the multiple videos is used as the multiple behavior data.
- the method further includes:
- the image sequence of the video to which the abnormal behavior data belongs is displayed.
- obtaining the detection result of the behavior data according to the distance between the behavior feature of the behavior data and the center of the normal behavior feature and the distance threshold includes:
- when the distance is greater than or equal to the distance threshold, the behavior data is determined to be abnormal behavior data
- when the distance is less than the distance threshold, the behavior data is determined to be normal behavior data.
- the process of acquiring the normal behavior feature center includes:
- the normal behavior characteristic center is obtained.
- the behavior feature of each normal behavior data in the plurality of normal behavior data is represented by a feature vector
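- a minimal sketch of obtaining the normal behavior feature center from the feature vectors of multiple normal behavior data. Taking the mean of the feature vectors is an assumption here; the text says only that the center represents the behavior features within the feature space range.

```python
import numpy as np

def normal_feature_center(normal_features):
    """normal_features: array-like of shape (num_samples, feature_dim),
    one feature vector per normal behavior data. Returns the elementwise
    mean as the normal behavior feature center."""
    return np.asarray(normal_features, dtype=float).mean(axis=0)

center = normal_feature_center([[1.0, 2.0], [3.0, 4.0]])
print(center)  # [2. 3.]
```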
- FIG. 2 is a flowchart of an abnormal behavior detection method provided by an embodiment of the present disclosure. The method is executed by a computer device. Referring to FIG. 2, the method includes:
- the normal behavior data set contains multiple normal behavior data
- the abnormal behavior data set contains multiple abnormal behavior data
- step 201 may include: acquiring the normal behavior data set based on multiple first videos, the plurality of first videos being videos of the target performing normal behavior; and acquiring the abnormal behavior data set based on multiple second videos, the plurality of second videos being videos of the target performing abnormal behavior.
- the plurality of first videos and the plurality of second videos may be collected by relevant personnel according to preset normal behavior categories and stored on the computer device.
- regarding normal behavior and abnormal behavior: since the normal behavior category can be preset, the category of normal behavior can be arbitrarily specified according to the application scenario, and behaviors different from the normal behavior are considered abnormal behaviors.
- Normal behaviors can include, but are not limited to, normal walking, meditation, and a series of normal behaviors related to a specific scene, and abnormal behaviors can include, but are not limited to, riots, conflicts, and a series of specific scene-related behaviors.
- normal walking and meditation are normal behaviors
- behaviors such as riots and conflicts are abnormal behaviors.
- in the bank counter scene, it can be specified that the behavior of standing upright and counting banknotes is normal behavior, while the behavior of making phone calls and putting banknotes in pockets is abnormal behavior.
- the process of acquiring the normal behavior data set based on multiple first videos may include the following steps a1 to a3:
- Step a1: For each first video of the plurality of first videos, detect and track the target in the first video to obtain the spatial motion range of the target in the first time period, where the spatial motion range is the spatial range covered by the target motion, and the duration of the first time period is less than the duration of the first video.
- the first time period may be a time period of a video sequence of the first video, where the video sequence includes multiple frames of video images of the first video, such as f1, ..., fn.
- for example, if the first time period is 15 seconds and the first video is 3 minutes long, the first time period is any consecutive 15 seconds of the first video.
- Targets refer to people, cars, animals, etc.
- the computer device may use a target detection and tracking algorithm to detect and track the target in the first video, determine the position of the target at each moment in the first time period, and thereby determine the spatial motion range of the target; the spatial motion range is used to characterize the range of changes in the position of the target in the video images during the first time period.
- target detection and tracking algorithms include but are not limited to DPM (Deformable Part Model), FRCNN (Faster Region-based Convolutional Neural Network, a fast detection model based on candidate regions), YOLO (You Only Look Once), SSD (Single Shot MultiBox Detector), etc.
- the position of the target at each moment can be represented by adding a target frame to the video image.
- the form of the target frame includes but is not limited to a circumscribed rectangular frame, a circumscribed circular frame, and a circumscribed polygonal frame.
- for example, the position of a rectangular target frame can be represented by the coordinates of its upper left corner and the coordinates of its lower right corner (the origin of the rectangular coordinate system may be the upper left corner of the image where the target frame is located).
- after the computer device obtains a series of target frames, the spatial motion range of the target can be obtained based on the target frames.
- min({left_top_x}) represents the minimum horizontal coordinate of the upper left corners of the target frames
- min({left_top_y}) represents the minimum vertical coordinate of the upper left corners of the target frames
- max({right_bottom_x}) represents the maximum horizontal coordinate of the lower right corners of the target frames
- max({right_bottom_y}) represents the maximum vertical coordinate of the lower right corners of the target frames.
- Step a2: According to the spatial motion range and the first video, perform image interception in the first video sequence corresponding to the first time period to obtain the first image sequence of the first video.
- the first video sequence includes multi-frame video images of the first video, and the first image sequence includes regions corresponding to the spatial motion range in the multi-frame video images.
- the computer device may perform image interception on each frame of the video image included in the video sequence according to the spatial motion range in step a1, and intercept the area corresponding to the spatial motion range from each frame of the video image.
- the sequentially intercepted areas constitute the first image sequence, which can reflect the motion information of the target in time and space.
- for example, the spatial motion range is R_tube = [min({left_top_x}), min({left_top_y}), max({right_bottom_x}), max({right_bottom_y})]; then, in the video sequence f1, ..., fn, the rectangular area with upper left corner coordinates (min({left_top_x}), min({left_top_y})) and lower right corner coordinates (max({right_bottom_x}), max({right_bottom_y})) is intercepted from each frame of video image, and the sequentially intercepted rectangular areas constitute the first image sequence.
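- the computation in steps a1 and a2 can be sketched as follows; this is an illustrative example, with the box format (left_top_x, left_top_y, right_bottom_x, right_bottom_y) taken from the description and the frame layout (height, width) an assumption:

```python
import numpy as np

def spatial_motion_range(boxes):
    """boxes: list of per-moment target frames (ltx, lty, rbx, rby).
    Returns R_tube, the rectangle covering the whole target motion."""
    ltx = min(b[0] for b in boxes)
    lty = min(b[1] for b in boxes)
    rbx = max(b[2] for b in boxes)
    rby = max(b[3] for b in boxes)
    return ltx, lty, rbx, rby

def crop_sequence(frames, r_tube):
    """Intercept the R_tube rectangle from each frame f1..fn to form
    the image sequence (one behavior data item)."""
    ltx, lty, rbx, rby = r_tube
    return [f[lty:rby, ltx:rbx] for f in frames]

boxes = [(10, 10, 20, 20), (12, 8, 25, 22)]
r_tube = spatial_motion_range(boxes)
print(r_tube)  # (10, 8, 25, 22)
frames = [np.zeros((100, 100), dtype=np.uint8) for _ in range(3)]
image_sequence = crop_sequence(frames, r_tube)
print(image_sequence[0].shape)  # (14, 15)
```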
- Step a3: Use the first image sequences of the plurality of first videos as the normal behavior data set.
- the computer device can obtain the first image sequence of each first video in the plurality of first videos, and use each first image sequence as one piece of behavior data (also called a behavior sequence) to form the normal behavior data set.
- the process of acquiring the abnormal behavior data sets based on multiple second videos may include the following steps b1 to b3:
- Step b1: For each second video in the plurality of second videos, the computer device detects and tracks the target in the second video to obtain the spatial motion range of the target in the second time period, where the spatial motion range is the spatial range covered by the target motion, and the duration of the second time period is less than the duration of the second video.
- Step b2: The computer device performs image interception in the second video sequence corresponding to the second time period according to the spatial motion range and the second video to obtain a second image sequence of the second video, the second video sequence includes multiple frames of video images of the second video, and the second image sequence includes the area corresponding to the spatial motion range in the multi-frame video images.
- Step b3: Use the second image sequences of the plurality of second videos as the abnormal behavior data set.
- Step b1 to step b3 are the same as step a1 to step a3, and the specific process will not be repeated here.
- the durations of the second time period and the first time period may be equal or different.
- in an actual scene, the probability of normal behavior is much greater than the probability of abnormal behavior, so the first videos are easier to collect than the second videos. It can be understood that, since the number of first videos is much larger than the number of second videos, the normal behavior data set acquired from the first videos contains a large amount of normal behavior data, while the abnormal behavior data set acquired from the second videos contains a small amount of abnormal behavior data.
- each first behavior data pair includes two normal behavior data in the normal behavior data set
- Each second behavior data pair contains one normal behavior data in the normal behavior data set and one abnormal behavior data in the abnormal behavior data set.
- the computer device may construct a plurality of first behavior data pairs ("normal-normal" behavior data pairs) based on the normal behavior data set, and construct a plurality of second behavior data pairs ("normal-abnormal" behavior data pairs) based on the normal behavior data set and the abnormal behavior data set.
- the specific processing is as follows: the computer device can combine the normal behavior data in the normal behavior data set pairwise to obtain multiple first behavior data pairs. For each normal behavior data in the normal behavior data set, the computer device may combine it with each abnormal behavior data in the abnormal behavior data set to obtain multiple second behavior data pairs.
- in this way, the computer device can form a large number of "normal-normal" behavior data pairs and a large number of "normal-abnormal" behavior data pairs based on a large amount of normal behavior data and a small amount of abnormal behavior data.
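- the pair construction described above can be sketched as follows; the data stand-ins are illustrative, not from the patent:

```python
from itertools import combinations, product

normal_set = ["n1", "n2", "n3"]    # stand-ins for normal behavior data
abnormal_set = ["a1"]              # stand-ins for abnormal behavior data

# Pairwise combinations of normal behavior data -> "normal-normal" pairs
nn_pairs = list(combinations(normal_set, 2))
# Each normal datum paired with each abnormal datum -> "normal-abnormal" pairs
na_pairs = list(product(normal_set, abnormal_set))

print(nn_pairs)  # [('n1', 'n2'), ('n1', 'n3'), ('n2', 'n3')]
print(na_pairs)  # [('n1', 'a1'), ('n2', 'a1'), ('n3', 'a1')]
```

even with few abnormal samples, pairing each of them with every normal sample yields many "normal-abnormal" pairs, which is the point made above.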
- the subscript of a refers to the abnormal behavior category. The superscript is used to distinguish the abnormal behavior data under the same abnormal behavior category.
- the computer device trains based on NN_Pair and NA_Pair to obtain a feature extraction model. For the specific process, see subsequent steps 203 and 204.
- each first behavior feature pair includes the behavior features of two normal behavior data
- each second behavior feature pair includes the behavior feature of one normal behavior data and the behavior feature of one abnormal behavior data.
- the computer device may obtain the initial feature extraction model, and use the initial feature extraction model to perform behavior feature extraction on the normal behavior data in the plurality of first behavior data pairs to obtain a plurality of first behavior feature pairs (a first behavior feature pair includes the behavior features extracted from two normal behavior data).
- the computer device uses the initial feature extraction model to perform behavior feature extraction on the normal behavior data and the abnormal behavior data in the plurality of second behavior data pairs, respectively, to obtain a plurality of second behavior feature pairs (a second behavior feature pair includes the behavior feature extracted from one normal behavior data and the behavior feature extracted from one abnormal behavior data).
- the initial feature extraction model has the ability to output behavior features based on the input behavior data.
- the initial feature extraction model can be trained by a computer device or sent to the computer device by other devices.
- the training process of the initial feature extraction model may include: training the convolutional neural network based on multiple sample behavior data to obtain an initial feature extraction model.
- the computer device may separately input each normal behavior data in a first behavior data pair into the initial feature extraction model, and output the behavior feature pair of the first behavior data pair, that is, a first behavior feature pair.
- likewise, the computer device may separately input the normal behavior data and the abnormal behavior data in a second behavior data pair into the initial feature extraction model, and output the behavior feature pair of the second behavior data pair, that is, a second behavior feature pair.
- the computer device performs supervised training through a loss function to obtain the feature extraction model.
- the feature extraction model is used to map the behavior features of normal behavior data into the feature space range and map the behavior features of abnormal behavior data outside the feature space range.
- the distance between any two behavior features within the feature space range is less than the distance threshold.
- the feature space range refers to a high-dimensional spherical space enclosing the high-dimensional feature vectors of all normal behavior data, so that the behavior features of normal behavior data always lie inside the sphere.
- if the distance threshold is chosen as the diameter of the sphere, the distance between any two behavior features of normal behavior data is less than the distance threshold, while the behavior features of abnormal behavior data are mapped outside the sphere, so that their distance to the sphere center (called the center of the feature space, that is, the normal behavior feature center) is greater than the distance threshold.
- the computer device may use a preset distance algorithm to calculate the distance between the two behavior features contained in each first behavior feature pair to obtain the distance corresponding to each first behavior feature pair.
- the computer device may use a preset distance algorithm to calculate the distance between the two behavior features contained in each second behavior feature pair to obtain the distance corresponding to each second behavior feature pair.
- the distance includes but is not limited to Euclidean distance, cosine distance and Hamming distance.
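The three distance measures named above can be sketched as follows; this is a minimal illustration, assuming the behavior features are plain numeric vectors (and, for the Hamming distance, binarized codes of equal length).

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity (0 for parallel vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def hamming(a, b):
    """Number of differing positions, for binarized feature codes."""
    return sum(x != y for x, y in zip(a, b))
```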
- according to the calculated distances, the computer device can obtain the feature extraction model through supervised training with the loss function.
- the specific process includes: the computer device calculates the error between the distance corresponding to each first behavior feature pair and the first distance threshold. In this way, for the distances corresponding to the multiple first behavior feature pairs, the computer device can obtain multiple errors.
- similarly, the computer device calculates the error between the distance of each second behavior feature pair and the second distance threshold, so that for the distances of the multiple second behavior feature pairs, multiple errors can likewise be acquired.
- the first distance threshold is the expected distance between the two behavior features in a first behavior feature pair
- the second distance threshold is the expected distance between the two behavior features in a second behavior feature pair.
- the first distance threshold is less than the second distance threshold.
- the first distance threshold may be 0
- the second distance threshold may be greater than 0.
- the computer device can calculate the loss through the loss function according to all the errors obtained, for example by summing all the obtained errors. The computer device then back-propagates the summation result as a supervision signal to update the parameters of the initial feature extraction model, obtaining an updated initial feature extraction model. Then, based on the first behavior feature pairs and the second behavior feature pairs, it continues to train the updated initial feature extraction model until the loss meets the requirements, and the feature extraction model is obtained.
- the loss function includes but is not limited to Contrastive Loss (contrastive loss), Triplet Loss (triplet loss) and other loss functions.
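A minimal contrastive-style loss matching the supervision described above can be sketched as follows. This is an assumption-laden illustration, not the disclosure's exact formula: it takes the first distance threshold as 0 (pull "normal-normal" distances toward zero) and the second distance threshold as a positive margin (push "normal-abnormal" distances beyond it), with squared errors summed as the supervision signal.

```python
def contrastive_style_loss(nn_dists, na_dists, margin=1.0):
    """Sum-of-errors loss over behavior feature pair distances.

    nn_dists: distances of "normal-normal" feature pairs (target: 0,
              the first distance threshold).
    na_dists: distances of "normal-abnormal" feature pairs (target: at
              least `margin`, the second distance threshold).
    """
    loss = sum(d ** 2 for d in nn_dists)                      # pull together
    loss += sum(max(0.0, margin - d) ** 2 for d in na_dists)  # push apart
    return loss
```

Minimizing this loss reduces the distances corresponding to the first behavior feature pairs and increases those of the second pairs beyond the margin, which is the constraint stated for the training step. In practice a deep learning framework would back-propagate this loss through the 3D convolutional network.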
- the feature extraction model can be a 3D (three-dimensional) convolutional neural network model, including but not limited to resnet (Residual Neural Network, residual neural network) 18, resnet50, resnet101, resnet152, inception-v1 and VGG (Visual Geometry Group, visual Geometry group).
- in other words, the computer device uses loss-function-supervised training, with reducing the distance corresponding to each first behavior feature pair and increasing the distance corresponding to each second behavior feature pair as the constraints, to train the initial feature extraction model and finally obtain the feature extraction model.
- steps 202 to 204 are the training process of the feature extraction model.
- steps 202 to 204 are optional steps that need to be performed before the behavior data is detected, not every time behavior data is detected; it suffices to ensure that the training has been performed, and the feature extraction model obtained, by the time the behavior data is detected.
- Embodiments of the present disclosure may use behavior data to build behavior data pairs for specific scenarios for training.
- an end-to-end training scheme is adopted (end-to-end training refers to training directly from the given input data to the back-propagated training loss, without any human intervention in the middle of training), which improves the degree of automation of the system. Constraining the distances between the behavior features ensures that the normal behavior features are more compact and that there is a clear distance between the abnormal behavior features and the normal behavior features (abnormal behavior features are the behavior features of abnormal behavior data; normal behavior features are the behavior features of normal behavior data).
- This training method based on distance constraints can adapt to the problem of a wide variety of abnormal behavior data and insufficient data in real scenes, and has the ability to detect unknown abnormal behavior.
- after normal behavior data is input into the trained feature extraction model, it outputs behavior features within the feature space range; after abnormal behavior data is input into the trained feature extraction model, it outputs behavior features outside the feature space range.
- that is, if the input behavior data of the feature extraction model is normal behavior data, the output behavior feature will be within the feature space range, and if the input behavior data of the feature extraction model is abnormal behavior data, the output behavior feature will be outside the feature space range.
- the feature space range is a relatively small space range in the feature space.
- the computer device may acquire multiple behavior data based on massive videos, and use the multiple behavior data as the behavior data to be detected. These videos may be collected by relevant personnel and stored on the computer device; the behavior category of the targets in these videos is unknown.
- this step 205 may include: the computer device acquires multiple videos; for each of the multiple videos, it detects and tracks the target in the video, and acquires the spatial motion range of the target in a third time period of the video
- the spatial motion range is the spatial range covered by the target's motion, and the duration of the third time period is less than the duration of the video; according to the spatial motion range and the video, image interception is performed in the video sequence corresponding to the third time period to obtain the image sequence of the video.
- the video sequence includes multi-frame video images of the video.
- the image sequence includes the regions corresponding to the spatial motion range in the multi-frame video images;
- the image sequence of each video is used as the multiple behavior data.
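The image interception described above can be sketched as follows, under the assumption (not stated in the disclosure) that the tracker yields one axis-aligned bounding box per frame and that the spatial motion range is the union of those boxes over the time segment; frames are represented here as nested lists of pixels for simplicity.

```python
def spatial_motion_range(boxes):
    """Union of per-frame target boxes (x1, y1, x2, y2) over a time segment."""
    x1 = min(b[0] for b in boxes)
    y1 = min(b[1] for b in boxes)
    x2 = max(b[2] for b in boxes)
    y2 = max(b[3] for b in boxes)
    return (x1, y1, x2, y2)

def crop_tube(frames, boxes):
    """Crop the spatial motion range from each frame -> image sequence (tube).

    frames: list of 2-D pixel arrays (nested lists), one per frame of the
    video sequence in the chosen time segment.
    """
    x1, y1, x2, y2 = spatial_motion_range(boxes)
    return [[row[x1:x2] for row in frame[y1:y2]] for frame in frames]
```

The resulting image sequence is the space-time cube (tube) used as one behavior data item.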
- the process by which the computer device obtains the multiple behavior data is the same as that of obtaining the normal behavior data set and the abnormal behavior data set in step 201, and will not be repeated here. It should be noted that the duration of the third time period may be equal or unequal to that of the first time period, and may be equal or unequal to that of the second time period.
- this step 205 takes the case where the behavior data to be detected is multiple behavior data as an example for description. It can be understood that, in this step 205, the computer device may also obtain only one behavior data to be detected. The embodiments of the present disclosure do not limit this.
- the computer device may use the feature extraction model to extract behavior characteristics of multiple behavior data. For each behavior data, if the behavior data is normal behavior data, the distance between the behavior features extracted by the feature extraction model and the behavior features of each normal behavior data is small, for example, the distance is less than or equal to the distance threshold. If the behavior data is abnormal behavior data, the distance between the behavior features extracted by the feature extraction model and the behavior features of each normal behavior data is large, for example, the distance is greater than the distance threshold.
- the computer device obtains the detection result of the behavior data according to the distance between the behavior feature of the behavior data and the normal behavior feature center, and the distance threshold.
- the detection result is used to indicate whether the behavior data is abnormal behavior data.
- the normal behavior feature center is used to represent the behavior features within the feature space range.
- the computer device may use the normal behavior feature center to represent the behavior features within the feature space range, where the normal behavior features may be the behavior features extracted from a plurality of normal behavior data by the feature extraction model obtained through the above training.
- the process of acquiring the normal behavior feature center may include: acquiring multiple normal behavior data; for each normal behavior data in the multiple normal behavior data, inputting the normal behavior data into the feature extraction model and outputting the behavior feature of the normal behavior data; and, according to the behavior features of the multiple normal behavior data, obtaining the normal behavior feature center.
- the plurality of normal behavior data belong to the normal behavior data set.
- the behavior features of the multiple normal behavior data may be multiple feature vectors, such as 128-dimensional feature vectors.
- the computer device may calculate an average value of each feature vector in each dimension. In this way, for each dimension, an average value is obtained.
- the computer device uses the average value of each dimension of the feature vector as the normal behavior feature center.
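The per-dimension averaging described above can be sketched as follows; the function name is illustrative, and the feature vectors could be of any dimension (e.g. 128 as mentioned above).

```python
def normal_behavior_feature_center(features):
    """Per-dimension mean of normal behavior feature vectors.

    features: list of equal-length feature vectors extracted from
    normal behavior data by the trained feature extraction model.
    """
    dims = len(features[0])
    n = len(features)
    return [sum(f[d] for f in features) / n for d in range(dims)]
```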
- the computer device can separately calculate the distance between the behavior feature of each of the multiple behavior data and the normal behavior feature center; the distance includes but is not limited to the Euclidean distance, the cosine distance and the Hamming distance. For each behavior data in the multiple behavior data, the computer device compares the distance between the behavior feature of the behavior data and the normal behavior feature center with the distance threshold; when this distance is greater than the distance threshold, the computer device may determine that the behavior data is abnormal behavior data, that is, the detection result of the behavior data indicates that the behavior data is abnormal behavior data.
- otherwise, the computer device may determine that the behavior data is normal behavior data, that is, the detection result of the behavior data indicates that the behavior data is normal behavior data.
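The decision rule above reduces to a single threshold comparison; a minimal sketch, assuming Euclidean distance and plain numeric feature vectors:

```python
import math

def detect(feature, center, threshold):
    """Return True (abnormal) if the behavior feature lies farther than
    `threshold` from the normal behavior feature center, else False."""
    dist = math.sqrt(sum((x - c) ** 2 for x, c in zip(feature, center)))
    return dist > threshold
```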
- the embodiments of the present disclosure take, as an example for description, the case where the computer device obtains one normal behavior feature center and uses it to represent the behavior features of all normal behavior data.
- the computer device can also obtain multiple normal behavior feature centers, and each normal behavior feature center is used to represent the behavior characteristics of one or more types of normal behavior data, so that for each behavior data in the multiple behavior data, the computer device The distance between the behavior feature of the behavior data and the centers of the plurality of normal behavior features can be calculated separately.
- the computer device adopts a preset judgment algorithm to judge whether the behavior data is abnormal behavior data, thereby obtaining a detection result of the behavior data.
- the preset judgment algorithm includes but is not limited to KNN (K-Nearest Neighbor, nearest neighbor algorithm) algorithm and clustering algorithm.
- the computer device may determine behavior data whose detection result is abnormal behavior data among the plurality of behavior data.
- since the amount of the multiple behavior data is relatively large, by detecting the multiple behavior data and combining this with simple manual confirmation, it is possible to collect a batch of abnormal behavior data, and the collected abnormal behavior data may be used to expand the existing abnormal behavior data set.
- the computer device may display the image sequence of the video to which the abnormal behavior data belongs.
- the computer device can highlight the image sequence in the video (the image sequence is the image sequence of the video to which the abnormal behavior data belongs); the specific display method includes but is not limited to adding a rectangular frame to the spatial motion range of the target, that is, marking the area included in the image sequence with a rectangular frame.
- the image sequence is a spatiotemporal cubic image sequence, which may also be called a Tube/Tubelet, and the images it contains can reflect the motion information of the target in time and space.
- the abnormal behavior detection results are displayed by highlighting the space-time cube image sequence information in the original video. While displaying the abnormal behavior detection results, the abnormal behavior alarm results can also be displayed, for example by displaying "abnormal" and "alarm" text prompts in the area corresponding to the added rectangular frame.
- the user can know the start time, end time and spatial position of the abnormal behavior.
- the area included in the image sequence in the video can be displayed, and the start time and end time of the abnormal behavior are the start time and end time of the image sequence, that is, the times at which the first frame and the last frame of the multi-frame video images corresponding to the image sequence occur in the entire target video.
- the spatial position where the abnormal behavior occurs is the three-dimensional spatial position indicated by the area contained in the image sequence. Recording and displaying in the form of a space-time cube in this way can facilitate users in viewing and management.
- by displaying the alarm result, it is convenient for the user to confirm. Since the image sequence of the entire video is recorded, the user can also view other abnormal behavior detection results near the alarm time for more comprehensive correlation analysis.
- the embodiments of the present disclosure use a behavioral space-time cube structure for behavior detection, analysis and display.
- the behavioral space-time cube analysis method can effectively use the information of the target behavior, remove a large amount of irrelevant background information, and alleviate the problem of the target occupying too small a proportion of the frame.
- while maintaining stable recognition performance, it also greatly reduces the space resource consumption of the system.
- the real-time anomaly detection result display method adopted by the embodiments of the present disclosure can highlight abnormal behavior occurring in a long video stream, allows the abnormal behavior and alarm to be observed intuitively, and improves the level of intelligence.
- the computer device may use the abnormal behavior data in the plurality of behavior data to expand the abnormal behavior data set.
- Steps 205 to 208 are a process of acquiring multiple behavior data and using the feature extraction model to automatically determine abnormal behavior data among the multiple behavior data.
- the detection result may be further confirmed manually, and if manually confirmed as abnormal behavior data, the computer device may perform this step 209.
- the computer device can obtain manual confirmation information of the abnormal behavior data among the plurality of behavior data (the manual confirmation information is used to indicate whether the data is abnormal behavior data), and add the abnormal behavior data indicated by the manual confirmation information to the abnormal behavior data set.
- the computer device can display the detection result of the abnormal behavior data, and a user can input confirmation information through a manual confirmation information input interface.
- the computer device obtains the manual confirmation information for the detection result, and if the manual confirmation information indicates that certain behavior data is abnormal behavior data, it is added to the abnormal behavior data set, thereby expanding the abnormal behavior data set.
- the embodiment of the present disclosure takes the expansion of the abnormal behavior data set as an example for description.
- the computer device may also expand the normal behavior data set.
- in addition to determining the abnormal behavior data in the plurality of behavior data, the computer device may also determine the normal behavior data in the plurality of behavior data and add it to the normal behavior data set, so as to expand the normal behavior data set.
- that is, the computer device may add the normal behavior data from the multiple behavior data to the normal behavior data set, so that the normal behavior data set is also expanded.
- when the computer device adds the normal behavior data determined in step 208 to the normal behavior data set, it may add only the behavior data manually confirmed as normal behavior data to the normal behavior data set.
- after the computer device expands the abnormal behavior data set through steps 205 to 209, it can perform steps 202 to 204 (the feature extraction model training process) again to obtain an updated feature extraction model.
- the computer device may form new second behavior data pairs according to the normal behavior data set and the newly added abnormal behavior data in the abnormal behavior data set (that is, the abnormal behavior data added in step 209). For example, for each newly added abnormal behavior data, the abnormal behavior data may be combined with each normal behavior data in the normal behavior data set to obtain new second behavior data pairs.
- the computer device may retain the original multiple first behavior data pairs and the multiple second behavior data pairs unchanged, and obtain new second behavior data pairs, thereby achieving the purpose of expanding the "normal-abnormal" behavior data pairs.
- the computer device may determine a new first behavior data pair and a new second behavior data pair based on the normal behavior data in the updated normal behavior data set and the abnormal behavior data in the updated abnormal behavior data set.
- alternatively, the computer device can keep the original multiple first behavior data pairs and multiple second behavior data pairs unchanged, and acquire both new first behavior data pairs and new second behavior data pairs, thereby expanding both the "normal-normal" behavior data pairs and the "normal-abnormal" behavior data pairs.
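The pair-expansion step, keeping the existing pairs unchanged while pairing each newly confirmed abnormal datum with every normal datum, can be sketched as follows (names are illustrative):

```python
def expand_na_pairs(na_pairs, normal_set, new_abnormal):
    """Add new "normal-abnormal" pairs for newly confirmed abnormal data,
    leaving the original pairs unchanged."""
    extra = [(n, a) for a in new_abnormal for n in normal_set]
    return na_pairs + extra
```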
- the above steps 208 to 210 are optional steps.
- the behavior data manually confirmed as abnormal can be used to expand the abnormal behavior data set.
- the above feature extraction model training process is then executed again to obtain an updated feature extraction model.
- the computer device can use the updated feature extraction model to perform abnormal behavior detection on any video. Since the updated feature extraction model has learned more behavior features, it can achieve higher detection performance when detecting behavior data and detect a richer variety of abnormal behaviors.
- steps 205 to 210 are processes of acquiring multiple behavior data as test data, collecting abnormal behavior data, updating the training data set, and updating the feature extraction model based on the updated training data set. This process can be executed cyclically; each execution updates the abnormal behavior data set (or both the normal behavior data set and the abnormal behavior data set) to obtain an updated feature extraction model and better abnormal behavior detection performance.
- the feature extraction model provided by an embodiment of the present disclosure is an end-to-end deep learning model, and its acquisition process may be divided into a training phase (steps 201 to 204 above), a deployment phase (steps 205 to 207 above), and a feedback update phase (steps 208 to 210 above).
- referring to FIG. 3, a training flowchart of the feature extraction model is provided. As shown in FIG. 3, "normal-normal" behavior data pairs and "normal-abnormal" behavior data pairs are formed from the behavior data (or behavior sequences), and the feature extraction model is trained on these pairs. Referring to FIG. 4, a flowchart of abnormal behavior detection is provided. As shown in FIG. 4, behavior feature extraction is performed on the test behavior data (any of the plurality of behavior data in step 205), the distance to the normal behavior feature center is calculated, abnormal behavior is determined according to the distance, and abnormal behavior data is then collected. Referring to FIG. 5, a feedback update flowchart for abnormal behavior detection is provided.
- as shown in FIG. 5, an initial training data set (the plurality of first behavior data pairs and the plurality of second behavior data pairs in step 202) is used to execute the model training process in the training phase shown in FIG. 3; massive test data (the multiple behavior data in step 205) are then used to perform the abnormal behavior detection process in the deployment phase shown in FIG. 4; the training data set is updated according to the collected abnormal behavior data; the model training process shown in FIG. 3 is then executed again to obtain an updated feature extraction model, which is used to execute the abnormal behavior detection process shown in FIG. 4 on new test data to obtain more accurate detection performance.
- in this way, the behavior data pairs are formed, the feature extraction model is trained based on the behavior data pairs, and the feature extraction model can then be used to collect more abnormal behavior data, expand the "normal-abnormal" behavior data pairs, and update the feature extraction model.
- the above technical solution can train a feature extraction model based on a large amount of normal behavior data and a small amount of abnormal behavior data, and then detect a large amount of behavior data to collect more abnormal behavior data, solving the scarcity of abnormal behavior data in real scenes
- the feature extraction model trained on more abnormal behavior data has better abnormal behavior detection performance.
- the method provided by the embodiments of the present disclosure extracts the behavior feature of the behavior data through the feature extraction model, and determines whether the behavior data is abnormal behavior data according to the distance between the extracted behavior feature and the normal behavior feature center and the distance threshold. Because the feature extraction model is trained based on distance constraints,
- the behavior features extracted from normal behavior data through the feature extraction model lie in a relatively small feature space range, and the behavior features extracted from abnormal behavior data through the feature extraction model lie outside the feature space range, which ensures that the normal behavior features are relatively compact and that there is a clear distance between the abnormal behavior features and the normal behavior features. Since the difference between normal behavior and abnormal behavior is learned, this method of detecting abnormal behavior based on distance measurement has high accuracy.
- FIG. 6 is a schematic structural diagram of an abnormal behavior detection device provided by an embodiment of the present disclosure. Referring to FIG. 6, the device includes:
- the obtaining module 601 is used to obtain behavior data to be detected
- An extraction module 602 is used to input the behavior data into a feature extraction model and output behavior characteristics of the behavior data.
- the feature extraction model is used to output behavior features within a feature space range according to normal behavior data and output behavior features outside the feature space range according to abnormal behavior data; the distance between any two behavior features within the feature space range is less than the distance threshold;
- the obtaining module 601 is further used to obtain the detection result of the behavior data according to the distance between the behavior feature of the behavior data and the center of the normal behavior feature and the distance threshold, and the detection result is used to indicate whether the behavior data is abnormal behavior data.
- the normal behavior feature center is used to represent the behavior features within the feature space.
- the obtaining module 601 is further used to:
- each first behavior data pair includes two normal behavior data in the normal behavior data set
- each second behavior data pair includes one normal behavior data in the normal behavior data set and one abnormal behavior data in the abnormal behavior data set;
- each first behavior feature pair contains the behavior features of two normal behavior data
- each second behavior feature pair includes the behavior feature of one normal behavior data and the behavior feature of one abnormal behavior data
- the feature extraction model is obtained through supervised training by a loss function.
- the obtaining module 601 is further used to:
- the abnormal behavior data set is acquired, and the plurality of second videos are videos of the target performing abnormal behavior.
- the obtaining module 601 is used to:
- the target in the first video is detected and tracked to obtain the spatial motion range of the target in a first time period; the spatial motion range is the spatial range covered by the target's motion, and the duration of the first time period is less than the duration of the first video;
- according to the spatial motion range and the first video, image interception is performed in the first video sequence corresponding to the first time period to obtain a first image sequence of the first video; the first video sequence includes multi-frame video images of the first video, and the first image sequence includes the areas corresponding to the spatial motion range in the multi-frame video images;
- the first image sequence of the plurality of first videos is used as the normal behavior data set.
- the obtaining module 601 is used to:
- the target in the second video is detected and tracked to obtain the spatial motion range of the target in a second time period; the spatial motion range is the spatial range covered by the target's motion, and the duration of the second time period is less than the duration of the second video;
- according to the spatial motion range and the second video, image interception is performed in the second video sequence corresponding to the second time period to obtain a second image sequence of the second video; the second video sequence includes multi-frame video images of the second video, and the second image sequence includes the areas corresponding to the spatial motion range in the multi-frame video images;
- the second image sequence of the plurality of second videos is used as the abnormal behavior data set.
- the behavior data to be detected is multiple behavior data
- the obtaining module 601 is further configured to determine abnormal behavior data in the plurality of behavior data according to the detection results of the plurality of behavior data; add the abnormal behavior data in the plurality of behavior data to the abnormal behavior data set; Based on the updated abnormal behavior data set, the training process of the feature extraction model is performed to obtain the updated feature extraction model.
- the behavior data to be detected is multiple behavior data
- the acquisition module 601 is also used to:
- the training process of the feature extraction model is performed to obtain the updated feature extraction model.
- the obtaining module 601 is configured to obtain artificial confirmation information of abnormal behavior data among the plurality of behavior data; add abnormal behavior data indicated by the manual confirmation information to the abnormal behavior data set.
- the obtaining module 601 is further used to:
- the spatial motion range is the spatial range covered by the target's motion, and the duration of the third time period is less than the duration of the video;
- according to the spatial motion range and the video, image interception is performed in the video sequence corresponding to the third time period to obtain an image sequence of the video; the video sequence includes multi-frame video images of the video, and the image sequence includes the areas corresponding to the spatial motion range in the multi-frame video images;
- the image sequence of the multiple videos is used as the multiple behavior data.
- the device further includes:
- the display module 603 is configured to display the image sequence of the video to which the abnormal behavior data belongs during the process of playing the video to which the abnormal behavior data belongs.
- the obtaining module 601 is configured to:
- if the distance between the behavior feature and the normal behavior feature center is not less than the distance threshold, the behavior data is determined to be abnormal behavior data
- otherwise, the behavior data is determined to be normal behavior data.
- the obtaining module 601 is further configured to:
- the normal behavior feature center is obtained.
- the behavior feature of each piece of normal behavior data in the plurality of normal behavior data is represented by a feature vector
- the obtaining module 601 is configured to:
- the behavior feature of the behavior data is extracted through the feature extraction model, and whether the behavior data is abnormal behavior data is determined according to the distance between the extracted behavior feature and the normal behavior feature center and the distance threshold. Because the feature extraction model is trained based on a distance constraint, the behavior features extracted from normal behavior data lie within a relatively small feature space, while the behavior features extracted from abnormal behavior data lie outside that feature space. This ensures that normal behavior features are relatively compact and that there is a clear distance between abnormal behavior features and normal behavior features. Since the difference between normal behavior and abnormal behavior is learned, this method of detecting abnormal behavior based on distance measurement has high accuracy.
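As a minimal sketch only (not the patented implementation), the distance-based decision described above can be written as follows, assuming the feature extraction model has already produced a feature vector and the normal behavior feature center and distance threshold are given. The Euclidean metric and the `>=` comparison are assumptions; the text only requires comparing the distance against the threshold:

```python
import numpy as np

def detect(feature: np.ndarray, center: np.ndarray, threshold: float) -> str:
    """Distance-based decision: a feature at or beyond the threshold
    distance from the normal-behavior feature center is treated as abnormal."""
    distance = np.linalg.norm(feature - center)  # Euclidean distance to the center
    return "abnormal" if distance >= threshold else "normal"
```

For example, with `center = np.zeros(2)` and `threshold = 1.0`, a feature at `[3.0, 4.0]` lies at distance 5 and is reported as abnormal.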
- the abnormal behavior detection device provided in the above embodiments is illustrated only by the division of the above functional modules when detecting abnormal behavior. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
- the abnormal behavior detection device provided in the above embodiments belongs to the same concept as the abnormal behavior detection method embodiments; for the specific implementation process, refer to the method embodiments, and details are not repeated here.
- the computer device 800 may vary considerably in configuration or performance, and may include one or more processors (central processing units, CPUs) 801 and one or more memories 802, where at least one instruction is stored in the memory 802, and the at least one instruction is loaded and executed by the processor 801 to implement the abnormal behavior detection method provided by the foregoing method embodiments.
- the computer device 800 may also have components such as a wired or wireless network interface, a keyboard, and an input-output interface for input and output.
- the computer device 800 may also include other components for implementing device functions, which will not be repeated here.
- a computer-readable storage medium storing at least one instruction is also provided, for example, a memory storing at least one instruction, where the at least one instruction is executed by a processor to implement the abnormal behavior detection method in the above embodiments.
- the computer-readable storage medium may be a ROM (read-only memory), a RAM (random access memory), a CD-ROM (compact disc read-only memory), a magnetic tape, a floppy disk, an optical data storage device, or the like.
- the above program may be stored in a computer-readable storage medium.
- the storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Claims (29)
- An abnormal behavior detection method, characterized in that the method comprises: obtaining behavior data to be detected; inputting the behavior data into a feature extraction model and outputting a behavior feature of the behavior data, where the feature extraction model is configured to output behavior features within a feature space range for normal behavior data and to output behavior features outside the feature space range for abnormal behavior data, and the distance between any two behavior features within the feature space range is less than a distance threshold; and obtaining a detection result of the behavior data according to the distance between the behavior feature of the behavior data and a normal behavior feature center and the distance threshold, where the detection result indicates whether the behavior data is abnormal behavior data, and the normal behavior feature center represents the behavior features within the feature space range.
- The method according to claim 1, characterized in that the training process of the feature extraction model comprises: obtaining, according to a normal behavior data set and an abnormal behavior data set, a plurality of first behavior data pairs and a plurality of second behavior data pairs, where each first behavior data pair contains two pieces of normal behavior data from the normal behavior data set, and each second behavior data pair contains one piece of normal behavior data from the normal behavior data set and one piece of abnormal behavior data from the abnormal behavior data set; extracting a plurality of first behavior feature pairs of the plurality of first behavior data pairs and a plurality of second behavior feature pairs of the plurality of second behavior data pairs, where each first behavior feature pair contains the behavior features of two pieces of normal behavior data, and each second behavior feature pair contains the behavior feature of one piece of normal behavior data and the behavior feature of one piece of abnormal behavior data; and obtaining the feature extraction model through training supervised by a loss function according to the distance between the two behavior features contained in each first behavior feature pair and the distance between the two behavior features contained in each second behavior feature pair.
- The method according to claim 2, characterized in that, before the obtaining of the plurality of first behavior data pairs and the plurality of second behavior data pairs according to the normal behavior data set and the abnormal behavior data set, the method further comprises: obtaining the normal behavior data set based on a plurality of first videos, where the plurality of first videos are videos of a target performing normal behaviors; and obtaining the abnormal behavior data set based on a plurality of second videos, where the plurality of second videos are videos of the target performing abnormal behaviors.
- The method according to claim 3, characterized in that the obtaining of the normal behavior data set based on the plurality of first videos comprises: for each first video in the plurality of first videos, detecting and tracking a target in the first video, and obtaining a spatial motion range of the target within a first time period, where the spatial motion range is the spatial range covered by the motion of the target, and the duration of the first time period is less than the duration of the time period of the first video; performing, according to the spatial motion range and the first video, image cropping in a first video sequence corresponding to the first time period to obtain a first image sequence of the first video, where the first video sequence contains multiple frames of video images of the first video, and the first image sequence contains the regions in the multiple frames of video images corresponding to the spatial motion range; and using the first image sequences of the plurality of first videos as the normal behavior data set.
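The per-video cropping step of claims 4 and 5 amounts to cutting the region covered by the target's motion out of every frame in the selected time period. A minimal sketch, assuming the frames form a `(T, H, W, C)` array and the spatial motion range is an axis-aligned bounding box — a representation the claims do not fix:

```python
import numpy as np

def crop_sequence(frames: np.ndarray, box: tuple) -> np.ndarray:
    """Crop every frame of a (T, H, W, C) clip to the spatial motion range.

    `box` is (y0, y1, x0, x1): the bounding box covering the target's motion
    over the whole time period (an assumed box representation).
    """
    y0, y1, x0, x1 = box
    return frames[:, y0:y1, x0:x1, :]  # same crop applied to all T frames
```

The resulting image sequence is what the claims collect into the normal or abnormal behavior data set.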
- The method according to claim 3, characterized in that the obtaining of the abnormal behavior data set based on the plurality of second videos comprises: for each second video in the plurality of second videos, detecting and tracking a target in the second video, and obtaining a spatial motion range of the target within a second time period, where the spatial motion range is the spatial range covered by the motion of the target, and the duration of the second time period is less than the duration of the time period of the second video; performing, according to the spatial motion range and the second video, image cropping in a second video sequence corresponding to the second time period to obtain a second image sequence of the second video, where the second video sequence contains multiple frames of video images of the second video, and the second image sequence contains the regions in the multiple frames of video images corresponding to the spatial motion range; and using the second image sequences of the plurality of second videos as the abnormal behavior data set.
- The method according to claim 2, characterized in that the behavior data to be detected comprises multiple pieces of behavior data, and after the obtaining of the detection result of the behavior data according to the distance between the behavior feature of the behavior data and the normal behavior feature center, the method further comprises: determining abnormal behavior data among the multiple pieces of behavior data according to the respective detection results of the multiple pieces of behavior data; adding the abnormal behavior data among the multiple pieces of behavior data to the abnormal behavior data set; and performing the training process of the feature extraction model based on the updated abnormal behavior data set to obtain an updated feature extraction model.
- The method according to claim 6, characterized in that the behavior data to be detected comprises multiple pieces of behavior data; after the obtaining of the detection result of the behavior data according to the distance between the behavior feature of the behavior data and the normal behavior feature center, the method further comprises: determining normal behavior data among the multiple pieces of behavior data according to the respective detection results of the multiple pieces of behavior data; and adding the normal behavior data among the multiple pieces of behavior data to the normal behavior data set; and the performing of the training process of the feature extraction model based on the updated abnormal behavior data set to obtain the updated feature extraction model comprises: performing the training process of the feature extraction model based on the updated abnormal behavior data set and the updated normal behavior data set to obtain the updated feature extraction model.
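Claims 6 and 7 describe feeding detection results back into the training sets before retraining the feature extraction model. The bookkeeping can be sketched as follows (illustrative names; the retraining pass itself is omitted):

```python
def update_sets(detections, normal_set, abnormal_set):
    """Route each (data, is_abnormal) detection result into the matching
    training set; a retraining pass over the feature extraction model
    would follow (not shown)."""
    for data, is_abnormal in detections:
        (abnormal_set if is_abnormal else normal_set).append(data)
    return normal_set, abnormal_set
```

Repeating detect / update / retrain in this way grows both data sets from field data, which is the incremental update the claims describe.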
- The method according to claim 6, characterized in that the method further comprises: obtaining multiple videos; for each video in the multiple videos, detecting and tracking a target in the video, and obtaining a spatial motion range of the target within a third time period, where the spatial motion range is the spatial range covered by the motion of the target, and the duration of the third time period is less than the duration of the time period of the video; performing, according to the spatial motion range and the video, image cropping in a video sequence corresponding to the third time period to obtain an image sequence of the video, where the video sequence contains multiple frames of video images of the video, and the image sequence contains the regions in the multiple frames of video images corresponding to the spatial motion range; and using the image sequences of the multiple videos as the multiple pieces of behavior data.
- The method according to claim 1, characterized in that the obtaining process of the normal behavior feature center comprises: obtaining multiple pieces of normal behavior data; for each piece of normal behavior data in the multiple pieces of normal behavior data, inputting the normal behavior data into the feature extraction model and outputting the behavior feature of the normal behavior data; and obtaining the normal behavior feature center according to the behavior features of the multiple pieces of normal behavior data.
- The method according to claim 9, characterized in that the behavior feature of each piece of normal behavior data in the multiple pieces of normal behavior data is represented by a feature vector; and the obtaining of the normal behavior feature center according to the behavior features of the multiple pieces of normal behavior data comprises: calculating, for the feature vectors of the multiple pieces of normal behavior data, an average value in each dimension to obtain a target feature vector represented by the set of average values composed of the average value of each dimension; and using the target feature vector as the normal behavior feature center.
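The center computation of claim 10 — a per-dimension average over the normal-behavior feature vectors — is a column-wise mean:

```python
import numpy as np

def normal_center(features: np.ndarray) -> np.ndarray:
    """Per-dimension mean of an (N, D) matrix of normal-behavior feature
    vectors, yielding the (D,) normal behavior feature center."""
    return features.mean(axis=0)
```

For two features `[0, 2]` and `[2, 4]`, the center is `[1, 3]`; new samples are then compared against this vector via the distance threshold.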
- An abnormal behavior detection device, characterized in that the device comprises: an obtaining module, configured to obtain behavior data to be detected; and an extraction module, configured to input the behavior data into a feature extraction model and output a behavior feature of the behavior data, where the feature extraction model is configured to output behavior features within a feature space range for normal behavior data and to output behavior features outside the feature space range for abnormal behavior data, and the distance between any two behavior features within the feature space range is less than a distance threshold; where the obtaining module is further configured to obtain a detection result of the behavior data according to the distance between the behavior feature of the behavior data and a normal behavior feature center and the distance threshold, the detection result indicates whether the behavior data is abnormal behavior data, and the normal behavior feature center represents the behavior features within the feature space range.
- The device according to claim 11, characterized in that the obtaining module is further configured to: obtain, according to a normal behavior data set and an abnormal behavior data set, a plurality of first behavior data pairs and a plurality of second behavior data pairs, where each first behavior data pair contains two pieces of normal behavior data from the normal behavior data set, and each second behavior data pair contains one piece of normal behavior data from the normal behavior data set and one piece of abnormal behavior data from the abnormal behavior data set; extract a plurality of first behavior feature pairs of the plurality of first behavior data pairs and a plurality of second behavior feature pairs of the plurality of second behavior data pairs, where each first behavior feature pair contains the behavior features of two pieces of normal behavior data, and each second behavior feature pair contains the behavior feature of one piece of normal behavior data and the behavior feature of one piece of abnormal behavior data; and obtain the feature extraction model through training supervised by a loss function according to the distance between the two behavior features contained in each first behavior feature pair and the distance between the two behavior features contained in each second behavior feature pair.
- The device according to claim 12, characterized in that the obtaining module is further configured to: obtain the normal behavior data set based on a plurality of first videos, where the plurality of first videos are videos of a target performing normal behaviors; and obtain the abnormal behavior data set based on a plurality of second videos, where the plurality of second videos are videos of the target performing abnormal behaviors.
- The device according to claim 13, characterized in that the obtaining module is configured to: for each first video in the plurality of first videos, detect and track a target in the first video, and obtain a spatial motion range of the target within a first time period, where the spatial motion range is the spatial range covered by the motion of the target, and the duration of the first time period is less than the duration of the time period of the first video; perform, according to the spatial motion range and the first video, image cropping in a first video sequence corresponding to the first time period to obtain a first image sequence of the first video, where the first video sequence contains multiple frames of video images of the first video, and the first image sequence contains the regions in the multiple frames of video images corresponding to the spatial motion range; and use the first image sequences of the plurality of first videos as the normal behavior data set.
- The device according to claim 13, characterized in that the obtaining module is configured to: for each second video in the plurality of second videos, detect and track a target in the second video, and obtain a spatial motion range of the target within a second time period, where the spatial motion range is the spatial range covered by the motion of the target, and the duration of the second time period is less than the duration of the time period of the second video; perform, according to the spatial motion range and the second video, image cropping in a second video sequence corresponding to the second time period to obtain a second image sequence of the second video, where the second video sequence contains multiple frames of video images of the second video, and the second image sequence contains the regions in the multiple frames of video images corresponding to the spatial motion range; and use the second image sequences of the plurality of second videos as the abnormal behavior data set.
- The device according to claim 12, characterized in that the behavior data to be detected comprises multiple pieces of behavior data; and the obtaining module is further configured to: determine abnormal behavior data among the multiple pieces of behavior data according to the respective detection results of the multiple pieces of behavior data; add the abnormal behavior data among the multiple pieces of behavior data to the abnormal behavior data set; and perform the training process of the feature extraction model based on the updated abnormal behavior data set to obtain an updated feature extraction model.
- The device according to claim 16, characterized in that the obtaining module is further configured to: obtain multiple videos; for each video in the multiple videos, detect and track a target in the video, and obtain a spatial motion range of the target within a third time period, where the spatial motion range is the spatial range covered by the motion of the target, and the duration of the third time period is less than the duration of the time period of the video; perform, according to the spatial motion range and the video, image cropping in a video sequence corresponding to the third time period to obtain an image sequence of the video, where the video sequence contains multiple frames of video images of the video, and the image sequence contains the regions in the multiple frames of video images corresponding to the spatial motion range; and use the image sequences of the multiple videos as the multiple pieces of behavior data.
- The device according to claim 11, characterized in that the obtaining module is further configured to: obtain multiple pieces of normal behavior data; for each piece of normal behavior data in the multiple pieces of normal behavior data, input the normal behavior data into the feature extraction model and output the behavior feature of the normal behavior data; and obtain the normal behavior feature center according to the behavior features of the multiple pieces of normal behavior data.
- The device according to claim 18, characterized in that the behavior feature of each piece of normal behavior data in the multiple pieces of normal behavior data is represented by a feature vector; and the obtaining module is configured to: calculate, for the feature vectors of the multiple pieces of normal behavior data, an average value in each dimension to obtain a target feature vector represented by the set of average values composed of the average value of each dimension; and use the target feature vector as the normal behavior feature center.
- A computer device, characterized by comprising a processor and a memory, where the memory is configured to store at least one instruction, and the processor executes the at least one instruction stored in the memory to implement: obtaining behavior data to be detected; inputting the behavior data into a feature extraction model and outputting a behavior feature of the behavior data, where the feature extraction model is configured to output behavior features within a feature space range for normal behavior data and to output behavior features outside the feature space range for abnormal behavior data, and the distance between any two behavior features within the feature space range is less than a distance threshold; and obtaining a detection result of the behavior data according to the distance between the behavior feature of the behavior data and a normal behavior feature center and the distance threshold, where the detection result indicates whether the behavior data is abnormal behavior data, and the normal behavior feature center represents the behavior features within the feature space range.
- The computer device according to claim 20, characterized in that the processor executes the at least one instruction stored in the memory and is further configured to implement: obtaining, according to a normal behavior data set and an abnormal behavior data set, a plurality of first behavior data pairs and a plurality of second behavior data pairs, where each first behavior data pair contains two pieces of normal behavior data from the normal behavior data set, and each second behavior data pair contains one piece of normal behavior data from the normal behavior data set and one piece of abnormal behavior data from the abnormal behavior data set; extracting a plurality of first behavior feature pairs of the plurality of first behavior data pairs and a plurality of second behavior feature pairs of the plurality of second behavior data pairs, where each first behavior feature pair contains the behavior features of two pieces of normal behavior data, and each second behavior feature pair contains the behavior feature of one piece of normal behavior data and the behavior feature of one piece of abnormal behavior data; and obtaining the feature extraction model through training supervised by a loss function according to the distance between the two behavior features contained in each first behavior feature pair and the distance between the two behavior features contained in each second behavior feature pair.
- The computer device according to claim 21, characterized in that the processor executes the at least one instruction stored in the memory and is further configured to implement: obtaining the normal behavior data set based on a plurality of first videos, where the plurality of first videos are videos of a target performing normal behaviors; and obtaining the abnormal behavior data set based on a plurality of second videos, where the plurality of second videos are videos of the target performing abnormal behaviors.
- The computer device according to claim 22, characterized in that the processor executes the at least one instruction stored in the memory and is further configured to implement: for each first video in the plurality of first videos, detecting and tracking a target in the first video, and obtaining a spatial motion range of the target within a first time period, where the spatial motion range is the spatial range covered by the motion of the target, and the duration of the first time period is less than the duration of the time period of the first video; performing, according to the spatial motion range and the first video, image cropping in a first video sequence corresponding to the first time period to obtain a first image sequence of the first video, where the first video sequence contains multiple frames of video images of the first video, and the first image sequence contains the regions in the multiple frames of video images corresponding to the spatial motion range; and using the first image sequences of the plurality of first videos as the normal behavior data set.
- The computer device according to claim 22, characterized in that the processor executes the at least one instruction stored in the memory and is further configured to implement: for each second video in the plurality of second videos, detecting and tracking a target in the second video, and obtaining a spatial motion range of the target within a second time period, where the spatial motion range is the spatial range covered by the motion of the target, and the duration of the second time period is less than the duration of the time period of the second video; performing, according to the spatial motion range and the second video, image cropping in a second video sequence corresponding to the second time period to obtain a second image sequence of the second video, where the second video sequence contains multiple frames of video images of the second video, and the second image sequence contains the regions in the multiple frames of video images corresponding to the spatial motion range; and using the second image sequences of the plurality of second videos as the abnormal behavior data set.
- The computer device according to claim 21, characterized in that the behavior data to be detected comprises multiple pieces of behavior data, and the processor executes the at least one instruction stored in the memory and is further configured to implement: determining abnormal behavior data among the multiple pieces of behavior data according to the respective detection results of the multiple pieces of behavior data; adding the abnormal behavior data among the multiple pieces of behavior data to the abnormal behavior data set; and performing the training process of the feature extraction model based on the updated abnormal behavior data set to obtain an updated feature extraction model.
- The computer device according to claim 25, characterized in that the processor executes the at least one instruction stored in the memory and is further configured to implement: obtaining multiple videos; for each video in the multiple videos, detecting and tracking a target in the video, and obtaining a spatial motion range of the target within a third time period, where the spatial motion range is the spatial range covered by the motion of the target, and the duration of the third time period is less than the duration of the time period of the video; performing, according to the spatial motion range and the video, image cropping in a video sequence corresponding to the third time period to obtain an image sequence of the video, where the video sequence contains multiple frames of video images of the video, and the image sequence contains the regions in the multiple frames of video images corresponding to the spatial motion range; and using the image sequences of the multiple videos as the multiple pieces of behavior data.
- The computer device according to claim 20, characterized in that the processor executes the at least one instruction stored in the memory and is further configured to implement: obtaining multiple pieces of normal behavior data; for each piece of normal behavior data in the multiple pieces of normal behavior data, inputting the normal behavior data into the feature extraction model and outputting the behavior feature of the normal behavior data; and obtaining the normal behavior feature center according to the behavior features of the multiple pieces of normal behavior data.
- The computer device according to claim 27, characterized in that the behavior feature of each piece of normal behavior data in the multiple pieces of normal behavior data is represented by a feature vector, and the processor executes the at least one instruction stored in the memory to implement: calculating, for the feature vectors of the multiple pieces of normal behavior data, an average value in each dimension to obtain a target feature vector represented by the set of average values composed of the average value of each dimension; and using the target feature vector as the normal behavior feature center.
- A computer-readable storage medium, characterized in that a computer program is stored in the storage medium, and when the computer program is executed by a processor, the method steps of any one of claims 1 to 10 are implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811581954.0A CN111353352B (zh) | 2018-12-24 | 2018-12-24 | Abnormal behavior detection method and device |
CN201811581954.0 | 2018-12-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020135392A1 true WO2020135392A1 (zh) | 2020-07-02 |
Family
ID=71127632
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/127797 WO2020135392A1 (zh) | 2018-12-24 | 2019-12-24 | Abnormal behavior detection method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111353352B (zh) |
WO (1) | WO2020135392A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113295635A (zh) * | 2021-05-27 | 2021-08-24 | 河北先河环保科技股份有限公司 | Water quality pollution alarm method based on a dynamically updated data set |
CN115690658B (zh) * | 2022-11-04 | 2023-08-08 | 四川大学 | Semi-supervised video abnormal behavior detection method incorporating prior knowledge |
CN116049755A (zh) * | 2023-03-15 | 2023-05-02 | 阿里巴巴(中国)有限公司 | Time series detection method, electronic device and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101571914A (zh) * | 2008-04-28 | 2009-11-04 | 株式会社日立制作所 | Abnormal behavior detection device |
CN103761748A (zh) * | 2013-12-31 | 2014-04-30 | 北京邮电大学 | Abnormal behavior detection method and device |
CN108809745A (zh) * | 2017-05-02 | 2018-11-13 | ***通信集团重庆有限公司 | User abnormal behavior detection method, device and *** |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160132754A1 (en) * | 2012-05-25 | 2016-05-12 | The Johns Hopkins University | Integrated real-time tracking system for normal and anomaly tracking and the methods therefor |
KR20150029006A (ko) * | 2012-06-29 | 2015-03-17 | Behavioral Recognition Systems, Inc. | Unsupervised learning of feature anomalies for a video surveillance system |
CN105184818B (zh) * | 2015-09-06 | 2018-05-18 | 山东华宇航天空间技术有限公司 | Video surveillance abnormal behavior detection method and detection *** |
CN105787472B (zh) * | 2016-03-28 | 2019-02-15 | 电子科技大学 | Abnormal behavior detection method based on spatio-temporal Laplacian eigenmap learning |
CN106101116B (zh) * | 2016-06-29 | 2019-01-08 | 东北大学 | User behavior anomaly detection *** and method based on principal component analysis |
CN107590427B (zh) * | 2017-05-25 | 2020-11-24 | 杭州电子科技大学 | Surveillance video abnormal event detection method based on spatio-temporal interest point denoising |
CN107766823B (zh) * | 2017-10-25 | 2020-06-26 | 中国科学技术大学 | Abnormal behavior detection method in videos based on key-region feature learning |
CN108462708B (zh) * | 2018-03-16 | 2020-12-08 | 西安电子科技大学 | Behavior sequence detection method based on HDP-HMM |
CN108737410B (zh) * | 2018-05-14 | 2021-04-13 | 辽宁大学 | Feature-association-based abnormal behavior detection method for partially-known industrial communication protocols |
- 2018-12-24: CN application CN201811581954.0A filed; granted as patent CN111353352B (active)
- 2019-12-24: PCT application PCT/CN2019/127797 (WO2020135392A1) filed
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111950363A (zh) * | 2020-07-07 | 2020-11-17 | 中国科学院大学 | Video anomaly detection method based on open data filtering and domain adaptation |
CN111950363B (zh) * | 2020-07-07 | 2022-11-29 | 中国科学院大学 | Video anomaly detection method based on open data filtering and domain adaptation |
CN111860429A (zh) * | 2020-07-30 | 2020-10-30 | 科大讯飞股份有限公司 | Blast furnace tuyere anomaly detection method and device, electronic device and storage medium |
CN111860429B (zh) * | 2020-07-30 | 2024-02-13 | 科大讯飞股份有限公司 | Blast furnace tuyere anomaly detection method and device, electronic device and storage medium |
CN112115769A (zh) * | 2020-08-05 | 2020-12-22 | 西安交通大学 | Video-based unsupervised sparse-crowd abnormal behavior detection algorithm |
CN112686114A (zh) * | 2020-12-23 | 2021-04-20 | 杭州海康威视数字技术股份有限公司 | Behavior detection method, device and equipment |
CN112966589A (zh) * | 2021-03-03 | 2021-06-15 | 中润油联天下网络科技有限公司 | Behavior recognition method for hazardous areas |
CN113673342A (zh) * | 2021-07-19 | 2021-11-19 | 浙江大华技术股份有限公司 | Behavior detection method, electronic device and storage medium |
CN116049818A (zh) * | 2023-02-21 | 2023-05-02 | 吕艳娜 | Big data anomaly analysis method and *** for digital online business |
CN116049818B (zh) * | 2023-02-21 | 2024-03-01 | 天翼安全科技有限公司 | Big data anomaly analysis method and *** for digital online business |
Also Published As
Publication number | Publication date |
---|---|
CN111353352A (zh) | 2020-06-30 |
CN111353352B (zh) | 2023-05-09 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19903122 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19903122 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02.02.2022) |
|