CN113408352A - Pedestrian abnormal behavior detection method, image processing device and storage device - Google Patents


Info

Publication number
CN113408352A
CN113408352A
Authority
CN
China
Prior art keywords
optical flow
detection area
frame image
image
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110542200.XA
Other languages
Chinese (zh)
Inventor
库浩华
潘华东
郑佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110542200.XA
Publication of CN113408352A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/269: Analysis of motion using gradient-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10016: Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a pedestrian abnormal behavior detection method, an image processing device and a storage device. The pedestrian abnormal behavior detection method comprises: respectively performing target detection on multiple frames of images captured by a camera device to obtain a detection area corresponding to a target pedestrian in each frame of image; respectively performing optical flow detection on the detection area of each frame of image to obtain optical flow information of the detection area of each frame of image, wherein the multiple frames of images comprise a target frame image and at least one sample frame image; analyzing the optical flow information of the target frame image by using the optical flow information of the at least one sample frame image; and determining whether the target pedestrian has abnormal behavior based on the result of the analysis. With this scheme, abnormal pedestrian behavior can be accurately detected.

Description

Pedestrian abnormal behavior detection method, image processing device and storage device
Technical Field
The present application is a divisional application of the patent application entitled "Pedestrian abnormal behavior detection method, image processing apparatus, and storage apparatus", filed on May 28, 2019 with application number 2019104528014. The application relates to the technical field of computer vision, and in particular to a pedestrian abnormal behavior detection method, an image processing apparatus, and a storage apparatus.
Background
As surveillance cameras are deployed ever more densely in every corner of a city, intelligent monitoring systems continue to expand their service functions, gradually replacing human monitoring personnel in completing a large amount of repetitive monitoring work.
The service function of detecting abnormal pedestrian behavior is particularly important in crowded areas such as prisons, stations, squares and markets. For example, when abnormal behaviors such as fighting and brawling occur, the relevant departments can intervene in a timely manner, reducing loss of life and property and contributing to social stability and regional safety; the function therefore has great application value and broad application prospects. Accordingly, how to accurately detect whether a pedestrian exhibits abnormal behavior has become a problem to be solved urgently in intelligent monitoring services.
Disclosure of Invention
The technical problem mainly solved by the application is to provide a pedestrian abnormal behavior detection method, an image processing device and a storage device, which can accurately detect the abnormal behavior of a pedestrian.
In order to solve the above problems, a first aspect of the present application provides a method for detecting abnormal behaviors of pedestrians, including performing target detection on multiple frames of images captured by an image capture device, respectively, to obtain a detection region corresponding to a target pedestrian in each frame of image; respectively carrying out optical flow detection on the detection area of each frame of image to obtain optical flow information of the detection area of each frame of image, wherein the multi-frame image comprises a target frame image and at least one frame of sample frame image; analyzing optical flow information of a target frame image by using the optical flow information of at least one frame of sample frame image; and determining whether the target pedestrian has abnormal behavior based on the result of the analysis.
In order to solve the above problem, a second aspect of the present application provides an image processing apparatus including a memory and a processor coupled to each other; the processor is adapted to execute the program instructions stored by the memory to implement the method of the first aspect described above.
In order to solve the above problem, a third aspect of the present application provides a storage device storing program instructions executable by a processor, the program instructions being for implementing the method of the first aspect.
In the above scheme, target detection is performed separately on the multiple frames of images captured by the camera device, including the target frame image and the at least one sample frame image, to obtain the detection area corresponding to the target pedestrian in each frame of image. Optical flow detection is then performed on the detection areas, yielding the optical flow information of the target frame image and of the at least one sample frame image. The optical flow information of the target frame image is analyzed using the optical flow information of the sample frame images, so that changes in the optical flow information of the target frame image relative to the sample frame images can be found. In this way, regardless of the scene in which the target pedestrian is located, the optical flow information of the sample frame images capturing that pedestrian can serve as the current behavior judgment standard against which the optical flow information of the target frame image is judged, so that whether the target pedestrian has abnormal behavior can be accurately detected.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of a method for detecting abnormal behavior of a pedestrian according to the present application;
FIG. 2 is a schematic flow chart of another embodiment of the method for detecting abnormal behavior of pedestrians according to the present application;
FIG. 3 is a flowchart illustrating an embodiment of step S21 in FIG. 2;
FIG. 4 is a flowchart illustrating an embodiment of step S23 in FIG. 2;
FIG. 5 is a flowchart illustrating an embodiment of step S11 in FIG. 1;
FIG. 6 is a schematic flow chart diagram illustrating a method for detecting abnormal pedestrian behavior according to yet another embodiment of the present application;
FIG. 7 is a flowchart illustrating an embodiment of step S61 in FIG. 6;
FIG. 8 is a flowchart illustrating an embodiment of step S62 in FIG. 6;
FIG. 9 is a schematic flow chart illustrating another embodiment of step S61 in FIG. 6;
FIG. 10 is a schematic flow chart diagram illustrating a method for detecting abnormal pedestrian behavior according to yet another embodiment of the present application;
FIG. 11 is a block diagram of an embodiment of an image processing apparatus according to the present application;
FIG. 12 is a block diagram of an embodiment of a memory device according to the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter objects are in an "or" relationship. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flow chart of an embodiment of a method for detecting abnormal behavior of a pedestrian according to the present application. Specifically, the method may include:
step S11: and respectively carrying out target detection on the multi-frame images shot by the camera device to obtain a detection area corresponding to the target pedestrian in each frame of image.
The camera device may be a night vision camera, an infrared camera, or the like, and different types of camera device can be selected for different application scenes. For example, for dark, poorly lit places, the camera device may be a night vision camera or an infrared camera; for well-lit indoor places, it may be an ordinary digital camera or a network camera; and for unsheltered outdoor scenes, it may be a waterproof camera, which is not specifically limited in this embodiment.
The multiple frames of images may be 2 frames, 3 frames, 4 frames and so on, which is not enumerated here in this embodiment.
The detection area may be a rectangle surrounding the pedestrian in each frame of image. Alternatively, the detection area may be an irregular figure; in one implementation scenario, the detection area may be the contour area of the pedestrian, in order to perform optical flow detection on the detection areas of the multiple frames of images more accurately and thus obtain the optical flow information of the target pedestrian more accurately.
The determination of the detection area can be realized by pedestrian detection technology, which uses computer vision to judge whether a pedestrian exists in an image or video sequence and, if so, to give an accurate location. Existing pedestrian detection methods mainly fall into three categories: methods based on global features, methods based on human body parts, and methods based on stereoscopic vision. Representative global-feature-based methods include those based on Haar wavelet features, on HOG (Histogram of Oriented Gradients) features, and on contour templates. Pedestrian detection is prior art in the field of computer vision and is not described further in this embodiment.
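The HOG idea mentioned above can be illustrated with a toy single-cell orientation histogram. This is a simplified sketch for intuition only, not the patent's detector; the function name, cell size and bin count are assumptions:

```python
import numpy as np

def hog_cell_histogram(cell, n_bins=9):
    """Orientation histogram of gradients for one cell (a toy HOG building block).

    `cell` is a small 2-D grayscale array. Bins cover 0..180 degrees
    (unsigned gradients) and votes are weighted by gradient magnitude.
    """
    gy, gx = np.gradient(cell.astype(float))       # spatial gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0), weights=mag)
    return hist
```

A full HOG descriptor would concatenate many such cell histograms with block normalization; a horizontal intensity ramp puts all its votes in the first (0 degree) bin.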
Step S12: and respectively carrying out optical flow detection on the detection area of each frame of image to obtain optical flow information of the detection area of each frame of image, wherein the multi-frame image comprises a target frame image and at least one frame of sample frame image.
Optical Flow is a concept in object motion detection in the visual domain, used to describe the motion of an observed object, surface, or edge caused by motion relative to an observer; it refers to the apparent velocity of motion across the image surface. The human eye can perceive a moving object because, as the object moves, a series of continuously changing images is formed on the retina, and this changing information continuously "flows" through the retina at different times, like a flow of light, hence the name optical flow.
Optical flow detection plays an important role in the fields of pattern recognition, computer vision, and other image processing. Specifically, optical flow detection may be used to detect motion, object cuts, computation of collision time and object inflation, motion compensated encoding, or stereo measurements through object surfaces and edges, among others.
Methods for optical flow detection currently include gradient-based methods, matching-based methods, energy-based methods, and the like; typical examples are the Horn-Schunck method and the Lucas-Kanade method. Optical flow detection is prior art in the field and is not described further in this embodiment.
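As an illustration of the Lucas-Kanade idea, the sketch below solves the single-window least-squares step in NumPy. It is a deliberately simplified assumption (no pyramids, no corner selection, first-order gradients only), not the patent's implementation:

```python
import numpy as np

def lucas_kanade_flow(prev, curr, y, x, win=2):
    """Estimate the flow (u, v) at pixel (y, x) from two frames.

    Solves the Lucas-Kanade least-squares system over a (2*win+1)^2 window:
    minimize || [Ix Iy] [u v]^T + It ||^2.
    """
    prev = prev.astype(float)
    curr = curr.astype(float)
    Iy, Ix = np.gradient(prev)   # spatial gradients of the first frame
    It = curr - prev             # temporal gradient
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

On a horizontal intensity ramp shifted right by one pixel, this recovers a flow of approximately (1, 0).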
In one implementation scenario, in order to quantitatively represent the position information of the optical flow points, the optical flow information includes coordinate information of at least one optical flow point of the detection area on multi-dimensional coordinate axes. For example, the optical flow information of optical flow point 1 in the previous frame image is represented as the coordinate information (X1,t, Y1,t) on two-dimensional coordinate axes; the optical flow information of optical flow point 2 in the previous frame image as (X2,t, Y2,t); the optical flow information of optical flow point 1 in the subsequent frame image as (X1,t+1, Y1,t+1); the optical flow information of optical flow point 2 in the subsequent frame image as (X2,t+1, Y2,t+1); and so on, which is not detailed in this embodiment.
In an implementation scenario, in order to adaptively perform the detection of the abnormal behavior of the pedestrian on the target frame image, at least one frame of sample frame image is a plurality of frames of image before the target frame image, for example, ten frames of image before the target frame image, fifteen frames of image before the target frame image, and the like, which is not illustrated here.
Step S13: and analyzing the optical flow information of the target frame image by using the optical flow information of at least one frame of sample frame image.
The optical flow information of the target frame image is analyzed based on the optical flow information of at least one sample frame image. In one implementation scenario, changes between the optical flow information of the target frame image and that of the sample frame images may be discovered on this basis.
Step S14: based on the result of the analysis, it is determined whether there is abnormal behavior in the target pedestrian.
Whether the target pedestrian has abnormal behavior is determined based on the analysis of the optical flow information of the target frame image against that of the at least one sample frame image. In one implementation scenario, a change between the optical flow information of the target frame image and that of the sample frame images can be found, and whether the target pedestrian has abnormal behavior is then determined based on that change.
In the above manner, target detection is performed separately on the multiple frames of images captured by the camera device, including the target frame image and the at least one sample frame image, to obtain the detection area corresponding to the target pedestrian in each frame of image. Optical flow detection on the detection areas then yields the optical flow information of the target frame image and of the at least one sample frame image, so the optical flow information of the target frame image can be analyzed using that of the sample frame images and its changes relative to the sample frame images can be found. Thus, regardless of the scene in which the target pedestrian is located, the optical flow information of the sample frame images capturing that pedestrian can serve as the current behavior judgment standard for the optical flow information of the target frame image, so that whether the target pedestrian has abnormal behavior can be accurately detected.
Referring to fig. 2, fig. 2 is a schematic flow chart of another embodiment of the method for detecting abnormal pedestrian behavior according to the present application. Before the step S13, the method may further include:
step S21: and judging whether the detection area is smaller than a preset minimum size. If so, go to step S22, otherwise go to step S23.
The preset minimum size is the minimum size that can contain optical flow information required for abnormal behavior detection. When the detection area is smaller than the preset minimum size, the optical flow information in the detection area is less, which may result in insufficient use for subsequent detection of abnormal behavior. The preset minimum size is a preset size which is uniformly set, or can also be a preset size which is set by a user in a self-defined way. In one implementation scenario, the detection area is a rectangular detection area, and the preset minimum size is also a rectangle. Referring to fig. 3, the step S21 may specifically include:
step S211: a first length of a diagonal line of the detection area and a second length of the diagonal line of the preset minimum size are acquired.
In one implementation scenario, the first length of the diagonal may be determined from the coordinate information of the vertices of the rectangle corresponding to the detection area. For example, if the coordinate information of the four vertices corresponding to the detection area is (XA, YA), (XA+30, YA), (XA+30, YA+40) and (XA, YA+40), then according to the Pythagorean theorem the first length of the diagonal of the detection area may be determined to be 50, since sqrt(30^2 + 40^2) = 50. Similarly, the second length of the diagonal of the preset minimum size may be determined, which is not illustrated in this embodiment.
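The Pythagorean computation in the example above can be sketched as follows (the helper name is hypothetical):

```python
import math

def diagonal_from_vertices(top_left, bottom_right):
    """Diagonal length of an axis-aligned rectangle from two opposite vertices."""
    (x0, y0), (x1, y1) = top_left, bottom_right
    return math.hypot(x1 - x0, y1 - y0)  # sqrt(dx^2 + dy^2)
```

For the example vertices, with width 30 and height 40, the diagonal is 50.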
Step S212: and judging whether the first length is smaller than the second length. If so, go to step S213, otherwise go to step S214.
Comparing whether the first length is less than the second length.
Step S213: and determining that the detection area is smaller than a preset minimum size.
If the first length is smaller than the second length, the detection area can be determined to be smaller than the preset minimum size.
Step S214: determining that the detection area is not smaller than a preset minimum size.
If the first length is not less than the second length, it may be determined that the detection area is not less than a preset minimum size.
Step S22: and amplifying the image in the detection area to a preset standard size, and taking the amplified image as the image in the detection area.
When the detection area is smaller than the preset minimum size, performing optical flow detection directly on the image in the detection area may yield too little optical flow information to support the subsequent detection of abnormal behavior. At this time, the image in the detection area may be enlarged to a preset standard size, and the enlarged image used as the image in the detection area.
Step S23: and calculating the size ratio between the size of the detection area and a preset standard size, and correspondingly taking the product of the optical flow displacement of each optical flow point in the detection area and the size ratio as the optical flow displacement of the optical flow point.
If the detection area is not smaller than the preset minimum size, the optical flow information detected in the detection area is sufficient for the subsequent detection of abnormal behavior. At this time, a size ratio between the size of the detection area and the preset standard size may be calculated, and the product of the optical flow displacement of each optical flow point in the detection area and the size ratio taken as the optical flow displacement of that point. Specifically, the preset standard size may be divided by the size of the detection area to obtain the size ratio. For example, if the size ratio is 1.2, the preset standard size is larger than the detection area, and the optical flow displacement of each optical flow point in the detection area is multiplied by 1.2 to obtain the optical flow displacement of the corresponding point; if the size ratio is 0.8, the preset standard size is smaller than the detection area, and the optical flow displacement of each optical flow point is multiplied by 0.8. In this way, the optical flow displacements of optical flow points in detection areas of different sizes can be normalized, eliminating the influence on abnormal behavior detection of the difference in optical flow displacement between near view and distant view in the image.
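The normalization step can be sketched as follows, using diagonal lengths as the size measure per the embodiment below (the function name is hypothetical):

```python
def normalize_displacements(displacements, detection_diag, standard_diag):
    """Scale optical-flow displacements by (standard size / detection size),
    so displacements of near-view and far-view pedestrians are comparable."""
    ratio = standard_diag / detection_diag
    return [d * ratio for d in displacements]
```

For a detection area with diagonal 50 and a preset standard diagonal of 60, the size ratio is 1.2 and every displacement is scaled up accordingly.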
In step S22, if the detection area is smaller than the preset minimum size, the image in the detection area is enlarged to the preset standard size and the enlarged image is used as the image in the detection area; in that case the size ratio between the size of the detection area and the preset standard size is 1, which is equivalent to not scaling the optical flow displacements of the optical flow points.
In an implementation scenario, in step S23, in order to quickly and conveniently calculate the size ratio between the size of the detection area and the preset standard size, referring to fig. 4, step S23 may specifically include the following steps:
step S231: a first length of a diagonal line of the detection area and a third length of the diagonal line of a preset standard size are acquired.
In one implementation scenario, the first length of the diagonal may be determined from the coordinate information of the vertices of the rectangle corresponding to the detection area. For example, if the coordinate information of the four vertices corresponding to the detection area is (XA, YA), (XA+30, YA), (XA+30, YA+40) and (XA, YA+40), then according to the Pythagorean theorem the first length of the diagonal of the detection area can be determined to be 50. The third length of the diagonal of the preset standard size can be calculated by analogy, which is not exemplified in this embodiment.
Step S232: the ratio of the third length to the first length is taken as the dimension ratio.
The ratio of the third length to the first length is taken as the dimension ratio.
By the method, the optical flow displacement of each optical flow point in the detection areas with different sizes can be normalized, so that the influence of the difference between the optical flow displacement between the near view and the distant view in the image on the abnormal behavior detection is eliminated.
Referring to fig. 5, fig. 5 is a flowchart illustrating an embodiment of step S11 in fig. 1. Specifically, step S11 in the above embodiment may include the following steps:
step S111: and detecting each frame of image by using a preset head-shoulder frame detection model to obtain a head-shoulder frame corresponding to the target pedestrian in each frame of image.
The preset head and shoulder detection model may be obtained in advance through deep learning training, and the embodiment is not particularly limited herein.
And detecting each frame of image through a preset head and shoulder detection model to obtain a head and shoulder frame corresponding to the target pedestrian in each frame of image.
Step S112: and obtaining a detection area corresponding to the target pedestrian from the head and shoulder frame corresponding to the target pedestrian.
The detection area corresponding to the target pedestrian can be obtained based on the head-shoulder frame corresponding to the target pedestrian. In one implementation scenario, the head-shoulder frame may be extended according to a preset ratio and a preset direction to obtain the detection area. For example, the head-shoulder frame may be uniformly extended downward by a certain ratio, such as 1.2 times.
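The extension of the head-shoulder frame into a detection area might be sketched as below. The downward stretch of the box height by the example ratio of 1.2 is one reading of the embodiment, and the function signature is hypothetical:

```python
def head_shoulder_to_detection_area(x, y, w, h, down_ratio=1.2):
    """Extend a head-shoulder box (top-left x, y, width w, height h) downward
    by a preset ratio, keeping the top edge fixed, to cover more of the body."""
    return (x, y, w, h * down_ratio)
```

Usage: a 30x40 head-shoulder box becomes a 30x48 detection area with the same top-left corner.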
In the above manner, each image is detected using the preset head-shoulder detection model. Since the head-shoulder region is a relatively stable part of the image, detecting images with this model improves the accuracy of identifying the target pedestrian; using the head-shoulder frame also avoids the influence of the motion of other objects on the determination of the detection area, improving the accuracy of the obtained detection area and hence the accuracy of the subsequent abnormal behavior detection based on the optical flow information in the detection area.
Referring to fig. 6, fig. 6 is a schematic flowchart illustrating a method for detecting abnormal pedestrian behavior according to another embodiment of the present application.
Specifically, the step S13 may include the following steps:
step S61: and obtaining an optical flow threshold value of the target frame image based on the optical flow information of at least one frame of sample frame image.
An optical flow threshold, subsequently used to detect whether the pedestrian in the target frame image has abnormal behavior, is obtained based on the optical flow information of the at least one sample frame image. In one implementation scenario, in order to detect abnormal behavior from multiple dimensions and thereby make the detection result more accurate, the optical flow threshold may include at least one of an optical flow displacement threshold and an optical flow ratio threshold.
Specifically, the optical-flow displacement threshold of the target frame image may be obtained by acquiring a first optical-flow displacement of the detection area of each sample frame image and determining the threshold based on the first optical-flow displacement. In one implementation scenario, the optical flow displacement in the current frame may be obtained based on the optical flow information of an optical flow point in the current frame, the optical flow information of the corresponding optical flow point in the previous frame, and the size ratio in the above embodiment; specifically, it may be obtained from the coordinate information in the optical flow information. For example, if the coordinate information of optical flow point 1 in the current frame image is (X1,t, Y1,t) and its coordinate information in the previous frame image is (X1,t-1, Y1,t-1), then the optical flow displacement of optical flow point 1 can be represented as the vector

(X1,t - X1,t-1, Y1,t - Y1,t-1)

Further, the magnitude of the optical flow displacement of optical flow point 1 can be expressed as the modulus of this vector:

sqrt((X1,t - X1,t-1)^2 + (Y1,t - Y1,t-1)^2)

The product of this value and the size ratio can finally be taken as the optical flow displacement. By analogy, the optical flow displacement and the optical flow direction change of the other optical flow points can be calculated, which is not exemplified in this embodiment.
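The displacement computation described above can be sketched as follows (the function name is hypothetical):

```python
import math

def optical_flow_displacement(p_prev, p_curr, size_ratio=1.0):
    """Magnitude of the flow vector between a point's previous-frame and
    current-frame coordinates, scaled by the detection-area size ratio."""
    (x0, y0), (x1, y1) = p_prev, p_curr
    return math.hypot(x1 - x0, y1 - y0) * size_ratio
```

For a point moving from (0, 0) to (3, 4) the raw displacement is 5; with a size ratio of 1.2 the normalized displacement is 6.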
In addition, the optical flow ratio threshold of the target frame image can be obtained by counting, in the detection area of each sample frame image, the number of first optical flow points whose optical flow direction change is larger than a preset angle, and determining the threshold based on that number. In one implementation scenario, the optical flow direction change in the current frame may be obtained based on the optical flow information of an optical flow point in the current frame and of the corresponding optical flow point in the previous and subsequent frames; specifically, it may be obtained from the coordinate information. For example, if the coordinate information of optical flow point 1 is (X1,t, Y1,t) in the current frame image, (X1,t+1, Y1,t+1) in the subsequent frame image and (X1,t-1, Y1,t-1) in the previous frame image, then the optical flow displacement of optical flow point 1 between the previous frame and the current frame is

v1 = (X1,t - X1,t-1, Y1,t - Y1,t-1)

and its optical flow displacement between the current frame and the subsequent frame is

v2 = (X1,t+1 - X1,t, Y1,t+1 - Y1,t)

so that the optical flow direction change of optical flow point 1 in the current frame can be obtained as the angle between these two vectors:

arccos((v1 . v2) / (|v1| |v2|))
The preset angle may be uniformly preset, or may be set by a user according to an actual application in a self-defined manner, and the preset angle may be 100 degrees, 120 degrees, and the like, which is not limited in this embodiment.
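The direction-change computation can be sketched as the angle between the backward and forward flow vectors (the function name is hypothetical, and the angle-between-vectors reading of the direction change is an assumption):

```python
import math

def direction_change_deg(p_prev, p_curr, p_next):
    """Angle (degrees) between the prev->curr and curr->next flow vectors;
    large values indicate an abrupt change of motion direction."""
    v1 = (p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])
    v2 = (p_next[0] - p_curr[0], p_next[1] - p_curr[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    cos_a = max(-1.0, min(1.0, dot / norm))  # clamp against rounding error
    return math.degrees(math.acos(cos_a))
```

A point that moves right and then up turns 90 degrees; a point that reverses direction turns 180 degrees, well above preset angles such as 100 or 120 degrees.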
Step S62: the optical flow information of the target frame image is compared with an optical flow threshold value.
In one implementation scenario, in order to detect whether a pedestrian has abnormal behavior from multiple dimensions, the optical flow threshold may include at least one of an optical flow displacement threshold and an optical flow ratio threshold. In this case, step S62 may be implemented by acquiring a second optical flow displacement of the detection area of the target frame image and comparing it with the optical flow displacement threshold. Alternatively, step S62 may be implemented by counting the number of second optical flow points in the detection area of the target frame image whose optical flow direction change is greater than a preset angle, obtaining the optical flow ratio of the target frame image based on that number, and comparing the optical flow ratio with the optical flow ratio threshold. Alternatively, step S62 may combine both: first acquire the second optical flow displacement of the detection area of the target frame image and compare it with the optical flow displacement threshold, then count the number of second optical flow points whose optical flow direction change exceeds the preset angle, obtain the optical flow ratio of the target frame image from that number, and compare the optical flow ratio with the optical flow ratio threshold.
Specifically, the manner of calculating the second optical flow displacement of the detection area of the target frame image may refer to the above-described embodiment of calculating the first optical flow displacement in step S61, and the manner of calculating the optical flow direction change in the detection area of the target frame image may refer to the above-described embodiment of calculating the optical flow direction change in the detection area of the sample frame image in step S61; these are not repeated here.
The step S14 may include:
step S63: and if the optical flow information of the target frame image and the optical flow threshold value meet the preset relation condition, determining that the target pedestrian has abnormal behaviors.
In one implementation scenario, to detect whether a pedestrian has abnormal behavior from multiple dimensions, the optical flow threshold may include at least one of an optical flow displacement threshold and an optical flow ratio threshold. In this case, the preset relation condition includes: the second optical flow displacement is greater than or equal to a first preset multiple of the optical flow displacement threshold; and/or the optical flow ratio is greater than or equal to a second preset multiple of the optical flow ratio threshold. The first preset multiple may be 1.3, 1.5, 1.7, 1.9, 2.1, etc., and likewise the second preset multiple may be 1.3, 1.5, 1.7, 1.9, 2.1, etc.; this embodiment is not limited in this respect.
In the above manner, the optical flow threshold is obtained from the optical flow information of several frames of sample frame images preceding the target frame image; by comparing the optical flow information of the target frame image with the optical flow threshold, it can be determined that the target pedestrian has abnormal behavior if the optical flow information of the target frame image and the optical flow threshold satisfy the preset relation condition.
Referring to fig. 7, fig. 7 is a flowchart illustrating an embodiment of step S61 in fig. 6. In order to reduce the amount of calculation for subsequently determining whether there is an abnormal behavior by using the optical flow information, the step S61 of "acquiring the first optical flow displacement of the detection area of each frame of the sample frame image" may be specifically implemented by:
step S611: and determining a first optical flow point of which the optical flow displacement in the detection area of each frame of sample frame image meets a preset displacement condition.
The preset displacement condition includes being among the preset number of largest optical flow displacements in the detection area; the preset number may be, for example, 10% of the number of all optical flow points in the detection area, in which case the preset displacement condition is being among the largest 10% of optical flow displacements in the detection area. For example, if the detection area in the sample frame image ft contains 3000 optical flow points in total, the optical flow displacement is calculated from the optical flow information of the 3000 optical flow points, the 3000 optical flow points are sorted in descending order of optical flow displacement, and the top 10% are taken as the first optical flow points. In the same way, the first optical flow points whose optical flow displacement satisfies the preset displacement condition are determined in the detection area of every other frame of sample frame image.
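The sorting-and-selection step above can be sketched as follows (a minimal illustration assuming the per-point displacements are already available as an array; the function name is this editor's, not from the patent):

```python
import numpy as np

def first_optical_flow_points(displacements, fraction=0.1):
    """Return indices of the `fraction` of points with the largest
    optical flow displacement (the 'first optical flow points')."""
    n = max(1, int(len(displacements) * fraction))
    order = np.argsort(displacements)[::-1]  # descending by displacement
    return order[:n]

# 10 points, top 10% -> the single point with the largest displacement
disp = np.array([0.5, 3.0, 1.2, 0.1, 2.8, 0.9, 4.1, 0.3, 1.7, 2.2])
top = first_optical_flow_points(disp)
```

The same selection is applied per frame, so each sample frame image contributes its own subset of first optical flow points.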
Step S612: and obtaining a first optical flow displacement of the detection area corresponding to the sample frame image based on the optical flow displacement of the first optical flow point.
In one implementation scenario, the average value of the optical flow displacements of the first optical flow points determined in step S611 may be used as the first optical flow displacement of the detection area of the sample frame image. For example, if the total number of first optical flow points in the detection area of the sample frame image ft is 300 and their optical flow displacements are D1,t, D2,t, D3,t, ..., D300,t, the average optical flow displacement of the 300 first optical flow points can be calculated as (D1,t + D2,t + ... + D300,t)/300, and this average value is used as the first optical flow displacement of the detection area in the sample frame image ft. Of course, in another implementation scenario, the sum of the optical flow displacements of the first optical flow points of the sample frame image may be used as the first optical flow displacement of the detection area of the sample frame image. For the same example, the sum of the optical flow displacements of the 300 first optical flow points can be calculated as D1,t + D2,t + ... + D300,t. The embodiment is not particularly limited herein.
Further, the "determining the optical-flow displacement threshold value of the target frame image based on the first optical-flow displacement" in the above step S61 may include:
step S613: and taking the average value of the first optical flow displacement of the at least one frame of sample frame image as the optical flow displacement threshold value of the target frame image.
Based on the calculated first optical flow displacements of all the sample frame images, the average value of all the first optical flow displacements is calculated and used as the optical flow displacement threshold for detecting abnormal pedestrian behavior in the target frame image. For example, taking 10 consecutive frames of sample frame images before the target frame image with first optical flow displacements Df1, Df2, Df3, ..., Df10, the optical flow displacement threshold can be expressed as (Df1 + Df2 + ... + Df10)/10. When the number of sample frame images taken is another value, the threshold may be calculated in the same way; this embodiment is not illustrated exhaustively here.
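Steps S612 and S613 can be sketched together as follows (the averaging variant; helper names are this editor's assumptions):

```python
import numpy as np

def first_optical_flow_displacement(point_displacements):
    """Step S612: reduce one sample frame's first optical flow points to
    a single value, here the average of their displacements (a sum could
    be used instead, as the text notes)."""
    return float(np.mean(point_displacements))

def displacement_threshold(per_frame_displacements):
    """Step S613: optical flow displacement threshold = average of the
    first optical flow displacements over the sample frames."""
    return float(np.mean(per_frame_displacements))

# 10 sample frames, each already reduced to its first optical flow displacement
d = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.3, 1.0, 0.9, 1.1]
threshold = displacement_threshold(d)
```

The threshold is recomputed per target frame from the frames preceding it, so it adapts to the pedestrian's recent motion rather than relying on a fixed global value.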
Referring to fig. 8, fig. 8 is a flowchart illustrating an embodiment of step S62 in fig. 6. Similarly, in order to reduce the amount of calculation for subsequently determining whether there is an abnormal behavior using the optical flow information, "acquiring the second optical flow displacement of the detection area of the target frame image" in the above step S62 may include:
step S621: and determining a second optical flow point in the detection area of the target frame image, wherein the optical flow displacement meets a preset displacement condition.
The preset displacement condition includes being among the preset number of largest optical flow displacements in the detection area; the preset number may be, for example, 10% of the number of all optical flow points in the detection area, in which case the preset displacement condition is being among the largest 10% of optical flow displacements in the detection area. For example, if the detection area of the target frame image ft+1 contains 3000 optical flow points, the optical flow displacement is calculated from the optical flow information of the 3000 optical flow points, the 3000 optical flow points are sorted in descending order of optical flow displacement, and the top 10% are taken as the second optical flow points.
Step S622: and obtaining a second optical flow displacement of the detection area of the target frame image based on the optical flow displacement of the second optical flow point.
In one implementation scenario, when the first optical flow displacement of the detection area in the sample frame image is obtained by averaging the optical flow displacements of the first optical flow points, the average of the optical flow displacements of the second optical flow points of the target frame image may likewise be taken as the second optical flow displacement of the detection area of the target frame image. For example, if the total number of second optical flow points in the detection area of the target frame image ft+1 is 300 and their optical flow displacements are D1,t+1, D2,t+1, D3,t+1, ..., D300,t+1, the average optical flow displacement of the 300 second optical flow points can be calculated as (D1,t+1 + D2,t+1 + ... + D300,t+1)/300, and this average value is used as the second optical flow displacement of the detection area in the target frame image ft+1. In another implementation scenario, when the first optical flow displacement of the detection area in the sample frame image is obtained as the sum of the optical flow displacements of the first optical flow points, the sum of the optical flow displacements of the second optical flow points of the target frame image may be taken as the second optical flow displacement of the detection area of the target frame image. For the same example, the sum of the optical flow displacements of the 300 second optical flow points can be calculated as D1,t+1 + D2,t+1 + ... + D300,t+1 and used as the second optical flow displacement.
Referring to fig. 9, fig. 9 is a schematic flowchart illustrating another embodiment of step S61 in fig. 6. In one implementation scenario, in order to reduce the calculation amount for the optical flow direction, the counting in step S61 of "the number of first optical flow points in the detection area of each frame of sample frame image whose optical flow direction change is greater than a preset angle" may be performed over only the first optical flow points determined in step S611, i.e. those whose optical flow displacement satisfies the preset displacement condition. Specifically, "determining the optical flow ratio threshold of the target frame image based on the number of the first optical flow points" in step S61 may include:
step S91: and acquiring the point ratio between the first optical flow point number of each frame of sample frame image and the total optical flow point number of the detection area.
The number of first optical flow points whose optical flow direction change is greater than the preset angle in the detection area of each frame of sample frame image is obtained by counting, so that the point ratio between the number of first optical flow points of each frame of sample frame image and the total number of optical flow points of the detection area can be calculated. For example, when the first optical flow points are those whose optical flow displacement in the detection area satisfies the preset displacement condition, as determined in step S611: suppose 300 optical flow points of the sample frame image ft satisfy the preset condition and are taken as first optical flow points, and among these 300 first optical flow points the number whose optical flow direction change is greater than the preset angle is counted as, for example, 200; the point ratio Pt between the number of first optical flow points of the sample frame image ft and the total number of optical flow points of the detection area is then 200/300. Similarly, the point ratios Pt-1, Pt-2, Pt-3, ..., Pt-n of the other sample frame images ft-1, ft-2, ft-3, ..., ft-n can be calculated in the same way.
Step S92: and taking the average value of the point ratio of at least one frame of sample frame image as the optical flow ratio threshold value of the target frame image.
The average value of the point ratios of all the sample frame images is used as the optical flow ratio threshold for detecting abnormal pedestrian behavior in the target frame image. For example, after the point ratios Pt, Pt-1, Pt-2, ..., Pt-n of the sample frame images ft, ft-1, ft-2, ..., ft-n have been calculated, the optical flow ratio threshold is obtained by averaging the point ratios of all sample frame images as (Pt + Pt-1 + ... + Pt-n)/(n + 1).
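Steps S91 and S92 can be sketched as follows (a minimal illustration; each frame is represented by its count of direction-change points and the total point count of its detection area):

```python
def optical_flow_ratio_threshold(point_counts, totals):
    """Average point ratio over the sample frames: each frame contributes
    (# first optical flow points whose direction change exceeds the preset
    angle) / (total optical flow points considered in the detection area)."""
    ratios = [c / t for c, t in zip(point_counts, totals)]
    return sum(ratios) / len(ratios)

# e.g. three sample frames with point ratios 200/300, 150/300 and 100/300
thr = optical_flow_ratio_threshold([200, 150, 100], [300, 300, 300])
```

As with the displacement threshold, this ratio threshold is derived only from frames preceding the target frame, so it reflects the pedestrian's own recent behavior.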
Correspondingly, in an implementation scenario, in order to reduce the calculation amount for the optical flow direction, the counting in step S62 of "the number of second optical flow points in the detection area of the target frame image whose optical flow direction change is greater than a preset angle" may be performed over only the second optical flow points determined in step S621, i.e. those whose optical flow displacement satisfies the preset displacement condition.
The "obtaining the optical flow ratio of the target frame image based on the number of the second optical flow points" in the above-described step S62 may include: and taking the ratio of the second optical flow point number of the target frame image to the total optical flow point number of the detection area as the optical flow ratio of the target frame image.
Specifically, the number of second optical flow points whose optical flow direction change in the detection area of the target frame image is greater than the preset angle may be obtained, so that the point ratio between the number of such second optical flow points and the total number of optical flow points of the detection area can be calculated. For example, when the second optical flow points are those whose optical flow displacement in the detection area of the target frame image satisfies the preset displacement condition, as determined in step S621: suppose 300 optical flow points of the target frame image ft+1 satisfy the preset condition and are taken as second optical flow points, and among these 300 second optical flow points the number whose optical flow direction change is greater than the preset angle is counted as, for example, 100; the optical flow ratio Pt+1 of the target frame image ft+1 is then 100/300.
Referring to fig. 10, fig. 10 is a schematic flowchart of a pedestrian abnormal behavior detection method according to another embodiment of the present application; for details of each step, reference may be made to the corresponding steps in any of the above embodiments. Specifically, the method may include:
step S1001: and detecting each frame of image by using a preset head-shoulder frame detection model to obtain a head-shoulder frame corresponding to the target pedestrian in each frame of image.
The preset head and shoulder detection model may be obtained in advance through deep learning training, and the embodiment is not particularly limited herein.
Step S1002: and obtaining a detection area corresponding to the target pedestrian from the head and shoulder frame corresponding to the target pedestrian.
The detection area corresponding to the target pedestrian can be obtained based on the head-shoulder frame corresponding to the target pedestrian. In one implementation scenario, the head-shoulder frame may be extended according to a predetermined ratio and a predetermined direction to obtain the detection area. For example, the head-shoulder frame is uniformly extended downward by a certain ratio, such as 1.2 times.
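A minimal sketch of this extension step follows; the (x, y, w, h) box convention, the clipping to image bounds, and the function name are this editor's assumptions, while the downward-only 1.2x extension is the example ratio from the text:

```python
def detection_area_from_head_shoulder(box, img_w, img_h, down_ratio=1.2):
    """Extend a head-shoulder box (x, y, w, h) downward by `down_ratio`
    to cover more of the pedestrian's body, clipped to the image."""
    x, y, w, h = box
    new_h = int(round(h * down_ratio))
    # Keep the box inside the image: the bottom edge cannot pass img_h.
    return (x, y, w, min(new_h, img_h - y))

# An 80x60 head-shoulder box in a 640x480 image becomes 80x72
area = detection_area_from_head_shoulder((100, 50, 80, 60), 640, 480)
```

Extending only downward keeps the head anchored while capturing the torso and arms, where motion from falls or fights is most visible.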
Step S1003: and judging whether the detection area is smaller than a preset minimum size. If so, go to step S1004, otherwise go to step S1005.
The preset minimum size is the minimum size that can contain the optical flow information required for abnormal behavior detection. When the detection area is smaller than the preset minimum size, the optical flow information in the detection area is sparse, which may be insufficient for the subsequent detection of abnormal behavior. The preset minimum size may be set uniformly in advance, or may be customized by a user. In one implementation scenario, the detection area is a rectangular detection area, and the preset minimum size is also rectangular.
Step S1004: and amplifying the image in the detection area to a preset standard size, and taking the amplified image as the image in the detection area.
When the detection area is smaller than the preset minimum size, performing optical flow detection on the image in the detection area would yield optical flow information too sparse to be sufficient for the subsequent detection of abnormal behavior. At this time, the image in the detection area may be enlarged to the preset standard size, and the enlarged image is used as the image in the detection area.
Step S1005: and calculating the size ratio between the size of the detection area and a preset standard size, and correspondingly taking the product of the optical flow displacement of each optical flow point in the detection area and the size ratio as the optical flow displacement of the optical flow point.
If the detection area is not smaller than the preset minimum size, the optical flow information detected in the detection area is sufficient for the subsequent detection of abnormal behavior. At this time, the size ratio between the preset standard size and the size of the detection area may be calculated, and the product of the optical flow displacement of each optical flow point in the detection area and the size ratio is taken as the optical flow displacement of that optical flow point. Specifically, the preset standard size may be divided by the size of the detection area to obtain the size ratio. For example, if the size ratio is 1.2, the preset standard size is larger than the detection area, and the optical flow displacement of each optical flow point in the detection area is multiplied by the size ratio 1.2 to obtain the optical flow displacement of the corresponding optical flow point; if the size ratio is 0.8, the preset standard size is smaller than the detection area, and the optical flow displacement of each optical flow point is multiplied by the size ratio 0.8. In this way, the optical flow displacements of optical flow points in detection areas of different sizes are normalized, eliminating the influence on abnormal behavior detection of the difference in optical flow displacement between near and far views in the image.
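This normalization can be sketched as follows (sizes are treated here as single scalars, e.g. region heights in pixels; that simplification and the names are this editor's assumptions):

```python
def normalize_displacements(displacements, region_size, standard_size):
    """Scale optical flow displacements by (standard size / region size)
    so that near (large) and far (small) detection areas are comparable."""
    ratio = standard_size / region_size
    return [d * ratio for d in displacements]

# A far-away pedestrian (small box) gets its displacements scaled up
scaled = normalize_displacements([2.0, 4.0], region_size=100, standard_size=120)
```

Without this step, a nearby pedestrian walking normally could produce larger raw displacements than a distant pedestrian falling, defeating the threshold comparison.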
Step S1006: and respectively carrying out optical flow detection on the detection area of each frame of image to obtain optical flow information of the detection area of each frame of image, wherein the multi-frame image comprises a target frame image and at least one frame of sample frame image, and the at least one frame of sample frame image is a plurality of frame images before the target frame image.
Current optical flow detection methods include gradient-based methods, matching-based methods, energy-based methods, and the like. Typical examples are the Horn–Schunck method and the Lucas–Kanade method. Optical flow detection itself is prior art in the field, and this embodiment does not describe it further. The at least one frame of sample frame image comprises several frames before the target frame image, for example the 10 frames or the 15 frames before the target frame image; this embodiment is not particularly limited herein.
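Once a dense flow field has been produced by any of the methods named above (e.g. Lucas–Kanade, or OpenCV's Farneback implementation, not shown here), the per-point displacement and direction used throughout this method can be derived as in this sketch (the (H, W, 2) field shape and names are assumptions):

```python
import numpy as np

def flow_magnitude_angle(flow):
    """Per-point optical flow displacement and direction (degrees) from a
    dense flow field of shape (H, W, 2) holding (dx, dy) per pixel."""
    dx, dy = flow[..., 0], flow[..., 1]
    mag = np.hypot(dx, dy)                  # displacement in pixels
    ang = np.degrees(np.arctan2(dy, dx))    # direction in (-180, 180]
    return mag, ang

# A single moving pixel with flow (3, 4): displacement 5, direction atan2(4, 3)
flow = np.zeros((2, 2, 2))
flow[0, 0] = (3.0, 4.0)
mag, ang = flow_magnitude_angle(flow)
```

The displacement feeds the optical flow displacement statistics, and frame-to-frame differences of the angle feed the direction-change count.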
Step S1007: and determining a first optical flow point of which the optical flow displacement in the detection area of each frame of sample frame image meets a preset displacement condition.
The preset displacement condition includes being among the preset number of largest optical flow displacements in the detection area; the preset number may be, for example, 10% of the number of all optical flow points in the detection area, in which case the preset displacement condition is being among the largest 10% of optical flow displacements in the detection area.
Step S1008: the average value of the optical flow displacements of the first optical flow points of the sample frame image is taken as the first optical flow displacement of the detection area of the sample frame image.
For example, if the total number of first optical flow points in the detection area of the sample frame image ft is 300 and their optical flow displacements are D1,t, D2,t, D3,t, ..., D300,t, the average optical flow displacement of the 300 first optical flow points can be calculated as (D1,t + D2,t + ... + D300,t)/300, and this average value is used as the first optical flow displacement of the detection area in the sample frame image ft.
Step S1009: and taking the average value of the first optical flow displacement of the at least one frame of sample frame image as the optical flow displacement threshold value of the target frame image.
For example, taking 10 consecutive frames of sample frame images before the target frame image with first optical flow displacements Df1, Df2, Df3, ..., Df10, the optical flow displacement threshold can be expressed as (Df1 + Df2 + ... + Df10)/10.
Step S1010: and determining a second optical flow point in the detection area of the target frame image, wherein the optical flow displacement meets a preset displacement condition.
The preset displacement condition includes being among the preset number of largest optical flow displacements in the detection area; the preset number may be, for example, 10% of the number of all optical flow points in the detection area, in which case the preset displacement condition is being among the largest 10% of optical flow displacements in the detection area.
Step S1011: and taking the average value of the optical flow displacement of the second optical flow point of the target frame image as the second optical flow displacement of the detection area of the target frame image.
For example, if the total number of second optical flow points in the detection area of the target frame image ft+1 is 300 and their optical flow displacements are D1,t+1, D2,t+1, D3,t+1, ..., D300,t+1, the average optical flow displacement of the 300 second optical flow points can be calculated as (D1,t+1 + D2,t+1 + ... + D300,t+1)/300, and this average value is used as the second optical flow displacement of the detection area in the target frame image ft+1.
In an implementation scenario, the steps S1007 to S1009 and the steps S1010 to S1011 may be executed sequentially or in parallel; this embodiment is not limited specifically herein.
Step S1012: and judging that the second optical flow displacement is greater than or equal to a first preset multiple of the optical flow displacement threshold, if so, executing step S1013, otherwise, executing step S1018.
The first preset multiple may be 1.3, 1.5, 1.7, 1.9, 2.1, etc., and the embodiment is not limited in this respect.
Step S1013: counting the number of first optical flow points of which the direction change of the optical flow in the detection area of each frame of sample frame image is larger than a preset angle, and acquiring the point ratio between the number of the first optical flow points of each frame of sample frame image and the total number of the optical flow points of the detection area.
For example, suppose 300 optical flow points of the sample frame image ft satisfy the preset condition and are taken as first optical flow points, and among these 300 first optical flow points the number whose optical flow direction change is greater than the preset angle is counted as, for example, 200; the point ratio Pt between the number of first optical flow points of the sample frame image ft and the total number of optical flow points of the detection area is then 200/300. Similarly, the point ratios Pt-1, Pt-2, Pt-3, ..., Pt-n of the other sample frame images ft-1, ft-2, ft-3, ..., ft-n can be calculated in the same way.
Step S1014: and taking the average value of the point ratio of at least one frame of sample frame image as the optical flow ratio threshold value of the target frame image.
For example, after the point ratios Pt-1, Pt-2, Pt-3, ..., Pt-n of the other sample frame images ft-1, ft-2, ft-3, ..., ft-n have been calculated, the optical flow ratio threshold is obtained by averaging the point ratios of all sample frame images as (Pt + Pt-1 + ... + Pt-n)/(n + 1).
Step S1015: and counting the number of second optical flow points of which the optical flow direction change is larger than a preset angle in the detection area of the target frame image, and taking the ratio of the number of the second optical flow points of the target frame image to the number of total optical flow points of the detection area as the optical flow ratio of the target frame image.
For example, suppose 300 optical flow points of the target frame image ft+1 satisfy the preset condition and are taken as second optical flow points, and among these 300 second optical flow points the number whose optical flow direction change is greater than the preset angle is counted as, for example, 100; the optical flow ratio Pt+1 of the target frame image ft+1 is then 100/300.
In an implementation scenario, the steps S1013 to S1014 and the step S1015 may be executed successively or in parallel, and this embodiment is not limited in this respect.
Step S1016: and judging that the optical flow ratio is greater than or equal to a second preset multiple of the optical flow ratio threshold, if so, executing step S1017, otherwise, executing step S1018.
The second preset multiple may be 1.3, 1.5, 1.7, 1.9, 2.1, etc., and the embodiment is not limited herein.
Step S1017: and determining that the pedestrian has abnormal behaviors.
If the determination results in step S1012 and step S1016 are both yes, it can be determined that the pedestrian has abnormal behavior. In an implementation scenario, alarm information can be sent out at this time, specifically acousto-optic alarm information, so that monitoring personnel are promptly alerted to the abnormal pedestrian behavior and can intervene in time, or notify security personnel to intervene in time, reducing loss of life and property and maintaining security and stability.
Step S1018: it is determined that the pedestrian does not have abnormal behavior.
If at least one of the determination results of the step S1012 and the step S1016 is negative, it may be determined that the pedestrian has no abnormal behavior.
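The combined judgment of steps S1012 and S1016 can be sketched as follows (the 1.5 multiples are example values listed in the text; the function name is hypothetical):

```python
def is_abnormal(second_disp, disp_threshold, flow_ratio, ratio_threshold,
                k_disp=1.5, k_ratio=1.5):
    """Abnormal only if BOTH the second optical flow displacement and the
    optical flow ratio reach the preset multiples of their thresholds,
    mirroring the sequential checks of steps S1012 and S1016."""
    return (second_disp >= k_disp * disp_threshold and
            flow_ratio >= k_ratio * ratio_threshold)

# Displacement at 2x its threshold and ratio at 1.6x its threshold -> abnormal
alarm = is_abnormal(second_disp=2.0, disp_threshold=1.0,
                    flow_ratio=0.8, ratio_threshold=0.5)
```

Requiring both conditions reduces false alarms from a single noisy cue, e.g. a brief large displacement caused by a camera jitter without any direction reversal.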
Referring to fig. 11, fig. 11 is a block diagram of an image processing apparatus 1100 according to an embodiment of the present disclosure. The image processing apparatus 1100 comprises a memory 1110 and a processor 1120 which are coupled to each other, and the processor 1120 is configured to execute program instructions stored in the memory 1110 to implement the steps of the pedestrian abnormal behavior detection method in any one of the embodiments.
Specifically, the processor 1120 is configured to control itself and the memory 1110 to implement the pedestrian abnormal behavior detection method in any of the above embodiments. The processor 1120 may also be referred to as a CPU (Central Processing Unit). The processor 1120 may be an integrated circuit chip having signal processing capabilities. The processor 1120 may also be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 1120 may be implemented jointly by a plurality of integrated circuit chips.
In this embodiment, the processor 1120 is configured to perform target detection on multiple frames of images captured by the imaging device to obtain the detection area corresponding to the target pedestrian in each frame of image; to perform optical flow detection on the detection area of each frame of image to obtain the optical flow information of the detection area of each frame of image, where the multiple frames of images include the target frame image and at least one frame of sample frame image; to analyze the optical flow information of the target frame image using the optical flow information of the at least one frame of sample frame image; and to determine, based on the result of the analysis, whether the target pedestrian has abnormal behavior.
In the above manner, target detection is performed on the multiple frames of images captured by the imaging device, including the target frame image and at least one frame of sample frame image, to obtain the detection area corresponding to the target pedestrian in each frame of image; optical flow detection is then performed on the detection area to obtain the optical flow information of the target frame image and of the at least one frame of sample frame image. The optical flow information of the target frame image is analyzed using the optical flow information of the sample frame images, so that changes in the optical flow information of the target frame image relative to the sample frame images can be found. Therefore, regardless of the scene in which the target pedestrian is located, the optical flow information of the sample frame images captured of the target pedestrian can serve as the current behavior judgment standard when judging the behavior reflected by the optical flow information of the target frame image.
In some embodiments, the processor 1120 is further configured to derive an optical flow threshold of the target frame image based on the optical flow information of the at least one frame of sample frame image, the processor 1120 is further configured to compare the optical flow information of the target frame image with the optical flow threshold, and the processor 1120 is further configured to determine that the target pedestrian has abnormal behavior if a preset relation condition is satisfied between the optical flow information of the target frame image and the optical flow threshold.
In some embodiments, the optical flow threshold comprises at least one of an optical flow displacement threshold and an optical flow ratio threshold. The processor 1120 is further configured to acquire a first optical flow displacement of the detection area of each frame of sample frame image and determine the optical flow displacement threshold of the target frame image based on the first optical flow displacement; the processor 1120 is further configured to count the number of first optical flow points in the detection area of each frame of sample frame image whose optical flow direction change is greater than a preset angle and determine the optical flow ratio threshold of the target frame image based on the number of first optical flow points; the processor 1120 is further configured to acquire a second optical flow displacement of the detection area of the target frame image and compare the second optical flow displacement with the optical flow displacement threshold; the processor 1120 is further configured to count the number of second optical flow points in the detection area of the target frame image whose optical flow direction change is greater than the preset angle, obtain the optical flow ratio of the target frame image based on the number of second optical flow points, and compare the optical flow ratio with the optical flow ratio threshold. The preset relation condition comprises: the second optical flow displacement is greater than or equal to a first preset multiple of the optical flow displacement threshold; and/or the optical flow ratio is greater than or equal to a second preset multiple of the optical flow ratio threshold.
In some embodiments, the processor 1120 is further configured to determine first optical flow points in the detection area of each frame of sample frame image whose optical flow displacement satisfies a preset displacement condition, and to derive the first optical flow displacement of the detection area of the corresponding sample frame image based on the optical flow displacements of those first optical flow points. The processor 1120 is further configured to determine second optical flow points in the detection area of the target frame image whose optical flow displacement satisfies the preset displacement condition, and to derive the second optical flow displacement of the detection area of the target frame image based on the optical flow displacements of those second optical flow points.
In some embodiments, the preset displacement condition is that an optical flow point is among a preset number of points with the largest optical flow displacements in the detection area. The processor 1120 is further configured to use the average of the optical flow displacements of the first optical flow points of a sample frame image as the first optical flow displacement of the detection area of that sample frame image, to use the average of the first optical flow displacements of the at least one frame of sample frame image as the optical flow displacement threshold of the target frame image, and to use the average of the optical flow displacements of the second optical flow points of the target frame image as the second optical flow displacement of the detection area of the target frame image.
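A minimal sketch of this averaging scheme, assuming a hypothetical top-N selection size `n_largest` (the application only says "a preset number"):

```python
def frame_displacement(displacements, n_largest=10):
    """Average of the n_largest optical flow displacements in a
    detection area, i.e. only the points satisfying the preset
    displacement condition (largest displacements) contribute."""
    top = sorted(displacements, reverse=True)[:n_largest]
    return sum(top) / len(top)

def displacement_threshold(sample_frames, n_largest=10):
    """Average the per-frame first optical flow displacements over
    the sample frames to obtain the target frame's threshold."""
    per_frame = [frame_displacement(d, n_largest) for d in sample_frames]
    return sum(per_frame) / len(per_frame)

# Two sample frames, each a list of per-point displacements
samples = [[1.0, 2.0, 3.0, 4.0], [2.0, 2.0, 4.0, 4.0]]
print(displacement_threshold(samples, n_largest=2))  # 3.75
```

The same `frame_displacement` would yield the second optical flow displacement of the target frame.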
In some embodiments, the processor 1120 is further configured to obtain, for each frame of sample frame image, the point ratio between the number of first optical flow points and the total number of optical flow points in the detection area, to use the average of the point ratios of the at least one frame of sample frame image as the optical flow ratio threshold of the target frame image, and to use the ratio between the number of second optical flow points of the target frame image and the total number of optical flow points in the detection area as the optical flow ratio of the target frame image.
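The ratio computation might be sketched as follows; the choice of 90 degrees for the preset angle is purely hypothetical, since the application does not fix a value:

```python
import math

def flow_ratio(direction_changes, preset_angle=math.pi / 2):
    """Fraction of optical flow points in the detection area whose
    direction change (in radians) exceeds preset_angle."""
    changed = sum(1 for a in direction_changes if a > preset_angle)
    return changed / len(direction_changes)

def ratio_threshold(sample_frame_changes, preset_angle=math.pi / 2):
    """Average the per-sample-frame point ratios to obtain the
    target frame's optical flow ratio threshold."""
    ratios = [flow_ratio(c, preset_angle) for c in sample_frame_changes]
    return sum(ratios) / len(ratios)

# Two sample frames with four flow points each
samples = [[0.1, 2.0, 0.2, 0.3], [2.0, 2.0, 0.1, 0.1]]
print(ratio_threshold(samples))  # 0.375
```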
In some embodiments, the processor 1120 is further configured to determine whether the detection area is smaller than a preset minimum size. When the detection area is smaller than the preset minimum size, the processor 1120 enlarges the image in the detection area to a preset standard size and uses the enlarged image as the image in the detection area. When the detection area is not smaller than the preset minimum size, the processor 1120 calculates the size ratio between the size of the detection area and the preset standard size, and uses the product of the optical flow displacement of each optical flow point in the detection area and the size ratio as the optical flow displacement of that optical flow point.
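A minimal sketch of this size handling, using the diagonal-length comparison and diagonal-based size ratio that later embodiments describe. The enlarge branch is represented by a flag only, since resizing the image itself is outside the scope of this sketch, and all sizes are hypothetical:

```python
import math

def diagonal(w, h):
    """Length of the diagonal of a w x h rectangle."""
    return math.hypot(w, h)

def preprocess(area_w, area_h, min_w, min_h, std_w, std_h, displacements):
    """If the detection area is smaller than the preset minimum size
    (compared by diagonal length), the area image would be enlarged to
    the preset standard size; otherwise each optical flow displacement
    is scaled by the size ratio (standard diagonal / area diagonal)."""
    if diagonal(area_w, area_h) < diagonal(min_w, min_h):
        return "enlarge", displacements  # enlarge image, flow recomputed later
    ratio = diagonal(std_w, std_h) / diagonal(area_w, area_h)
    return "scale", [d * ratio for d in displacements]

# Detection area 300x400 vs. standard size 150x200 -> ratio 0.5
print(preprocess(300, 400, 60, 80, 150, 200, [2.0, 4.0]))
# ('scale', [1.0, 2.0])
```

Scaling by the ratio makes displacements from detection areas of different sizes comparable against a single threshold.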
In some embodiments, the processor 1120 is further configured to obtain a first length of a diagonal line of the detection area and a second length of a diagonal line of the preset minimum size, and to determine whether the first length is smaller than the second length. When the first length is smaller than the second length, the processor 1120 determines that the detection area is smaller than the preset minimum size; otherwise, the processor 1120 determines that the detection area is not smaller than the preset minimum size.
In some embodiments, the processor 1120 is further configured to obtain a first length of a diagonal line of the detection area and a third length of a diagonal line of the preset standard size, and to use the ratio of the third length to the first length as the size ratio.
In some embodiments, the at least one frame of sample frame image is several frames of images before the target frame image. The processor 1120 is further configured to detect each frame of image by using a preset head-shoulder frame detection model to obtain a head-shoulder frame corresponding to the target pedestrian in each frame of image, and to obtain the detection area corresponding to the target pedestrian from that head-shoulder frame.
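Deriving a detection area from a detected head-shoulder frame might look like the following sketch. The expansion factors are purely hypothetical; the application does not specify how the detection area is obtained from the head-shoulder frame:

```python
def detection_area_from_head_shoulder(box, expand_w=1.5, expand_h=3.0):
    """box = (x, y, w, h) of a head-shoulder frame. Returns a
    hypothetical detection area grown around the box centre so the
    flow analysis covers more of the pedestrian; the expand factors
    are illustrative only, not taken from the application."""
    x, y, w, h = box
    cx = x + w / 2                      # horizontal centre of the box
    new_w, new_h = w * expand_w, h * expand_h
    return (cx - new_w / 2, y, new_w, new_h)

print(detection_area_from_head_shoulder((100, 50, 40, 30)))
# (90.0, 50, 60.0, 90.0)
```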
In some embodiments, the image processing apparatus 1100 further includes an image pickup device 1130 for capturing a plurality of frames of images in time sequence.
Referring to fig. 12, fig. 12 is a schematic diagram of a storage device 1200 according to an embodiment of the present application. The storage device 1200 of the present application stores program instructions 1210 executable by a processor, and the program instructions 1210 are used to implement the steps of any one of the above-described embodiments of the pedestrian abnormal behavior detection method.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into modules or units is merely a logical division, and an actual implementation may use another division; a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices or units, and may be electrical, mechanical or in other forms.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in the form of hardware or in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.

Claims (10)

1. A pedestrian abnormal behavior detection method is characterized by comprising the following steps:
respectively carrying out target detection on multiple frames of images shot by the camera device to obtain a detection area corresponding to a target pedestrian in each frame of image;
respectively carrying out optical flow detection on the detection area of each frame of image to obtain optical flow information of the detection area of each frame of image, wherein the multi-frame image comprises a target frame image and at least one frame of sample frame image;
analyzing the optical flow information of the target frame image by using the optical flow information of the at least one frame of sample frame image; and
determining whether the target pedestrian has abnormal behavior based on the result of the analysis;
wherein, before the analyzing the optical flow information of the target frame image by using the optical flow information of the at least one frame sample frame image, the method further comprises:
judging whether the detection area is smaller than a preset minimum size;
if so, amplifying the image in the detection area to a preset standard size, and taking the amplified image as the image in the detection area;
if not, calculating a size ratio between the size of the detection area and the preset standard size, and taking the product of the optical flow displacement of each optical flow point in the detection area and the size ratio as the optical flow displacement of that optical flow point.
2. The method of claim 1, wherein the determining whether the detection area is smaller than a preset minimum size comprises:
acquiring a first length of a diagonal line of the detection area and a second length of a diagonal line of the preset minimum size;
judging whether the first length is smaller than the second length;
if so, determining that the detection area is smaller than the preset minimum size;
if not, determining that the detection area is not smaller than the preset minimum size.
3. The method of claim 1, wherein the calculating of the size ratio between the size of the detection area and the preset standard size comprises:
acquiring a first length of a diagonal line of the detection area and a third length of the diagonal line of the preset standard size;
taking a ratio of the third length to the first length as the size ratio.
4. The method of claim 1, wherein the analyzing the optical flow information of the target frame image by using the optical flow information of the at least one frame of sample frame image comprises:
obtaining an optical flow threshold of the target frame image based on the optical flow information of the at least one frame of sample frame image;
comparing the optical flow information of the target frame image to the optical flow threshold;
the determining whether the target pedestrian has abnormal behavior based on the result of the analysis includes:
determining that the target pedestrian has abnormal behavior if the optical flow information of the target frame image and the optical flow threshold satisfy a preset relation condition.
5. The method of claim 4, wherein the optical flow threshold comprises at least one of: an optical flow displacement threshold and an optical flow ratio threshold; the preset relation conditions comprise: a second optical flow displacement of the detection area of the target frame image is greater than or equal to a first preset multiple of the optical flow displacement threshold; and/or the optical flow ratio of the target frame image is greater than or equal to a second preset multiple of the optical flow ratio threshold.
6. The method of claim 4, wherein the optical flow threshold comprises at least one of: an optical flow displacement threshold and an optical flow ratio threshold;
the obtaining an optical flow threshold of the target frame image based on the optical flow information of the at least one frame sample frame image comprises:
acquiring first optical flow displacement of a detection area of each frame of sample frame image, and determining and obtaining an optical flow displacement threshold of the target frame image based on the first optical flow displacement; and/or
counting the number of first optical flow points of which the optical flow direction change is larger than a preset angle in a detection area of each frame of sample frame image, and determining an optical flow ratio threshold of the target frame image based on the number of the first optical flow points;
the comparing the optical flow information of the target frame image to the optical flow threshold comprises:
acquiring a second optical flow displacement of a detection area of the target frame image, and comparing the second optical flow displacement with the optical flow displacement threshold; and/or,
counting the number of second optical flow points of which the optical flow direction change is larger than the preset angle in the detection area of the target frame image, obtaining the optical flow ratio of the target frame image based on the number of the second optical flow points, and comparing the optical flow ratio with the optical flow ratio threshold.
7. The method according to claim 1, wherein the at least one frame of sample frame image is several frames of images before the target frame image;
the respectively carrying out target detection on the multiple frames of images shot by the camera device to obtain a detection area corresponding to the target pedestrian in each frame of image comprises:
detecting each frame of image by using a preset head-shoulder frame detection model to obtain a head-shoulder frame corresponding to the target pedestrian in each frame of image;
and obtaining the detection area corresponding to the target pedestrian from the head-shoulder frame corresponding to the target pedestrian.
8. An image processing apparatus comprising a memory and a processor coupled to each other;
the processor is configured to execute the program instructions stored by the memory to implement the method of any of claims 1 to 7.
9. The apparatus according to claim 8, further comprising an image pickup device for picking up a plurality of frames of images in time series.
10. A storage device storing program instructions executable by a processor to perform the method of any one of claims 1 to 7.
CN202110542200.XA 2019-05-28 2019-05-28 Pedestrian abnormal behavior detection method, image processing device and storage device Pending CN113408352A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110542200.XA CN113408352A (en) 2019-05-28 2019-05-28 Pedestrian abnormal behavior detection method, image processing device and storage device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110542200.XA CN113408352A (en) 2019-05-28 2019-05-28 Pedestrian abnormal behavior detection method, image processing device and storage device
CN201910452801.4A CN110222616B (en) 2019-05-28 2019-05-28 Pedestrian abnormal behavior detection method, image processing device and storage device

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201910452801.4A Division CN110222616B (en) 2019-05-28 2019-05-28 Pedestrian abnormal behavior detection method, image processing device and storage device

Publications (1)

Publication Number Publication Date
CN113408352A true CN113408352A (en) 2021-09-17

Family

ID=67818343

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110542200.XA Pending CN113408352A (en) 2019-05-28 2019-05-28 Pedestrian abnormal behavior detection method, image processing device and storage device
CN201910452801.4A Active CN110222616B (en) 2019-05-28 2019-05-28 Pedestrian abnormal behavior detection method, image processing device and storage device

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201910452801.4A Active CN110222616B (en) 2019-05-28 2019-05-28 Pedestrian abnormal behavior detection method, image processing device and storage device

Country Status (1)

Country Link
CN (2) CN113408352A (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401296B (en) * 2020-04-02 2023-09-29 浙江大华技术股份有限公司 Behavior analysis method, device and apparatus
CN112001229B (en) * 2020-07-09 2021-07-20 浙江大华技术股份有限公司 Method, device and system for identifying video behaviors and computer equipment
CN113223046B (en) * 2020-07-10 2022-10-14 浙江大华技术股份有限公司 Method and system for identifying prisoner behaviors
CN113569756B (en) * 2021-07-29 2023-06-09 西安交通大学 Abnormal behavior detection and positioning method, system, terminal equipment and readable storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101794450A (en) * 2009-11-13 2010-08-04 北京智安邦科技有限公司 Method and device for detecting smoke in video image sequence
JP2010199865A (en) * 2009-02-24 2010-09-09 Nec Corp Abnormality detection system, abnormality detection method, and abnormality detection program
JP2010250541A (en) * 2009-04-15 2010-11-04 Toyota Motor Corp Object detection device
CN102156880A (en) * 2011-04-11 2011-08-17 上海交通大学 Method for detecting abnormal crowd behavior based on improved social force model
KR20120025718A (en) * 2010-09-08 2012-03-16 중앙대학교 산학협력단 Apparatus and method for detecting abnormal behavior
JP2013092994A (en) * 2011-10-27 2013-05-16 Clarion Co Ltd Vehicle periphery monitoring device
CN103810717A (en) * 2012-11-09 2014-05-21 浙江大华技术股份有限公司 Human behavior detection method and device
CN104811660A (en) * 2014-01-27 2015-07-29 佳能株式会社 Control apparatus and control method
CN106204659A (en) * 2016-07-26 2016-12-07 浙江捷尚视觉科技股份有限公司 Elevator switch door detection method based on light stream
CN107346414A (en) * 2017-05-24 2017-11-14 北京航空航天大学 Pedestrian's attribute recognition approach and device
CN108052859A (en) * 2017-10-31 2018-05-18 深圳大学 A kind of anomaly detection method, system and device based on cluster Optical-flow Feature
CN108257188A (en) * 2017-12-29 2018-07-06 重庆锐纳达自动化技术有限公司 A kind of moving target detecting method
CN108288021A (en) * 2017-12-12 2018-07-17 深圳市深网视界科技有限公司 A kind of crowd's accident detection method, electronic equipment and storage medium
CN109101929A (en) * 2018-08-16 2018-12-28 新智数字科技有限公司 A kind of pedestrian counting method and device
CN109697409A (en) * 2018-11-27 2019-04-30 北京文香信息技术有限公司 A kind of feature extracting method of moving image and the recognition methods for motion images of standing up
CN109697394A (en) * 2017-10-24 2019-04-30 京东方科技集团股份有限公司 Gesture detecting method and gestures detection equipment

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473533B (en) * 2013-09-10 2017-03-15 上海大学 Moving Objects in Video Sequences abnormal behaviour automatic testing method
CN104123544B (en) * 2014-07-23 2018-03-13 通号通信信息集团有限公司 Anomaly detection method and system based on video analysis
CN105046285B (en) * 2015-08-31 2018-08-17 武汉鹰视智能科技有限公司 A kind of abnormal behaviour discrimination method based on kinematic constraint
CN105550678B (en) * 2016-02-03 2019-01-18 武汉大学 Human action feature extracting method based on global prominent edge region
CN106127148B (en) * 2016-06-21 2019-03-12 日立电梯(广州)自动扶梯有限公司 A kind of escalator passenger's anomaly detection method based on machine vision
CN106980829B (en) * 2017-03-17 2019-09-20 苏州大学 Abnormal behaviour automatic testing method of fighting based on video analysis
CN107610108B (en) * 2017-09-04 2019-04-26 腾讯科技(深圳)有限公司 Image processing method and device
CN109034126B (en) * 2018-08-31 2021-09-28 上海理工大学 Micro-expression recognition method based on optical flow main direction
CN109711344B (en) * 2018-12-27 2023-05-26 东北大学 Front-end intelligent specific abnormal behavior detection method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Liu Jingjing; Tao Huawei; Luo Lin; Zhao Li; Zou Cairong: "Video image abnormal behavior detection algorithm fusing histogram of oriented gradients and optical flow features", 信号处理 (Journal of Signal Processing), vol. 32, no. 01, 25 January 2016 (2016-01-25), pages 1-7 *

Also Published As

Publication number Publication date
CN110222616B (en) 2021-08-31
CN110222616A (en) 2019-09-10

Similar Documents

Publication Publication Date Title
CN110222616B (en) Pedestrian abnormal behavior detection method, image processing device and storage device
Zhang et al. Wide-area crowd counting via ground-plane density maps and multi-view fusion cnns
Surasak et al. Histogram of oriented gradients for human detection in video
US9646212B2 (en) Methods, devices and systems for detecting objects in a video
CN103824070B (en) A kind of rapid pedestrian detection method based on computer vision
Lee et al. Occlusion handling in videos object tracking: A survey
US8706663B2 (en) Detection of people in real world videos and images
TWI503756B (en) Human image tracking system, and human image detection and human image tracking methods thereof
JP2007080262A (en) System, method and program for supporting 3-d multi-camera video navigation
JP2023015989A (en) Item identification and tracking system
JP7131587B2 (en) Information processing system, information processing device, information processing method and program
CN110070003B (en) Abnormal behavior detection and optical flow autocorrelation determination method and related device
Wang et al. Template-based people detection using a single downward-viewing fisheye camera
Zoidi et al. Stereo object tracking with fusion of texture, color and disparity information
JP2005503731A (en) Intelligent 4-screen simultaneous display through collaborative distributed vision
Lyu et al. Extract the gaze multi-dimensional information analysis driver behavior
WO2022095818A1 (en) Methods and systems for crowd motion summarization via tracklet based human localization
US11544926B2 (en) Image processing apparatus, method of processing image, and storage medium
CN116824641A (en) Gesture classification method, device, equipment and computer storage medium
JP6163732B2 (en) Image processing apparatus, program, and method
CN116051736A (en) Three-dimensional reconstruction method, device, edge equipment and storage medium
KR101241813B1 (en) Apparatus and method for detecting objects in panoramic images using gpu
AU2008356238A1 (en) Method and device for analyzing video signals generated by a moving camera
JP7152651B2 (en) Program, information processing device, and information processing method
Zúniga et al. Fast and reliable object classification in video based on a 3D generic model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination