CN114241397A - Frontier defense video intelligent analysis method and system - Google Patents

Frontier defense video intelligent analysis method and system

Info

Publication number
CN114241397A
CN114241397A (application number CN202210164370.3A; granted publication CN114241397B)
Authority
CN
China
Prior art keywords
target
behavior
video
pictures
targets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210164370.3A
Other languages
Chinese (zh)
Other versions
CN114241397B (en
Inventor
彭凯
陈程鹏
桂宾
周昂
徐晓慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Fenghuo Kaizhuo Technology Co ltd
Original Assignee
Wuhan Fenghuo Kaizhuo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Fenghuo Kaizhuo Technology Co ltd filed Critical Wuhan Fenghuo Kaizhuo Technology Co ltd
Priority to CN202210164370.3A priority Critical patent/CN114241397B/en
Publication of CN114241397A publication Critical patent/CN114241397A/en
Application granted granted Critical
Publication of CN114241397B publication Critical patent/CN114241397B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Burglar Alarm Systems (AREA)

Abstract

The invention provides a frontier defense video intelligent analysis method and system, wherein the method comprises the following steps: step 1, numbering all video sources, sequentially acquiring pictures from all video sources according to the numbers, and numbering the acquired pictures, wherein the pictures comprise a plurality of targets; step 2, identifying each target from the pictures by a target detection model trained on a frontier defense data set, and associating and numbering the identified targets based on an improved multi-target tracking model; and step 3, analyzing the target behavior of each identified target and determining the targets with abnormal behavior. The invention can identify moving targets from a plurality of video sources and analyze their abnormal behavior, and is suitable for the security field.

Description

Frontier defense video intelligent analysis method and system
Technical Field
The invention relates to the field of intelligent security, in particular to a frontier defense video intelligent analysis method and system.
Background
With the continuous advancement of important national engineering projects in China such as emergency systems, safe cities, safe campuses, and science-and-technology policing, video monitoring receives more and more attention and is widely applied in places such as schools, frontiers, and roads. The deployment of video monitoring systems provides a reliable and powerful guarantee for people's daily life, property safety, and national security; for example, the roughly 1300 video channels along the entire Qinghai-Tibet railway use video analysis to comprehensively and effectively protect the safety of the whole line. With the development of video monitoring technology, intelligent video surveillance (IVS) has been proposed and is now widely applied.
Intelligent video monitoring uses computer vision technology to describe, analyze, and understand a video frame sequence without human intervention, obtaining valuable target data from the video; on that basis it realizes functions such as automatic monitoring and timely warning of abnormal behaviors of suspicious persons, giving the video monitoring system human-like intelligence. Intelligent video monitoring thoroughly changes traditional video monitoring, which relied on manual supervision of monitoring pictures and manual content understanding, and provides a broader stage for the development of video monitoring.
Disclosure of Invention
Aiming at the above technical problems in the monitoring and security field in the prior art, the invention provides a frontier defense video intelligent analysis method and system that are cross-platform, simple to operate, and efficient in analysis.
According to a first aspect of the present invention, there is provided a frontier defense video intelligent analysis method, including:
step 1, numbering all video sources, sequentially acquiring pictures from all video sources according to the numbers, and numbering the acquired pictures, wherein the pictures comprise a plurality of targets;
step 2, identifying each target from the picture by a target detection model trained on the basis of a frontier defense data set, and associating and numbering a plurality of identified targets on the basis of an improved multi-target tracking model;
and 3, analyzing the target behavior of each identified target, and determining the target of the abnormal behavior.
According to a second aspect of the present invention, there is provided a frontier defense video intelligent analysis system, comprising:
the acquisition module is used for numbering all video sources, acquiring pictures from all video sources in sequence according to the numbers, and numbering the acquired pictures, wherein the pictures comprise a plurality of targets;
the recognition module is used for recognizing each target from the picture based on a target detection model trained by a frontier defense data set, associating and numbering a plurality of recognized targets based on an improved multi-target tracking model;
and the determining module is used for analyzing the target behavior of each identified target and determining the abnormal behavior target.
According to a third aspect of the present invention, there is provided an electronic device, comprising a memory and a processor, wherein the processor is configured to implement the steps of the frontier defense video intelligent analysis method when executing a computer program stored in the memory.
According to a fourth aspect of the present invention, there is provided a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the frontier defense video intelligent analysis method.
According to the frontier defense video intelligent analysis method and system, moving targets can be identified from a plurality of video sources and their abnormal behavior analyzed, which suits the security field. Multiple video sources can be analyzed simultaneously, improving analysis efficiency; the system adapts to various operating systems and can be used without changing any hardware equipment, enhancing the reusability of the original equipment and thereby reducing cost.
Drawings
FIG. 1 is a flow chart of a frontier defense video intelligent analysis method provided by the invention;
FIG. 2 is a diagram illustrating a determination of whether a point is within a region;
FIG. 3 is a schematic illustration of a boundary crossing in an analysis method;
FIG. 4 is a schematic illustration of intrusion in an analysis method;
FIG. 5 is a schematic diagram of retrograde movement in the analysis method;
fig. 6 is a schematic structural diagram of a frontier defense video intelligent analysis system provided by the present invention;
FIG. 7 is a schematic diagram of a hardware structure of a possible electronic device according to the present invention;
fig. 8 is a schematic diagram of a hardware structure of a possible computer-readable storage medium according to the present invention.
Detailed Description
The following detailed description of embodiments of the present invention is provided in connection with the accompanying drawings and examples. The following examples are intended to illustrate the invention but are not intended to limit the scope of the invention.
In the field of intelligent security, existing intelligent video analysis systems usually determine a moving target by one of the following three methods. (1) The inter-frame difference method arranges the frames shot by the camera in shooting order, performs a difference operation on each pair of adjacent frames, and extracts the changed part of the video frame to locate the moving target in the video. Its biggest defect is that a complete moving object is difficult to separate. (2) The background difference method first creates a background model from the first frame read from the video, then differences each monitored video frame against the background model to locate the moving target. Its main advantage is that it suits complex and changeable background environments and easily detects targets with a large range of motion; its disadvantage is that it is very susceptible to lighting. For the background difference method to work well, the background model must be updated in time. (3) The optical flow method infers the moving direction and speed of an object from how the intensity of the pixels of the detected image changes over time. Its disadvantages are a complex calculation process and a huge amount of computation during detection, so the demand on the computer's processing capacity is high and real-time monitoring is difficult to achieve.
All three methods have obvious defects and are greatly limited when detecting static or overlapping targets. To meet the requirement of quickly and accurately determining moving targets in actual scenes, the invention provides a frontier defense video intelligent analysis method that, based on a plurality of video sources, can identify and detect moving targets and detect their abnormal behaviors.
Example one
A frontier defense video intelligent analysis method, referring to fig. 1, the frontier defense video intelligent analysis method includes:
step 1, numbering all video sources, sequentially obtaining pictures from all video sources according to the numbers, and numbering the obtained pictures, wherein the pictures comprise a plurality of targets.
As an example, the step 1 comprises: storing all video sources in a list and numbering each video source a; in each round, one picture is obtained from every video source in turn, and the picture is numbered C_ab, where a represents the video source number and b represents the b-th picture under video source a. The step 1 further comprises extracting key pictures from the obtained pictures: if b in picture C_ab is odd, picture C_ab is discarded; otherwise, picture C_ab is stored, in order, in a picture queue Q.
It can be understood that the video recording can be performed in real time for the area to be monitored, and different monitoring devices can be adopted to record the video. And storing the recorded different video sources in a video source list, and numbering each video source in sequence.
When the pictures are obtained from the video sources, they are obtained in multiple rounds, one picture from each video source per round, so that multiple pictures are obtained, each containing a plurality of targets. The pictures obtained from the video sources are numbered in sequence C_11, C_21, ..., C_a1, C_12, C_22, ..., C_ab. Since the total number of pictures of each video source may differ, once all pictures of a given video source have been obtained in some round, a flag is set for that source, and it is skipped when pictures are taken in later rounds.
After the pictures are obtained from the video sources, the key pictures are extracted, which can be understood as screening the obtained pictures: if b in the current picture C_ab is odd, that is, b mod 2 = 1, the picture is discarded; otherwise, the picture is placed into the picture queue Q.
It should be noted that, the pictures are stored in the picture queue Q in a certain order, the pictures acquired in the first round are stored first, then the pictures acquired in the second round are stored, and so on, and the pictures acquired in multiple rounds are all stored in the picture queue Q.
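As a rough illustration (not the patent's implementation; the function name `fill_queue` and the in-memory frame lists are assumptions), the round-robin acquisition, the per-source exhaustion flag, and the odd-numbered-frame filtering described above can be sketched in Python as:

```python
from collections import deque

def fill_queue(sources):
    """Round-robin over video sources; keep only even-numbered pictures.

    `sources` maps video-source number a to a list of frames. Picture C_ab
    is the b-th picture taken from source a; pictures with odd b are
    discarded, the rest enter queue Q in acquisition order.
    """
    Q = deque()
    counters = {a: 0 for a in sources}      # b counter per source
    finished = set()                        # sources whose frames ran out
    while len(finished) < len(sources):
        for a, frames in sources.items():   # one round: one picture per source
            if a in finished:
                continue                    # flagged source is skipped
            if counters[a] >= len(frames):
                finished.add(a)             # set the "exhausted" flag
                continue
            counters[a] += 1
            b = counters[a]
            if b % 2 == 0:                  # odd b: discard as non-key frame
                Q.append(((a, b), frames[b - 1]))
    return Q
```

With two sources of unequal length, the queue interleaves the kept frames round by round, matching the ordering described for queue Q.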
And 2, identifying each target from the picture by a target detection model trained on the basis of the frontier defense data set, and associating and numbering a plurality of identified targets on the basis of the improved multi-target tracking model.
As an example, the step 2 includes: inputting the obtained pictures into the target detection model in sequence, inputting the output of the target detection model into the multi-target tracking model, and obtaining, as output by the multi-target tracking model, the category of each target, a target rectangular frame representing the target position information, the target ID, the video source to which the target belongs, and the time at which the target appears.
It can be understood that each picture in the picture queue Q may be sequentially input into the target detection model, the target identification result of each picture is output and then input into the improved multi-target tracking model, and the multi-target tracking model outputs the category of each target, a target rectangular frame representing the target position information, the target ID, the video source to which the target belongs, and the time at which the target appears.
The target detection model identifies targets in the pictures, can identify the target position (represented by a target rectangular frame) and the type of the target in each picture, and the multi-target tracking model associates a plurality of targets output by the target detection model and judges whether the targets identified from different pictures are associated targets, namely whether the targets are the same target.
Specifically, because the frontier defense background environment is mostly complicated, a target detection model trained on public datasets cannot meet the requirements, so the target detection model must first be retrained with a frontier defense dataset collected under frontier defense cameras. In addition, in the original multi-target tracking algorithm, whether the targets in two consecutive frames are the same target is determined by combining motion information and target appearance information. The degree of association of the motion information is as follows:
d^(1)(i, j) = (d_j - y_i)^T S_i^(-1) (d_j - y_i);
wherein d_j indicates the detected position of the jth target rectangular frame, y_i represents the position of the ith target predicted by the improved multi-target tracking model, S_i represents the covariance matrix between the detected position of the jth target and the predicted position of the ith target, and (x - y)^T denotes the transpose of (x - y). The detected position of the target (the position of the target detection frame) is given by the target detection model, and the predicted position of the target is determined by the multi-target tracking model.
The degree of association of the target appearance information is as follows:
d^(2)(i, j) = min{ 1 - r_j^T r_k^(i) | r_k^(i) ∈ R_i };
wherein r_j is the 128-dimensional feature vector computed by the target detection model for the jth target, R_i is the set of the most recent feature vectors successfully associated with the ith target, and r_k^(i) is the kth 128-dimensional feature vector in R_i.
The final degree of association is a linear weighting of the target motion information and the target appearance information:
c_(i,j) = λ d^(1)(i, j) + (1 - λ) d^(2)(i, j);
wherein λ is the weight of the degree of association of the target motion information, taken as 0.01 to 0.1.
The improved multi-target tracking model associates targets in different pictures and assigns the same target ID to the same target. Considering that the extraction of key frames amplifies the error of the target motion information and causes excessive target ID switches, the method sets λ to 0, reducing the contribution of the motion-information error to 0 and assigning the target ID depending only on the target appearance information.
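A minimal sketch of the appearance-only association that results from setting the motion weight λ to 0 (function names are illustrative, and feature vectors are assumed already unit-normalized, as is typical for such appearance embeddings):

```python
def appearance_cost(r_j, gallery):
    """d2(i, j): smallest cosine distance between the detection feature r_j
    and the gallery R_i of the track's recent features (unit-norm vectors)."""
    return min(1.0 - sum(a * b for a, b in zip(r_j, r_k)) for r_k in gallery)

def combined_cost(d1, d2, lam=0.0):
    """c_ij = lam * d1 + (1 - lam) * d2; with lam = 0 the association
    depends only on the appearance term, as the method describes."""
    return lam * d1 + (1.0 - lam) * d2
```

With `lam=0.0`, any error in the motion term `d1` (amplified by key-frame skipping) has no effect on the assignment cost.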
And 3, analyzing the target behavior of each identified target, and determining the target of the abnormal behavior.
It can be understood that the target detection model and the multi-target tracking model are used for identifying the target from the picture, and the targets are associated, that is, the moving target is identified. In the step, the behavior of the target is analyzed according to the motion information of the moving target, and abnormal behaviors of the target, such as illegal behaviors, are identified.
The abnormal behaviors of the target comprise border crossing, gathering, retrograde movement, intrusion, loitering and retention, and the like. Before analyzing the behavior of the target, an alert zone is set. The method adopts the ray casting method to judge whether a target is in the alert zone: a ray is emitted from the center of the target, and if the number of intersection points between the ray and the boundary of the alert zone is odd, the target point is within the zone. As shown in FIG. 2, a straight line parallel to the X axis is drawn through the Y coordinate of the target point and forms 8 intersection points with the alert-zone boundary, 5 on the left side and 3 on the right side; since the number of intersections on either side is odd, the target is judged to be in the zone by the ray casting method. The specific algorithm is as follows:
Algorithm: determine whether a point is within a polygon
Input: x0, y0, the coordinates of the point to be judged
Output: True or False, whether the point is within the polygon
1: function ISCONTAINS(x0, y0)
2:   crossings ← 0
3:   for each edge (x1, y1)-(x2, y2) of the polygon do
4:     if the edge straddles the horizontal line y = y0, i.e. (y1 > y0) ≠ (y2 > y0) then
5:       x_cross ← x1 + (y0 - y1)(x2 - x1) / (y2 - y1)
6:       if x_cross < x0 then
7:         crossings ← crossings + 1
8:   endfor
9:   if crossings is odd then
10:    return True
11:  endif
12:  return False
13: endfunction
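The pseudocode above is the standard even-odd ray-casting test. A small Python sketch under the assumption that the alert zone is given as a list of polygon vertices:

```python
def is_contains(x0, y0, polygon):
    """Even-odd (ray casting) test: count crossings between the polygon
    boundary and the horizontal ray going left from (x0, y0); an odd
    count means the point is inside the alert zone."""
    crossings = 0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y0) != (y2 > y0):  # edge straddles the line y = y0
            x_cross = x1 + (y0 - y1) * (x2 - x1) / (y2 - y1)
            if x_cross < x0:        # intersection lies left of the point
                crossings += 1
    return crossings % 2 == 1
```

Counting only crossings on one side of the point is equivalent to the description in FIG. 2, where the intersection counts on the left (5) and right (3) are each odd for an interior point.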
The schematic diagram for determining the boundary crossing of the target can be seen in FIG. 3. The moving target is represented by its rectangular frame, where the upper-left corner (x1, y1) and the lower-right corner (x2, y2) of the rectangular frame give the position information of the moving target. The two end points of the boundary line have coordinates (x3, y3) and (x4, y4); from these two end-point coordinates, the equation of the boundary line can be obtained:
y = kx + l;
wherein k and l are derived from the formulas:
k = (y4 - y3) / (x4 - x3), l = y3 - k·x3.
When the target crosses the boundary, the two corner points of the target rectangular frame (the upper-left corner and the lower-right corner) necessarily lie on the two sides of the boundary line, so the following condition is met:
(k·x1 + l - y1)(k·x2 + l - y2) < 0.
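The boundary-crossing condition, two corners of the target rectangle on opposite sides of the boundary line, can be sketched as follows (a hypothetical helper that assumes a non-vertical boundary line, so the slope is defined):

```python
def crosses_line(box, p3, p4):
    """Target rectangle crosses the boundary line when its upper-left and
    lower-right corners lie on opposite sides of the line through p3, p4."""
    x1, y1, x2, y2 = box            # upper-left (x1, y1), lower-right (x2, y2)
    (x3, y3), (x4, y4) = p3, p4
    k = (y4 - y3) / (x4 - x3)       # line slope; assumes x3 != x4
    l = y3 - k * x3                 # line intercept
    # Opposite signs of the two residuals mean the corners straddle the line.
    return (k * x1 + l - y1) * (k * x2 + l - y2) < 0
```

A vertical boundary line would need the symmetric test on x coordinates instead; the patent's formula covers the non-vertical case.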
when the target invades, the schematic diagram is shown in fig. 4, the target invades, that is, the target is located in the alert area, and the specific determination method may refer to the above-mentioned injection line method, and the description is not repeated here.
Target retrograde detection is shown in FIG. 5. A vector v1 represents the initial moving direction of the target, and a vector v2 represents the moving direction of the target after a period of time; the target is judged to be moving retrograde when the dot product of the two vectors is less than 0, namely:
v1 · v2 < 0.
the specific algorithm of the target reverse is as follows:
algorithmic retrograde detection
Inputting:
Figure 995070DEST_PATH_IMAGE028
moving target position coordinate, basic direction and interval frame number
1: function ISRETROGRADE
Figure 54293DEST_PATH_IMAGE029
2:
Figure 565039DEST_PATH_IMAGE030
3:
Figure 658897DEST_PATH_IMAGE031
Current frame number
4: if (x0,y0) Within the alarm area then
5 if the object is in the list
Figure 414143DEST_PATH_IMAGE032
Middle then
6:
Figure 836028DEST_PATH_IMAGE033
7: if
Figure 478362DEST_PATH_IMAGE034
then
8, judging whether the retrograde motion exists according to a retrograde motion judgment formula
9: endif
10: else
11:
Figure 770934DEST_PATH_IMAGE035
12: endif
13: endif
14: endfunction
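A compact sketch of the retrograde test against a basic direction (names are illustrative; the two positions are assumed to be sampled the configured number of frames apart):

```python
def is_retrograde(p_prev, p_curr, base_dir):
    """Target moves retrograde when the dot product of its displacement
    vector and the basic (allowed) direction is negative."""
    vx = p_curr[0] - p_prev[0]
    vy = p_curr[1] - p_prev[1]
    return vx * base_dir[0] + vy * base_dir[1] < 0
```

A zero dot product (movement perpendicular to the basic direction) is not counted as retrograde, consistent with the strict inequality v1 · v2 < 0.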
As an example, the presence of aggregation behavior or wandering retention behavior of a target is determined by: counting the number of targets in the warning area, and determining that aggregation behaviors exist in a plurality of targets if the number of the targets exceeds a set number threshold; and calculating the time for the target to stay in the warning area, and determining that the target has a lingering behavior if the stay time of the target exceeds a set time threshold.
It will be appreciated that the principle of aggregation detection is similar to that of intrusion detection: aggregation behavior is judged to have occurred when the number of targets within the alert zone exceeds a threshold.
The principle of retention detection is to calculate the time the target stays in the alert zone; if the stay time exceeds a threshold, the target is judged to be retained. If the target moves, the timing is restarted.
The loitering detection principle is similar to retention detection; the only difference is that loitering concerns a moving target: if the target keeps moving within the alert zone and the moving time exceeds a threshold, the target is judged to be loitering.
The specific algorithm of target aggregation detection is as follows:
Algorithm: aggregation detection
Input: the aggregation upper limit N
Output: True or False, whether aggregation occurs
1: function ISGATHER(N)
2:   initialize the number of moving targets in the current frame image: count ← 0
3:   traverse each target and judge whether it is in the alarm area:
4:   for each target t do
5:     if t is within the alarm area then
6:       count ← count + 1
7:     endif
8:   endfor
9:   judge whether the target count in the current frame image reaches the aggregation upper limit:
10:  if count ≥ N then
11:    return True
12:  else
13:    return False
14:  endif
15: endfunction
The principle of the target aggregation detection is to determine whether aggregation behavior has occurred by determining whether a target within an alert area exceeds a threshold.
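A sketch of aggregation detection under the assumption that targets are given as center points and the zone test is any point-in-zone predicate (e.g. the ray-casting test above):

```python
def is_gather(targets, zone_test, limit):
    """Count targets whose center lies in the alert zone; aggregation
    occurs when the count reaches the configured upper limit."""
    count = sum(1 for (x, y) in targets if zone_test(x, y))
    return count >= limit
```

Decoupling the zone predicate from the counting keeps the same `zone_test` reusable for intrusion, aggregation, and loitering checks.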
The specific algorithm for detecting the loitering and retention behavior is as follows:
Algorithm: loitering detection
Input: (x0, y0), the moving-target position coordinates; the loitering threshold T
Output: True or False, whether the target is loitering
1: function ISWANDER(x0, y0, T)
2:   id ← ID of the current target
3:   f ← current frame number
4:   if (x0, y0) is within the alarm area then
5:     if the target enters the alarm area for the first time then
6:       initialize the entry frame f_enter ← f and the stay time t ← 0
7:       store the target in the record list W
8:     else
9:       continuously update the stay time t ← f - f_enter
10:    endif
11:  else
12:    if the target is in W then
13:      delete the target from W
14:    endif
15:  endif
16:  if the target is in W then
17:    if t > T then
18:      return True
19:    endif
20:  endif
21:  return False
22: endfunction
By continuously updating the time the target stays in the alert zone and comparing it against the moment the target first entered the zone, the target is judged to be loitering if the elapsed time exceeds the threshold.
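A minimal sketch of this stay-time bookkeeping (frame-based timing and the `state` dictionary are illustrative assumptions, not the patent's data structures):

```python
def update_wander(state, target_id, in_zone, frame, threshold):
    """Track per-target stay time in frames; report loitering once the
    stay time exceeds the threshold.

    `state` maps target_id to the frame of first entry into the alert zone.
    """
    if in_zone:
        state.setdefault(target_id, frame)  # record first-entry frame once
        return frame - state[target_id] > threshold
    state.pop(target_id, None)              # left the zone: reset the timer
    return False
```

Removing the entry when the target leaves the zone matches the algorithm's deletion from the record list W, so re-entering restarts the timing.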
As an embodiment, the step 3 further includes: storing target information of each abnormal-behavior target into a violation queue E, wherein the target information comprises the target type, the target ID, the violation time, and the target picture; acquiring target information from the violation queue E and caching it locally; and when a target has not been detected for longer than a set time interval, uploading the locally cached target information of that target to the database.
It will be appreciated that target information of the detected targets with abnormal behavior (referred to as violation targets) is stored in the violation queue. The violation target information is obtained from queue E and cached locally; when a target has left the screen for 30 s without returning, its information is uploaded to the database. Caching reduces the number of interactions with the database, reduces resource waste in the analysis process, and improves system performance.
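A sketch of the local violation cache with delayed upload (the `upload` callback and the record layout are assumptions for illustration; the 30 s timeout follows the description):

```python
import time

class ViolationCache:
    """Cache violation records locally; flush a target's record to the
    database only after the target has been unseen for `timeout` seconds.
    `upload` stands in for the real database call."""

    def __init__(self, upload, timeout=30.0):
        self.upload = upload
        self.timeout = timeout
        self.cache = {}                 # target ID -> (record, last-seen time)

    def report(self, target_id, record, now=None):
        """Record or refresh a violation; last-seen time is updated."""
        self.cache[target_id] = (record, time.time() if now is None else now)

    def flush(self, now=None):
        """Upload records of targets unseen longer than the timeout."""
        now = time.time() if now is None else now
        stale = [t for t, (_, seen) in self.cache.items()
                 if now - seen > self.timeout]
        for tid in stale:
            record, _ = self.cache.pop(tid)
            self.upload(tid, record)    # one database interaction per target
```

Batching the upload this way is one plausible reading of how the described caching reduces interactions with the database.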
Example two
A frontier defense video intelligent analysis system, see fig. 6, includes an obtaining module 601, a recognition module 602, and a determining module 603.
The acquiring module 601 is configured to number all video sources, sequentially acquire pictures from all video sources according to the numbers, and number the acquired pictures, where the pictures include multiple targets; the identification module 602 is configured to identify each target from the picture based on a target detection model trained by a frontier defense dataset, and associate and number a plurality of identified targets based on an improved multi-target tracking model; the determining module 603 is configured to analyze the target behavior of each identified target, and determine an abnormal behavior target.
It can be understood that the frontier defense video intelligent analysis system provided by the present invention corresponds to the frontier defense video intelligent analysis method provided by the foregoing embodiments, and the relevant technical features of the frontier defense video intelligent analysis system may refer to the relevant technical features of the frontier defense video intelligent analysis method, and are not described herein again.
EXAMPLE III
Referring to fig. 7, fig. 7 is a schematic view of an embodiment of an electronic device according to an embodiment of the invention. As shown in fig. 7, an embodiment of the present invention provides an electronic device 700, which includes a memory 710, a processor 720, and a computer program 711 stored in the memory 710 and running on the processor 720, where the processor 720 implements the frontier video intelligent analysis method of the first embodiment when executing the computer program 711.
Example four
Referring to fig. 8, fig. 8 is a schematic diagram of an embodiment of a computer-readable storage medium according to the present invention. As shown in fig. 8, the present embodiment provides a computer-readable storage medium 800, on which a computer program 811 is stored, and when the computer program 811 is executed by a processor, the intelligent analysis method for frontier video according to the first embodiment is implemented.
According to the frontier defense video intelligent analysis method and system provided by the embodiments of the invention, moving targets can be identified from a plurality of video sources and their abnormal behavior analyzed, which suits the security field. Multiple video sources can be analyzed simultaneously, improving analysis efficiency; the system adapts to various operating systems and can be used without changing any hardware equipment, enhancing the reusability of the original equipment and thereby reducing cost.
It should be noted that, in the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to relevant descriptions of other embodiments for parts that are not described in detail in a certain embodiment.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A frontier defense video intelligent analysis method is characterized by comprising the following steps:
step 1, numbering all video sources, sequentially acquiring pictures from all video sources according to the numbers, and numbering the acquired pictures, wherein the pictures comprise a plurality of targets;
step 2, identifying each target from the picture by a target detection model trained on the basis of a frontier defense data set, and associating and numbering a plurality of identified targets on the basis of an improved multi-target tracking model;
and 3, analyzing the target behavior of each identified target, and determining the target of the abnormal behavior.
2. The frontier defense video intelligent analysis method according to claim 1, wherein the step 1 comprises:
storing all video sources in a list, and numbering each video source as a;
each round of the method sequentially obtains one picture from every video source, the picture being numbered C_ab, wherein a represents the video source number and b represents the b-th picture under video source a;
the step 1 further comprises extracting key pictures from the obtained pictures:
if b in picture C_ab is odd, the picture C_ab is discarded; otherwise, the picture C_ab is stored in sequence in a picture queue Q.
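The acquisition and key-picture scheme of claims 1-2 (round-robin over numbered sources, discarding odd-numbered pictures) can be sketched as follows. This is a minimal illustration rather than the patented implementation; the callable stand-ins for video sources and the queue layout are assumptions:

```python
from collections import deque

def acquire_round(video_sources, counters, queue):
    """One acquisition round: take one picture from each numbered source.

    video_sources: list of callables, each returning the next frame
    counters: per-source frame counters (the b in the numbering C_ab)
    queue: picture queue Q holding only key pictures
    """
    for a, grab in enumerate(video_sources):
        counters[a] += 1
        b = counters[a]
        frame = grab()
        # Key-picture extraction: discard odd-numbered pictures and
        # keep even-numbered ones, halving the processing load.
        if b % 2 == 0:
            queue.append((a, b, frame))

# Usage with dummy sources that just return a label per source number
sources = [lambda a=a: f"frame-from-{a}" for a in range(3)]
counters = {a: 0 for a in range(3)}
Q = deque()
for _ in range(4):          # four acquisition rounds
    acquire_round(sources, counters, Q)
print(len(Q))               # 3 sources x 2 even-numbered rounds = 6
```

A real system would replace the dummy callables with per-camera frame grabbers; the even/odd filter is a cheap fixed-rate subsampler that needs no decoding of inter-frame content.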
3. The frontier defense video intelligent analysis method according to claim 1 or 2, wherein the step 2 of associating and numbering the detected multiple targets based on the improved multi-target tracking model comprises:
calculating the association degrees of two targets in two adjacent frames of pictures according to the target motion information and the target appearance information, wherein the association degrees comprise the association degree of the target motion information and the association degree of the target appearance information;
the association degree of the target motion information is:

d^(1)(i, j) = (d_j − y_i)^T S_i^{−1} (d_j − y_i)

wherein d_j indicates the detected position of the j-th target rectangular frame, y_i represents the position of the i-th target predicted by the improved multi-target tracking model, S_i represents the covariance matrix between the detected position of the j-th target and the predicted position of the i-th target, and the superscript T denotes the matrix transpose;
the association degree of the target appearance information is:

d^(2)(i, j) = min{ 1 − r_j^T r_k^(i) | r_k^(i) ∈ R_i }

wherein r_j is the 128-dimensional feature vector calculated for the j-th target by the target detection model, R_i is the set of feature vectors from the last set number of successful associations of the i-th target, and r_k^(i) is the k-th 128-dimensional feature vector in R_i;
a linear weighting of the target motion information term and the target appearance information term serves as the final association degree:

c(i, j) = λ · d^(1)(i, j) + (1 − λ) · d^(2)(i, j)

wherein λ is the weight of the association degree of the target motion information, taking a value of 0.01 to 0.1;
and determining two targets with the final association degree larger than the set association degree threshold value as the same target.
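A hedged sketch of the association computation in claim 3: a Mahalanobis motion term, a cosine appearance term, and their linear weighting. Function and variable names are illustrative assumptions, and feature vectors are assumed unit-normalized:

```python
import numpy as np

def motion_association(d_j, y_i, S_i):
    """Mahalanobis distance between detected position d_j and predicted
    position y_i under covariance S_i: (d_j - y_i)^T S_i^{-1} (d_j - y_i)."""
    diff = d_j - y_i
    return float(diff.T @ np.linalg.inv(S_i) @ diff)

def appearance_association(r_j, R_i):
    """Smallest cosine distance between the detection's 128-d feature r_j
    and the track's gallery R_i of recently associated features."""
    return float(min(1.0 - r_j @ r_k for r_k in R_i))

def final_association(d_j, y_i, S_i, r_j, R_i, lam=0.05):
    """Linear weighting with motion weight lam (0.01-0.1 per the claim)."""
    return (lam * motion_association(d_j, y_i, S_i)
            + (1 - lam) * appearance_association(r_j, R_i))

# A detection that matches the prediction exactly and shares a gallery
# feature yields a combined value of 0 (perfect match).
d = np.array([10.0, 20.0])
y = np.array([10.0, 20.0])
S = np.eye(2)
r = np.zeros(128)
r[0] = 1.0                                  # unit-norm feature vector
print(final_association(d, y, S, r, [r]))   # 0.0
```

Note that in this distance form a smaller value indicates a stronger match; the claim phrases the comparison as an association degree exceeding a threshold, so the sign convention used for thresholding here is an assumption.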
4. The frontier defense video intelligent analysis method according to claim 3, wherein the step 2 comprises:
and inputting the obtained pictures into the target detection model in sequence, inputting the output of the target detection model into the improved multi-target tracking model, and obtaining, as output by the multi-target tracking model, each target's category, a target rectangular frame representing the target position information, a target ID, the video source, and the occurrence time.
5. The frontier video intelligent analysis method according to claim 4, wherein the abnormal behavior comprises border crossing behavior, gathering behavior, retrograde behavior, intrusion behavior, and wandering retention behavior.
6. The frontier defense video intelligent analysis method according to claim 5, characterized in that whether the target has intrusion behavior is determined by:
defining a warning area, and judging whether the target is in the warning area by the ray casting method:
casting a ray from the center of the target rectangular frame; if the number of intersection points between the ray and the boundary of the warning area is odd, it is determined that the target has intrusion behavior in the warning area;
determining that the target has border crossing behavior by:

representing the position information of the moving target by the top-left coordinates (x_1, y_1) and the bottom-right coordinates (x_2, y_2) of the target rectangular frame, the coordinates of the two end points of the boundary line being (x_3, y_3) and (x_4, y_4) respectively;

obtaining the boundary line equation y = kx + l from the two end point coordinates, wherein k and l are derived from:

k = (y_4 − y_3) / (x_4 − x_3), l = y_3 − k·x_3

when (k·x_1 + l − y_1)(k·x_2 + l − y_2) < 0 is satisfied, the top-left and bottom-right coordinates of the target rectangular frame lie on opposite sides of the boundary line, and it is determined that the target has border crossing behavior;
determining that the target has retrograde behavior by:

representing the initial moving direction of the target by a vector v_1, and the moving direction after a preset time period by a vector v_2; if v_1 · v_2 < 0 is satisfied, it is determined that the target has retrograde behavior;
determining that the target has aggregation behavior or wandering retention behavior by:
counting the number of targets in the warning area, and if the number of targets exceeds a set number threshold, determining that the plurality of targets have aggregation behavior;
and calculating the time each target stays in the warning area, and if a target's stay time exceeds a set time threshold, determining that the target has wandering retention behavior.
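The geometric tests of claim 6 (ray casting for intrusion, a line-side test for border crossing, and a dot product for retrograde motion) can be illustrated with a short sketch; function names and the polygon representation are assumptions, and the line-side test assumes a non-vertical boundary line:

```python
def crosses_boundary(x1, y1, x2, y2, x3, y3, x4, y4):
    """True if the box corners (x1,y1) and (x2,y2) lie on opposite
    sides of the boundary line through (x3,y3) and (x4,y4)."""
    k = (y4 - y3) / (x4 - x3)          # slope (non-vertical line assumed)
    l = y3 - k * x3                    # intercept
    return (k * x1 + l - y1) * (k * x2 + l - y2) < 0

def is_retrograde(v1, v2):
    """True if the dot product of the initial and current direction
    vectors is negative, i.e. the angle between them exceeds 90 degrees."""
    return v1[0] * v2[0] + v1[1] * v2[1] < 0

def inside_polygon(px, py, polygon):
    """Ray casting: cast a horizontal ray to the right from (px, py);
    an odd number of edge crossings means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):                       # edge spans the ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

print(crosses_boundary(0, 0, 2, 2, 0, 2, 2, 0))                    # True
print(is_retrograde((1, 0), (-1, 0)))                              # True
print(inside_polygon(0.5, 0.5, [(0, 0), (1, 0), (1, 1), (0, 1)]))  # True
```

In practice the box center would be taken from the tracker's rectangular frame, and the warning area would be an arbitrary polygon drawn by the operator.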
7. The frontier defense video intelligent analysis method according to claim 1, characterized by further comprising, after the step 3:
storing target information of the abnormal behavior target into a violation queue E, wherein the target information comprises a target type, a target ID, violation time and a target picture;
acquiring violation target information from the violation queue E and caching it locally;
and when a violation target has not been detected for longer than the set time interval, uploading that target's information from the local cache to the database.
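Claim 7's cache-then-upload flow (buffer violation records locally, persist to the database only once the target has not been seen for the set interval) might be sketched as below; the class name, default interval, and `upload` callback are assumptions standing in for the real database writer:

```python
import time

class ViolationCache:
    """Local cache that uploads a violating target's information only
    after the target has not been detected for `interval` seconds."""

    def __init__(self, interval=5.0, upload=print):
        self.interval = interval
        self.upload = upload          # stand-in for the real DB writer
        self.cache = {}               # target_id -> (info, last_seen)

    def record(self, target_id, info, now=None):
        """Called each time the violating target is (re)detected."""
        now = time.monotonic() if now is None else now
        self.cache[target_id] = (info, now)

    def flush(self, now=None):
        """Persist and drop entries whose target has gone unseen."""
        now = time.monotonic() if now is None else now
        expired = [tid for tid, (_, seen) in self.cache.items()
                   if now - seen > self.interval]
        for tid in expired:
            info, _ = self.cache.pop(tid)
            self.upload(info)         # target no longer detected: persist
        return expired

# Usage with injected timestamps for determinism
uploaded = []
cache = ViolationCache(interval=5.0, upload=uploaded.append)
info = {"category": "person", "target_id": 7, "time": "2022-02-23 10:00"}
cache.record(7, info, now=0.0)
cache.flush(now=3.0)        # still within the interval: nothing uploaded
cache.flush(now=10.0)       # unseen for 10 s > 5 s: record persisted
print(len(uploaded))        # 1
```

Re-recording a still-visible target refreshes its timestamp, so each violation is written to the database once, after the target leaves the scene, rather than once per frame.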
8. A frontier defense video intelligent analysis system, characterized by comprising:
the acquisition module is used for numbering all video sources, acquiring pictures from all video sources in sequence according to the numbers, and numbering the acquired pictures, wherein the pictures comprise a plurality of targets;
the recognition module is used for recognizing each target from the picture based on a target detection model trained by a frontier defense data set, associating and numbering a plurality of recognized targets based on an improved multi-target tracking model;
and the determining module is used for analyzing the target behavior of each identified target and determining the abnormal behavior target.
9. An electronic device, comprising a memory and a processor, wherein the processor is configured to implement the steps of the frontier defense video intelligent analysis method according to any one of claims 1 to 7 when executing a computer management program stored in the memory.
10. A computer-readable storage medium, having stored thereon a computer management program which, when executed by a processor, performs the steps of the frontier defense video intelligent analysis method according to any one of claims 1 to 7.
CN202210164370.3A 2022-02-23 2022-02-23 Frontier defense video intelligent analysis method and system Active CN114241397B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210164370.3A CN114241397B (en) 2022-02-23 2022-02-23 Frontier defense video intelligent analysis method and system


Publications (2)

Publication Number Publication Date
CN114241397A true CN114241397A (en) 2022-03-25
CN114241397B CN114241397B (en) 2022-07-08

Family

ID=80747780

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210164370.3A Active CN114241397B (en) 2022-02-23 2022-02-23 Frontier defense video intelligent analysis method and system

Country Status (1)

Country Link
CN (1) CN114241397B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116959A (en) * 2013-01-25 2013-05-22 上海博超科技有限公司 Analyzing and recognizing method for abnormal behaviors in intelligent videos
CN104680555A (en) * 2015-02-13 2015-06-03 电子科技大学 Border-crossing detection method and border-crossing monitoring system based on video monitoring
CN104680557A (en) * 2015-03-10 2015-06-03 重庆邮电大学 Intelligent detection method for abnormal behavior in video sequence image
CN109711320A (en) * 2018-12-24 2019-05-03 兴唐通信科技有限公司 A kind of operator on duty's unlawful practice detection method and system
CN111860282A (en) * 2020-07-15 2020-10-30 中国电子科技集团公司第三十八研究所 Subway section passenger flow volume statistics and pedestrian retrograde motion detection method and system
CN112836639A (en) * 2021-02-03 2021-05-25 江南大学 Pedestrian multi-target tracking video identification method based on improved YOLOv3 model
CN113903000A (en) * 2021-09-28 2022-01-07 浙江大华技术股份有限公司 Wall turning detection method, device and equipment
CN114067428A (en) * 2021-11-02 2022-02-18 上海浦东发展银行股份有限公司 Multi-view multi-target tracking method and device, computer equipment and storage medium


Non-Patent Citations (1)

Title
CHEN HUA: "Design and Implementation of a Video-based Regional Intrusion Detection Intelligent Monitoring System", China Master's Theses Full-text Database, Information Science and Technology *

Also Published As

Publication number Publication date
CN114241397B (en) 2022-07-08

Similar Documents

Publication Publication Date Title
Benezeth et al. Abnormal events detection based on spatio-temporal co-occurences
WO2017129020A1 (en) Human behaviour recognition method and apparatus in video, and computer storage medium
Masurekar et al. Real time object detection using YOLOv3
US10970823B2 (en) System and method for detecting motion anomalies in video
Makhmutova et al. Object tracking method for videomonitoring in intelligent transport systems
Gong et al. Local distinguishability aggrandizing network for human anomaly detection
Malhi et al. Vision based intelligent traffic management system
Tyagi et al. A review of deep learning techniques for crowd behavior analysis
CN110991245A (en) Real-time smoke detection method based on deep learning and optical flow method
CN107729811B (en) Night flame detection method based on scene modeling
Yang et al. Combining Gaussian mixture model and HSV model with deep convolution neural network for detecting smoke in videos
Dey et al. Moving object detection using genetic algorithm for traffic surveillance
Saluky et al. Abandoned Object Detection Method Using Convolutional Neural Network
CN114241397B (en) Frontier defense video intelligent analysis method and system
Anandhi Edge Computing-Based Crime Scene Object Detection from Surveillance Video Using Deep Learning Algorithms
Filonenko et al. A fuzzy model-based integration framework for vision-based intelligent surveillance systems
Zhang et al. What makes for good multiple object trackers?
Zin et al. Background modeling using special type of Markov Chain
Li et al. MPAT: Multi-path attention temporal method for video anomaly detection
Moayed et al. Traffic intersection monitoring using fusion of GMM-based deep learning classification and geometric warping
Ghode et al. Motion detection using continuous frame difference and contour based tracking
Marsiano et al. Deep Learning-Based Anomaly Detection on Surveillance Videos: Recent Advances
Peleshko et al. Core generator of hypotheses for real-time flame detecting
Alkanat et al. Towards Scalable Abnormal Behavior Detection in Automated Surveillance
Shrivastav A Real-Time Crowd Detection and Monitoring System using Machine Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant