CN113225457A - Data processing method and device, electronic equipment and storage medium

Info

Publication number: CN113225457A
Application number: CN202011607538.0A
Authority: CN (China)
Prior art keywords: monitoring, time, historical, target object, track
Legal status: Pending
Original language: Chinese (zh)
Inventors: 李志明, 方小帅, 孙亮亮, 杨春晖
Applicant and current assignee: Visionvera Information Technology Co Ltd

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Alarm Systems (AREA)

Abstract

The application provides a data processing method, a data processing device, an electronic device, and a storage medium. The method includes the following steps: acquiring an event to be processed and a plurality of monitoring videos within a preset range of the target position in the event to be processed; analyzing the plurality of monitoring videos respectively, and determining the historical action tracks and real-time action tracks of the target object in the monitoring pictures; and determining one or more predicted action tracks of the target object according to the real-time action tracks and the historical action tracks. By analyzing the plurality of monitoring videos within the preset range of the target position, the method obtains the historical and real-time action tracks of the target object and, from them, the most probable predicted action tracks, helping the relevant authorities lock onto the target object in time.

Description

Data processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a data processing method and apparatus, an electronic device, and a storage medium.
Background
With the continuous expansion of video monitoring, a corresponding mass of video data is produced, so that the traditional approach of manually reviewing video is like fishing a needle out of the sea. In the related art, when a target object in a surveillance video needs to be locked onto, surveillance videos containing the target object are searched manually, the acquisition locations of those videos are determined, and the target object is captured at those locations.
When the target object is located using the related art, on the one hand a large amount of time is needed to manually search the surveillance video for the target object, and on the other hand personnel are deployed directly at the acquisition points of the surveillance videos the target object has passed; the efficiency is low, which seriously reduces the probability of capturing the target object. Therefore, how to better analyze surveillance video data and lock onto the target object, so as to meet the processing-task requirements of the relevant departments, has become a problem to be solved urgently.
Disclosure of Invention
In view of the above, the present application provides a data processing method, apparatus, electronic device and storage medium that overcome or at least partially address the above-mentioned problems.
A first aspect of the present application provides a data processing method, including:
acquiring an event to be processed and a plurality of monitoring videos which are within a preset range from a target position in the event to be processed;
analyzing the plurality of monitoring videos respectively, and determining historical action tracks and real-time action tracks of target objects in the monitoring pictures;
and determining one or more predicted action tracks of the target object according to the real-time action track and the historical action track.
A second aspect of the present application provides a data processing apparatus comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an event to be processed and a plurality of monitoring videos which are within a preset range from a target position in the event to be processed;
the analysis module is used for respectively analyzing the plurality of monitoring videos and determining the historical action track and the real-time action track of the target object in the monitoring picture;
and the determining module is used for determining one or more predicted action tracks of the target object according to the real-time action track and the historical action track.
A third aspect of the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the data processing method according to the first aspect of the present application when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the data processing method according to the first aspect of the present application.
According to the data processing method, first, an event to be processed and a plurality of monitoring videos within a preset range of the target position in the event are obtained; then, the monitoring videos are analyzed respectively to determine the historical action tracks and real-time action tracks of the target object in the monitoring pictures; finally, one or more predicted action tracks of the target object are determined according to the real-time action tracks and the historical action tracks. By analyzing the monitoring videos within the preset range of the target position, the method obtains the historical and real-time action tracks of the target object and, from them, the most probable predicted action tracks, so that the relevant authority can lock onto the target object in time and better make follow-up decisions for the target object (such as implementing a capture strategy).
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a flowchart illustrating a data processing method according to an embodiment of the present application;
Fig. 2 is a schematic diagram of a data processing system according to an embodiment of the present application;
Fig. 3 is a block diagram illustrating a data processing apparatus according to an embodiment of the present application;
Fig. 4 is a schematic networking diagram of a video network according to an embodiment of the present application;
Fig. 5 is a schematic diagram illustrating the hardware structure of a node server according to an embodiment of the present application;
Fig. 6 is a schematic diagram illustrating the hardware structure of an access switch according to an embodiment of the present application;
Fig. 7 is a schematic diagram of the hardware structure of an Ethernet protocol conversion gateway according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a flowchart illustrating a data processing method according to an embodiment of the present application. Referring to fig. 1, the data processing method of the present application may include the steps of:
step S11: the method comprises the steps of obtaining an event to be processed and a plurality of monitoring videos which are within a preset range from a target position in the event to be processed.
The execution subject of the data processing method of the present application may be a data processing system. The data processing system can acquire a to-be-processed event from an alarm terminal, where the to-be-processed event includes: the location of the case, the time of the case, a description of the case, and so on. The alarm terminal may be an internet alarm terminal deployed in the internet, or a video networking alarm terminal deployed in the video network; this embodiment does not specifically limit it.
The preset range can be set arbitrarily according to actual requirements, for example, a circular area centered on the target position with a radius of 2 kilometers. The monitoring video may be acquired by a monitoring device in the internet, or by a monitoring device in the video network; this embodiment does not specifically limit it.
In step S11, after acquiring the event to be processed, the data processing system first determines a target position of the event to be processed, and then acquires a plurality of surveillance videos within a preset range around the target position.
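For illustration only, the following sketch shows one way such a range query could be implemented, assuming each monitoring device is registered with a latitude/longitude; the device registry, function names, and 2 km default here are assumptions, not part of the patent.

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance in kilometers between two latitude/longitude points."""
        r = 6371.0  # mean Earth radius, km
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def devices_in_range(devices, target_lat, target_lon, radius_km=2.0):
        """Select the monitoring devices whose position lies within the preset range."""
        return [d for d in devices
                if haversine_km(d["lat"], d["lon"], target_lat, target_lon) <= radius_km]

    # Hypothetical registry; in practice this would come from the monitoring
    # access server's device configuration.
    devices = [{"id": "cam-01", "lat": 39.9042, "lon": 116.4074},
               {"id": "cam-02", "lat": 39.9500, "lon": 116.5200}]
    print(devices_in_range(devices, 39.9042, 116.4074))  # only cam-01 is within 2 km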
Step S12: and analyzing the plurality of monitoring videos respectively to determine the historical action track and the real-time action track of the target object in the monitoring picture.
In one embodiment, step S12 may include:
determining a historical action track of a target object according to a shooting position and time corresponding to a monitoring video before an event to be processed;
and determining the real-time action track of the target object according to the shooting position and time corresponding to the monitoring video after the event to be processed.
In step S12, the multiple monitoring videos may be analyzed to obtain an analysis result of whether the multiple monitoring videos include the picture of the target object in the event to be processed, and then the historical action track and the real-time action track of the target object may be determined according to the analysis result.
In the present embodiment, the target object may be a person, an animal, a vehicle, or the like; this embodiment does not specifically limit it. After the plurality of surveillance videos are obtained, they may be examined to determine whether the target object appears in them. If the target object can be seen in a monitoring video, the target object has appeared within the preset range; if not, the target object has not appeared within the preset range.
If the target object has appeared within the preset range, it may not have left yet. In this case, each historical action track of the target object from before the event to be processed was sent, and each real-time action track from after the event was sent, may be determined from the analysis results and the shooting positions corresponding to the monitoring videos. In a specific implementation, each target surveillance video containing the target object is identified, the shooting location of each such video is determined, and finally the action track of the target object is drawn from those shooting locations.
The historical action track can be drawn from the monitoring video collected before the event to be processed was sent, and the real-time action track from the monitoring video collected at and after the moment the event was sent. When searching the monitoring video, a time range can be set according to actual requirements. For example, after the event to be processed is sent, the historical action track may be drawn from the monitoring video of the 24 hours before the event was sent, and the real-time action track from the monitoring video between the sending of the event and the current time.
Further, depending on the time periods used, a plurality of historical action tracks before the event to be processed was sent, and a plurality of real-time action tracks after it was sent, can be obtained. Illustratively, if the event to be processed is received at 18:00 on a given day, the division is based on 1-hour periods, and the surveillance video is searched over the 6 hours before the event was sent, then one historical action track is drawn from the video between 17:00 and 18:00, another from the video between 16:00 and 17:00, and so on, down to the video between 12:00 and 13:00, yielding 6 historical action tracks. Similarly, for the real-time action tracks, if the division is based on 5-minute periods and the current time is 18:20, then one real-time action track is drawn from the video between 18:15 and 18:20, another from the video between 18:10 and 18:15, and so on, down to the video between 18:00 and 18:05, yielding 4 real-time action tracks in total.
Of course, since the historical and real-time action tracks are divided by time period, a sufficiently long time period may yield only one historical action track and/or only one real-time action track; that is, the present application does not specifically limit the number of historical and real-time action tracks.
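As a minimal sketch of this division, assuming each detection of the target object carries a timestamp, the detections can be bucketed into fixed windows (1-hour windows before the event, 5-minute windows after, as in the example above); all names here are illustrative.

    from collections import defaultdict
    from datetime import datetime, timedelta

    def group_by_window(timestamps, start, end, window):
        """Bucket the timestamps falling in [start, end) into fixed-size windows."""
        groups = defaultdict(list)
        for ts in timestamps:
            if start <= ts < end:
                idx = (ts - start) // window          # index of the window ts falls in
                groups[start + idx * window].append(ts)
        return dict(groups)

    event_time = datetime(2020, 12, 30, 18, 0)
    sightings = [event_time - timedelta(minutes=m) for m in (5, 70, 130, 200)]

    # Historical windows: 1-hour periods over the 6 hours before the event;
    # each non-empty window would yield one historical action track.
    historical = group_by_window(sightings, event_time - timedelta(hours=6),
                                 event_time, timedelta(hours=1))
    for window_start in sorted(historical):
        print(window_start.strftime("%H:%M"), len(historical[window_start]), "sighting(s)")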
Step S13: and determining one or more predicted action tracks of the target object according to the real-time action track and the historical action track.
In the present embodiment, a predicted action trajectory is an action trajectory on which the target object is relatively likely to appear.
In an implementation, the predicted action trajectory may be determined from a plurality of real-time action trajectories and a plurality of historical action trajectories according to a predetermined analysis rule. For example, a part of the real-time action trajectory and a part of the historical action trajectory may be used as the predicted action trajectory, or the entire real-time action trajectory and a part of the historical action trajectory may be used as the predicted action trajectory. The present embodiment does not specifically limit the analysis rule.
With reference to the above embodiments, in one implementation manner, after determining one or more predicted action trajectories of the target object, the data processing method of the present application may further include the following steps:
and according to the predicted action track, implementing a capture strategy on the target object.
The predicted action trajectory can be used to implement a corresponding strategy for the target object, such as capturing the target object.
In this embodiment, the data processing method may be applied to a capture scenario for a target object. Through the analysis of multiple monitoring videos, a large amount of manpower can be deployed on the predicted action tracks where the target object is most likely to appear, and a small amount on the other tracks; this improves the success rate of capturing the target object on the one hand, and on the other hand reduces the disturbance to residents that deploying a large amount of manpower on every action track would cause.
After a witness discovers the target object on site, a to-be-processed event can be sent through an alarm terminal. The data processing system receives the event, obtains the geographic position of the incident site from it, acquires a plurality of monitoring videos within a preset range around the site, analyzes them to obtain a plurality of historical action tracks and real-time action tracks of the target object, and then, according to a preset analysis rule, screens out from these the one or more predicted action tracks most likely to be used when the target object leaves. Accordingly, the relevant authority may implement a capture strategy for the target object based on the predicted action tracks, such as deploying manpower on them to increase the success rate of capture.
According to the data processing method, first, an event to be processed and a plurality of monitoring videos within a preset range of the target position in the event are obtained; then, the monitoring videos are analyzed respectively to determine the historical action tracks and real-time action tracks of the target object in the monitoring pictures; finally, one or more predicted action tracks of the target object are determined according to the real-time action tracks and the historical action tracks. By analyzing the monitoring videos within the preset range of the target position, the method obtains the historical and real-time action tracks of the target object and, from them, the most probable predicted action tracks, so that the relevant authority can lock onto the target object in time and better make follow-up decisions for the target object (such as implementing a capture strategy).
In one implementation, in combination with the above embodiments, the event to be processed includes a feature of the target object. On this basis, the step S12 may include:
identifying whether the image characteristics of the monitoring video after the event to be processed contains the characteristics of the target object;
when the characteristics of the target object are included, determining that the target object is in a preset range;
when the characteristics of the target object are not included, determining that the target object is not in a preset range;
and when the analysis result shows that the target object is in the preset range, determining each historical action track of the target object before the event to be processed is sent and each real-time action track of the target object after the event to be processed is sent according to each analysis result and the shooting positions corresponding to the plurality of monitoring videos.
In the present embodiment, taking the target object as a person as an example, the characteristics may be height, body type, sex, clothing, age, appearance, and the like; taking the target object as a vehicle as an example, the characteristics may be color, style, license plate number, and so on. This embodiment does not specifically limit the characteristics of the target object.
When the user sends the event to be processed, the characteristics of the target object can be added to the alarm terminal, and then the event to be processed is submitted, so that the data processing system can analyze the monitoring video according to the characteristics of the target object. The alarm terminal has an image acquisition function and can acquire images of surrounding people in real time, so that when a user sends a to-be-processed event, image information of a target object can be added to the to-be-processed event, and the target object can be positioned better.
In a specific implementation, the data processing system may first analyze the images of the surveillance videos from after the event to be processed was sent and identify whether their image features include the features of the target object. If they do, the target object has still not left the preset range since the event was sent; if they do not, the target object has left the preset range since the event was sent.
In the present embodiment, if the target object is still within the preset range, it is necessary to further obtain the real-time action trajectory of the target object, predict the most likely trajectory of the target object, and implement the subsequent strategy.
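The patent does not prescribe a recognition algorithm. As a hedged sketch, assume each monitoring frame and the reported target have already been reduced to numeric feature vectors by some model (face, body, or vehicle features); the presence check then reduces to a similarity threshold. The vectors, threshold, and function names below are all assumptions.

    import math

    def cosine_similarity(a, b):
        """Cosine similarity of two equal-length feature vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    def target_in_frames(frame_features, target_feature, threshold=0.8):
        """True if any frame's features match the target's closely enough,
        i.e. the target is judged to still be within the preset range."""
        return any(cosine_similarity(f, target_feature) >= threshold
                   for f in frame_features)

    target = [0.9, 0.1, 0.4]                        # toy embedding of the reported target
    frames = [[0.1, 0.8, 0.2], [0.88, 0.12, 0.41]]  # toy embeddings of two video frames
    print(target_in_frames(frames, target))          # True: the second frame matches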
With reference to the above embodiments, in an implementation manner, the event to be processed further includes a feature of the target object. On this basis, determining the historical action track of the target object according to the shooting positions and the time corresponding to the multiple monitoring videos before the event to be processed may include the following steps:
acquiring a multi-frame historical monitoring image containing the characteristics of a target object from a historical monitoring video before an event to be processed;
dividing a plurality of frames of historical monitoring images into a plurality of different image groups according to different time periods, wherein the time stamp carried by each frame of historical monitoring image in each image group is positioned in one time period;
and connecting the shooting positions corresponding to the multiple frames of historical monitoring images into a historical action track according to the sequence of the shooting time for each image group.
With reference to the foregoing embodiment, in another implementation manner, the event to be processed further includes a feature of the target object. On this basis, determining the real-time action track of the target object according to the shooting positions and times corresponding to the monitoring videos after the event to be processed includes the following steps:
acquiring a multi-frame real-time monitoring image containing the characteristics of a target object from a real-time monitoring video after an event to be processed;
dividing a plurality of real-time monitoring images into a plurality of different image groups according to different time periods, wherein the time stamp carried by each real-time monitoring image in each image group is positioned in one time period;
and connecting the shooting positions corresponding to the real-time monitoring images of the multiple frames into a real-time action track according to the sequence of the shooting time for each image group.
The above-mentioned manner of obtaining the historical action track and the manner of obtaining the real-time action track may be applied separately or simultaneously, which is not limited in this embodiment.
The real-time monitoring video comprises the monitoring video after the event to be processed is sent and the monitoring video at the moment when the event to be processed is sent.
In specific implementation, aiming at the historical action track, a multi-frame historical monitoring image containing the characteristics of a target object can be obtained from a historical monitoring video; then dividing the multi-frame historical monitoring image into a plurality of image groups according to a preset time period; and then, aiming at each image group, connecting the shooting positions corresponding to the multiple frames of historical monitoring images into a historical action track according to the sequence of the shooting time. In specific implementation, aiming at the real-time action track, a multi-frame real-time monitoring image containing the characteristics of a target object can be obtained from a real-time monitoring video; then dividing the multi-frame real-time monitoring image into a plurality of image groups according to a preset time period; and then, aiming at each image group, connecting the shooting positions corresponding to the multiple frames of real-time monitoring images into a real-time action track according to the sequence of the shooting time.
In this embodiment, the collected multiple monitoring videos may be divided into a historical monitoring video before the event to be processed is sent and a real-time monitoring video after the event to be processed is sent. Each section of monitoring video is composed of multiple frames of monitoring images, and in this embodiment, first, for a historical monitoring video, multiple frames of historical monitoring images containing the features of a target object are extracted. Because each monitoring image carries a timestamp, multiple frames of historical monitoring images can be divided into different time periods according to the timestamp, for example, multiple frames of historical monitoring images with timestamps of 10:00-11:00 are divided into one group of images, multiple frames of historical monitoring images with timestamps of 11:00-12:00 are divided into another group of images, and the like, so that the multiple frames of historical monitoring images are divided into multiple image groups. Similarly, for the real-time monitoring video, extracting a plurality of real-time monitoring images containing the characteristics of the target object, dividing the plurality of real-time monitoring images into different time periods according to the time stamp, for example, dividing the plurality of real-time monitoring images with the time stamp of 10:00-11:00 into one group of images, dividing the plurality of real-time monitoring images with the time stamp of 11:00-12:00 into another group of images, and so on, thereby dividing the plurality of real-time monitoring images into a plurality of image groups.
In this embodiment, after a plurality of image groups are obtained, for a plurality of frame images in each image group, the corresponding shooting positions are connected in the order of shooting time to form an action track. Specifically, for each historical monitoring image group, shooting positions corresponding to multiple frames of historical monitoring images are connected into a historical action track according to the sequence of shooting time, and multiple historical action tracks are obtained. And connecting the shooting positions corresponding to the real-time monitoring images of multiple frames into a real-time action track according to the sequence of the shooting time for each real-time monitoring image group to obtain multiple real-time action tracks.
Illustratively, one historical monitoring image group includes image 1 (shooting time 6:03, shooting position Place A), image 2 (shooting time 7:21, shooting position Place B), image 3 (shooting time 7:28, shooting position Place C), image 4 (shooting time 7:43, shooting position Place B), and image 5 (shooting time 6:23, shooting position Place E), all on the same day. Connecting the shooting positions in shooting-time order then gives the user's historical action track between 6:00 and 8:00 that day: Place A - Place E - Place B - Place C - Place B.
By the embodiment, a plurality of historical action tracks of the target object before the event to be processed is sent and a plurality of real-time action tracks of the target object after the event to be processed is sent can be obtained according to the shooting time and the shooting position of the monitoring image, and technical support is provided for subsequent data processing work.
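To make the grouping-and-connecting procedure concrete, here is a small sketch assuming analysis has already produced (timestamp, shooting position) pairs for the frames containing the target object; the data reproduces the Place A to Place E example above, and the helper names are illustrative.

    from datetime import datetime, timedelta

    def build_tracks(detections, period_start, period=timedelta(hours=2)):
        """Group (timestamp, place) detections by time period, then connect each
        group's shooting positions in shooting-time order to form one track."""
        groups = {}
        for ts, place in detections:
            key = period_start + ((ts - period_start) // period) * period
            groups.setdefault(key, []).append((ts, place))
        return {start: [place for _, place in sorted(group)]
                for start, group in groups.items()}

    day = datetime(2020, 12, 30)
    detections = [
        (day.replace(hour=6, minute=3),  "Place A"),
        (day.replace(hour=7, minute=21), "Place B"),
        (day.replace(hour=7, minute=28), "Place C"),
        (day.replace(hour=7, minute=43), "Place B"),
        (day.replace(hour=6, minute=23), "Place E"),
    ]
    tracks = build_tracks(detections, day.replace(hour=6))
    print(tracks)  # one 6:00-8:00 track: Place A, Place E, Place B, Place C, Place B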
In one implementation, in combination with the above embodiments, the present application further provides a method for obtaining the predicted action trajectory. Specifically, the above step S13 may include:
when the historical action track comprises a plurality of action tracks, respectively determining the confidence probability of each historical action track;
and determining the real-time action track and the historical action track with the confidence probability larger than a preset threshold value as a predicted action track.
In the present embodiment, the confidence probability is used to indicate the probability of the occurrence of the target object. The higher the confidence probability of a track, the higher the probability of the target object appearing on the track, and the lower the confidence probability of a track, the lower the probability of the target object appearing on the track.
In this embodiment, a threshold may be set in advance according to experience, all historical action trajectories with confidence probabilities greater than the preset threshold are used as the most likely historical action trajectories of the target object, and then each real-time action trajectory and the most likely historical action trajectory are used as the most likely predicted action trajectory of the target object.
Illustratively, in a scenario of capturing a target object, the confidence probabilities of the 5 historical action tracks obtained by the data processing system are: track 1, 50%; track 2, 49%; track 3, 68%; track 4, 80%; and track 5, 89%. If the preset threshold is 60%, the tracks exceeding it are tracks 3 to 5. Therefore, to improve the success rate of capturing the target object, a large amount of manpower can be deployed on tracks 3 to 5 and a small amount on tracks 1 and 2, which also reduces the disturbance to residents that deploying a large amount of manpower on every track would cause.
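This screening step amounts to a simple filter; the snippet below reproduces the numbers of the example (the track names are of course hypothetical).

    confidences = {"track 1": 0.50, "track 2": 0.49, "track 3": 0.68,
                   "track 4": 0.80, "track 5": 0.89}
    threshold = 0.60

    predicted = [track for track, p in confidences.items() if p > threshold]
    print(predicted)  # ['track 3', 'track 4', 'track 5']: deploy heavily on these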
In one embodiment, determining the confidence probability for each historical action track may include:
counting the total number M of track points contained in all historical action tracks and the total number N of times that each track point appears in all historical action tracks;
determining the quotient of the total times N of each track point appearing in all historical action tracks and the total number M of the track points contained in all historical action tracks as the confidence probability of the track point;
and determining the product of the confidence probabilities of the contained track points as the confidence probability of each historical action track.
In this embodiment, when determining the confidence probability of each historical action track, first, the total number of track points in all historical action tracks and the number of track points of each type are counted; then, taking the quotient of the number of each type of track points and the total number as the confidence probability of the type of track points; and then, for each historical action track, taking the product of the confidence probabilities of the contained track points as the confidence probability of the historical action track.
In this embodiment, each trace point is a place where the target object appears. By analyzing the historical action track, the probability that the target object reappears at each track point, namely the confidence probability, can be obtained. Specifically, the frequency of occurrence of each trace point in the historical action trajectory in all trace points of the historical action trajectory may be used as the probability (confidence probability) of occurrence of the target object at the trace point again.
Illustratively, there are 3 historical trajectories of actions as follows:
historical action trajectory 1: site 1-site 2-site 3;
historical action trajectory 2: location 2-location 5-location 4;
historical action trajectory 3: site 3-site 1;
the total number of track points in the historical action track is 3+3+ 2-8. The track points have 5 categories, namely, a place 1 to a place 5. The probability of the occurrence of the trace point 1 is 2/8-1/4, the probability of the occurrence of the trace point 2 is 2/8-1/4, the probability of the occurrence of the trace point 3 is 2/8-1/4, the probability of the occurrence of the trace point 4 is 1/8-1/8, and the probability of the occurrence of the trace point 5 is 1/8-1/8.
After obtaining the confidence probability of each type of track point, the confidence probability of each historical action track can be further obtained.
The confidence probability of historical action track 1 is: confidence probability of place 1 × confidence probability of place 2 × confidence probability of place 3, i.e., 1/4 × 1/4 × 1/4 ≈ 1.56%. Similarly, the confidence probability of historical action track 2 is 1/4 × 1/8 × 1/8 ≈ 0.39%, and that of historical action track 3 is 1/4 × 1/4 = 6.25%.
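The counting scheme is simple enough to check in code. The sketch below recomputes the three confidence probabilities of the example from the raw track-point lists; the function name is illustrative.

    from collections import Counter

    def track_confidences(tracks):
        """Confidence of each track = product, over its track points, of
        (occurrences of that point across all tracks) / (total track points)."""
        counts = Counter(point for track in tracks for point in track)
        total = sum(counts.values())                     # M = 8 in the example
        result = []
        for track in tracks:
            conf = 1.0
            for point in track:
                conf *= counts[point] / total            # N / M for this point
            result.append(conf)
        return result

    tracks = [
        ["place 1", "place 2", "place 3"],   # historical action track 1
        ["place 2", "place 5", "place 4"],   # historical action track 2
        ["place 3", "place 1"],              # historical action track 3
    ]
    for i, conf in enumerate(track_confidences(tracks), start=1):
        print(f"historical action track {i}: {conf:.2%}")  # 1.56%, 0.39%, 6.25%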
In this embodiment, the confidence probability of the historical action tracks can be obtained according to the historical activity condition of the target object within the preset range, so as to obtain the most likely historical action tracks of the target object in all the historical action tracks, and then the strategies for the target object are implemented on the real-time action tracks and the most likely historical action tracks, for example, key joint defense control is performed, the target object is captured, and the like, so that the probability of successfully capturing the target object can be effectively improved.
With reference to the foregoing embodiments, in an implementation manner, the present application further provides a method for acquiring multiple surveillance videos within a preset range. Specifically, the step S11 may include the following steps:
determining a plurality of monitoring devices within a preset range;
pulling real-time monitoring videos from the plurality of monitoring devices;
and obtaining the historical monitoring videos shot by the plurality of monitoring devices from a video storage server that manages them.
In this embodiment, a plurality of monitoring devices are deployed within a preset range, and the monitoring devices may periodically send collected monitoring videos to a video storage server for storage. Therefore, after the event to be processed is sent, the data processing system firstly acquires a plurality of real-time monitoring videos from a preset range, and then calls the historical monitoring videos shot by the monitoring devices from the video storage server.
By the embodiment, a plurality of real-time monitoring videos and historical monitoring videos in the preset range of the target position in the event to be processed can be quickly obtained after the event to be processed is sent, and smooth execution of a subsequent data processing method is guaranteed.
FIG. 2 is a schematic diagram of a data processing system according to an embodiment of the present application. The data processing method of the present application can also be applied to the data processing system in fig. 2. Referring to fig. 2, the data processing system comprises a joint defense deployment and control server, a video network alarm terminal, a monitoring access server, an intelligent analysis server, a track calculation server and a confidence probability calculation server; the joint defense deployment and control server is respectively in communication connection with the video network alarm terminal, the monitoring access server, the intelligent analysis server, the track calculation server and the confidence probability calculation server; the intelligent analysis server is respectively in communication connection with the monitoring access server and the track calculation server; the track calculation server is also in communication connection with the confidence probability calculation server.
The video networking alarm terminal is used for sending a to-be-processed event to the joint defense deployment and control server;
the joint defense deployment and control server is used for sending information of the event to be processed to the monitoring access server according to the event to be processed and obtaining a plurality of predicted action tracks according to the result of the track calculation server and the result of the confidence probability calculation server;
the monitoring access server is used for responding to the information of the event to be processed and acquiring a plurality of monitoring videos which are within a preset range from a target position in the event to be processed;
the intelligent analysis server is used for respectively analyzing the plurality of monitoring videos to obtain an analysis result of whether the plurality of monitoring videos comprise the picture of the target object in the event to be processed;
the track calculation server is used for determining each historical action track of the target object before the target object sends the event to be processed and each real-time action track after the target object sends the event to be processed according to each analysis result and the shooting positions corresponding to the monitoring videos;
the confidence probability calculation server is used for determining the confidence probability of each historical action track.
The monitoring access server is in communication connection with the monitoring devices and with the video storage server; the plurality of monitoring videos include historical monitoring videos and real-time monitoring videos. The intelligent analysis server acquires the real-time monitoring videos from the monitoring devices through the monitoring access server, and acquires the historical monitoring videos from the video storage server, likewise through the monitoring access server.
In this embodiment, the monitoring access server needs to be configured with one master virtual terminal and a plurality of non-master virtual terminals, while the video storage server, the intelligent analysis server, the track calculation server, the confidence probability calculation server, and the joint defense deployment and control server each need to be configured with a master virtual terminal. The master virtual terminal is used for managing and allocating the non-master virtual terminals and for the video networking communication of the monitoring access service. The non-master virtual terminals are used for viewing the individual monitoring video channels (each monitoring channel needs to be dynamically allocated one non-master virtual terminal) and for video networking communication.
All devices such as a joint defense deployment and control server, a video networking alarm terminal, a monitoring access server, an intelligent analysis server, a track calculation server, a confidence probability calculation server, a monitoring device, a video storage server and the like need to be authenticated by a network management server before use.
The monitoring devices store the collected monitoring videos to the video storage server through the monitoring access server. The video networking alarm terminal comes in various forms, including checkpoint type, recorder type, and so on. A checkpoint-type video networking alarm terminal is deployed at a place the target object must pass and collects the characteristics of passers-by, for example by face snapshot. A recorder-type video networking alarm terminal is worn by specific personnel and collects the characteristics of people in the wearer's surroundings, for example by face snapshot. During operation, the video networking alarm terminal can continuously record information, such as photos, of people in the surrounding environment.
The data processing method of the present application will be described in detail below with reference to fig. 2, taking the target object as an example of a human being.
Step 1: when a case occurs, a user can select a photo of a target object through the alarm terminal to initiate a pending event (the characteristics of the target object, such as height, body type, clothes and the like, can also be input to initiate the pending event);
step 2: the alarm terminal uploads the event to be processed to the joint defense deployment and control server 1;
and step 3: the joint defense deployment and control server 1 sends information (including position information, photo information of a target object and the like) of the event to be processed to the monitoring access server 3 through the position information of the event to be processed;
and 4, step 4: the monitoring access server 3 obtains monitoring information data (including information configured on the monitoring access server by the monitoring equipment, such as an IP address, a port number and the like) by analyzing the information of the event to be processed, and pulls real-time monitoring video data to the monitoring equipment 8;
and 5: the monitoring access server 3 sends the pulled real-time monitoring video data and the photo information of the target object obtained by analyzing the information of the event to be processed to the intelligent analysis server 2;
step 6: the intelligent analysis server 2 analyzes and calculates the real-time monitoring video data and the photo information of the target object to obtain whether the real-time monitoring video data contains the target object and the acquisition position of the monitoring video data containing the target object;
and 7: the intelligent analysis server 2 sends the analysis calculation result to the joint defense deployment and control server 1;
and 8: the joint defense deployment and control server 1 judges whether the target object leaves or not by analyzing the calculation result data;
and step 9: if the target object does not leave, the joint defense deployment and control server 1 informs related personnel near the case to process the case;
step 10: if the target object leaves, the joint defense deployment and control server 1 sends joint defense deployment and control information to the monitoring access server 3;
step 11: the monitoring access server 3 receives the joint defense deployment and control information and pulls the monitoring equipment near the monitoring equipment 8 to monitor the video data in real time;
step 12: the monitoring access server 3 sends the pulled real-time monitoring video data to the intelligent analysis server 2;
step 13: the monitoring access server 3 sends monitoring historical data request information to the video storage server 4 and the video storage server 5;
step 14: after receiving the monitoring history data request information, the video storage server sends the monitoring history video data to the intelligent analysis server 2;
step 15: the intelligent analysis server 2 analyzes and calculates the real-time monitoring video data and the monitoring historical video data, and sends the analysis and calculation result to the track calculation server 1;
step 16: the track calculation server 1 calculates a real-time latent escape track and sends the real-time latent escape track to the joint defense deployment and control server 1;
and step 17: the track calculation server 1 calculates the historical activity track near the case location and sends the historical activity track to the potential escape probability server 1;
step 18: the latent escape probability server 1 calculates the historical activity tracks to obtain the latent escape probability result information such as the latent escape probability of the target object for each historical activity track and the historical activity track which is most likely to be used when the target object leaves;
step 19: the potential escape probability server 1 sends the potential escape probability result information to the joint defense deployment and control server 1;
step 20: the joint defense deployment and control server 1 displays a real-time potential escape route of the target object through the real-time potential escape track;
step 21: the joint defense deployment and control server 1 displays the most probable path of the target object for potential escape through the potential escape probability result information;
step 22: and performing key joint defense deployment and control according to the real-time potential escape route and the most possible potential escape route, and capturing the target object.
Through the steps 1 to 22, the most possible latent escape routes used by the target object are obtained in all historical latent escape tracks, and then the real-time latent escape routes and the most possible latent escape routes are subjected to repeated joint defense deployment and control to capture the target object, so that the probability of successfully capturing the target object can be effectively improved.
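For readability, the condensed sketch below restates steps 1 to 22 as a single control flow; every function is a toy stand-in for a message to the corresponding server in Fig. 2 (with canned return values so the sketch runs), not an actual API of the system.

    # Toy stand-ins for the Fig. 2 servers; each returns canned data.
    def pull_realtime(location):         return ["video@" + location]            # steps 3-4
    def contains_target(videos, photo):  return False                            # steps 5-8
    def fetch_history(location):         return ["history@" + location]          # steps 13-14
    def realtime_track(results):         return ["Place A", "Place B"]           # step 16
    def historical_tracks(results):      return [["Place B", "Place C"]]         # step 17
    def escape_probabilities(tracks):    return {tuple(t): 0.8 for t in tracks}  # steps 18-19

    def handle_pending_event(location, photo):
        """Condensed restatement of the walkthrough above."""
        videos = pull_realtime(location)
        if contains_target(videos, photo):
            print("target still on site: notify nearby personnel")    # step 9
            return
        results = videos + fetch_history(location)                    # steps 10-15
        route = realtime_track(results)
        probs = escape_probabilities(historical_tracks(results))
        print("deploy on real-time route", route,
              "and high-probability routes", probs)                   # steps 20-22

    handle_pending_event("case-site", photo=None)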
Through the above embodiment, after receiving a to-be-processed event the data processing system can not only quickly locate the target object, but also predict the route most likely to be used when the target object leaves and implement intelligent joint defense deployment and control, improving the probability of successfully capturing the target object while saving manpower and reducing the negative impact on society.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Based on the same inventive concept, the present application further provides a data processing apparatus 300. Fig. 3 is a block diagram illustrating a data processing apparatus according to an embodiment of the present application. Referring to fig. 3, the data processing apparatus 300 may include:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an event to be processed and a plurality of monitoring videos which are within a preset range from a target position in the event to be processed;
the analysis module is used for respectively analyzing the plurality of monitoring videos and determining the historical action track and the real-time action track of the target object in the monitoring picture;
and the determining module is used for determining one or more predicted action tracks of the target object according to the real-time action track and the historical action track.
Optionally, the determining module includes:
the first determining submodule is used for respectively determining the confidence probability of each historical action track when the historical action tracks comprise a plurality of action tracks;
and the second determining submodule is used for determining the real-time action track and the historical action track with the confidence probability larger than a preset threshold value as the predicted action track.
Optionally, the first determining sub-module includes:
the statistical submodule is used for counting the total number M of track points contained in all historical action tracks and the total number N of times that each track point appears in all historical action tracks;
the third determining submodule is used for determining the quotient of the total times N of each track point appearing in all historical action tracks and the total number M of the track points contained in all historical action tracks as the confidence probability of the track point;
and the fourth determining submodule is used for determining the product of the confidence probabilities of the contained track points as the confidence probability of each historical action track.
Optionally, the analysis module comprises:
a fifth determining submodule, configured to determine a historical action track of the target object according to a shooting position and time corresponding to the surveillance video before the event to be processed;
and the sixth determining submodule is used for determining the real-time action track of the target object according to the shooting position and time corresponding to the monitoring video after the event to be processed.
Optionally, the event to be processed further includes a feature of the target object, and the fifth determining sub-module includes:
the first obtaining submodule is used for obtaining a multi-frame historical monitoring image containing the characteristics of the target object from the historical monitoring video before the event to be processed;
the first dividing module is used for dividing the multi-frame historical monitoring image into a plurality of different image groups according to different time periods, and the time stamp carried by each frame of historical monitoring image in each image group is positioned in one time period;
and the first connecting submodule is used for connecting the shooting positions corresponding to the multiple frames of historical monitoring images into a historical action track according to the sequence of the shooting time for each image group.
Optionally, the event to be processed further includes a feature of the target object, and the sixth determining sub-module includes:
the second acquisition submodule is used for acquiring a multi-frame real-time monitoring image containing the characteristics of the target object from the real-time monitoring video after the event to be processed;
the second division submodule is used for dividing the multi-frame real-time monitoring image into a plurality of different image groups according to different time periods, and the time stamp carried by each frame of real-time monitoring image in each image group is positioned in one time period;
and the second connecting submodule is used for connecting the shooting positions corresponding to the multiple frames of real-time monitoring images into a real-time action track according to the sequence of the shooting time for each image group.

Based on the same inventive concept, the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and runnable on the processor; when the processor executes the computer program, it implements the steps in the data processing method according to any of the above embodiments of the present application.
Based on the same inventive concept, the present application provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps in the data processing method according to any of the above-mentioned embodiments of the present application.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The video network is an important milestone in network development. It is a real-time network that can realize real-time transmission of high-definition video, pushing numerous internet applications toward high-definition video and enabling high-definition, face-to-face interaction.
The video network adopts real-time high-definition video switching technology and can integrate dozens of required services (video, voice, pictures, text, communication, data, and so on) on one system platform on the network, such as high-definition video conferencing, video monitoring, intelligent monitoring analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, VOD on demand, television mail, Personal Video Recorder (PVR), intranet (self-office) channels, intelligent video broadcast control, and information distribution, realizing high-definition-quality video playback through a television or a computer.
To better understand the embodiments of the present invention, the video network is described below:
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
The network technology innovation of the video network improves on traditional Ethernet to cope with the potentially enormous video traffic on the network. Unlike pure packet switching or pure circuit switching, the video networking technology adopts packet switching while satisfying streaming requirements. It has the flexibility, simplicity, and low cost of packet switching together with the quality and security guarantees of circuit switching, realizing the seamless connection of whole-network switched virtual circuits and of the data format.
Switching Technology (Switching Technology)
The video network retains the two advantages of Ethernet, asynchrony and packet switching, and eliminates Ethernet's defects on the premise of full compatibility. It offers end-to-end seamless connection across the whole network, communicates directly with user terminals, and directly carries IP data packets. User data requires no format conversion anywhere across the network. The video network is a higher-level form of Ethernet: a real-time exchange platform that can realize whole-network, large-scale, real-time transmission of high-definition video, which the existing internet cannot, pushing numerous network video applications toward high definition and unification.
Server Technology (Server Technology)
The server technology of the video networking and the unified video platform differs from that of a traditional server: its streaming media transmission is built on a connection-oriented basis, its data processing capability is independent of traffic and communication time, and a single network layer can carry both signaling and data transmission. For voice and video services, streaming media processing on the video networking and unified video platform is far simpler than general data processing, and efficiency is improved by more than a hundred times over a traditional server.
Storage Technology (Storage Technology)
To accommodate ultra-large-capacity, ultra-high-traffic media content, the ultra-high-speed storage technology of the unified video platform adopts the most advanced real-time operating system. The program information in a server instruction is mapped to specific hard disk space, so media content no longer passes through the server but is sent directly and instantly to the user terminal; the user's typical waiting time is less than 0.2 second. Optimized sector distribution greatly reduces the mechanical seek motion of the hard disk head; resource consumption is only 20% of an IP Internet system of the same grade, while concurrent traffic 3 times that of a traditional hard disk array is generated, and overall efficiency is improved by more than 10 times.
Network Security Technology (Network Security Technology)
The structural design of the video networking eliminates, at the structural level, the network security problems that trouble the Internet, through means such as independent permission control for each service and complete isolation of equipment and user data. It generally needs no antivirus programs or firewalls, avoids attacks by hackers and viruses, and provides a structurally worry-free, secure network for users.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services and transmission: whether for a single user, a private-network user or a network aggregate, each is just one automatic connection. The user terminal, set-top box or PC connects directly to the unified video platform to obtain multimedia video services in various forms. The unified video platform replaces traditional, complex application programming with a menu-style configuration table, so complex applications can be realized with very little code, enabling essentially unlimited new service innovation.
Networking of the video network is as follows:
the video networking is a centrally controlled network structure. The network can be a tree network, a star network, a ring network or the like, but on this basis a centralized control node is required in the network to control the whole network.
Fig. 4 is a networking diagram of a video network according to an embodiment of the present application. As shown in fig. 4, the video network is divided into an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan area network part may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metropolitan area server is connected to node switches, and a node switch may be connected to a plurality of node servers.
The node server here is the node server of the access network part; that is, the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
1. Video networking device classification
1.1 Devices in the video networking of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including Ethernet protocol gateways), and terminals (including various set-top boxes, coding boards, memories, etc.). The video networking as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 The devices of the access network part can be mainly classified into 3 types: node servers, access switches (including Ethernet protocol gateways), and terminals (including various set-top boxes, coding boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
Fig. 5 is a schematic diagram illustrating a hardware structure of a node server according to an embodiment of the present application. As shown in fig. 5, the node server mainly includes a network interface module 501, a switching engine module 502, a CPU module 503 and a disk array module 504;
wherein the network interface module 501, the CPU module 503 and the disk array module 504 all feed into the switching engine module 502; the switching engine module 502 looks up the address table 505 for each incoming packet to obtain the packet's direction information, and stores the packet in the corresponding queue of the packet buffer 506 according to that direction information; if the queue of the packet buffer 506 is nearly full, the packet is discarded. The switching engine module 502 polls all packet buffer queues and forwards a packet if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 504 mainly implements control over the hard disk, including initialization, reading and writing; the CPU module 503 is mainly responsible for protocol processing with the access switch and terminals (not shown in the figure), for configuring the address table 505 (including a downlink protocol packet address table, an uplink protocol packet address table and a data packet address table), and for configuring the disk array module 504.
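For illustration only, the following minimal sketch models the lookup-buffer-poll behaviour described above (Python; the class name, queue capacity and packet representation are assumptions, not part of the embodiment):

```python
from collections import deque

QUEUE_CAPACITY = 1024  # assumed per-queue limit; the embodiment does not specify one

class SwitchingEngine:
    """Sketch of switching engine 502: look up the address table for
    direction information, buffer per direction, drop when nearly full,
    and poll queues under the two forwarding conditions."""

    def __init__(self, address_table, send_buffers):
        self.address_table = address_table             # DA -> output port
        self.send_buffers = send_buffers               # port -> deque(maxlen=...)
        self.queues = {p: deque() for p in send_buffers}

    def receive(self, packet):
        port = self.address_table.get(packet["da"])    # direction information
        if port is None or len(self.queues[port]) >= QUEUE_CAPACITY:
            return                                     # unknown direction or queue nearly full: discard
        self.queues[port].append(packet)

    def poll(self):
        for port, queue in self.queues.items():
            buf = self.send_buffers[port]
            # forward only if 1) the port send buffer is not full and
            # 2) the queue packet counter is greater than zero
            if len(buf) < buf.maxlen and len(queue) > 0:
                buf.append(queue.popleft())
```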
The access switch:
Fig. 6 is a schematic diagram illustrating a hardware structure of an access switch according to an embodiment of the present application. As shown in fig. 6, the access switch mainly includes network interface modules (a downlink network interface module 601 and an uplink network interface module 602), a switching engine module 603 and a CPU module 604;
wherein a packet (uplink data) coming from the downlink network interface module 601 enters the packet detection module 605; the packet detection module 605 detects whether the Destination Address (DA), Source Address (SA), packet type and packet length of the packet meet the requirements; if so, a corresponding stream identifier (stream-id) is allocated and the packet enters the switching engine module 603, otherwise the packet is discarded; a packet (downlink data) coming from the uplink network interface module 602 enters the switching engine module 603; a data packet coming from the CPU module 604 enters the switching engine module 603; the switching engine module 603 looks up the address table 606 for each incoming packet to obtain the packet's direction information; if a packet entering the switching engine module 603 goes from the downlink network interface to the uplink network interface, the packet is stored in the queue of the corresponding packet buffer 607 in association with its stream-id; if that queue is nearly full, the packet is discarded; if a packet entering the switching engine module 603 does not go from the downlink network interface to the uplink network interface, the packet is stored in the queue of the corresponding packet buffer 607 according to its direction information; if that queue is nearly full, the packet is discarded.
The switching engine module 603 polls all packet buffer queues; in this embodiment of the present invention, polling is divided into two cases:
if the queue goes from the downlink network interface to the uplink network interface, the following conditions must be met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero; 3) a token generated by the code rate control module is obtained;
if the queue does not go from the downlink network interface to the uplink network interface, the following conditions must be met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 608 is configured by the CPU module 604 and generates tokens for packet buffer queues going to the upstream network interface from all downstream network interfaces at programmable intervals to control the rate of upstream forwarding.
The CPU module 604 is mainly responsible for protocol processing with the node server, configuration of the address table 606, and configuration of the code rate control module 608.
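The interaction between the code rate control module 608 and forwarding condition 3) can be sketched as a simple token generator (the class name, token cap and refill policy are illustrative assumptions; the embodiment states only that tokens are generated at programmable intervals):

```python
import time

class RateControlModule:
    """Sketch of rate control module 608: generate one token per
    programmable interval for downstream-to-upstream queues, so that
    forwarding condition 3) can be checked before sending upstream."""

    def __init__(self, interval_s: float, max_tokens: int = 32):
        self.interval_s = interval_s   # programmable interval, set by the CPU module
        self.max_tokens = max_tokens   # assumed cap on accumulated tokens
        self.tokens = 0
        self._last = time.monotonic()

    def _refill(self):
        elapsed = time.monotonic() - self._last
        earned = int(elapsed / self.interval_s)
        if earned:
            self.tokens = min(self.tokens + earned, self.max_tokens)
            self._last += earned * self.interval_s

    def take_token(self) -> bool:
        # Called by the switching engine before forwarding an upstream packet.
        self._refill()
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False
```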
Ethernet protocol conversion gateway
Fig. 7 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to an embodiment of the present application. As shown in fig. 7, the apparatus mainly includes a network interface module (a downlink network interface module 701, an uplink network interface module 702), a switching engine module 703, a CPU module 704, a packet detection module 705, a rate control module 708, an address table 706, a packet buffer 707, a MAC adding module 709, and a MAC deleting module 710.
Wherein, a data packet coming from the downlink network interface module 701 enters the packet detection module 705; the packet detection module 705 detects whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video networking destination address DA, video networking source address SA, video networking packet type and packet length of the packet meet the requirements; if so, a corresponding stream identifier (stream-id) is allocated, the MAC deleting module 710 strips the MAC DA, the MAC SA and the length or frame type (2 bytes), and the packet enters the corresponding receiving buffer; otherwise, the packet is discarded;
the downlink network interface module 701 detects the sending buffer of the port; if there is a packet, it learns the Ethernet MAC DA of the corresponding terminal according to the destination address DA of the packet, prepends the Ethernet MAC DA of the terminal, the MAC SA of the Ethernet protocol gateway and the Ethernet length or frame type, and sends the packet.
The other modules in the ethernet protocol gateway function similarly to the access switch.
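Functionally, the MAC deleting module 710 and the MAC adding module 709 amount to stripping and prepending a 14-byte Ethernet header. A minimal sketch (function names are illustrative, not from the embodiment):

```python
def strip_mac_header(frame: bytes) -> bytes:
    """MAC deleting module 710: remove the Ethernet MAC DA (6 bytes),
    MAC SA (6 bytes) and length/frame type (2 bytes), leaving the
    video networking packet."""
    return frame[14:]

def add_mac_header(packet: bytes, terminal_mac: bytes,
                   gateway_mac: bytes, length_or_type: bytes) -> bytes:
    """MAC adding module 709: prepend the learned terminal MAC as the
    Ethernet DA, the gateway's own MAC as the SA, and the Ethernet
    length or frame type."""
    assert len(terminal_mac) == 6 and len(gateway_mac) == 6 and len(length_or_type) == 2
    return terminal_mac + gateway_mac + length_or_type + packet
```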
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 Devices of the metropolitan area network part can be mainly classified into 3 types: node server, node switch and metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA SA Reserved Payload CRC
wherein:
the Destination Address (DA) is composed of 8 bytes: the first byte represents the type of the data packet (such as a protocol packet, a multicast data packet or a unicast data packet), with at most 256 possibilities; the second to sixth bytes are the metropolitan area network address; and the seventh and eighth bytes are the access network address;
the Source Address (SA) is also composed of 8 bytes and is defined in the same way as the Destination Address (DA);
the reserved byte consists of 2 bytes;
the payload part has a different length depending on the type of datagram: it is 64 bytes for the various types of protocol packets and 1056 bytes (32 + 1024) for a unicast packet, although the length is of course not limited to these 2 cases;
the CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
2.2 Metropolitan area network packet definition
The topology of the metropolitan area network is a graph, and there may be 2 or even more connections between two devices; that is, there may be more than 2 connections between a node switch and a node server, between a node switch and another node switch, or between a node server and another node server. However, the metropolitan area network address of each metropolitan area network device is unique; therefore, to accurately describe the connection relationships between metropolitan area network devices, the embodiment of the present invention introduces a parameter: a label, to uniquely describe a metropolitan area network device.
In this specification, the definition of the label is similar to that of an MPLS (Multi-Protocol Label Switching) label: assuming there are two connections between device A and device B, a packet from device A to device B has 2 available labels, and a packet from device B to device A also has 2 available labels. Labels are classified into incoming labels and outgoing labels: assuming the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet when it leaves device A (the outgoing label) may become 0x0001. The network access process of the metropolitan area network is one of centralized control; that is, both address allocation and label allocation of the metropolitan area network are dominated by the metropolitan area server, with the node switches and node servers executing passively. This differs from label allocation in MPLS, where labels are the result of mutual negotiation between the switch and the server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA SA Reserved Label Payload CRC
Namely Destination Address (DA), Source Address (SA), reserved bytes (Reserved), label, payload (PDU) and CRC. The format of the label may be defined as follows: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used; its position is between the reserved bytes and the payload of the packet.
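A short sketch of applying an outgoing label under this definition (the helper name, the label map and the absence of the CRC field in the input are assumptions):

```python
import struct

def insert_metro_label(access_body: bytes, out_label: int) -> bytes:
    """Insert the 32-bit label (upper 16 bits reserved, lower 16 bits
    used) between the 2 reserved bytes and the payload. access_body is
    assumed to be DA + SA + Reserved + Payload, without the CRC."""
    assert 0 <= out_label <= 0xFFFF            # only the lower 16 bits are used
    head, payload = access_body[:18], access_body[18:]
    return head + struct.pack(">I", out_label) + payload

# Label allocation is dominated by the metropolitan area server; a device
# simply applies its assigned mapping, e.g. incoming label 0x0000 may be
# rewritten to outgoing label 0x0001 (values illustrative):
label_map = {0x0000: 0x0001}
```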
Based on the characteristics of the video networking, one of the core concepts of the embodiments of the present invention is proposed: following the protocol of the video networking, a to-be-processed event and multiple monitoring videos shot within a preset range of the target position in that event are acquired; the monitoring videos are analyzed separately to determine the historical action track and the real-time action track of the target object in the monitoring pictures; and one or more predicted action tracks of the target object are determined according to the real-time action track and the historical action track. By analyzing multiple monitoring videos around the target position to obtain the target object's historical and real-time action tracks, the method derives the predicted action tracks the target object is most likely to follow.
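A minimal sketch of this core concept, scoring each historical track by the confidence probability described for the embodiments (the quotient N/M per track point, and the product over a track's points); the threshold value and the data representation are assumptions:

```python
from collections import Counter

def predict_action_tracks(historical_tracks, realtime_track, threshold=0.01):
    """Keep the historical tracks whose confidence probability exceeds
    the preset threshold, together with the real-time track. Each track
    is a list of shooting positions (e.g. camera identifiers)."""
    counts = Counter(p for track in historical_tracks for p in track)
    m = sum(counts.values())                   # total number M of track points
    predicted = [realtime_track]
    for track in historical_tracks:
        confidence = 1.0
        for point in track:
            confidence *= counts[point] / m    # confidence of a point = N / M
        if confidence > threshold:
            predicted.append(track)
    return predicted

# e.g. tracks as lists of camera positions:
# predict_action_tracks([["c1", "c2"], ["c1", "c3"]], ["c1"], threshold=0.05)
```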
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The data processing method, the data processing apparatus, the electronic device, and the storage medium according to the present invention are described in detail above, and a specific example is applied in the description to explain the principles and embodiments of the present invention, and the description of the above embodiment is only used to help understanding the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (9)

1. A data processing method, comprising:
acquiring an event to be processed and a plurality of monitoring videos which are within a preset range from a target position in the event to be processed;
analyzing the plurality of monitoring videos respectively, and determining historical action tracks and real-time action tracks of target objects in the monitoring pictures;
and determining one or more predicted action tracks of the target object according to the real-time action track and the historical action track.
2. The method of claim 1, wherein the determining one or more predicted action tracks of the target object according to the real-time action track and the historical action track comprises:
when the historical action track comprises a plurality of action tracks, respectively determining the confidence probability of each historical action track;
and determining the real-time action track and the historical action track with the confidence probability larger than a preset threshold value as the predicted action track.
3. The method of claim 2, wherein determining a confidence probability for each of the historical trajectories of actions comprises:
counting the total number M of track points contained in all historical action tracks and the total number N of times that each track point appears in all historical action tracks;
determining the quotient of the total times N of each track point appearing in all historical action tracks and the total number M of the track points contained in all historical action tracks as the confidence probability of the track point;
and determining the product of the confidence probabilities of the contained track points as the confidence probability of each historical action track.
4. The method according to claim 1, wherein the analyzing the plurality of monitoring videos respectively to determine the historical action track and the real-time action track of the target object in the monitoring picture comprises:
determining the historical action track of the target object according to the shooting position and time corresponding to the monitoring video before the event to be processed;
and determining the real-time action track of the target object according to the shooting position and time corresponding to the monitoring video after the event to be processed.
5. The method according to claim 4, wherein the event to be processed further includes a feature of the target object, and the determining the historical action track of the target object according to the shooting positions and the times corresponding to the plurality of monitoring videos before the event to be processed comprises:
acquiring a multi-frame historical monitoring image containing the characteristics of the target object from the historical monitoring video before the event to be processed;
dividing the multiple frames of historical monitoring images into multiple different image groups according to different time periods, wherein the time stamp carried by each frame of historical monitoring image in each image group is positioned in one time period;
and connecting the shooting positions corresponding to the multiple frames of historical monitoring images into a historical action track according to the sequence of the shooting time for each image group.
6. The method according to claim 4 or 5, wherein the event to be processed further includes a feature of the target object, and the determining the real-time action track of the target object according to the shooting position and time corresponding to the monitoring video after the event to be processed comprises:
acquiring a multi-frame real-time monitoring image containing the characteristics of the target object from the real-time monitoring video after the event to be processed;
dividing the multi-frame real-time monitoring image into a plurality of different image groups according to different time periods, wherein the time stamp carried by each frame of real-time monitoring image in each image group is positioned in one time period;
and connecting the shooting positions corresponding to the real-time monitoring images of the multiple frames into a real-time action track according to the sequence of the shooting time for each image group.
7. A data processing apparatus, comprising:
the system comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an event to be processed and a plurality of monitoring videos which are within a preset range from a target position in the event to be processed;
the analysis module is used for respectively analyzing the plurality of monitoring videos and determining the historical action track and the real-time action track of the target object in the monitoring picture;
and the determining module is used for determining one or more predicted action tracks of the target object according to the real-time action track and the historical action track.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the data processing method according to any one of claims 1 to 6.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the data processing method according to any of claims 1-6.
CN202011607538.0A 2020-12-29 2020-12-29 Data processing method and device, electronic equipment and storage medium Pending CN113225457A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011607538.0A CN113225457A (en) 2020-12-29 2020-12-29 Data processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113225457A true CN113225457A (en) 2021-08-06

Family

ID=77085911

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011607538.0A Pending CN113225457A (en) 2020-12-29 2020-12-29 Data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113225457A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109743541A (en) * 2018-12-15 2019-05-10 深圳壹账通智能科技有限公司 Intelligent control method, device, computer equipment and storage medium
WO2020134231A1 (en) * 2018-12-28 2020-07-02 杭州海康威视数字技术股份有限公司 Information pushing method and device, and information display system
CN110659391A (en) * 2019-08-29 2020-01-07 苏州千视通视觉科技股份有限公司 Video detection method and device
CN110969852A (en) * 2019-12-09 2020-04-07 上海宝康电子控制工程有限公司 Method for realizing real-time prediction and investigation and control processing of public security based on control vehicle running path
CN111291280A (en) * 2020-03-10 2020-06-16 中国科学院计算技术研究所 Method, medium, and apparatus for fast predicting trajectory of large-scale moving object
CN112040186A (en) * 2020-08-28 2020-12-04 北京市商汤科技开发有限公司 Method, device and equipment for determining activity area of target object and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113573025A (en) * 2021-08-10 2021-10-29 海南视联通信技术有限公司 Monitoring video viewing method and device, terminal equipment and storage medium
CN114500952A (en) * 2022-02-14 2022-05-13 深圳市中壬速客信息技术有限公司 Control method, device and equipment for dynamic monitoring of park and computer storage medium

Similar Documents

Publication Publication Date Title
CN108965040B (en) Service monitoring method and device for video network
CN110636257B (en) Monitoring video processing method and device, electronic equipment and storage medium
CN108964963A (en) A method of warning system and realization alarm based on view networking
CN110190973B (en) Online state detection method and device
CN109150905B (en) Video network resource release method and video network sharing platform server
CN110572607A (en) Video conference method, system and device and storage medium
CN110557606B (en) Monitoring and checking method and device
CN110475113B (en) Monitoring equipment fault processing method and device based on video network
CN113225457A (en) Data processing method and device, electronic equipment and storage medium
CN110740295B (en) Round-robin playing method and device for video stream monitored by video network
CN110012316B (en) Method, device, equipment and storage medium for processing video networking service
CN109768957B (en) Method and system for processing monitoring data
CN109743555B (en) Information processing method and system based on video network
CN109698953B (en) State detection method and system for video network monitoring equipment
CN110691213B (en) Alarm method and device
CN109361546B (en) Program early warning method and device based on video network
CN110392224B (en) Data processing method and device
CN110519554B (en) Monitoring detection method and device
CN110113555B (en) Video conference processing method and system based on video networking
CN110072072B (en) Method and device for reporting and displaying data
CN109698756B (en) Video conference reservation method and device
CN110768854B (en) Data statistics method and device based on video network
CN110401633B (en) Monitoring and inspection data synchronization method and system
CN110418105B (en) Video monitoring method and system
CN109688073B (en) Data processing method and system based on video network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination