CN111914592A - Multi-camera combined evidence obtaining method, device and system - Google Patents

Multi-camera combined evidence obtaining method, device and system Download PDF

Info

Publication number
CN111914592A
CN111914592A (application CN201910380772.5A; granted as CN111914592B)
Authority
CN
China
Prior art keywords
monitoring target
target
close
image
evidence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910380772.5A
Other languages
Chinese (zh)
Other versions
CN111914592B (en)
Inventor
房世光
顾简
刘旭
毛敏霞
葛露萍
冯波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910380772.5A
Publication of CN111914592A
Application granted
Publication of CN111914592B
Active legal status
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The embodiments of the present application provide a multi-camera joint forensics method, apparatus and system, where the method includes: acquiring a panoramic video stream collected by a bullet camera, and judging whether any monitoring target in the panoramic video stream triggers a preset detection event; if a monitoring target triggers the preset detection event, acquiring video frames in the panoramic video stream in which a specified monitoring target triggers the preset detection event, to obtain a panoramic evidence image, where the specified monitoring target is the monitoring target that triggers the preset detection event; predicting the predicted position of the specified monitoring target in the bullet camera image after a preset time period; determining the target PT coordinates of a dome camera corresponding to the predicted position according to a pre-established association between each position in the bullet camera image and the PT coordinates of the dome camera; and acquiring a close-up evidence image at the target PT coordinates through the dome camera. The multi-camera joint forensics method can thus obtain a close-up evidence image while obtaining a panoramic evidence image.

Description

Multi-camera combined evidence obtaining method, device and system
Technical Field
The application relates to the technical field of video monitoring, in particular to a multi-camera combined evidence obtaining method, device and system.
Background
With the rapid development of intelligent transportation, forensics based on video surveillance technology is widely applied in traffic scenes. In particular, for intersections, curves and accident-prone areas, deploying surveillance cameras serves as a deterrent and facilitates subsequent evidence collection.
In the related art, a bullet camera is arranged at a designated monitoring point to comprehensively monitor the scene. However, because the bullet camera has a wide viewing angle, local details of its image are easily distorted when enlarged, so that violation details are not captured clearly, misjudgments may occur, and evidence of a violation may be insufficient. It is therefore desirable to obtain a close-up evidence image as well.
Disclosure of Invention
An object of the embodiment of the application is to provide a multi-camera combined evidence obtaining method, device and system, so that a close-up evidence image is obtained while a panoramic evidence image is obtained. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present application provides a multi-camera joint forensics method, where the method includes:
acquiring a panoramic video stream collected by a bullet camera, and judging whether any monitoring target in the panoramic video stream triggers a preset detection event;
if a monitoring target triggers the preset detection event, acquiring video frames in the panoramic video stream in which a specified monitoring target triggers the preset detection event, to obtain a panoramic evidence image, where the specified monitoring target is the monitoring target that triggers the preset detection event;
predicting the predicted position of the specified monitoring target in the bullet camera image after a preset time period;
determining the target PT coordinates of the dome camera corresponding to the predicted position according to a pre-established association between each position in the bullet camera image and the PT coordinates of the dome camera;
and acquiring a close-up evidence image at the target PT coordinates through the dome camera.
In a second aspect, an embodiment of the present application provides a multi-camera joint forensics apparatus, including:
a video stream detection module, configured to acquire a panoramic video stream collected by a bullet camera and judge whether any monitoring target in the panoramic video stream triggers a preset detection event;
a panoramic evidence obtaining module, configured to, if a monitoring target triggers the preset detection event, obtain video frames in the panoramic video stream in which a specified monitoring target triggers the preset detection event, to obtain a panoramic evidence image, where the specified monitoring target is the monitoring target that triggers the preset detection event;
a position prediction module, configured to predict the predicted position of the specified monitoring target in the bullet camera image after a preset time period;
a target coordinate determination module, configured to determine the target PT coordinates of the dome camera corresponding to the predicted position according to a pre-established association between each position in the bullet camera image and the PT coordinates of the dome camera;
and a close-up evidence acquisition module, configured to acquire, through the dome camera, a close-up evidence image at the target PT coordinates.
In a third aspect, an embodiment of the present application provides a multi-camera joint forensics system, where the system includes a bullet camera and a dome camera; the bullet camera, when running, implements the multi-camera joint forensics method of any one of the first aspect.
In a fourth aspect, an embodiment of the present application provides a multi-camera joint forensics system, including a server, a bullet camera and a dome camera; the server, when running, implements the multi-camera joint forensics method of any one of the first aspect.
According to the multi-camera joint forensics method, apparatus and system provided by the embodiments of the present application, a panoramic video stream collected by a bullet camera is acquired, and it is judged whether any monitoring target in the panoramic video stream triggers a preset detection event; if a monitoring target triggers the preset detection event, video frames in the panoramic video stream in which a specified monitoring target triggers the preset detection event are acquired to obtain a panoramic evidence image, where the specified monitoring target is the monitoring target that triggers the preset detection event; the predicted position of the specified monitoring target in the bullet camera image after a preset time period is predicted; the target PT coordinates of the dome camera corresponding to the predicted position are determined according to a pre-established association between each position in the bullet camera image and the PT coordinates of the dome camera; and a close-up evidence image at the target PT coordinates is acquired through the dome camera. A close-up evidence image can thus be acquired together with the panoramic evidence image; predicting the position of the target after the preset time period leaves time for the dome camera linkage and improves the success rate of close-up evidence image acquisition. Of course, not all of the advantages described above need to be achieved at the same time in practicing any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a first schematic diagram of a multi-camera joint forensics method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of horizontal transformation of coordinate transformation according to an embodiment of the present application;
FIG. 3 is a schematic diagram of vertical direction transformation of coordinate transformation according to an embodiment of the present application;
FIG. 4 is a second schematic diagram of a multi-camera joint forensics method according to an embodiment of the present application;
FIG. 5 is a schematic illustration of a composite evidence image according to an embodiment of the present application;
FIG. 6 is a first schematic diagram of a multi-camera joint forensics system according to an embodiment of the application;
FIG. 7 is a second schematic diagram of a multi-camera joint forensics system according to an embodiment of the application;
FIG. 8 is a schematic diagram of a multi-camera joint forensics apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
In order to obtain a close-up evidence image of a violation event, an embodiment of the present application provides a multi-camera joint forensics method, which is shown in fig. 1 and includes:
s101, acquiring a panoramic video stream acquired by a gunlock, and judging whether a monitoring target triggers a preset detection event or not in the panoramic video stream.
The multi-camera joint forensics method in the embodiment of the present application may be implemented by an electronic device, where the electronic device includes a processor and a memory, where the memory is used to store a computer program, and when the processor executes the computer program stored in the memory, the multi-camera joint forensics method in the embodiment of the present application is implemented. Specifically, the electronic device may be a server, a hard disk video recorder, a gunlock, or the like.
The electronic equipment acquires a panoramic video stream acquired by the gunlock in real time, analyzes the panoramic video stream by using a computer vision technology (such as a pre-trained convolutional neural network) and judges whether a preset detection event is triggered by a monitoring target. The monitoring target and the preset detection event can be set according to actual conditions, for example, the monitoring target is a pedestrian, and the preset detection event is a red light running or road crossing.
Optionally, the panoramic video stream includes lane line information and lane direction information, the monitored target is a vehicle, and the preset detection event includes: one or more of illegal parking, reverse driving, line pressing, turning around, occupation of a non-motor vehicle lane by a motor vehicle and illegal lane changing. The electronic equipment can analyze the panoramic video stream through a pre-trained convolutional neural network, so as to judge whether a vehicle triggers a preset detection event in the panoramic video stream.
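The following is a minimal sketch, not the patent's implementation, of judging a line-pressing event from a tracked vehicle's pixel trajectory and the configured lane lines; the `Track` structure, the segment representation and the pixel tolerance are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    vehicle_id: int
    points: list = field(default_factory=list)  # (t, x, y) in bullet-camera pixels

def segment_distance(px, py, ax, ay, bx, by):
    """Distance from point (px, py) to the segment (ax, ay)-(bx, by)."""
    vx, vy = bx - ax, by - ay
    wx, wy = px - ax, py - ay
    denom = vx * vx + vy * vy or 1e-9
    t = max(0.0, min(1.0, (wx * vx + wy * vy) / denom))
    cx, cy = ax + t * vx, ay + t * vy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def presses_line(track, lane_lines, tol_px=5.0):
    """True if the latest tracked position lies on any configured lane line.

    lane_lines: list of lane lines, each a list of (ax, ay, bx, by) segments.
    """
    _, x, y = track.points[-1]
    return any(segment_distance(x, y, *seg) < tol_px
               for line in lane_lines for seg in line)
```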
S102, if a monitoring target triggers the preset detection event, acquiring video frames in the panoramic video stream in which a specified monitoring target triggers the preset detection event, to obtain a panoramic evidence image, where the specified monitoring target is the monitoring target that triggers the preset detection event.
If a monitoring target in the panoramic video stream triggers a preset detection event, then for the monitoring target that triggers the event (hereinafter referred to as the specified monitoring target), the video frames in which the specified monitoring target triggers the event are extracted from the panoramic video stream.
S103, predicting the predicted position of the specified monitoring target in the bullet camera image after a preset time period.
According to the motion of the specified monitoring target in the panoramic video stream, the electronic device predicts, through deep learning techniques or a trajectory prediction algorithm, the position of the specified monitoring target after a preset time period in the image of the bullet camera that collects the panoramic video stream. The preset time period is set according to the linkage time of the dome camera in practice: it should be longer than the linkage time of the dome camera, so that the dome camera has enough time to turn to the specified monitoring area, but it should not be set too long, to avoid reducing the accuracy of the predicted position.
Optionally, predicting the predicted position of the specified monitoring target in the bullet camera image after the preset time period includes:
S1031, determining driving parameters of the specified monitoring target according to the panoramic video stream, where the driving parameters include a motion trajectory, a speed and an acceleration.
The driving parameters of the specified monitoring target, namely its motion trajectory, speed and acceleration, are calculated according to the motion of the specified monitoring target in the panoramic video stream.
S1032, predicting the predicted position of the specified monitoring target in the bullet camera image after the preset time period according to the driving parameters of the specified monitoring target.
For example, when the monitored target is a vehicle, the position that the vehicle will reach in the bullet camera picture after the preset time period, that is, the predicted position, is predicted according to the motion trajectory, speed and acceleration of the vehicle, in combination with the lane information, lane direction and the like.
Optionally, predicting the predicted position of the specified monitoring target in the bullet camera image after the preset time period according to the driving parameters of the specified monitoring target includes:
S10321, judging according to the motion trajectory of the specified monitoring target: if the specified monitoring target is driving straight, predicting its predicted position in the straight-ahead direction after the preset time period according to its speed and acceleration;
S10322, judging according to the motion trajectory of the specified monitoring target: if the specified monitoring target performs a lane-changing action, predicting its predicted position in the lane after the lane change after the preset time period according to its speed and acceleration;
S10323, judging according to the speed and the acceleration of the specified monitoring target: if the specified monitoring target will stop within a preset first distance threshold, taking a position that is a preset second distance ahead of the current position of the specified monitoring target, along its motion trajectory, as the predicted position.
The prediction logic may be summarized as follows: if the vehicle keeps advancing in a certain direction, after the preset time period it will be at a position point in the same direction; if a lane-changing action of the vehicle is detected, the position of the vehicle in the lane after the lane change after the preset time period is predicted; if the vehicle speed tends toward a stop, the predicted position is moved forward, or set near the current position of the vehicle, for active capture.
Meanwhile, the predicted position can be corrected in combination with the lane lines, so that the position of the vehicle does not exceed the lane area. Optionally, the multi-camera joint forensics method of the embodiment of the present application therefore further includes: correcting the predicted position according to the lane line information. A sketch of this prediction logic is given below.
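The following is a minimal sketch, under assumed units and thresholds, of the branching prediction above; constant-acceleration extrapolation stands in for the straight-driving and lane-change cases (a lane change simply appears as a lateral velocity component), and `clamp_to_lane` is a hypothetical helper for the lane-line correction.

```python
import math

def predict_position(points, dt, stop_speed=0.5, lead_dist=8.0):
    """Predict where a tracked vehicle will be `dt` seconds from now.

    points: [(t, x, y), ...], at least three samples; positions in metres
    and time in seconds. Thresholds are illustrative assumptions.
    """
    (t0, x0, y0), (t1, x1, y1), (t2, x2, y2) = points[-3:]
    vx0, vy0 = (x1 - x0) / (t1 - t0), (y1 - y0) / (t1 - t0)
    vx1, vy1 = (x2 - x1) / (t2 - t1), (y2 - y1) / (t2 - t1)
    ax, ay = (vx1 - vx0) / (t2 - t1), (vy1 - vy0) / (t2 - t1)
    speed = math.hypot(vx1, vy1)

    if speed < stop_speed:
        # S10323: vehicle is (nearly) stopped - place the predicted position
        # a short fixed distance ahead along the last heading, for active capture.
        ux, uy = (vx1 / speed, vy1 / speed) if speed > 0 else (0.0, 1.0)
        return clamp_to_lane(x2 + ux * lead_dist, y2 + uy * lead_dist)

    # S10321/S10322: straight driving or lane change - constant-acceleration
    # extrapolation; lateral velocity carries the lane-change case.
    px = x2 + vx1 * dt + 0.5 * ax * dt * dt
    py = y2 + vy1 * dt + 0.5 * ay * dt * dt
    return clamp_to_lane(px, py)

def clamp_to_lane(x, y):
    """Hypothetical lane-line correction: project (x, y) into the lane area."""
    return x, y
```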
S104, determining the target PT coordinates of the dome camera corresponding to the predicted position according to a pre-established association between each position in the bullet camera image and the PT coordinates of the dome camera.
The coordinate system of the dome camera is generally a PTZ (Pan/Tilt/Zoom) coordinate system. The association between each position in the bullet camera image and the PT coordinates of the dome camera can be pre-established by key-point mapping. For example, by associating the position of a calibration point in the bullet camera image with the PT coordinates of the dome camera, and associating a number of such calibration points, the association between any position in the bullet camera image and the PT coordinates of the dome camera can be calculated. One bullet camera can be calibrated against several dome cameras, establishing associations between positions in the bullet camera image and the PT coordinates of each of the dome cameras. A sketch of interpolating from calibration points follows.
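The patent does not prescribe how the association is interpolated between calibration points; the following sketch uses inverse-distance weighting over the nearest calibration pairs as one plausible choice (pan angles that wrap around 360° would need extra care in practice).

```python
def interpolate_pt(x, y, calib, k=4, eps=1e-6):
    """Estimate dome-camera (P, T) for a bullet-camera pixel (x, y).

    calib: list of ((cx, cy), (p, t)) calibration pairs, pixel -> PT.
    """
    nearest = sorted(calib,
                     key=lambda c: (c[0][0] - x) ** 2 + (c[0][1] - y) ** 2)[:k]
    wsum = psum = tsum = 0.0
    for (cx, cy), (p, t) in nearest:
        w = 1.0 / (((cx - x) ** 2 + (cy - y) ** 2) ** 0.5 + eps)
        wsum, psum, tsum = wsum + w, psum + w * p, tsum + w * t
    return psum / wsum, tsum / wsum
```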
Optionally, the step of pre-establishing the association between each position in the bullet camera image and the PT coordinates of the dome camera includes:
Step one, acquiring the GPS (Global Positioning System) coordinates of the erection position of the dome camera and the erection height of the dome camera.
Step two, for any position in the bullet camera image, determining the GPS coordinates of the actual position of that position in the actual scene, and calculating the longitude and latitude distances between the dome camera and the actual position according to the GPS coordinates of the actual position and those of the dome camera.
The GPS coordinates comprise a longitude and a latitude; the longitude difference between the actual position and the dome camera gives the longitude-direction distance, and the latitude difference gives the latitude-direction distance.
Step three, calculating the horizontal distance between the actual position and the dome camera according to the longitude and latitude distances.
The horizontal distance is the distance between the dome camera and the monitoring target under the assumption that they are at the same height. Referring to fig. 2, when the ground is regarded as a plane, the horizontal distance between the monitoring target and the dome camera is calculated by equation 1:

L = \sqrt{d_{lon}^2 + d_{lat}^2}  (1)

where d_{lon} denotes the longitude-direction distance, d_{lat} the latitude-direction distance, and L the horizontal distance between the monitoring target and the dome camera.
Alternatively, the horizontal distance between the monitoring target and the dome camera can be calculated with the Haversine function, see equation 2:

L = 2R \arcsin\left(\sqrt{\sin^2\left(\frac{A_w - B_w}{2}\right) + \cos(A_w)\cos(B_w)\sin^2\left(\frac{A_j - B_j}{2}\right)}\right)  (2)

where A_w denotes the latitude of the monitored target, A_j its longitude, B_w the latitude of the dome camera, B_j its longitude, L the horizontal distance between the monitored target and the dome camera, and R the earth radius at the position of the dome camera.
Alternatively, the ground can be regarded as a spherical surface and the horizontal distance between the monitoring target and the dome camera, that is, the great-circle distance, calculated with the spherical law of cosines. There are various ways to calculate the horizontal distance between the monitoring target and the dome camera; they are not listed one by one here. A runnable sketch of equation 2 follows.
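A minimal sketch of equation 2; the mean earth radius used as the default is an assumption (the patent uses the earth radius at the dome camera's position).

```python
import math

def haversine_m(target_lat, target_lon, dome_lat, dome_lon,
                radius_m=6_371_000.0):
    """Horizontal (great-circle) distance of equation 2, in metres.

    Latitudes/longitudes are in degrees: Aw/Aj for the target,
    Bw/Bj for the dome camera in the patent's notation.
    """
    aw, aj, bw, bj = map(math.radians,
                         (target_lat, target_lon, dome_lat, dome_lon))
    h = (math.sin((aw - bw) / 2) ** 2
         + math.cos(aw) * math.cos(bw) * math.sin((aj - bj) / 2) ** 2)
    return 2 * radius_m * math.asin(math.sqrt(h))
```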
Step four, calculating the horizontal angle between the actual position and a designated direction through a trigonometric function, according to the longitude and latitude distances.
The designated direction can be set according to the actual situation. Optionally, the designated direction is due north; calculating the horizontal angle between the actual position and the designated direction through a trigonometric function according to the longitude and latitude distances then includes: calculating the ratio of the longitude-direction distance to the latitude-direction distance as the tangent of the horizontal angle, and solving the horizontal angle from this tangent. Referring to fig. 2, tan θ = d_{lon} / d_{lat}, where θ is the horizontal angle between the monitored target and due north.
Optionally, the designated direction is due east; calculating the horizontal angle between the actual position and the designated direction through a trigonometric function according to the longitude and latitude distances then includes: calculating the ratio of the latitude-direction distance to the longitude-direction distance as the tangent of the horizontal angle, and solving the horizontal angle from this tangent. Referring to fig. 2, tan α = d_{lat} / d_{lon}, where α is the horizontal angle between the monitored target and due east.
Certainly, the designated direction may also be due west or due south; the calculation is similar and is not repeated here.
Step five, determining the P coordinate of the dome camera according to the horizontal angle.
The P coordinate of the dome camera can be understood as its pan angle in the horizontal direction; once the horizontal angle between the target and the designated direction (such as due north) is known, the pan angle of the dome camera can be determined.
Step six, calculating the T coordinate of the dome camera according to the horizontal distance and the erection height of the dome camera, thereby obtaining the association between any position in the bullet camera image and the PT coordinates of the dome camera.
Optionally, calculating the T coordinate of the dome camera according to the horizontal distance and the erection height of the dome camera includes: calculating the ratio of the horizontal distance to the erection height of the dome camera as the tangent of the T coordinate, and solving the T coordinate from this tangent. Referring to fig. 3, tan T × h = L, where h denotes the erection height of the dome camera, L the horizontal distance between the monitoring target and the dome camera, and T the T coordinate of the dome camera. The T coordinate of the dome camera can be calculated from this equation. A sketch combining steps two to six follows.
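A minimal sketch of steps two to six under a small-area plane approximation; the metres-per-degree conversion and the mapping of the computed angles onto a particular dome camera's PT zero points are assumptions, not part of the patent.

```python
import math

def pt_from_gps(target_lat, target_lon, dome_lat, dome_lon, dome_height_m):
    """GPS position of the actual point -> dome-camera (P, T) in degrees."""
    # Step two: longitude/latitude differences, converted to metres.
    m_per_deg_lat = 111_320.0  # assumed small-area approximation
    d_lat = (target_lat - dome_lat) * m_per_deg_lat
    d_lon = ((target_lon - dome_lon) * m_per_deg_lat
             * math.cos(math.radians(dome_lat)))

    # Step three: equation 1, plane assumption.
    horiz = math.hypot(d_lon, d_lat)

    # Steps four and five: tan(theta) = d_lon / d_lat, angle from due north.
    p = math.degrees(math.atan2(d_lon, d_lat)) % 360.0

    # Step six: tan(T) * h = L.
    t = math.degrees(math.atan2(horiz, dome_height_m))
    return p, t
```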
In practice, errors may exist due to GPS accuracy, measurement accuracy and the like. Optionally, the step of pre-establishing the association between each position in the bullet camera image and the PT coordinates of the dome camera therefore further includes: if there is a horizontal error between the converted coordinates of the actual position in the image coordinate system and its actual coordinates in the image, adjusting the electronic compass of the dome camera to reduce the horizontal error; and if there is a vertical error between them, adjusting the acquired height value of the dome camera to reduce the vertical error.
S105, acquiring a close-up evidence image at the target PT coordinates through the dome camera.
The dome camera in the embodiment of the present application may be a conventional dome camera with only an image acquisition function, or an intelligent dome camera with image feature extraction and analysis capabilities.
When the dome camera is a conventional dome camera, optionally, acquiring the close-up evidence image at the target PT coordinates through the dome camera includes:
Step one, adjusting the dome camera to the position of the target PT coordinates, and acquiring the close-up video stream at the target PT coordinate position through the dome camera.
The electronic device sends a message containing the target PT coordinates to the dome camera, so that the dome camera, after receiving the message, turns to the target PT coordinates and collects the close-up video stream at the target PT coordinate position; the electronic device then acquires the close-up video stream collected by the dome camera.
Step two, analyzing the close-up video stream to acquire a close-up evidence image in the close-up video stream that includes the monitoring target.
The electronic device analyzes the close-up video stream using computer vision techniques to obtain a close-up evidence image in the close-up video stream that includes the monitoring target.
When the dome camera is an intelligent dome camera, optionally, acquiring the close-up evidence image at the target PT coordinates through the dome camera includes:
Step one, adjusting the dome camera to the position of the target PT coordinates, and triggering the dome camera to start a snapshot mode, so that the dome camera captures a close-up evidence image including the monitored target.
The electronic device sends a message containing the target PT coordinates to the dome camera, so that the dome camera, after receiving the message, turns to the target PT coordinates and collects the close-up video stream at that position; the dome camera then analyzes the close-up video stream using computer vision techniques to obtain a close-up evidence image in the close-up video stream that includes the monitoring target.
Step two, receiving the close-up evidence image sent by the dome camera.
The electronic device acquires the close-up evidence image collected by the dome camera at the target PT coordinate position.
In the embodiment of the present application, having the dome camera extract the close-up evidence image from the close-up video stream reduces the processing load of the electronic device.
In order to improve the capture efficiency for the specified monitoring target when several dome cameras can be linked, optionally, adjusting the dome camera to the position of the target PT coordinates includes: adjusting the several dome cameras to the target PT coordinates and to monitoring positions adjacent to the target PT coordinates, respectively.
The several dome cameras are respectively responsible for monitoring the target PT coordinates and the monitoring positions adjacent to them, which improves the capture efficiency for the specified monitoring target. The monitoring areas of the dome cameras may partially overlap, reducing detection failures caused by the specified monitoring target lying at the edge of a single dome camera's monitoring area; of course, the areas may also be non-overlapping, as set according to the actual situation. The correspondence between the dome cameras and the monitoring positions can be divided randomly, or calculated through a shortest-path style algorithm, taking the angle each dome camera must rotate to reach a candidate monitoring position as the path cost and choosing the assignment with the shortest total path; a sketch is given below.
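A minimal sketch of such an assignment under the rotation-angle cost described above; brute force over permutations is adequate for the handful of linked dome cameras considered here, and the cost function is an illustrative assumption.

```python
from itertools import permutations

def assign_domes(dome_pts, position_pts):
    """Pair each dome camera with a monitoring position, minimising total
    rotation. Both lists hold (p, t) angles in degrees, equal lengths.
    """
    def cost(dome, pos):
        dp = abs(dome[0] - pos[0]) % 360.0       # pan wraps around 360 degrees
        return min(dp, 360.0 - dp) + abs(dome[1] - pos[1])

    best = min(permutations(range(len(position_pts))),
               key=lambda perm: sum(cost(dome_pts[i], position_pts[j])
                                    for i, j in enumerate(perm)))
    return list(enumerate(best))   # (dome index, position index) pairs
```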
In the embodiment of the present application, the bullet camera collects the panoramic evidence image and the dome camera collects the close-up evidence image, so the close-up evidence image is obtained together with the panoramic evidence image; predicting the position of the target after the preset time period leaves time for the dome camera linkage and improves the success rate of close-up evidence image collection.
The inventor found in research that the electronic device or the dome camera only obtains a close-up evidence image of a certain class of monitoring target, so the collected close-up evidence image may not contain the specified monitoring target. For example, when vehicle A triggers the preset detection event, the detection algorithm in the electronic device or the dome camera extracts a close-up evidence image of a vehicle, and may extract one of vehicle B instead, so the panoramic evidence image and the close-up evidence image may be incorrectly matched. In view of this, optionally, referring to fig. 4, after the dome camera acquires the close-up evidence image at the target PT coordinates, the multi-camera joint forensics method of the embodiment of the present application further includes:
and S106, judging whether the monitoring target in the close-up evidence image and the specified monitoring target are the same target.
The electronic equipment can compare the characteristics of the monitoring target in the close-up evidence image with the designated monitoring target through methods such as characteristic comparison and the like, so that whether the monitoring target in the close-up evidence image and the designated monitoring target are the same target or not is judged.
Optionally, the determining whether the monitoring target in the close-up evidence image and the specified monitoring target are the same target includes:
and S1061, judging whether the position of the monitoring target in the close-up evidence image is matched with the motion track of the specified monitoring target in the panoramic video stream according to the shooting time of the close-up evidence image and the PT coordinates of the ball machine for shooting the close-up evidence image.
When the monitoring target is a vehicle, the dome camera can send the license plate information of the vehicle, the snapshot time of the close-up evidence image, the vehicle modeling result, the PT coordinates when the dome camera shoots the close-up evidence image and the like to the electronic equipment besides sending the close-up evidence image. The vehicle modeling result can be applied to follow-up feature matching, and the PT coordinates of the ball machine during capturing the close-up evidence image and the capturing time of the close-up evidence image can be applied to detection of whether the position and the motion track of the monitoring target are matched or not.
The electronic device converts the location of the monitored target in the close-up evidence image to a location in the bolt face image, hereinafter referred to as a mapping location. And comparing the mapping position with the motion trail of the specified monitoring target in the panoramic video stream. And judging whether the position of the monitoring target in the close-up evidence image is consistent with the motion track of the specified monitoring target in the panoramic video stream, namely judging whether the mapping position is consistent with the motion track of the specified monitoring target in the panoramic video stream.
S1062, if they do not match, judging that the monitoring target in the close-up evidence image and the specified monitoring target are not the same target.
S1063, if they match, performing feature matching between the monitoring target in the close-up evidence image and the specified monitoring target; when the features match as the same target, judging that the monitoring target in the close-up evidence image and the specified monitoring target are the same target.
S1064, when the features of the monitoring target in the close-up evidence image and the specified monitoring target match as different targets, judging that the monitoring target in the close-up evidence image and the specified monitoring target are not the same target.
Because feature matching consumes more computing resources, in this embodiment the motion trajectory is checked first, and feature matching is performed only when the position of the monitoring target in the close-up evidence image matches the motion trajectory of the specified monitoring target in the panoramic video stream. Cases that are clearly not the same target are filtered out by the trajectory check, which reduces the number of feature-matching operations and saves computing resources. A sketch of this two-stage check is given below.
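A minimal sketch of the two-stage check of S1061–S1064; the position tolerance, the cosine-similarity feature comparison and its threshold are illustrative assumptions, and mapping the close-up detection into the bullet camera image is assumed to have been done already.

```python
def same_target(snap_time, mapped_xy, trajectory, feat_closeup, feat_target,
                pos_tol=30.0, feat_tol=0.7):
    """Two-stage target check.

    trajectory: [(t, x, y), ...] of the specified target in the bullet
    camera image; mapped_xy: close-up detection mapped into that image;
    feat_*: feature vectors from target modeling.
    """
    # Stage 1 (S1061/S1062): trajectory gate at the snapshot time.
    t, x, y = min(trajectory, key=lambda p: abs(p[0] - snap_time))
    if ((x - mapped_xy[0]) ** 2 + (y - mapped_xy[1]) ** 2) ** 0.5 > pos_tol:
        return False

    # Stage 2 (S1063/S1064): feature matching via cosine similarity.
    dot = sum(a * b for a, b in zip(feat_closeup, feat_target))
    na = sum(a * a for a in feat_closeup) ** 0.5
    nb = sum(b * b for b in feat_target) ** 0.5
    return dot / (na * nb + 1e-9) >= feat_tol
```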
S107, when the monitoring target in the close-up evidence image and the specified monitoring target are the same target, uploading the panoramic evidence image and the close-up evidence image to an alarm platform.
When the monitoring target in the close-up evidence image and the specified monitoring target are the same target, the electronic device uploads the panoramic evidence image and the close-up evidence image to the alarm platform. Specifically, the panoramic evidence image and the close-up evidence image can be composited and the composite image uploaded to the alarm platform; alternatively, without compositing, the panoramic evidence image and the close-up evidence image are marked by a preset calibration method as evidence images of the same specified monitoring target.
For example, several panoramic evidence images from the bullet camera and one close-up evidence image from the dome camera may be composited; the composition format supports top-bottom, left-right, 田-shaped (2x2 grid) layouts, and the like. Taking three panoramic evidence images as an example, a composite image in the 田-shaped format is shown in fig. 5. A compositing sketch follows.
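A minimal sketch of the 田-shaped (2x2) composition with three panoramic images and one close-up; the cell size and the dependency-free nearest-neighbour resize are illustrative assumptions (a real system would use a proper image resize).

```python
import numpy as np

def compose_grid(panoramas, close_up, cell_hw=(540, 960)):
    """Composite three panoramic evidence images and one close-up evidence
    image into a 2x2 grid. All inputs are HxWx3 uint8 arrays.
    """
    def resize(img, hw):
        h, w = hw
        ys = np.arange(h) * img.shape[0] // h   # nearest-neighbour sampling
        xs = np.arange(w) * img.shape[1] // w
        return img[ys][:, xs]

    cells = [resize(im, cell_hw) for im in (*panoramas[:3], close_up)]
    top = np.hstack(cells[:2])
    bottom = np.hstack(cells[2:])
    return np.vstack([top, bottom])
```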
Besides the evidence images, the electronic device can also upload the violation type, violation time, license plate information, vehicle feature information, scene information and the like of the specified monitoring target to the alarm platform, so that the alarm platform can perform later operations such as display, retrieval and punishment.
The following describes the multi-camera joint forensics method according to an embodiment of the present application, taking a vehicle as the monitored target.
Before forensics, the monitoring positions of the bullet camera and the dome camera need to be calibrated: a specific position in the bullet camera image is associated with a PTZ position of a dome camera, and by associating several such points, the association between any position in the bullet camera image and the PTZ coordinates of the dome camera can be calculated; one bullet camera can be calibrated against several dome cameras, establishing the mapping between positions in the bullet camera image and the PTZ coordinates of any of them. The preset detection events are then configured: the bullet camera picture monitors the large scene, and lane line information, lane directions and the like are added to the bullet camera picture; the preset detection events include, but are not limited to, violations such as illegal parking, driving in the wrong direction, line pressing, making a U-turn, occupying a non-motor-vehicle lane, and illegal lane changing.
During forensics, vehicle detection is performed on the panoramic video stream of the bullet camera, and whether a vehicle in the picture triggers a preset detection event is judged from the driving trajectory of the vehicle in combination with the configured lane information, lane direction and the like; if a target triggers an event, the following steps are performed:
One or more pictures are extracted from the process in which the vehicle triggers the preset detection event, to obtain panoramic evidence images that record the whole violation process; a linked dome camera shoots a high-definition picture of the vehicle; the driving trajectory of the target vehicle in the picture is recorded, together with the time of each specific trajectory point; and target modeling is performed on the target vehicle, where the modeling result is used for target comparison and is built from the vehicle type, vehicle color and other related feature information.
While the linked dome camera captures the vehicle, the motion position of the vehicle needs to be predicted: when a vehicle triggers an event, the position the vehicle will reach in the bullet camera picture after a certain time, which becomes the target position, is predicted from the vehicle's driving trajectory and speed in the bullet camera picture, in combination with the configured lane information, lane direction and the like. The prediction logic includes the following:
If the target always advances in a certain direction, it is determined to be at a position point in the same direction after a period of time; if a lane-changing action of the target is detected, its position after a period of time is predicted to be ahead in the lane after the lane change; position correction is performed in combination with the lane lines, so that the predicted position of the vehicle does not exceed the lane area; if the target speed tends toward a stop, the predicted position is actively moved forward or set directly near the target vehicle for active capture. For the linked snapshot, one or more dome cameras are linked to the vicinity of the corresponding positions according to the target position and zoomed to a suitable magnification for the snapshot; when several dome cameras are linked, the covered cross-section is wider, which improves the success rate of capturing the vehicle. Before the vehicle reaches the target position, the bullet camera can continuously correct the target position according to the latest trajectory of the target and re-link the dome cameras, ensuring the accuracy of the dome cameras' latest positions.
Capturing the close-up evidence image through the dome camera: after linkage, the dome camera enters a vehicle snapshot mode, in which it can capture passing vehicles and analyze their license plate information. The alarm information of each vehicle captured during this period is sent to the electronic device; besides the vehicle picture (the close-up evidence image), the alarm information may also include the license plate information, the snapshot time, the vehicle modeling result, the current PTZ position of the dome camera, and the like.
Associating the panoramic evidence image with the close-up evidence image: the electronic device receives the alarm information returned by the dome camera, and compares the snapshot time and the PTZ position of the dome camera in the alarm information with the trajectory nodes of the target vehicle in the bullet camera image; if they do not match, the alarm is discarded; if they match, the vehicle modeling result of the dome camera is compared with that of the bullet camera image, and if these are consistent, the vehicle captured by the dome camera and the target vehicle captured by the bullet camera are determined to be the same vehicle.
Compositing the panoramic evidence image and the close-up evidence image: when the images are uploaded, several panoramic evidence images and one close-up evidence image of the dome camera can be composited, and the composition format supports top-bottom, left-right, 田-shaped and other modes. Referring to fig. 5, taking three panoramic evidence images as an example, the four images are finally composited into the 田-shaped format; of course, the panoramic evidence images and the close-up evidence image may also be left uncomposited. The alarm picture, violation type, violation time, license plate information, vehicle feature information, scene information and the like are combined into an alarm message and sent to the alarm platform, so that the alarm platform can perform later operations such as display, retrieval and punishment.
In the embodiment of the present application, the linked-dome-camera mode solves the problem that evidence cannot be obtained after a vehicle makes a U-turn or stops moving; with linkage, even a single dome camera working with the bullet camera can cover forensics of the whole large scene; the prediction mode makes the dome camera linkage more accurate and improves the success rate of the dome camera capturing the target vehicle; trajectory matching and modeling matching improve the accuracy of target matching across multiple devices; and linking several dome cameras with the bullet camera can cover forensics of a wider area and further improve the success rate of capturing the target vehicle.
An embodiment of the present application further provides a multi-camera joint forensics system; referring to fig. 6, the system includes a bullet camera 601 and a dome camera 602, where the number of dome cameras 602 may be one or more. The bullet camera 601, when running, implements the following steps:
acquiring a panoramic video stream collected by the bullet camera, and judging whether any monitoring target in the panoramic video stream triggers a preset detection event;
if a monitoring target triggers the preset detection event, acquiring video frames in the panoramic video stream in which a specified monitoring target triggers the preset detection event, to obtain a panoramic evidence image, where the specified monitoring target is the monitoring target that triggers the preset detection event;
predicting the predicted position of the specified monitoring target in the bullet camera image after a preset time period;
determining the target PT coordinates of the dome camera corresponding to the predicted position according to a pre-established association between each position in the bullet camera image and the PT coordinates of the dome camera;
and acquiring a close-up evidence image at the target PT coordinates through the dome camera.
Optionally, the bullet camera 601, when running, can also implement any of the multi-camera joint forensics methods described above.
An embodiment of the present application further provides a multi-camera joint forensics system; referring to fig. 7, the system includes a server 701, a bullet camera 702 and a dome camera 703, where the number of dome cameras 703 may be one or more.
The server 701 implements the following steps during operation:
acquiring a panoramic video stream collected by the bullet camera, and judging whether any monitoring target in the panoramic video stream triggers a preset detection event;
if a monitoring target triggers the preset detection event, acquiring video frames in the panoramic video stream in which a specified monitoring target triggers the preset detection event, to obtain a panoramic evidence image, where the specified monitoring target is the monitoring target that triggers the preset detection event;
predicting the predicted position of the specified monitoring target in the bullet camera image after a preset time period;
determining the target PT coordinates of the dome camera corresponding to the predicted position according to a pre-established association between each position in the bullet camera image and the PT coordinates of the dome camera;
and acquiring a close-up evidence image at the target PT coordinates through the dome camera.
Optionally, the server 701, when running, may further implement any of the multi-camera joint forensics methods described above.
An embodiment of the present application further provides a multi-camera joint forensics apparatus; referring to fig. 8, the apparatus includes:
a video stream detection module 801, configured to acquire a panoramic video stream collected by a bullet camera and judge whether any monitoring target in the panoramic video stream triggers a preset detection event;
a panoramic evidence obtaining module 802, configured to, if a monitoring target triggers the preset detection event, obtain video frames in the panoramic video stream in which a specified monitoring target triggers the preset detection event, to obtain a panoramic evidence image, where the specified monitoring target is the monitoring target that triggers the preset detection event;
a position prediction module 803, configured to predict the predicted position of the specified monitoring target in the bullet camera image after a preset time period;
a target coordinate determination module 804, configured to determine the target PT coordinates of the dome camera corresponding to the predicted position according to a pre-established association between each position in the bullet camera image and the PT coordinates of the dome camera;
and a close-up evidence acquisition module 805, configured to acquire, through the dome camera, a close-up evidence image at the target PT coordinates.
Optionally, the multi-camera joint forensics apparatus of the embodiment of the present application further includes a coordinate association establishing module; the coordinate association establishing module includes:
an erection parameter acquisition submodule, configured to acquire the GPS coordinates of the erection position of the dome camera and the erection height of the dome camera;
a longitude and latitude distance calculation submodule, configured to, for any position in the bullet camera image, determine the GPS coordinates of the actual position of that position in the actual scene, and calculate the longitude and latitude distances between the dome camera and the actual position according to the GPS coordinates of the actual position and those of the dome camera;
a horizontal distance calculation submodule, configured to calculate the horizontal distance between the actual position and the dome camera according to the longitude and latitude distances;
a horizontal angle calculation submodule, configured to calculate the horizontal angle between the actual position and a designated direction through a trigonometric function, according to the longitude and latitude distances;
a P coordinate determination submodule, configured to determine the P coordinate of the dome camera according to the horizontal angle;
and a T coordinate determination submodule, configured to calculate the T coordinate of the dome camera according to the horizontal distance and the erection height of the dome camera, thereby obtaining the association between any position in the bullet camera image and the PT coordinates of the dome camera.
Optionally, the panoramic video stream carries lane line information and lane direction information, the specified monitored target is a vehicle, and the preset detection event includes one or more of: illegal parking, driving in the wrong direction, line pressing, making a U-turn, a motor vehicle occupying a non-motor-vehicle lane, and illegal lane changing.
Optionally, the position prediction module 803 includes:
a driving parameter determination submodule, configured to determine driving parameters of the specified monitoring target according to the panoramic video stream, where the driving parameters include a motion trajectory, a speed and an acceleration;
and a predicted position determination submodule, configured to predict the predicted position of the specified monitoring target in the bullet camera image after the preset time period according to the driving parameters of the specified monitoring target.
Optionally, the predicted position determination submodule includes:
a first position prediction unit, configured to judge according to the motion trajectory of the specified monitoring target and, if the specified monitoring target is driving straight, predict its predicted position in the straight-ahead direction after the preset time period according to its speed and acceleration;
a second position prediction unit, configured to judge according to the motion trajectory of the specified monitoring target and, if the specified monitoring target performs a lane-changing action, predict its predicted position in the lane after the lane change after the preset time period according to its speed and acceleration;
and a third position prediction unit, configured to judge according to the speed and the acceleration of the specified monitoring target and, if the specified monitoring target will stop within a preset first distance threshold, take a position that is a preset second distance ahead of the current position of the specified monitoring target, along its motion trajectory, as the predicted position.
Optionally, the close-up evidence acquisition module 805 includes:
a dome camera position adjustment submodule, configured to adjust the dome camera to the position of the target PT coordinates;
a close-up video stream acquisition submodule, configured to acquire, through the dome camera, the close-up video stream at the target PT coordinate position;
and a close-up video stream analysis submodule, configured to analyze the close-up video stream and acquire a close-up evidence image in the close-up video stream that includes the monitoring target.
Optionally, the close-up evidence acquisition module 805 includes:
a dome camera position adjustment submodule, configured to adjust the dome camera to the position of the target PT coordinates;
a snapshot mode triggering submodule, configured to trigger the dome camera to start a snapshot mode, so that the dome camera captures a close-up evidence image including the monitored target;
and a close-up evidence image receiving submodule, configured to receive the close-up evidence image sent by the dome camera.
Optionally, the dome camera position adjustment submodule is specifically configured to: adjust several dome cameras to the target PT coordinates and to monitoring positions adjacent to the target PT coordinates, respectively.
Optionally, the multi-camera joint forensics apparatus of the embodiment of the present application further includes:
a same-target judgment module, configured to judge whether the monitoring target in the close-up evidence image and the specified monitoring target are the same target;
and an evidence image uploading module, configured to upload the panoramic evidence image and the close-up evidence image to an alarm platform when the monitoring target in the close-up evidence image and the specified monitoring target are the same target.
Optionally, the same-target judgment module includes:
a motion trajectory judgment submodule, configured to judge whether the position of the monitoring target in the close-up evidence image matches the motion trajectory of the specified monitoring target in the panoramic video stream, according to the snapshot time of the close-up evidence image and the PT coordinates of the dome camera that shot the close-up evidence image;
a first judgment submodule, configured to, if they do not match, judge that the monitoring target in the close-up evidence image and the specified monitoring target are not the same target;
a feature matching submodule, configured to, if they match, perform feature matching between the monitoring target in the close-up evidence image and the specified monitoring target and, when the features match as the same target, judge that the monitoring target in the close-up evidence image and the specified monitoring target are the same target;
and a second judgment submodule, configured to judge, when the features of the monitoring target in the close-up evidence image and the specified monitoring target match as different targets, that the monitoring target in the close-up evidence image and the specified monitoring target are not the same target.
It is noted that, herein, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises that element.
The embodiments in this specification are described in a related manner; identical or similar parts may be cross-referenced between embodiments, and each embodiment focuses on its differences from the others. In particular, since the apparatus, system, electronic device, and storage medium embodiments are substantially similar to the method embodiments, their descriptions are relatively brief; for relevant details, refer to the corresponding parts of the method embodiment descriptions.
The above description is only for the preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (13)

1. A multi-camera joint forensics method, comprising:
acquiring a panoramic video stream collected by a gun camera, and judging whether a monitoring target in the panoramic video stream triggers a preset detection event;
if a monitoring target triggers the preset detection event, acquiring a video frame in the panoramic video stream in which a specified monitoring target triggers the preset detection event, to obtain a panoramic evidence image, wherein the specified monitoring target is the monitoring target that triggers the preset detection event;
predicting the position of the specified monitoring target in the gun camera image after a preset time period, to obtain a predicted position;
determining a target PT coordinate of a dome camera corresponding to the predicted position according to a pre-established association between each position in the gun camera image and the PT coordinates of the dome camera;
and acquiring, through the dome camera, a close-up evidence image at the target PT coordinate.
2. The method of claim 1, wherein pre-establishing the association between each position in the gun camera image and the PT coordinates of the dome camera comprises:
acquiring the GPS coordinates of the dome camera's installation position and the installation height of the dome camera;
for any position in the gun camera image, determining the GPS coordinates of the corresponding actual position in the real scene, and calculating the longitude and latitude distances between the dome camera and the actual position according to the GPS coordinates of the actual position and of the dome camera;
calculating the horizontal distance between the actual position and the dome camera according to the longitude and latitude distances;
calculating the horizontal angle between the actual position and a specified direction through a trigonometric function, according to the longitude and latitude distances;
determining the P coordinate of the dome camera according to the horizontal angle;
and calculating the T coordinate of the dome camera according to the horizontal distance and the installation height of the dome camera, thereby obtaining the association between any position in the gun camera image and the PT coordinates of the dome camera (a coordinate-mapping sketch is given after the claims).
3. The method according to claim 1, wherein the panoramic video stream includes lane line information and lane direction information, the specified monitoring target is a vehicle, and the preset detection event includes one or more of: illegal parking, driving in the wrong direction, line pressing, making a U-turn, occupation of a non-motor-vehicle lane by a motor vehicle, and illegal lane changing.
4. The method of claim 1, wherein predicting the position of the specified monitoring target in the gun camera image after a preset time period comprises:
determining driving parameters of the specified monitoring target according to the panoramic video stream, wherein the driving parameters comprise a motion trajectory, a speed, and an acceleration;
and predicting the position of the specified monitoring target in the gun camera image after the preset time period according to its driving parameters.
5. The method of claim 4, wherein predicting the position of the specified monitoring target in the gun camera image after the preset time period according to its driving parameters comprises:
judging according to the motion trajectory of the specified monitoring target: if the specified monitoring target is traveling straight, predicting its position in the straight-line direction after the preset time period according to its speed and acceleration;
judging according to the motion trajectory of the specified monitoring target: if the specified monitoring target is changing lanes, predicting its position in the post-change lane after the preset time period according to its speed and acceleration;
and judging according to the speed and acceleration of the specified monitoring target: if the specified monitoring target will stop within a preset first distance threshold, taking a position a preset second distance ahead of its current position, along its motion trajectory, as the predicted position.
6. The method of claim 1, wherein acquiring, through the dome camera, a close-up evidence image at the target PT coordinate comprises:
adjusting the dome camera to the position of the target PT coordinate, and acquiring a close-up video stream at that position through the dome camera;
and analyzing the close-up video stream to acquire a close-up evidence image including the monitoring target.
7. The method of claim 1, wherein acquiring, through the dome camera, a close-up evidence image at the target PT coordinate comprises:
adjusting the dome camera to the position of the target PT coordinate, and triggering the dome camera to start a snapshot mode, so that the dome camera captures a close-up evidence image including the monitoring target;
and receiving the close-up evidence image sent by the dome camera.
8. The method of claim 6 or 7, wherein adjusting the dome camera to the position of the target PT coordinate comprises:
adjusting a plurality of dome cameras to the target PT coordinates and to monitoring positions adjacent to the target PT coordinates, respectively.
9. The method of claim 1, wherein after acquiring, through the dome camera, the close-up evidence image at the target PT coordinate, the method further comprises:
judging whether the monitoring target in the close-up evidence image and the specified monitoring target are the same target;
and uploading the panoramic evidence image and the close-up evidence image to an alarm platform when they are the same target.
10. The method of claim 9, wherein judging whether the monitoring target in the close-up evidence image and the specified monitoring target are the same target comprises:
judging whether the position of the monitoring target in the close-up evidence image is consistent with the motion trajectory of the specified monitoring target in the panoramic video stream, according to the shooting time of the close-up evidence image and the PT coordinates of the dome camera that shot it;
if not, judging that the monitoring target in the close-up evidence image and the specified monitoring target are not the same target;
if so, performing feature matching between the monitoring target in the close-up evidence image and the specified monitoring target, and judging that they are the same target when their features match;
and judging that they are not the same target when feature matching identifies them as different targets.
11. A multi-camera joint forensics device, the device comprising:
a video stream detection module, configured to acquire a panoramic video stream collected by a gun camera and judge whether a monitoring target in the panoramic video stream triggers a preset detection event;
a panoramic evidence obtaining module, configured to, if a monitoring target triggers the preset detection event, acquire a video frame in the panoramic video stream in which a specified monitoring target triggers the preset detection event, to obtain a panoramic evidence image, wherein the specified monitoring target is the monitoring target that triggers the preset detection event;
a position prediction module, configured to predict the position of the specified monitoring target in the gun camera image after a preset time period;
a target coordinate determination module, configured to determine a target PT coordinate of a dome camera corresponding to the predicted position according to a pre-established association between each position in the gun camera image and the PT coordinates of the dome camera;
and a close-up evidence acquisition module, configured to acquire, through the dome camera, a close-up evidence image at the target PT coordinate.
12. A multi-camera joint forensics system, the system comprising a gun camera and a dome camera, wherein the gun camera, when operating, implements the method steps of any one of claims 1-10.
13. A multi-camera joint forensics system, the system comprising a server, a gun camera, and a dome camera, wherein the server, when operating, implements the method steps of any one of claims 1-10.
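As referenced in claim 2, the mapping from a ground position's GPS coordinates to the dome camera's PT coordinates can be sketched as follows. This is a minimal sketch under assumptions the patent does not state: a locally flat Earth (one degree of latitude ≈ 111,320 m), pan measured clockwise from due north as the specified direction, and tilt measured downward from the horizontal; position_to_pt and the constant names are illustrative.

    import math

    METERS_PER_DEG_LAT = 111_320.0  # approximate; flat-Earth assumption

    def position_to_pt(cam_lat, cam_lon, cam_height, pos_lat, pos_lon):
        """Map an actual position's GPS coordinates to dome camera PT coordinates.

        Follows the steps of claim 2: latitude/longitude distances ->
        horizontal distance -> horizontal angle (P coordinate) -> tilt
        from the installation height (T coordinate).
        """
        # Latitude/longitude distances in meters (north and east components).
        d_north = (pos_lat - cam_lat) * METERS_PER_DEG_LAT
        d_east = (pos_lon - cam_lon) * METERS_PER_DEG_LAT * math.cos(math.radians(cam_lat))
        # Horizontal distance between the dome camera and the actual position.
        horiz = math.hypot(d_north, d_east)
        # P: horizontal angle relative to the specified direction (here, due north).
        pan = math.degrees(math.atan2(d_east, d_north)) % 360.0
        # T: depression angle from the installation height and horizontal distance.
        tilt = math.degrees(math.atan2(cam_height, horiz))
        return pan, tilt

    # Example: a point roughly 100 m north-east of a camera installed 8 m high.
    print(position_to_pt(30.0000, 120.0000, 8.0, 30.00064, 120.00073))

For the example point, this yields a pan of about 45 degrees and a tilt of about 4.6 degrees; in practice the mapping would be precomputed for each gun camera image position to build the association table the claims describe.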
CN201910380772.5A 2019-05-08 2019-05-08 Multi-camera combined evidence obtaining method, device and system Active CN111914592B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910380772.5A CN111914592B (en) 2019-05-08 2019-05-08 Multi-camera combined evidence obtaining method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910380772.5A CN111914592B (en) 2019-05-08 2019-05-08 Multi-camera combined evidence obtaining method, device and system

Publications (2)

Publication Number Publication Date
CN111914592A true CN111914592A (en) 2020-11-10
CN111914592B CN111914592B (en) 2023-09-05

Family

ID=73242545

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910380772.5A Active CN111914592B (en) 2019-05-08 2019-05-08 Multi-camera combined evidence obtaining method, device and system

Country Status (1)

Country Link
CN (1) CN111914592B (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040222904A1 (en) * 2003-05-05 2004-11-11 Transol Pty Ltd Traffic violation detection, recording and evidence processing system
US20050280707A1 (en) * 2004-02-19 2005-12-22 Sezai Sablak Image stabilization system and method for a video camera
CN103716594A (en) * 2014-01-08 2014-04-09 深圳英飞拓科技股份有限公司 Panorama splicing linkage method and device based on moving target detecting
CN105072414A (en) * 2015-08-19 2015-11-18 浙江宇视科技有限公司 Method and system for detecting and tracking target
CN105979210A (en) * 2016-06-06 2016-09-28 深圳市深网视界科技有限公司 Pedestrian identification system based on multi-ball multi-gun camera array
CN108447091A (en) * 2018-03-27 2018-08-24 北京颂泽科技有限公司 Object localization method, device, electronic equipment and storage medium
CN109309809A (en) * 2017-07-28 2019-02-05 阿里巴巴集团控股有限公司 The method and data processing method, device and system of trans-regional target trajectory tracking
CN109584309A (en) * 2018-11-16 2019-04-05 厦门博聪信息技术有限公司 A kind of twin-lens emergency cloth ball-handling of rifle ball linkage


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112911231A (en) * 2021-01-22 2021-06-04 杭州海康威视数字技术股份有限公司 Linkage method and system of monitoring cameras
CN112911231B (en) * 2021-01-22 2023-03-07 杭州海康威视数字技术股份有限公司 Linkage method and system of monitoring cameras
CN113591651A (en) * 2021-07-22 2021-11-02 浙江大华技术股份有限公司 Image capturing method, image display method and device and storage medium
CN114677841A (en) * 2022-02-10 2022-06-28 浙江大华技术股份有限公司 Vehicle lane change detection method and vehicle lane change detection system
CN114677841B (en) * 2022-02-10 2023-12-29 浙江大华技术股份有限公司 Vehicle lane change detection method and vehicle lane change detection system
WO2024083113A1 (en) * 2022-10-20 2024-04-25 Zhejiang Dahua Technology Co., Ltd. Methods, systems, and computer-readable media for target tracking

Also Published As

Publication number Publication date
CN111914592B (en) 2023-09-05

Similar Documents

Publication Publication Date Title
CN111914592B (en) Multi-camera combined evidence obtaining method, device and system
CN111145545B (en) Road traffic behavior unmanned aerial vehicle monitoring system and method based on deep learning
JP7218535B2 (en) Traffic violation vehicle identification system and server
KR101647370B1 (en) road traffic information management system for g using camera and radar
US8098290B2 (en) Multiple camera system for obtaining high resolution images of objects
US10229588B2 (en) Method and device for obtaining evidences for illegal parking of a vehicle
CN109817014B (en) Roadside inspection parking charging method based on mobile video and high-precision positioning
JP4003623B2 (en) Image processing system using a pivotable surveillance camera
US9154741B2 (en) Apparatus and method for processing data of heterogeneous sensors in integrated manner to classify objects on road and detect locations of objects
JP6815262B2 (en) Traffic violation detectors, systems, traffic violation detection methods and programs
KR101496390B1 (en) System for Vehicle Number Detection
CN108810390A (en) A kind of the large scene vehicle illegal candid camera and its vehicle illegal grasp shoot method of rifle ball cooperating type
KR101678004B1 (en) node-link based camera network monitoring system and method of monitoring the same
KR20200064873A (en) Method for detecting a speed employing difference of distance between an object and a monitoring camera
CN106503622A (en) A kind of vehicle antitracking method and device
KR102061264B1 (en) Unexpected incident detecting system using vehicle position information based on C-ITS
CN111275957A (en) Traffic accident information acquisition method, system and camera
CN111696365A (en) Vehicle tracking system
CN111290001A (en) Target overall planning method, device and equipment based on GPS coordinates
CN115909223A (en) Method and system for matching WIM system information with monitoring video data
CN110580809B (en) Expressway rescue lane occupation snapshot method
KR101033237B1 (en) Multi-function detecting system for vehicles and security using 360 deg. wide image and method of detecting thereof
KR101161557B1 (en) The apparatus and method of moving object tracking with shadow removal moudule in camera position and time
CN116342642A (en) Target tracking method, device, electronic equipment and readable storage medium
JP2006173872A (en) Camera controller

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant