CN111145213A - Target tracking method, device and system and computer readable storage medium - Google Patents

Target tracking method, device and system and computer readable storage medium

Info

Publication number
CN111145213A
Authority
CN
China
Prior art keywords
camera
detection
target
frame
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911258014.2A
Other languages
Chinese (zh)
Inventor
任培铭
刘金杰
乐振浒
张翔
林诰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Unionpay Co Ltd
Original Assignee
China Unionpay Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Unionpay Co Ltd filed Critical China Unionpay Co Ltd
Priority to CN201911258014.2A priority Critical patent/CN111145213A/en
Publication of CN111145213A publication Critical patent/CN111145213A/en
Priority to TW109127141A priority patent/TWI795667B/en
Priority to PCT/CN2020/109081 priority patent/WO2021114702A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/292 Multi-camera tracking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30241 Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)
  • Burglar Alarm Systems (AREA)

Abstract

The invention provides a target tracking method, a device, a system and a computer readable storage medium, wherein the method comprises the following steps: acquiring current frames to be detected of a plurality of cameras arranged in a monitoring area; sequentially carrying out target detection on the current frame to be detected of each camera in the plurality of cameras to obtain a detection frame set corresponding to each camera; and tracking the target according to the detection frame set corresponding to each camera, and determining the global target track according to the tracking result. By this method, the computing resources required for multi-camera target tracking can be reduced.

Description

Target tracking method, device and system and computer readable storage medium
Technical Field
The invention belongs to the field of image processing, and particularly relates to a target tracking method, device and system and a computer readable storage medium.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
At present, with the popularization of video monitoring technology and ever-increasing security requirements, target tracking applied to the video monitoring field has gradually become one of the hotspots of computer vision research. Tracking the moving track of a target object generally requires acquiring an image of a camera's monitoring area, performing target detection on the image to identify the target, and tracking the identified target object so as to obtain its complete track. Due to the complexity of monitoring scenes and the limited field of view of a single camera, the cooperation of multiple cameras may be required to achieve global coverage of the monitored area. However, existing multi-camera target tracking methods need to analyze images and realize target tracking through deep learning, and as the number of cameras increases, the demand for computing and communication resources grows sharply at the same time, which creates a technical bottleneck for target tracking.
Disclosure of Invention
In view of the above problems in the prior art, a target tracking method, apparatus, system and computer-readable storage medium are provided, by which the above problems can be solved.
The present invention provides the following.
In a first aspect, a target tracking method is provided, including: acquiring current frames to be detected of a plurality of cameras arranged in a monitoring area; sequentially carrying out target detection on a current frame to be detected of each camera in the plurality of cameras to obtain a detection frame set corresponding to each camera; and tracking the target according to the detection frame set corresponding to each camera, and determining the global target track according to the tracking result.
In some possible embodiments, the method further comprises: determining a plurality of frame numbers to be tested, and iteratively acquiring current frames to be tested of a plurality of cameras according to the plurality of frame numbers to be tested in a time sequence manner, so as to iteratively perform target tracking; obtaining an initial global target track according to the initial frame number to be detected in the plurality of frame numbers to be detected; and obtaining the global target track after iterative updating according to the correspondence of the subsequent frame number to be tested in the plurality of frame numbers to be tested.
In some possible embodiments, the performing target detection on the current frame to be detected of each camera includes: inputting the current frame to be detected of each camera into a target detection model for target detection; the target detection model is a pedestrian detection model obtained based on neural network training.
In some possible embodiments, after obtaining the detection frame set corresponding to each camera, the method further includes: and performing projection transformation on the frame bottom center point of each detection frame in the detection frame set corresponding to each camera according to the framing position of each camera, so as to determine the ground coordinates of each detection frame.
In some possible embodiments, the viewing areas of the plurality of cameras at least partially overlap, the method further comprising: dividing a working area of each camera in a ground coordinate system according to a view area of each camera; the working areas of the cameras are not overlapped, and if the ground coordinates of any detection frame corresponding to a first camera in the cameras exceed the corresponding working area, any detection frame is removed from the detection frame set of the first camera.
In some possible embodiments, the method further comprises: and cutting off non-critical areas in the working area of each camera.
In some possible embodiments, the tracking according to the detection frame set corresponding to each camera includes: performing multi-target tracking by adopting a multi-target tracking algorithm based on the detection frame set corresponding to each camera, and determining local tracking information corresponding to each camera; the parameters adopted by the multi-target tracking are determined based on the historical frames to be measured of each camera.
In some possible embodiments, the multi-target tracking algorithm is the DeepSORT algorithm.
In some possible embodiments, the method further comprises: adding an identity mark for each detection frame according to the local tracking information corresponding to each camera; and determining the global target track after iterative updating based on the identity and the ground coordinates of each detection frame.
In some possible embodiments, the method further comprises: determining an incidence relation among the cameras according to the working areas of the cameras; determining a newly added detection frame and a disappeared detection frame in the corresponding working area according to the local tracking information of each camera; associating the newly added detection frame and the disappeared detection frame in different working areas according to the association relationship among the cameras to obtain association information; and determining the global target track after iterative updating according to the associated information.
In a second aspect, there is provided a target tracking apparatus, comprising: the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring current frames to be detected of a plurality of cameras arranged in a monitoring area; the detection unit is used for sequentially carrying out target detection on the current frame to be detected of each camera in the multiple cameras to obtain a detection frame set corresponding to each camera; and the tracking unit is used for tracking the target according to the detection frame set corresponding to each camera and determining the global target track according to the tracking result.
In some possible embodiments, the method further comprises: the frame selecting unit is used for determining a plurality of frame numbers to be tested, and iteratively acquiring the current frames to be tested of the plurality of cameras according to the plurality of frame numbers to be tested in a time sequence manner, so as to iteratively perform target tracking; obtaining an initial global target track according to the initial frame number to be detected in the plurality of frame numbers to be detected; and obtaining the global target track after iterative updating according to the correspondence of the subsequent frame number to be tested in the plurality of frame numbers to be tested.
In some possible embodiments, the detection unit is further configured to: inputting the current frame to be detected of each camera into a target detection model for target detection; the target detection model is a pedestrian detection model obtained based on neural network training.
In some possible embodiments, the detection unit is further configured to: after the detection frame set corresponding to each camera is obtained, performing projection transformation on the frame bottom center point of each detection frame in the detection frame set corresponding to each camera according to the view finding position of each camera, and determining the ground coordinates of each detection frame.
In some possible embodiments, the viewing areas of the plurality of cameras at least partially overlap, the apparatus further being configured to: dividing a working area of each camera in a ground coordinate system according to a view area of each camera; the working areas of the cameras are not overlapped, and if the ground coordinates of any detection frame corresponding to a first camera in the cameras exceed the corresponding working area, any detection frame is removed from the detection frame set of the first camera.
In some possible embodiments, the detection unit is further configured to: and cutting off non-critical areas in the working area of each camera.
In some possible embodiments, the tracking unit is further configured to: performing multi-target tracking by adopting a multi-target tracking algorithm based on the detection frame set corresponding to each camera, and determining local tracking information corresponding to each camera; the parameters adopted by the multi-target tracking are determined based on the historical frames to be measured of each camera.
In some possible embodiments, the multi-target tracking algorithm is the DeepSORT algorithm.
In some possible embodiments, the tracking unit is further configured to: adding an identity mark for each detection frame according to the local tracking information corresponding to each camera; and determining the global target track after iterative updating based on the identity and the ground coordinates of each detection frame.
In some possible embodiments, the tracking unit is further configured to: determining an incidence relation among the cameras according to the working areas of the cameras; determining a newly added detection frame and a disappeared detection frame in the corresponding working area according to the local tracking information of each camera; associating the newly added detection frame and the disappeared detection frame in different working areas according to the association relationship among the cameras to obtain association information; and determining the global target track after iterative updating according to the associated information.
In a third aspect, a target tracking system is provided, comprising: the system comprises a plurality of cameras arranged in a monitoring area and a target tracking device which is respectively in communication connection with the cameras; wherein the target tracking apparatus is configured to perform the method as in the first aspect.
In a fourth aspect, there is provided a target tracking apparatus comprising: one or more multi-core processors; a memory for storing one or more programs; the one or more programs, when executed by the one or more multi-core processors, cause the one or more multi-core processors to implement: acquiring current frames to be detected of a plurality of cameras arranged in a monitoring area; sequentially carrying out target detection on a current frame to be detected of each camera in the plurality of cameras to obtain a detection frame set corresponding to each camera; and tracking the target according to the detection frame set corresponding to each camera, and determining the global target track according to the tracking result.
In a fifth aspect, there is provided a computer readable storage medium storing a program which, when executed by a multicore processor, causes the multicore processor to perform the method of the first aspect.
The embodiment of the application adopts at least one technical scheme which can achieve the following beneficial effects: in this embodiment, image detection is performed on the current frame to be detected of each camera in sequence, and global tracking is then performed in the monitoring area based on the detection result corresponding to each camera, so that global tracking of a target object in multi-path monitoring videos can be realized with fewer computing resources, reducing the computing resources required for multi-camera target tracking.
It should be understood that the above description is only an overview of the technical solutions of the present invention, provided so that the technical means of the present invention can be understood more clearly and implemented according to the contents of this specification. In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Drawings
The advantages and benefits described herein, as well as other advantages and benefits, will be apparent to those of ordinary skill in the art upon reading the following detailed description of the exemplary embodiments. The drawings are only for purposes of illustrating exemplary embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like elements throughout. In the drawings:
FIG. 1 is a schematic flow chart of a target tracking method according to an embodiment of the invention;
FIG. 2 is a schematic ground view of a monitored area according to an embodiment of the present invention;
FIG. 3 is a schematic view of a plurality of cameras according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating a current frame under test of a plurality of cameras according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a detection frame set corresponding to a plurality of cameras according to an embodiment of the present invention;
FIG. 6 is a diagram of a global target trajectory according to an embodiment of the invention;
FIG. 7 is a schematic structural diagram of a target tracking device according to an embodiment of the invention;
FIG. 8 is a schematic structural diagram of a target tracking device according to another embodiment of the present invention;
fig. 9 is a schematic diagram of a computer-readable storage medium according to an embodiment of the present invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
In the present invention, it is to be understood that terms such as "including" or "having," or the like, are intended to indicate the presence of the disclosed features, numbers, steps, behaviors, components, parts, or combinations thereof, and are not intended to preclude the possibility of the presence of one or more other features, numbers, steps, behaviors, components, parts, or combinations thereof.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The present invention will be described in detail below with reference to the embodiments with reference to the attached drawings.
When a moving target in the monitoring area is tracked, image detection can be performed sequentially on the current frames to be detected from the cameras, and global tracking can then be carried out in the monitoring area based on the detection results corresponding to the cameras, so that the target object in the multi-path monitoring videos is globally tracked with fewer computing resources and the demand for computing resources is reduced.
Having described the general principles of the invention, various non-limiting embodiments of the invention are described in detail below.
Fig. 1 schematically shows a flow diagram of a target tracking method 100 according to an embodiment of the invention. As shown in fig. 1, the method 100 may include:
s101, acquiring current frames to be detected of a plurality of cameras arranged in a monitoring area;
specifically, the monitoring area refers to the sum of the viewing areas of a plurality of cameras, the plurality of cameras include at least two cameras, and the viewing areas of the plurality of cameras are adjacent to each other or at least partially overlap with each other, so that the target object to be tracked can move in the monitoring area and appear in the viewing area of any one or more cameras. The method comprises the steps of extracting current frames to be measured of a plurality of cameras from monitoring videos of the plurality of cameras respectively, wherein the current frames to be measured of each camera have the same acquisition time. Optionally, the target to be tracked in the present disclosure is preferably a pedestrian, and those skilled in the art will understand that the target to be tracked may also be other movable objects, such as animals, vehicles, etc., which is not limited in the present disclosure.
For example, in a complex monitoring scene, such as a corridor, a large mall or a machine room, a large number of cameras are usually used to monitor each area and obtain multiple surveillance videos. Fig. 2 shows a schematic monitoring scene in which a camera 201 and a camera 202 are arranged, and fig. 3 shows the views of the camera 201 and the camera 202. The surveillance video of the camera 201 can be parsed into an image frame sequence (A1, A2, ..., AN), and the surveillance video of the camera 202 can be parsed into an image frame sequence (B1, B2, ..., BN); the parsing can be carried out online in real time or offline. Based on this, the current frames to be detected An and Bn of the two cameras can be sequentially extracted from the image frame sequences in time order for the target tracking shown in the present disclosure, where the subscript n may take the values n = 1, 2, ..., N.
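By way of illustration only, the following sketch shows how such frame extraction might look; it is an assumption for explanation rather than the patented implementation, and the OpenCV usage and video file names are placeholders.

```python
import cv2

def read_frame(path: str, index: int):
    """Return frame `index` of the video at `path`, or None if unavailable."""
    cap = cv2.VideoCapture(path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, index)  # seek to the requested frame
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

# Current frames to be detected An and Bn of cameras 201 and 202,
# taken at the same acquisition time (same frame index n).
n = 0
frame_a = read_frame("camera_201.mp4", n)  # An
frame_b = read_frame("camera_202.mp4", n)  # Bn
```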
In some possible embodiments, the method 100 may further include: determining a plurality of frame numbers to be detected, and iteratively acquiring the current frames to be detected of the plurality of cameras according to the plurality of frame numbers to be detected in time order, so as to iteratively perform target tracking; obtaining an initial global target track according to the initial frame number to be detected among the plurality of frame numbers to be detected; and correspondingly obtaining the iteratively updated global target track according to the subsequent frame numbers to be detected among the plurality of frame numbers to be detected. Thus, the operation amount can be reduced and the real-time performance of global tracking improved.
specifically, the sequence numbers of the frames to be tested may be determined according to a preset frame fetching policy. For example, for a 24-frame-per-second surveillance video, the current frame a to be measured may be acquired from the surveillance videos of the camera 201 and the camera 202 once every 5 framesnAnd BnWhere the subscript n may take on the value of n-1, 6,11, …, and so on. However, other numbers of interval frames may be adopted, or a frame-by-frame detection manner may be adopted, which is not specifically limited by the present disclosure. Based on this, the current frame a to be tested corresponding to the initial frame number to be tested (n is 1) can be used as the basis1And B1To the initial global target track, the current target track may further correspond to the subsequent frame number to be measured (n ═ 6, 11.. et al)Previous frame A to be testednAnd BnAnd carrying out iterative target tracking so as to obtain an iteratively updated global target track.
As shown in fig. 1, the method 100 may further include:
step S102, sequentially carrying out target detection on a current frame to be detected of each camera in a plurality of cameras to obtain a detection frame set corresponding to each camera;
in one possible embodiment, the performing target detection on the current frame to be detected of each camera includes: inputting the current frame to be detected of each camera into a target detection model for target detection; the target detection model is a pedestrian detection model obtained based on neural network training.
For example, fig. 4 shows the current frames to be detected An and Bn of the camera 201 and the camera 202. The preprocessed current frames An and Bn are then input into any deep-learning-based pedestrian detection model, which detects and outputs a series of pedestrian detection frames for each camera. The purpose of obtaining the pedestrian detection frames is to obtain the position information and size information of all pedestrians in the current frames An and Bn. The pedestrian detection model may be, for example, a YOLO (You Only Look Once: unified, real-time object detection) model, which the disclosure does not specifically limit. Fig. 5 shows the detection frame sets obtained by detecting the current frames An and Bn, including the detection frame set (a1, a2, a3) corresponding to the camera 201 and the detection frame set (b) corresponding to the camera 202.
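The sketch below illustrates step S102 under the assumption that a pedestrian detector (a YOLO variant or any other model) is wrapped as a plain function returning (x, y, w, h) boxes; the wrapper name and output format are illustrative and not prescribed by the patent.

```python
from typing import Callable, Dict, List, Tuple

import numpy as np

Box = Tuple[float, float, float, float]        # x, y, width, height in pixels
Detector = Callable[[np.ndarray], List[Box]]   # frame in, pedestrian boxes out

def detect_all(frames: Dict[int, np.ndarray],
               detector: Detector) -> Dict[int, List[Box]]:
    """Run the detector on each camera's current frame in turn and
    return the detection frame set corresponding to each camera id."""
    return {cam: detector(frame) for cam, frame in frames.items()}
```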
In a possible implementation manner, after obtaining the detection frame set corresponding to each camera, the method further includes: and performing projection transformation according to the framing position of each camera and the frame bottom center point of each detection frame in the detection frame set corresponding to each camera, so as to determine the ground coordinates of each detection frame in the detection frame set corresponding to each camera. In this way, objects identified within the viewing range of each camera can be combined into a unified coordinate system.
For example, the frame bottom center point of each detection frame corresponding to each camera in fig. 5 may be obtained and converted to the actual ground position of the target object in the monitored scene; fig. 6 shows the ground coordinates of each detection frame obtained through projection transformation. Specifically, the ground passageway under each camera's viewing angle is an approximately trapezoidal region, so for the detection frame set corresponding to each camera, the coordinates of each frame bottom center point in a standard rectangular region can first be obtained through trapezoid-rectangle conversion; next, the standard rectangular region is rotated according to the actual layout of the monitoring scene, and the rotated coordinates of each frame bottom center point are obtained through rotation-matrix calculation; finally, the rotated coordinates are translated and scaled according to the actual layout of the monitoring scene to obtain the final coordinate positions.
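In practice the trapezoid-to-rectangle conversion, rotation and translation/scaling described above can be composed into a single planar homography. The sketch below estimates one from four image/ground point pairs and projects a box's bottom-center point; all coordinate values are placeholder assumptions.

```python
import cv2
import numpy as np

# Four corners of the trapezoidal passageway in one camera's image (pixels),
# paired with their positions in the ground coordinate system (e.g. meters).
src = np.float32([[420, 700], [860, 700], [1100, 1050], [180, 1050]])
dst = np.float32([[0.0, 0.0], [2.0, 0.0], [2.0, 6.0], [0.0, 6.0]])
H = cv2.getPerspectiveTransform(src, dst)

def ground_point(box, H=H):
    """Project the bottom-center of an (x, y, w, h) detection frame
    onto the ground plane."""
    x, y, w, h = box
    p = np.float32([[[x + w / 2.0, y + h]]])             # shape (1, 1, 2)
    gx, gy = cv2.perspectiveTransform(p, H)[0, 0]
    return float(gx), float(gy)                          # ground coordinates
```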
In one possible embodiment, the viewing areas of the plurality of cameras at least partially overlap, the method further comprising: dividing a working area of each camera in a ground coordinate system according to a view area of each camera; the working areas of the cameras are not overlapped, and if the ground coordinates of any detection frame corresponding to a first camera in the cameras exceed the corresponding working area, any detection frame is removed from the detection frame set of the first camera.
For example, as shown in fig. 2, in order to leave no blind zone in the monitored scene, the view areas of the camera 201 and the camera 202 actually overlap. Based on this, in order to effectively avoid coordinate display conflicts, the working area of each camera may be divided, for example, the working area of the camera 201 is the X area and the working area of the camera 202 is the Y area, so that the working areas of the cameras are adjacent. Furthermore, the ground coordinates of each detection frame corresponding to a camera need to be located within that camera's working area, and a detection frame is removed if its ground coordinates are not located in the working area the camera is responsible for. For example, in the detection frame set (a1, a2, a3) corresponding to the camera 201, the ground coordinates of the detection frame a3 fall outside the X area; therefore, the detection frame a3 is removed from the detection frame set of the camera 201, and (a1, a2) is obtained for subsequent operations.
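A sketch of the work-area filter follows, assuming rectangular, non-overlapping X and Y regions in the ground coordinate system; the region bounds and the ground_point helper from the previous sketch are illustrative assumptions.

```python
# Ground-plane working areas as (x_min, y_min, x_max, y_max); placeholders.
WORK_AREAS = {
    201: (0.0, 0.0, 2.0, 6.0),   # area X, camera 201
    202: (2.0, 0.0, 4.0, 6.0),   # area Y, camera 202
}

def in_work_area(pt, area) -> bool:
    x, y = pt
    x0, y0, x1, y1 = area
    return x0 <= x <= x1 and y0 <= y <= y1

def filter_boxes(cam: int, boxes):
    """Drop detection frames whose ground coordinates fall outside the
    working area this camera is responsible for (e.g. a3 above)."""
    return [b for b in boxes
            if in_work_area(ground_point(b), WORK_AREAS[cam])]
```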
In one possible embodiment, the method further comprises: cutting off non-critical areas in the working area of each camera. Specifically, whether an area is a critical area can be determined based on the specific layout of the monitoring scene; for example, areas such as ceilings where pedestrians cannot pass can be directly cut off, so that the calculation amount of target tracking can be reduced.
As shown in fig. 1, the method 100 may further include:
and S103, tracking the target according to the detection frame set corresponding to each camera, and updating the global target track according to the tracking result.
Specifically, as described above, for each camera, target detection may be performed on the initial current frames to be detected A1 and B1 to determine the initial global target track. Further, target detection may be performed on the subsequently acquired current frames to be detected An and Bn, and target tracking carried out iteratively according to the target detection results, so that the global target track is iteratively updated.
In one possible embodiment, tracking according to the detection frame set corresponding to each camera includes: performing multi-target tracking by adopting a multi-target tracking algorithm based on the detection frame set corresponding to each camera, and determining local tracking information corresponding to each camera; the parameters adopted by the multi-target tracking are determined based on the historical frames to be measured of each camera. This enables multi-target tracking in the monitored area.
Specifically, the multi-target tracking algorithm is a single-camera-based target tracking algorithm, such as the DeepSORT algorithm (Simple Online and Realtime Tracking with a deep association metric), by which the local tracking information of each camera can be obtained. Specifically, a target frame to be tracked can be determined when any target appears for the first time in the working area of a certain camera and be assigned an identity; the subsequent frames to be detected of that camera are then tracked based on the multi-target tracking algorithm and the identity-labeled target frame, so as to determine the local tracking information of the target within that camera's working area.
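The patent's per-camera tracker is DeepSORT; the greedy nearest-neighbor matcher below is only a stand-in that illustrates the local bookkeeping (matching new detections to existing identities and minting identities for first appearances), not the DeepSORT algorithm itself.

```python
import itertools
import math

_new_id = itertools.count(1)

def update_local_tracks(tracks, points, max_dist=0.5):
    """tracks: {track_id: last ground point}; points: this frame's ground
    points for one camera. Returns the updated {track_id: point} mapping."""
    updated, unmatched = {}, list(points)
    for tid, last in tracks.items():
        if not unmatched:
            break
        best = min(unmatched, key=lambda p: math.dist(p, last))
        if math.dist(best, last) <= max_dist:   # same identity continues
            updated[tid] = best
            unmatched.remove(best)
    for p in unmatched:                         # first appearance: new ID
        updated[next(_new_id)] = p
    return updated
```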
In one possible embodiment, the multi-target tracking algorithm is the DeepSORT algorithm. Of course, other target tracking algorithms may be used, and those skilled in the art will appreciate that this disclosure is not limited to any particular target tracking algorithm.
In one possible embodiment, the updating the global target trajectory according to the tracking result further includes: adding an identity mark for each detection frame according to the local tracking information corresponding to each camera; and updating the global target track by using the ground coordinates of each detection frame based on the identity.
For example, as shown in fig. 6, the curve portions show the currently existing global target tracks, i.e. the global target tracks determined in the last iteration, and the points a1, a2 and b represent the ground coordinates of the detection frames shown in fig. 5. If the local tracking information corresponding to the camera 201 indicates that the detection frame a2 matches the features of the existing "target 2", the detection frame a2 is labeled "target 2" and the ground coordinates of the point a2 are added to the existing track of "target 2" (i.e. the dashed curve of "target 2" in fig. 6); if the local tracking information corresponding to the camera 201 indicates that the detection frame a1 has no matching target, the detection frame a1 is labeled "target 3" and a track of "target 3" is created.
In one possible embodiment, the updating the global target trajectory according to the tracking result further includes: determining the incidence relation among the cameras by the working areas of the cameras; determining a newly added detection frame and a disappeared detection frame in the corresponding working area according to the local tracking information of each camera; associating the newly added detection frame and the disappeared detection frame in different working areas according to the association relationship among the cameras to obtain association information; and updating the global target track according to the associated information.
Specifically, the association relationship among the plurality of cameras is, for example, that the area X and the area Y are adjacent at a specified position, so that a moving target can cross between different working areas at the adjacent positions. The association information means that a newly added detection frame in one working area is associated with a disappeared detection frame in another working area, that is, the two correspond to the same identity. In other words, for two working areas with adjacent boundaries, the disappearing order of multiple tracked targets can be obtained at the adjacent boundary of one working area, and the newly added targets appearing at that adjacent boundary of the other working area are assigned identities according to the disappearing order and continuously tracked in the other working area.
for example, as shown in fig. 6, where a point b in the area Y represents the ground coordinates of the detection box b shown in fig. 5. If the local tracking information corresponding to the camera 201 indicates that the detection frame point b does not have a matching target, that is, a newly added target exists in the area Y; and the local tracking information corresponding to the camera 201 indicates that the continuously tracked "target 1" disappears in the current detection frame, that is, there is a disappearing target in the area X, then "target 1" can be labeled for the detection frame b and the ground coordinate of the point b is added to the existing track of "target 1" (i.e., "target 1" dashed curve in fig. 6), so as to implement target tracking across cameras and across working areas.
Thus, according to the multi-camera target tracking method of the embodiment of the invention, by sequentially performing image detection on the current frames to be detected of each camera and then performing global tracking in the monitoring area based on the detection results corresponding to each camera, the target object in multi-path monitoring videos can be globally tracked with fewer computing resources, reducing the demand for computing resources. For example, rather than providing separate GPU computing resources for each camera to track target objects in each local region, fewer computing resources may be provided for global tracking of target objects across the monitored region.
Based on the same technical concept, the embodiment of the invention also provides a target tracking device, which is used for executing the target tracking method provided by any one of the embodiments. Fig. 7 is a schematic structural diagram of a target tracking apparatus according to an embodiment of the present invention.
As shown in fig. 7, the apparatus 700 includes:
an obtaining unit 701, configured to obtain current frames to be measured of multiple cameras disposed in a monitoring area;
a detection unit 702, configured to perform target detection on a current frame to be detected of each of multiple cameras in sequence, to obtain a detection frame set corresponding to each camera;
and the tracking unit 703 is configured to perform target tracking according to the detection frame set corresponding to each camera, and determine a global target trajectory according to a tracking result.
In some possible embodiments, the apparatus 700 further comprises: the frame selecting unit is used for determining a plurality of frame numbers to be tested, and iteratively acquiring the current frames to be tested of the plurality of cameras according to the plurality of frame numbers to be tested in a time sequence manner, so as to iteratively perform target tracking; obtaining an initial global target track according to the initial frame number to be detected in the plurality of frame numbers to be detected; and obtaining the global target track after iterative updating according to the correspondence of the subsequent frame number to be tested in the plurality of frame numbers to be tested.
In some possible embodiments, the detecting unit 702 is further configured to: inputting the current frame to be detected of each camera into a target detection model for target detection; the target detection model is a pedestrian detection model obtained based on neural network training.
In some possible embodiments, the detecting unit 702 is further configured to: after the detection frame set corresponding to each camera is obtained, performing projection transformation on the frame bottom center point of each detection frame in the detection frame set corresponding to each camera according to the view finding position of each camera, and determining the ground coordinates of each detection frame.
In some possible embodiments, the viewing areas of the multiple cameras at least partially overlap, and the apparatus 700 is further configured to: dividing a working area of each camera in a ground coordinate system according to a view area of each camera; the working areas of the cameras are not overlapped, and if the ground coordinates of any detection frame corresponding to a first camera in the cameras exceed the corresponding working area, any detection frame is removed from the detection frame set of the first camera.
In some possible embodiments, the detecting unit 702 is further configured to: and cutting off non-critical areas in the working area of each camera.
In some possible embodiments, the tracking unit 703 is further configured to: performing multi-target tracking by adopting a multi-target tracking algorithm based on the detection frame set corresponding to each camera, and determining local tracking information corresponding to each camera; the parameters adopted by the multi-target tracking are determined based on the historical frames to be measured of each camera.
In some possible embodiments, the multi-target tracking algorithm is the DeepSORT algorithm.
In some possible embodiments, the tracking unit 703 is further configured to: adding an identity mark for each detection frame according to the local tracking information corresponding to each camera; and determining the global target track after iterative updating based on the identity and the ground coordinates of each detection frame.
In some possible embodiments, the tracking unit 703 is further configured to: determining an incidence relation among the cameras according to the working areas of the cameras; determining a newly added detection frame and a disappeared detection frame in the corresponding working area according to the local tracking information of each camera; associating the newly added detection frame and the disappeared detection frame in different working areas according to the association relationship among the cameras to obtain association information; and determining the global target track after iterative updating according to the associated information.
In this way, according to the target tracking device based on multiple cameras of the embodiment of the invention, by sequentially performing image detection on the current frames to be detected of each camera and then performing global tracking in the monitoring area based on the detection results corresponding to each camera, global tracking of the target object in multiple paths of monitoring videos can be realized based on fewer computing resources, and the requirement on the computing resources is reduced. For example, rather than providing separate GPU computing resources for each camera for tracking target objects in each local region, fewer computing resources may be provided for global tracking of target objects in the monitored region.
It should be noted that the apparatus in the embodiment of the present application may implement each process of the foregoing method embodiment, and achieve the same effect and function, which are not described herein again.
Based on the same technical concept, an embodiment of the present invention further provides a target tracking system, which specifically includes: the system comprises a plurality of cameras arranged in a monitoring area and a target tracking device which is respectively in communication connection with the cameras; the target tracking device is configured to execute the target tracking method provided by any one of the above embodiments.
Based on the same technical concept, those skilled in the art can appreciate that aspects of the present invention can be implemented as an apparatus, a method, or a computer readable storage medium. Thus, various aspects of the invention may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "device."
In some possible embodiments, a target tracking apparatus of the present invention may include one or more processors and at least one memory. The memory stores a program that, when executed by the processor, causes the processor to perform the steps of: acquiring current frames to be detected of a plurality of cameras arranged in a monitoring area; sequentially carrying out target detection on the current frame to be detected of each camera in the plurality of cameras to obtain a detection frame set corresponding to each camera; and tracking the target according to the detection frame set corresponding to each camera, and determining the global target track according to the tracking result.
The target tracking apparatus 8 according to this embodiment of the present invention is described below with reference to fig. 8. The device 8 shown in fig. 8 is only an example and should not bring any limitation to the function and the scope of use of the embodiment of the present invention.
As shown in FIG. 8, the apparatus 8 may take the form of a general purpose computing device, including but not limited to: at least one processor 10, at least one memory 20, a bus 60 connecting the different device components.
The bus 60 includes a data bus, an address bus, and a control bus.
The memory 20 may include volatile memory, such as Random Access Memory (RAM)21 and/or cache memory 22, and may further include Read Only Memory (ROM) 23.
Memory 20 may also include program modules 24, such program modules 24 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may comprise an implementation of a network environment.
The apparatus 8 may also communicate with one or more external devices 2 (e.g. a keyboard, a pointing device, a Bluetooth device, etc.) and with one or more other devices. Such communication may occur via an input/output (I/O) interface 40 and be displayed on the display unit 30. Also, the apparatus 8 may communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network, such as the internet) via the network adapter 50. As shown, the network adapter 50 communicates with other modules in the apparatus 8 over the bus 60. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the apparatus 8, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID devices, tape drives, and data backup storage devices, among others.
Fig. 9 illustrates a computer-readable storage medium for performing the method as described above.
In some possible embodiments, aspects of the invention may also be embodied in the form of a computer-readable storage medium comprising program code for causing a processor to perform the above-described method when the program code is executed by the processor.
The above-described method includes a number of operations and steps shown and not shown in the above figures, which will not be described again.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor device, apparatus, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
As shown in fig. 9, a computer-readable storage medium 90 according to an embodiment of the present invention is described, which may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a terminal device, such as a personal computer. However, the computer-readable storage medium of the present invention is not limited thereto, and in this document, the readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution apparatus, device, or apparatus.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Python, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device over any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (e.g., over the internet using an internet service provider).
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
While the spirit and principles of the invention have been described with reference to several particular embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, and the division into aspects does not mean that features in those aspects cannot be combined to advantage; this division is for convenience of presentation only. The invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (23)

1. A target tracking method, comprising:
acquiring current frames to be detected of a plurality of cameras arranged in a monitoring area;
sequentially carrying out target detection on the current frame to be detected of each camera in the plurality of cameras to obtain a detection frame set corresponding to each camera;
and tracking the target according to the detection frame set corresponding to each camera, and determining the global target track according to the tracking result.
2. The method of claim 1, further comprising:
determining a plurality of frame serial numbers to be tested, and iteratively acquiring the current frame to be tested of the plurality of cameras according to the plurality of frame serial numbers to be tested in a time sequence manner, thereby iteratively executing the target tracking;
obtaining an initial global target track according to the initial frame number to be detected in the plurality of frame numbers to be detected; and obtaining the global target track after iterative updating according to the correspondence of the subsequent frame number to be tested in the plurality of frame numbers to be tested.
3. The method of claim 2, wherein performing object detection on the current frame to be detected of each camera comprises:
inputting the current frame to be detected of each camera into a target detection model for target detection;
the target detection model is a pedestrian detection model obtained based on neural network training.
4. The method according to claim 2, further comprising, after obtaining the set of detection frames corresponding to each camera:
and performing projection transformation on the frame bottom center point of each detection frame in the detection frame set corresponding to each camera according to the framing position of each camera, so as to determine the ground coordinates of each detection frame.
5. The method of claim 4, wherein the viewing areas of the plurality of cameras at least partially overlap, the method further comprising:
dividing the working area of each camera in a ground coordinate system according to the view area of each camera;
the working areas of the cameras are not overlapped, and if the ground coordinates of any detection frame corresponding to a first camera in the cameras exceed the corresponding working area, any detection frame is removed from the detection frame set of the first camera.
6. The method of claim 5, further comprising:
and cutting off a non-critical area in the working area of each camera.
7. The method of claim 2, wherein tracking according to the set of detection frames corresponding to each camera comprises:
performing multi-target tracking by adopting a multi-target tracking algorithm based on the detection frame set corresponding to each camera, and determining local tracking information corresponding to each camera;
and determining parameters adopted by the multi-target tracking based on the historical frames to be measured of each camera.
8. The method of claim 7, wherein the multi-target tracking algorithm is the DeepSORT algorithm.
9. The method of claim 7, further comprising:
adding an identity mark for each detection frame according to the local tracking information corresponding to each camera;
and determining the global target track after iterative updating based on the identity and the ground coordinates of each detection frame.
10. The method of claim 7, further comprising:
determining the incidence relation among the cameras according to the working areas of the cameras;
determining a newly added detection frame and a disappeared detection frame in a corresponding working area according to the local tracking information of each camera;
associating the newly added detection frame and the disappeared detection frame in different working areas according to the association relationship among the cameras to obtain association information;
and determining the global target track after iterative updating according to the associated information.
11. An object tracking device, comprising:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring current frames to be detected of a plurality of cameras arranged in a monitoring area;
the detection unit is used for sequentially carrying out target detection on the current frame to be detected of each camera in the multiple cameras to obtain a detection frame set corresponding to each camera;
and the tracking unit is used for tracking the target according to the detection frame set corresponding to each camera and determining the global target track according to the tracking result.
12. The apparatus of claim 11, further comprising:
the frame selecting unit is used for determining a plurality of frame serial numbers to be tested, and iteratively acquiring the current frames to be tested of the plurality of cameras according to the plurality of frame serial numbers to be tested in a time sequence manner, so as to iteratively execute the target tracking;
obtaining an initial global target track according to the initial frame number to be detected in the plurality of frame numbers to be detected; and obtaining the global target track after iterative updating according to the correspondence of the subsequent frame number to be tested in the plurality of frame numbers to be tested.
13. The apparatus of claim 12, wherein the detection unit is further configured to:
inputting the current frame to be detected of each camera into a target detection model for target detection;
the target detection model is a pedestrian detection model obtained based on neural network training.
14. The apparatus of claim 12, wherein the detection unit is further configured to:
after the detection frame set corresponding to each camera is obtained, performing projection transformation on the frame bottom center point of each detection frame in the detection frame set corresponding to each camera according to the framing position of each camera, and thus determining the ground coordinates of each detection frame.
15. The apparatus of claim 14, wherein the viewing areas of the plurality of cameras at least partially overlap, the apparatus further configured to:
dividing the working area of each camera in a ground coordinate system according to the view area of each camera;
the working areas of the cameras are not overlapped, and if the ground coordinates of any detection frame corresponding to a first camera in the cameras exceed the corresponding working area, any detection frame is removed from the detection frame set of the first camera.
16. The apparatus of claim 15, wherein the detection unit is further configured to:
and cutting off a non-critical area in the working area of each camera.
17. The apparatus of claim 12, wherein the tracking unit is further configured to:
performing multi-target tracking by adopting a multi-target tracking algorithm based on the detection frame set corresponding to each camera, and determining local tracking information corresponding to each camera;
and determining parameters adopted by the multi-target tracking based on the historical frames to be measured of each camera.
18. The apparatus of claim 17, wherein the multi-target tracking algorithm is the DeepSORT algorithm.
19. The apparatus of claim 17, wherein the tracking unit is further configured to:
adding an identity mark for each detection frame according to the local tracking information corresponding to each camera;
and determining the global target track after iterative updating based on the identity and the ground coordinates of each detection frame.
20. The apparatus of claim 17, wherein the tracking unit is further configured to:
determining the association relationship among the cameras according to the working areas of the cameras;
determining, according to the local tracking information of each camera, the newly added detection frames and the disappeared detection frames in the corresponding working area;
associating the newly added detection frames and the disappeared detection frames across different working areas according to the association relationship among the cameras to obtain association information;
and determining the iteratively updated global target track according to the association information.
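A hedged sketch of this cross-camera hand-off: a track that disappears in one working area is linked to one newly appearing in an associated area when their ground points are close. The adjacency map, distance gate, and camera names are assumptions:

```python
# Sketch of claim 20: link a track that disappeared in one working area to one
# that newly appeared in an associated area, if their ground points are close.
# The adjacency map, distance gate, and camera names are illustrative.
import math

ASSOCIATED = {"cam_A": {"cam_B"}, "cam_B": {"cam_A", "cam_C"}, "cam_C": {"cam_B"}}


def associate_across_cameras(disappeared, appeared, max_gap=1.5):
    """disappeared/appeared: lists of (camera_id, identity, (gx, gy))."""
    links = []
    for cam_d, id_d, (xd, yd) in disappeared:
        for cam_a, id_a, (xa, ya) in appeared:
            if cam_a in ASSOCIATED.get(cam_d, set()):
                if math.hypot(xd - xa, yd - ya) <= max_gap:
                    links.append((id_d, id_a))  # same target across cameras
    return links
```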
21. An object tracking system, comprising: a plurality of cameras arranged in a monitoring area, and a target tracking device in communication connection with each of the plurality of cameras;
wherein the target tracking device is configured to perform the method of any one of claims 1-10.
22. An object tracking device, comprising:
one or more multi-core processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more multi-core processors, cause the one or more multi-core processors to implement:
acquiring current frames to be detected of a plurality of cameras arranged in a monitoring area;
sequentially carrying out target detection on the current frame to be detected of each camera in the plurality of cameras to obtain a detection frame set corresponding to each camera;
and performing target tracking according to the detection frame set corresponding to each camera, and determining the global target track according to the tracking result.
23. A computer-readable storage medium storing a program that, when executed by a multi-core processor, causes the multi-core processor to perform the method of any one of claims 1-10.
CN201911258014.2A 2019-12-10 2019-12-10 Target tracking method, device and system and computer readable storage medium Pending CN111145213A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201911258014.2A CN111145213A (en) 2019-12-10 2019-12-10 Target tracking method, device and system and computer readable storage medium
TW109127141A TWI795667B (en) 2019-12-10 2020-08-11 A target tracking method, device, system, and computer accessible storage medium
PCT/CN2020/109081 WO2021114702A1 (en) 2019-12-10 2020-08-14 Target tracking method, apparatus and system, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911258014.2A CN111145213A (en) 2019-12-10 2019-12-10 Target tracking method, device and system and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN111145213A true CN111145213A (en) 2020-05-12

Family

ID=70518015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911258014.2A Pending CN111145213A (en) 2019-12-10 2019-12-10 Target tracking method, device and system and computer readable storage medium

Country Status (3)

Country Link
CN (1) CN111145213A (en)
TW (1) TWI795667B (en)
WO (1) WO2021114702A1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113436295B (en) * 2021-06-25 2023-09-15 平安科技(深圳)有限公司 Living body culture monitoring track drawing method, device, equipment and storage medium
CN113592903A (en) * 2021-06-28 2021-11-02 北京百度网讯科技有限公司 Vehicle track recognition method and device, electronic equipment and storage medium
CN113688278A (en) * 2021-07-13 2021-11-23 北京旷视科技有限公司 Information processing method, device, electronic equipment and computer readable medium
CN113627497B (en) * 2021-07-27 2024-03-12 武汉大学 Space-time constraint-based cross-camera pedestrian track matching method
CN113610895A (en) * 2021-08-06 2021-11-05 烟台艾睿光电科技有限公司 Target tracking method and device, electronic equipment and readable storage medium
CN113642454B (en) * 2021-08-11 2024-03-01 汇纳科技股份有限公司 Seat use condition identification method, system, equipment and computer storage medium
CN113743260B (en) * 2021-08-23 2024-03-05 北京航空航天大学 Pedestrian tracking method under condition of dense pedestrian flow of subway platform
CN113744299B (en) * 2021-09-02 2022-07-12 上海安维尔信息科技股份有限公司 Camera control method and device, electronic equipment and storage medium
CN114120188B (en) * 2021-11-19 2024-04-05 武汉大学 Multi-row person tracking method based on joint global and local features
CN114820700B (en) * 2022-04-06 2023-05-16 北京百度网讯科技有限公司 Object tracking method and device
CN115527162B (en) * 2022-05-18 2023-07-18 湖北大学 Multi-pedestrian re-identification method and system based on three-dimensional space
CN115311820A (en) * 2022-07-11 2022-11-08 西安电子科技大学广州研究院 Intelligent security system near water
CN115497303A (en) * 2022-08-19 2022-12-20 招商新智科技有限公司 Expressway vehicle speed detection method and system under complex detection condition
WO2024108539A1 (en) * 2022-11-25 2024-05-30 京东方科技集团股份有限公司 Target people tracking method and apparatus
CN115690163B (en) * 2023-01-04 2023-05-09 中译文娱科技(青岛)有限公司 Target tracking method, system and storage medium based on image content
CN116363494B (en) * 2023-05-31 2023-08-04 睿克环境科技(中国)有限公司 Fish quantity monitoring and migration tracking method and system
CN117315028B (en) * 2023-10-12 2024-04-30 北京多维视通技术有限公司 Method, device, equipment and medium for positioning fire point of outdoor fire scene
CN117058331B (en) * 2023-10-13 2023-12-19 山东建筑大学 Indoor personnel three-dimensional track reconstruction method and system based on single monitoring camera
CN117237418B (en) * 2023-11-15 2024-01-23 成都航空职业技术学院 Moving object detection method and system based on deep learning

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI502558B (en) * 2013-09-25 2015-10-01 Chunghwa Telecom Co Ltd Traffic Accident Monitoring and Tracking System
CN104933392A (en) * 2014-03-19 2015-09-23 通用汽车环球科技运作有限责任公司 Probabilistic people tracking using multi-view integration
CN104331901A (en) * 2014-11-26 2015-02-04 北京邮电大学 TLD-based multi-view target tracking device and method
CN104463900A (en) * 2014-12-31 2015-03-25 天津汉光祥云信息科技有限公司 Method for automatically tracking target among multiple cameras
CN106845385A (en) * 2017-01-17 2017-06-13 腾讯科技(上海)有限公司 The method and apparatus of video frequency object tracking
CN108986158A (en) * 2018-08-16 2018-12-11 新智数字科技有限公司 A kind of across the scene method for tracing identified again based on target and device and Computer Vision Platform
CN109903260B (en) * 2019-01-30 2023-05-23 华为技术有限公司 Image processing method and image processing apparatus
CN111145213A (en) * 2019-12-10 2020-05-12 ***股份有限公司 Target tracking method, device and system and computer readable storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108875588A (en) * 2018-05-25 2018-11-23 武汉大学 Across camera pedestrian detection tracking based on deep learning
CN108876821A (en) * 2018-07-05 2018-11-23 北京云视万维科技有限公司 Across camera lens multi-object tracking method and system
CN110428448A (en) * 2019-07-31 2019-11-08 腾讯科技(深圳)有限公司 Target detection tracking method, device, equipment and storage medium

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021114702A1 (en) * 2019-12-10 2021-06-17 ***股份有限公司 Target tracking method, apparatus and system, and computer-readable storage medium
CN111815675B (en) * 2020-06-30 2023-07-21 北京市商汤科技开发有限公司 Target object tracking method and device, electronic equipment and storage medium
CN111815675A (en) * 2020-06-30 2020-10-23 北京市商汤科技开发有限公司 Target object tracking method and device, electronic equipment and storage medium
CN111967498A (en) * 2020-07-20 2020-11-20 重庆大学 Night target detection and tracking method based on millimeter wave radar and vision fusion
CN112200841A (en) * 2020-09-30 2021-01-08 杭州海宴科技有限公司 Cross-domain multi-camera tracking method and device based on pedestrian posture
CN112381132A (en) * 2020-11-11 2021-02-19 上汽大众汽车有限公司 Target object tracking method and system based on fusion of multiple cameras
CN112418064A (en) * 2020-11-19 2021-02-26 上海交通大学 Real-time automatic detection method for number of people in library reading room
CN112560621A (en) * 2020-12-08 2021-03-26 北京大学 Identification method, device, terminal and medium based on animal image
CN112906452A (en) * 2020-12-10 2021-06-04 叶平 Automatic identification, tracking and statistics method and system for antelope buffalo deer
CN112489085A (en) * 2020-12-11 2021-03-12 北京澎思科技有限公司 Target tracking method, target tracking device, electronic device, and storage medium
CN112634332A (en) * 2020-12-21 2021-04-09 合肥讯图信息科技有限公司 Tracking method based on YOLOv4 model and DeepsORT model
CN112614159B (en) * 2020-12-22 2023-04-07 浙江大学 Cross-camera multi-target tracking method for warehouse scene
CN112614159A (en) * 2020-12-22 2021-04-06 浙江大学 Cross-camera multi-target tracking method for warehouse scene
CN112906483A (en) * 2021-01-25 2021-06-04 ***股份有限公司 Target re-identification method and device and computer readable storage medium
CN112906483B (en) * 2021-01-25 2024-01-23 ***股份有限公司 Target re-identification method, device and computer readable storage medium
CN112819859A (en) * 2021-02-02 2021-05-18 重庆特斯联智慧科技股份有限公司 Multi-target tracking method and device applied to intelligent security
CN113012223A (en) * 2021-02-26 2021-06-22 清华大学 Target flow monitoring method and device, computer equipment and storage medium
CN113223060A (en) * 2021-04-16 2021-08-06 天津大学 Multi-agent cooperative tracking method and device based on data sharing and storage medium
CN113257003A (en) * 2021-05-12 2021-08-13 上海天壤智能科技有限公司 Traffic lane-level traffic flow counting system, method, device and medium thereof
CN113473091A (en) * 2021-07-09 2021-10-01 杭州海康威视数字技术股份有限公司 Camera association method, device, system, electronic equipment and storage medium
CN115086527B (en) * 2022-07-04 2023-05-12 天翼数字生活科技有限公司 Household video tracking and monitoring method, device, equipment and storage medium
CN115086527A (en) * 2022-07-04 2022-09-20 天翼数字生活科技有限公司 Household video tracking and monitoring method, device, equipment and storage medium
CN115619832B (en) * 2022-12-20 2023-04-07 浙江莲荷科技有限公司 Multi-camera collaborative multi-target track confirmation method, system and related device
CN115619832A (en) * 2022-12-20 2023-01-17 浙江莲荷科技有限公司 Multi-camera collaborative multi-target track confirmation method, system and related device
CN116071686A (en) * 2023-02-27 2023-05-05 中国信息通信研究院 Correlation analysis method, device and system for cameras in industrial Internet
CN117784798A (en) * 2024-02-26 2024-03-29 安徽蔚来智驾科技有限公司 Target tracking method, intelligent device and computer readable storage medium
CN117784798B (en) * 2024-02-26 2024-05-31 安徽蔚来智驾科技有限公司 Target tracking method, intelligent device and computer readable storage medium

Also Published As

Publication number Publication date
WO2021114702A1 (en) 2021-06-17
TWI795667B (en) 2023-03-11
TW202123171A (en) 2021-06-16

Similar Documents

Publication Publication Date Title
CN111145213A (en) Target tracking method, device and system and computer readable storage medium
JP7073527B2 (en) Target object tracking methods and devices, electronic devices and storage media
CN109242913B (en) Method, device, equipment and medium for calibrating relative parameters of collector
US20240092344A1 (en) Method and apparatus for detecting parking space and direction and angle thereof, device and medium
US8730396B2 (en) Capturing events of interest by spatio-temporal video analysis
US20200167568A1 (en) Image processing method, device, and storage medium
US11657373B2 (en) System and method for identifying structural asset features and damage
EP3822857B1 (en) Target tracking method, device, electronic apparatus and storage medium
CN107992366B (en) Method, system and electronic equipment for detecting and tracking multiple target objects
CN111460967A (en) Illegal building identification method, device, equipment and storage medium
CN110706262B (en) Image processing method, device, equipment and storage medium
CN112508078B (en) Image multitasking multi-label recognition method, system, equipment and medium
CN113989696B (en) Target tracking method and device, electronic equipment and storage medium
CN112232203A (en) Pedestrian recognition method and device, electronic equipment and storage medium
CN113592171B (en) Building template support system safety prediction method, medium, device and computing equipment based on augmented reality technology
CN112857746A (en) Tracking method and device of lamplight detector, electronic equipment and storage medium
CN115984516B (en) Augmented reality method based on SLAM algorithm and related equipment
CN113762017B (en) Action recognition method, device, equipment and storage medium
CN113947717A (en) Label identification method and device, electronic equipment and storage medium
US11575589B2 (en) Network traffic rule identification
CN109729316B (en) Method for linking 1+ N cameras and computer storage medium
CN114067145A (en) Passive optical splitter detection method, device, equipment and medium
CN113962955A (en) Method and device for identifying target object from image and electronic equipment
CN113869163A (en) Target tracking method and device, electronic equipment and storage medium
KR20210042859A (en) Method and device for detecting pedestrians

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40020327

Country of ref document: HK