CN112581497A - Multi-target tracking method, system, computing device and storage medium - Google Patents

Multi-target tracking method, system, computing device and storage medium

Info

Publication number
CN112581497A
CN112581497A (application CN201910945830.4A)
Authority
CN
China
Prior art keywords
tracking
tracked
objects
quality
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910945830.4A
Other languages
Chinese (zh)
Inventor
杨迪
孙海洋
李扬彦
陈颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuzhou Online E Commerce Beijing Co ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201910945830.4A priority Critical patent/CN112581497A/en
Publication of CN112581497A publication Critical patent/CN112581497A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30236Traffic on road, railway or crossing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the application provide a multi-target tracking method, system, computing device and storage medium. In the embodiments, a current frame image containing a plurality of objects to be tracked is acquired; the tracking mode for each object is then determined, and tracking performed, according to the tracking quality of the objects and at least two provided tracking modes, where each tracking mode corresponds to one tracking quality. Because the tracking mode of each object is selected in time from the at least two tracking modes according to its tracking quality, objects whose tracking quality has degraded can still be tracked accurately in subsequent frames, and the tracking accuracy for every object in every frame can be maintained at a good level. The method thereby achieves an accurate tracking effect while saving running resources and tracking time.

Description

Multi-target tracking method, system, computing device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a multi-target tracking method, system, computing device, and storage medium.
Background
With the development of information technology, computer vision has advanced considerably, enabling machines to "see" things, including things that humans cannot see. This applies in particular to tracking objects in images: as the tracking requirements, the number of images, and the number of objects per image all increase, an algorithm with higher tracking accuracy is typically adopted to track every object in the image individually in order to improve the tracking accuracy.
However, an algorithm with higher tracking accuracy consumes more resources and takes longer to run. A new solution is therefore urgently needed for these problems in the image tracking field.
Disclosure of Invention
The embodiments of the application provide a multi-target tracking method, system, computing device and storage medium, which achieve an accurate tracking effect while reducing the consumption of computing resources and improving real-time performance.
The embodiment of the application provides a multi-target tracking method, which comprises the following steps: providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality; acquiring a current frame image, wherein the current frame image comprises a plurality of objects to be tracked; and determining the tracking modes of the plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked and tracking.
The embodiment of the application provides a method for updating tracking modes, which comprises the following steps: acquiring tracking modes to be added and updating the original tracking modes, wherein each updated tracking mode corresponds to one tracking quality; selecting at least two tracking modes from the updated tracking modes according to tracking quality; and, when a current frame image is acquired and a plurality of objects to be tracked in the current frame image are tracked, providing the at least two tracking modes, wherein each tracking mode corresponds to one tracking quality.
The embodiment of the application provides an automatic driving method of a vehicle, which comprises the following steps: providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality; acquiring a current frame image of a road condition vehicle through an image acquisition device, wherein the current frame image comprises a plurality of vehicles to be tracked; determining the tracking modes of the vehicles to be tracked according to the tracking qualities of the vehicles to be tracked and tracking; and determining the geographical positions of the vehicles running on the road near the vehicles according to the positions of the vehicles to be tracked in the current frame image in the tracking result.
The embodiment of the application provides an unmanned aerial vehicle tracking method, which comprises the following steps: providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality; acquiring a current frame outdoor monitoring image through an image acquisition device, wherein the current frame outdoor monitoring image comprises a plurality of objects to be tracked; determining the tracking modes of the objects to be tracked according to the tracking quality of the objects to be tracked and tracking; and tracking the plurality of objects to be tracked on the geographical positions according to the positions of the plurality of objects to be tracked in the outdoor monitoring image in the tracking result.
The embodiment of the application provides a video processing method, which comprises the following steps: providing at least two tracking modes and corresponding to one tracking quality for each tracking mode; acquiring a current frame live video image through a video live end, wherein the current frame live video image comprises a plurality of display objects to be tracked; determining the tracking modes of the plurality of to-be-tracked display objects according to the tracking quality of the plurality of to-be-tracked display objects and tracking; and according to the positions of the plurality of display objects to be tracked in the live video images in the tracking result, performing image processing on the plurality of display objects to be tracked, and displaying the processed live video images.
The embodiment of the application provides a multi-target tracking system, which includes an image acquisition device and an image processing device. The image acquisition device acquires multiple frames of images, each frame containing a plurality of objects to be tracked, and sends them to the image processing device. The image processing device provides at least two tracking modes, each corresponding to one tracking quality, and determines the tracking modes of the plurality of objects to be tracked according to their tracking quality and performs tracking.
The embodiment of the application provides a computing device, which comprises a memory and a processor; the memory for storing a computer program; the processor to execute the computer program to: providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality; acquiring a current frame image, wherein the current frame image comprises a plurality of objects to be tracked; and determining the tracking modes of the plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked and tracking.
The embodiment of the application provides a computing device, which comprises a memory and a processor; the memory for storing a computer program; the processor to execute the computer program to: acquiring tracking modes to be added, updating the original tracking modes, wherein each updated tracking mode corresponds to one tracking quality; selecting at least two tracking modes from the updated tracking modes according to the tracking quality; and under the condition of acquiring a current frame image and tracking a plurality of objects to be tracked in the current frame image, providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality.
The embodiment of the application provides a computing device, which comprises a memory and a processor; the memory for storing a computer program; the processor to execute the computer program to: providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality; acquiring a current frame image of a road condition vehicle through an image acquisition device, wherein the current frame image comprises a plurality of vehicles to be tracked; determining the tracking modes of the vehicles to be tracked according to the tracking qualities of the vehicles to be tracked and tracking; and determining the geographical positions of the vehicles running on the road near the vehicles according to the positions of the vehicles to be tracked in the current frame image in the tracking result.
The embodiment of the application provides a computing device, which comprises a memory and a processor; the memory for storing a computer program; the processor to execute the computer program to: providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality; acquiring a current frame outdoor monitoring image through an image acquisition device, wherein the current frame outdoor monitoring image comprises a plurality of objects to be tracked; determining the tracking modes of the objects to be tracked according to the tracking quality of the objects to be tracked and tracking; and tracking the plurality of objects to be tracked on the geographical positions according to the positions of the plurality of objects to be tracked in the outdoor monitoring image in the tracking result.
The embodiment of the application provides a computing device, which comprises a memory and a processor; the memory for storing a computer program; the processor to execute the computer program to: providing at least two tracking modes and corresponding to one tracking quality for each tracking mode; acquiring a current frame live video image through a video live end, wherein the current frame live video image comprises a plurality of display objects to be tracked; determining the tracking modes of the plurality of to-be-tracked display objects according to the tracking quality of the plurality of to-be-tracked display objects and tracking; and according to the positions of the plurality of display objects to be tracked in the live video images in the tracking result, performing image processing on the plurality of display objects to be tracked, and displaying the processed live video images.
The present application provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by one or more processors, causes the one or more processors to implement the steps of the method.
In the embodiments of the application, a current frame image containing a plurality of objects to be tracked is acquired; the tracking mode of each object is determined, and tracking performed, according to the tracking quality of the objects and at least two provided tracking modes, where each tracking mode corresponds to one tracking quality. Compared with tracking all objects with a single tracking mode of higher tracking quality, tracking the objects with at least two tracking modes markedly reduces wasted running resources, saves tracking time, and improves real-time performance. Because the at least two tracking modes differ, they can include modes of both better and poorer tracking quality, and the mode of poorer tracking quality consumes fewer running resources and less tracking time.
Meanwhile, the tracking mode of each object is selected in time from the at least two tracking modes according to its tracking quality, so that objects whose tracking quality has degraded can still be tracked accurately in subsequent frames, and the tracking accuracy for every object in every frame is maintained at a good level. The method thereby achieves an accurate tracking effect while saving running resources and tracking time.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
FIG. 1 is a schematic flow chart diagram illustrating a multi-target tracking method according to an exemplary embodiment of the present application;
FIG. 2 is a schematic structural diagram of a multi-target tracking system according to an exemplary embodiment of the present application;
FIG. 3 is a flowchart illustrating a tracking update method according to an exemplary embodiment of the present application;
FIG. 4A is a schematic flow chart diagram illustrating a method for automated driving of a vehicle according to an exemplary embodiment of the present application;
FIG. 4B is a schematic view of a scenario illustrating autonomous driving of a vehicle according to an exemplary embodiment of the present application;
FIG. 4C is a schematic view of a scenario illustrating autonomous driving of a vehicle according to an exemplary embodiment of the present application;
fig. 5 is a schematic flowchart of a tracking method for a drone according to an exemplary embodiment of the present application;
fig. 6 is a schematic view of a scene tracked by a drone according to an exemplary embodiment of the present application;
fig. 7A is a flowchart illustrating a video processing method according to an exemplary embodiment of the present application;
FIG. 7B is a schematic view of a video processing scenario according to an exemplary embodiment of the present application;
FIG. 8 is a block diagram of a multi-target tracking device according to an exemplary embodiment of the present disclosure;
FIG. 9 is a block diagram of an architecture of an update apparatus for tracking according to an exemplary embodiment of the present application;
FIG. 10 is a schematic structural framework diagram of an autopilot system for a vehicle according to yet another exemplary embodiment of the present application;
fig. 11 is a schematic structural frame diagram of a tracking device of a drone according to yet another exemplary embodiment of the present application;
fig. 12 is a schematic structural framework diagram of a video processing apparatus according to another exemplary embodiment of the present application;
FIG. 13 is a schematic block diagram of a computing device provided in an exemplary embodiment of the present application;
FIG. 14 is a schematic block diagram of a computing device provided in an exemplary embodiment of the present application;
FIG. 15 is a schematic block diagram of a computing device provided in an exemplary embodiment of the present application;
FIG. 16 is a schematic block diagram of a computing device provided in an exemplary embodiment of the present application;
fig. 17 is a schematic structural diagram of a computing device according to an exemplary embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the image recognition branch of image processing, techniques for tracking objects across images have been developed so that objects can be better recognized in consecutive images or in multiple associated frames, especially when each frame contains many objects to be tracked.
To locate the multiple objects in each frame more accurately, an advanced tracking algorithm can be used to track every object in a frame. However, because many objects must be tracked and the tracking time grows with the number of objects, the advanced tracking algorithm takes a long time, so multi-target tracking cannot be completed within the available time, while also occupying substantial running resources. Advanced tracking algorithms include, for example, tracking algorithms based on image feature prediction.
To avoid occupying excessive running resources and to save tracking time, a low-level tracking algorithm can instead be applied to track every object in a frame. However, a low-level tracking algorithm adapts poorly to changes in the motion of the object to be tracked, and its tracking effect is poor. In practical application scenes the motion of an object to be tracked often changes, so a low-level tracking algorithm alone cannot complete the target tracking task. Low-level tracking algorithms include, for example, tracking algorithms based on spatial feature prediction.
The tracking characteristics of the low-level and advanced tracking algorithms are compared in Table 1 below:
Table 1: (the table is provided as an image in the original publication and its contents are not reproduced here)
Based on the above, a low-level tracking algorithm and an advanced tracking algorithm can be used alternately across multiple frames, each algorithm still tracking all objects to be tracked in its frame. For example, out of 10 frames, the 2nd, 4th and 8th frames use a low-level tracking algorithm to track all objects in the image, and the other frames use an advanced tracking algorithm.
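The alternating scheme from this example can be sketched as follows; the frame indices are those from the example above, and the function names are illustrative, not from the application:

```python
# Frames that use the cheap low-level tracker for every object;
# all remaining frames use the expensive advanced tracker.
LOW_LEVEL_FRAMES = {2, 4, 8}

def tracker_for_frame(frame_index):
    """Return which tracker class handles a given (1-based) frame index."""
    return "low-level" if frame_index in LOW_LEVEL_FRAMES else "high-level"

# Schedule for the 10-frame example: frames 2, 4 and 8 are low-level.
schedule = [tracker_for_frame(i) for i in range(1, 11)]
```

Note that this frame-level alternation is exactly what the next paragraph criticizes: the choice depends only on the frame index, not on how well each individual object is currently being tracked.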
However, this approach increases the interval between frames in which the advanced tracking algorithm is used, which in turn increases the displacement of objects between those frames and reduces the tracking accuracy.
Therefore, no matter which of these ways is used for tracking, the requirements on tracking effect, tracking time and running resources cannot all be met at the same time.
Therefore, in the embodiments of the application, a current frame image containing a plurality of objects to be tracked is acquired; the tracking mode of each object is determined, and tracking performed, according to the tracking quality of the objects and at least two provided tracking modes, where each tracking mode corresponds to one tracking quality. Compared with tracking all objects with a single tracking mode of higher tracking quality, tracking the objects with at least two tracking modes markedly reduces wasted running resources, saves tracking time, and improves real-time performance. Because the at least two tracking modes differ, they can include modes of both better and poorer tracking quality, and the mode of poorer tracking quality consumes fewer running resources and less tracking time.
Meanwhile, the tracking mode of each object is selected in time from the at least two tracking modes according to its tracking quality, so that objects whose tracking quality has degraded can still be tracked accurately in subsequent frames, and the tracking accuracy for every object in every frame is maintained at a good level. This meets the accuracy requirement on the tracking effect while saving running resources and tracking time.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a multi-target tracking method according to an exemplary embodiment of the present application. The method 100 provided by the embodiment of the present application is executed by an image processing device, for example an image processing device disposed on a road, an indoor server, or a virtual server, and includes the following steps:
101: at least two tracking modes are provided and each tracking mode corresponds to one tracking quality.
102: and acquiring a current frame image, wherein the current frame image comprises a plurality of objects to be tracked.
103: and determining the tracking modes of the plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked and tracking.
It should be noted that, besides being executed by the above devices, the method 100 may also be executed directly by a terminal device, such as a computer, that ultimately needs the tracking results.
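The three steps above can be sketched as follows. This is a minimal illustration only: the single scalar quality score per object, the mode names, and the switching threshold are assumptions for the sake of the example, not values prescribed by the application.

```python
# Illustrative mode names: an accurate but expensive mode and a cheap one.
HIGH_QUALITY_MODE = "image_feature_tracking"   # e.g. a KCF-style tracker
LOW_QUALITY_MODE = "spatial_feature_tracking"  # e.g. a Kalman-filter tracker

def choose_tracking_modes(objects, quality_threshold=2.0):
    """Return a mapping object_id -> tracking mode for the current frame.

    `objects` maps each object_id to its tracking-quality score; following
    formula (1) later in the text, a HIGHER score means the object has been
    tracked by the cheap mode for longer, i.e. its quality has degraded.
    """
    modes = {}
    for obj_id, quality_score in objects.items():
        if quality_score >= quality_threshold:
            # Quality has degraded: switch to the accurate (expensive) mode.
            modes[obj_id] = HIGH_QUALITY_MODE
        else:
            # Quality still acceptable: keep the cheap mode.
            modes[obj_id] = LOW_QUALITY_MODE
    return modes

modes = choose_tracking_modes({"A": 0.5, "B": 2.5, "C": 3.0})
```

The key point of the method is visible here: the mode is chosen per object rather than per frame, so only the objects that actually need it pay for the expensive tracker.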
The following is detailed for the above steps:
101: at least two tracking modes are provided and each tracking mode corresponds to one tracking quality.
The at least two tracking modes comprise a first tracking mode corresponding to a first tracking quality and a second tracking mode corresponding to a second tracking quality. The first tracking mode refers to an advanced tracking algorithm; there may be several different advanced tracking algorithms, each corresponding to a tracking quality, where tracking quality refers to how good the tracking effect is: high tracking quality means a good tracking effect, and low tracking quality a poor one. The first tracking mode may include tracking algorithms based on image feature prediction (image feature tracking). More specifically, such an algorithm may be a KCF (Kernel Correlation Filter) tracking algorithm, a CSK (Circulant Structure of tracking-by-detection with Kernels) tracking algorithm, a CN (Color Name) tracking algorithm, or the like.
The KCF tracking algorithm mainly comprises classifier training, target box detection and classifier updating, and assumes that the size of the object does not change during tracking.
The second tracking mode refers to a low-level tracking algorithm; there may be several different low-level tracking algorithms, each corresponding to a tracking quality, such as a tracking accuracy. The low-level tracking algorithms may include tracking algorithms based on spatial feature prediction (spatial feature tracking). More specifically, such an algorithm may be a Kalman filter tracking algorithm, a smoothing filter tracking algorithm, or the like.
The Kalman filter is a recursive filter proposed by Kalman for time-varying linear systems. Such a system can be described by a differential-equation model containing orthogonal state variables, and the filter estimates the current state by weighting the prediction from past estimates against each new measurement according to their respective errors.
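As a sketch of the spatial-feature-prediction idea, a minimal one-dimensional Kalman filter is shown below. The static (constant-position) motion model and the noise variances are illustrative assumptions, not values from the application:

```python
def kalman_1d(measurements, process_var=1e-3, meas_var=0.5):
    """Estimate a scalar position from a sequence of noisy measurements."""
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        # Predict: the state carries over; uncertainty grows by process noise.
        p = p + process_var
        # Update: blend the prediction with the new measurement.
        k = p / (p + meas_var)    # Kalman gain, weights prediction vs. data
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy observations of an object near position 1.0.
est = kalman_1d([0.0, 1.1, 0.9, 1.05, 0.95])
```

A full tracker would use a multi-dimensional state (position and velocity of the bounding box), but the predict/update recursion is the same, and its cost per object is tiny compared with an image-feature tracker.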
It should be understood from the foregoing that "the first tracking quality is better than the second tracking quality" means that the tracking quality of the advanced tracking algorithm is better than that of the low-level tracking algorithm, i.e., the tracking effect or tracking accuracy of the advanced algorithm is better.
Because the provided tracking modes include both an advanced and a low-level tracking algorithm, and the running resources and tracking time consumed by the low-level algorithm are markedly lower than those consumed by the advanced algorithm, tracking the plurality of objects with at least two tracking algorithms saves considerable running resources and tracking time compared with tracking them all with the advanced algorithm alone.
102: and acquiring a current frame image, wherein the current frame image comprises a plurality of objects to be tracked.
The object to be tracked is the object that needs to be tracked or located in the image, and it differs with the application scene; it may be, for example, a person or a commodity.
The manner of acquiring the current frame image may include, but is not limited to, the following:
1) Multiple frames of associated images are acquired by the device itself. The associated images may be multiple frames shot continuously in time, multiple frames shot at preset time intervals, or a continuously shot video segment comprising multiple frames. The frame on which object tracking is to be performed is taken as the current frame image.
For example, a camera of an image acquisition device disposed on a road continuously captures 10 frames of vehicle images of the road section and transmits them to the image processing device, which performs object tracking on the first frame; that first frame is the current frame image. Each frame contains a plurality of vehicles.
2) Multiple frames of associated images sent by a device with an image acquisition function are received. The associated images may be multiple frames shot continuously in time, multiple frames shot at preset time intervals, or a continuously shot video segment comprising multiple frames. The frame on which object tracking is to be performed is taken as the current frame image.
For example, the server receives a continuously shot video segment sent by a roadside camera, the video comprising 100 frames, and performs object tracking on the first frame, which is the current frame image. Each frame contains a plurality of commodities.
Whichever method is adopted, after the current frame image is obtained, the tracking modes of the multiple objects to be tracked are determined according to their tracking quality and tracking is performed; see step 103.
103: and determining the tracking modes of the plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked and tracking.
The method for determining the tracking quality may include:
1) Counting, among the frames preceding the current frame, the number of frames in which each of the plurality of objects to be tracked was tracked using the second tracking mode, and determining the tracking quality of each object from that number.
It should be understood that the larger the number of frames in which an object was tracked using the second tracking mode, the worse its tracking quality; the smaller that number, the better its tracking quality.
The tracking quality S can be determined by the following formula (1):

S = w1 · Ai    (1)

where w1 is a weight coefficient with a preset value, and Ai is the number of frames in which object i was tracked using at least one second tracking mode.
For example, as described above, after receiving the 10 vehicle images, the server determines, before tracking the vehicles in the 5th frame, which tracking algorithm was used for each vehicle in each of the preceding frames 1-4; suppose each frame contains vehicle A, vehicle B and vehicle C. If vehicle A was tracked in the 2nd and 4th frames using a Kalman filter tracking algorithm, then the frame count for vehicle A is 2. The counts for the other vehicles are determined in the same way and are not enumerated here; suppose the final counts are 2 for vehicle A, 3 for vehicle B and 4 for vehicle C. The tracking quality of vehicle A is then 2·w1, that of vehicle B is 3·w1, and that of vehicle C is 4·w1.
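The vehicle example can be checked numerically against formula (1); the value chosen for the preset weight w1 below is illustrative:

```python
W1 = 1.0  # preset weight coefficient (illustrative value)

def tracking_quality(num_low_level_frames, w1=W1):
    """Formula (1): S = w1 * Ai, where Ai counts frames tracked
    with the second (low-level) tracking mode."""
    return w1 * num_low_level_frames

# Vehicle A was tracked by the low-level mode in 2 of frames 1-4,
# vehicle B in 3, vehicle C in 4.
scores = {v: tracking_quality(n) for v, n in {"A": 2, "B": 3, "C": 4}.items()}
```

With these scores, vehicle C (the largest S, hence the most degraded tracking quality) would be the first candidate to be switched back to the accurate tracking mode.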
Determining the tracking quality from the number of images in which each object to be tracked is tracked in the second tracking mode allows the tracking quality to be determined simply, effectively, and accurately, and improves tracking timeliness.
In order to further determine the tracking quality when the image counts are the same, accurately select the target object, and improve tracking timeliness, the method 100 further includes: determining the tracking quality of the plurality of objects to be tracked according to the number of images and the size of each object to be tracked in the current frame image.
The determination method of the size of the object to be tracked in the current frame image may include the following two ways:
1) The area of the vehicle in the image is directly taken as its size. The area can be that of the circumscribed rectangular frame of each vehicle in the image, i.e., the length times the width (or the height times the width) of the corresponding circumscribed rectangle. For convenience of calculation and to adapt to the image resolution, the size can further be (length × width)/(image width)².
2) The length and/or width of the vehicle's area in the image is directly taken as its size, e.g., size = (pixel height + pixel width)/image width, where pixel height and pixel width are those of the circumscribed rectangle, and the division by image width adapts the size to the FOV (Field of View) of different lenses.
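The two size definitions above can be written as, for instance (the function names and sample dimensions are assumptions):

```python
def size_by_area(box_w, box_h, img_w):
    # Way 1): bounding-rectangle area, normalized by the squared image
    # width so the value adapts to different image resolutions.
    return (box_w * box_h) / (img_w ** 2)

def size_by_edges(box_w, box_h, img_w):
    # Way 2): (pixel height + pixel width) / image width; dividing by the
    # image width adapts the value to the field of view of different lenses.
    return (box_h + box_w) / img_w
```

Either value is dimensionless, so boxes from cameras with different resolutions can be compared on one scale.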
It should be noted that when the final target object cannot be determined by the number of images, another reference factor may be added to further determine the target object, where the other reference factor is the size. It can also be said that the number of images and the size can be used as reference factors for determining the tracking quality.
Therefore, the tracking quality can also be determined by the following formula 2):
S=w1*Ai+w2*Bi 2)
wherein w2 is a weight coefficient, which is a preset value, and w1 is far greater than w2; Bi is the above-mentioned size.
Since w1 is far greater than w2, S is mainly determined by Ai, and by Bi when Ai is the same. S can also be determined by more dimensions according to business requirements. The larger Ai is, the more often the object to be tracked has been tracked with a low-level tracking algorithm, and thus the larger the error of its tracking result and the larger its positioning error.
For example, as described above, if the server determines that vehicle A used the Kalman filter tracking algorithm in 2 images while vehicles B and C each used it in 4 images, then vehicles B and C both have a tracking quality score of 4w1, and the tracking quality needs to be determined further. The tracking quality scores of vehicles B and C are determined again by formula 2), whereupon the score of vehicle C, 4w1 + 0.01w2, is higher than that of vehicle B, 4w1 + 0.005w2.
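A minimal sketch of the tie-breaking behaviour of formula 2), with assumed weight values satisfying w1 far greater than w2:

```python
W1, W2 = 100.0, 1.0  # preset weights; w1 far greater than w2 (assumed values)

def quality_score(ai, bi):
    # Formula 2): S = w1*Ai + w2*Bi. Ai dominates; Bi only breaks ties.
    return W1 * ai + W2 * bi

# Vehicles B and C both have Ai = 4; C's larger size (Bi = 0.01 vs 0.005)
# breaks the tie, so C gets the higher (worse) score.
scores = {"B": quality_score(4, 0.005), "C": quality_score(4, 0.01)}
worst = max(scores, key=scores.get)
```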
It should be noted that, for formulas 1) and 2), Ai is preferably counted for a low-level tracking algorithm, such as the number of images using the Kalman filter tracking algorithm. That is, among the at least two provided tracking algorithms, preferably only one is a low-level tracking algorithm and the others are high-level tracking algorithms; more preferably, one high-level tracking algorithm and one low-level tracking algorithm are provided.
If there are multiple low-level tracking algorithms among the provided tracking algorithms, the tracking quality can be determined from the number of previous images in which each object to be tracked was tracked by any of the low-level tracking algorithms. Alternatively, taking the worst of the low-level tracking algorithms as the criterion, the tracking quality can be determined from the number of previous images in which each object was tracked with that worst algorithm.
Furthermore, the manner of determining the tracking quality may further include:
2) Counting, among the images before the current image, the number of images in which each of the plurality of objects to be tracked is tracked in the first tracking mode; and determining the tracking quality according to the number of images.
It should be noted that, in this case, the fewer images in which an object to be tracked is tracked in the first tracking mode, the worse its tracking quality; the more such images, the better its tracking quality.
For example, as described above, after receiving 10 vehicle images and before tracking the vehicles in the 5th frame, the server checks the tracking algorithm used for each vehicle in each of frames 1-4, where each frame includes vehicle A, vehicle B, and vehicle C. Vehicle A is tracked with the KCF tracking algorithm in the 2nd and 4th images, so its image count for the KCF tracking algorithm is 2; the other vehicles are counted in the same manner, so no further examples are given here. Finally, it can be determined that the KCF image count of vehicle A is 2, of vehicle B is 3, and of vehicle C is 4. Then the tracking quality of vehicle A is 2N, of vehicle B is 3N, and of vehicle C is 4N, where N is a preset weight coefficient.
Under the condition that the tracking quality is the same, the size of the object to be tracked can be continuously determined, and the determination mode is similar to the determination mode of the formula 2), and the details are not repeated here.
It should be noted that, for mode 2), the number of images is preferably counted for a high-level tracking algorithm, such as the number of images using the KCF tracking algorithm. That is, among the at least two provided tracking algorithms, preferably only one is a high-level tracking algorithm and the others are low-level tracking algorithms; more preferably, one high-level tracking algorithm and one low-level tracking algorithm are provided.
If there are multiple high-level tracking algorithms among the provided tracking algorithms, the tracking quality can be determined from the number of previous images in which each object to be tracked was tracked by any of the high-level tracking algorithms. Alternatively, taking the best of the high-level tracking algorithms as the criterion, the tracking quality can be determined from the number of previous images in which each object was tracked with that best algorithm.
Specifically, determining a tracking mode and tracking according to the tracking quality of a plurality of objects to be tracked comprises the following steps: screening out target objects which do not meet the tracking quality requirement from a plurality of objects to be tracked; and tracking the screened target object in a first tracking mode, and tracking other objects to be tracked in a second tracking mode.
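The screening step above can be sketched as follows, assuming a higher score means worse tracking quality and a given budget of first-mode slots (names and values are illustrative):

```python
def assign_modes(quality_scores, first_mode_slots):
    # Give the first (high-level) tracking mode to the worst-scoring
    # objects, up to the slot budget; the rest keep the second mode.
    worst_first = sorted(quality_scores, key=quality_scores.get, reverse=True)
    targets = set(worst_first[:first_mode_slots])
    return {obj: ("first" if obj in targets else "second")
            for obj in quality_scores}

# One first-mode slot: vehicle C (worst quality) is screened out as target.
modes = assign_modes({"A": 2, "B": 3, "C": 4}, first_mode_slots=1)
```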
The tracking quality requirement refers to the condition under which an object's tracking quality does not need to be adjusted by using the first tracking mode, i.e., the high-level tracking algorithm, in the current frame image.
By tracking the target objects that do not meet the tracking quality requirement with the high-level tracking algorithm, their tracking quality can be adjusted in time and maintained at a stable level, so that their tracking results remain accurate. Meanwhile, the other objects to be tracked are not affected, because their tracking quality is already maintained at a relatively stable standard.
The screening out the target object which does not meet the tracking quality requirement may include: and screening out a first tracking number of target objects which do not meet the tracking quality requirement from the plurality of objects to be tracked.
The first tracking mode corresponds to a first tracking number, and the first tracking number is determined according to preset running resources and preset tracking time of the execution equipment.
Similarly, the second tracking mode corresponds to the second tracking number. The sum of the first tracking number and the second tracking number is greater than or equal to the number of the plurality of objects to be tracked. The second tracking number is also determined based on preset execution resources of the execution device and a preset tracking time.
The running resources may also be referred to as computing resources, and refer to resources required by the execution device when the running program (e.g., the tracking algorithm) runs, such as CPU resources, memory resources, hard disk resources, network resources, and the like. For example, the CPU model is i7-7700HQ, the CPU number is 4 cores and 8 threads, the dominant frequency is 2.80GHz, and the like, and the test of all tracking algorithms is completed by using a single thread. The preset running resource refers to a set running resource and cannot be changed.
The tracking time may also be referred to as a positioning time, and refers to a time when a tracking result is acquired, such as a tracking time for one frame of image or a tracking time for multiple frames of images. The preset positioning time refers to the set positioning time and cannot be changed.
The maximum number of objects to be tracked in any frame of image can be used as the number of objects to be tracked, and can be determined as follows: according to the size (such as the area) of one frame of image and the minimum size (such as the minimum area) of an object to be tracked in the image, determine at most how many objects to be tracked can appear in the image, which gives the maximum number of objects to be tracked. It should be noted that for the same device with an image capturing function, or for images of the same specification (images of that specification may come from multiple devices with an image capturing function), the maximum number of objects to be tracked is the same.
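One way to estimate that maximum, under the simplifying assumption that the bound is the image area divided by the smallest object footprint (the sample dimensions are assumptions):

```python
def max_objects(img_w, img_h, min_obj_w, min_obj_h):
    # Upper bound on simultaneously visible objects: how many copies of
    # the smallest object footprint fit into the image area.
    return (img_w * img_h) // (min_obj_w * min_obj_h)

# E.g. a 1920x1080 frame with a minimum object footprint of 400x500 px:
limit = max_objects(1920, 1080, 400, 500)
```

With these assumed numbers the bound is 10, which is the maximum used in the combination-scheme example below.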
For example, as described above, an image processing apparatus such as a server may run the various tracking algorithms within its own running resources (e.g., an i7-7700HQ CPU with 4 cores and 8 threads and a 2.80 GHz base frequency, with all tracking algorithm tests completed on a single thread) to obtain tracking results, while ensuring that the tracking algorithms finish within the preset positioning time and that the number of tracking algorithm instances executed covers the maximum number of objects to be tracked.
On the premise of guaranteeing the preset positioning time and that the number of executed tracking algorithm instances covers the maximum number of objects to be tracked (for example, 10), a combination scheme of multiple tracking algorithms is determined: for example, 2 instances of the KCF tracking algorithm, 2 instances of the Kalman filter tracking algorithm, and 6 instances of the smooth filter tracking algorithm; or 4 instances of the KCF tracking algorithm and 6 instances of the Kalman filter tracking algorithm; and so on. The scheme with the better tracking effect can then be selected.
It should be noted that, in a preferred embodiment, multi-target tracking is implemented with the KCF tracking algorithm and the Kalman filter tracking algorithm. The tracking time and running resources consumed by the KCF tracking algorithm are far greater than those of the Kalman filter tracking algorithm (the difference can be a factor of several hundred), so the cost of the Kalman filter tracking algorithm is essentially negligible relative to the KCF tracking algorithm. In this case, only the number of KCF instances needs to be determined: for example, the server determines, according to its own running resources, the maximum number of KCF tracking algorithm instances executable within the preset positioning time, while the number of Kalman filter tracking algorithm instances can be set as unlimited.
Therefore, the server can determine how many KCF tracking algorithm instances its own running resources and the preset positioning time can support, for example 4, while the number of running Kalman filter tracking algorithm instances is not limited.
For example, as described above, after receiving 10 vehicle images and before tracking the vehicles in the 5th frame, the server checks the tracking algorithm used for each vehicle in each of frames 1-4, where each frame includes vehicle A, vehicle B, and vehicle C. It finally determines that vehicle A used the Kalman filter tracking algorithm in 2 images, vehicle B in 3 images, and vehicle C in 4 images. The ordering of tracking quality from worst to best is therefore: vehicle C, vehicle B, vehicle A. When the tracking algorithms are the Kalman filter tracking algorithm and the KCF tracking algorithm, with 1 KCF slot and an unlimited number of Kalman filter slots, vehicle C is selected as the target object according to the worst-to-best order.
It should be noted that the tracking quality can also be sorted from good to bad. When multiple high-level tracking algorithms exist among the provided tracking algorithms, the target objects are selected according to the total count across all the high-level tracking algorithms.
It should be noted that, in the at least two tracking methods provided, a first tracking method, such as a KCF tracking algorithm, and a second tracking method, such as a kalman filter tracking algorithm, are included. The tracking mode of the target object is directly set as a first tracking mode, and the tracking modes of other objects to be tracked are set as second tracking modes.
For example, as described above, the server determines the target object, such as vehicle C. The tracking mode of the vehicle C is directly set as the KCF tracking algorithm. And setting the tracking mode of other vehicles to be tracked as a Kalman filtering tracking algorithm so as to track the target.
If the at least two tracking modes include multiple first tracking modes, the tracking mode of each target object can be set according to its tracking quality. The setting can be as follows: the target objects, sorted by tracking quality from worst to best, are matched in turn to the corresponding number of first tracking modes sorted from best to worst. If tracking qualities are equal, a first tracking mode is simply assigned at random.
For multiple second tracking modes, the tracking modes of the other objects to be tracked can be set to a second tracking mode at random; or the other objects, sorted by tracking quality from worst to best, are matched in turn to the corresponding number of second tracking modes sorted from best to worst. When the number of second tracking modes is unlimited, the count need not be considered, and the other objects to be tracked are simply assigned the best second tracking mode.
For example, as described above, the server determines the target objects, such as vehicle C and vehicle B, and the tracking quality of vehicle C is worse than that of vehicle B. The first tracking mode comprises a KCF tracking algorithm and a CSK tracking algorithm, the tracking effect of the KCF tracking algorithm is superior to that of the CSK tracking algorithm, the number of the KCF tracking algorithms is 1, and the number of the CSK tracking algorithms is 1. The tracking mode of the vehicle C is set to the KCF tracking algorithm and the tracking mode of the vehicle B is set to the CSK tracking algorithm. And setting the tracking mode of other vehicles to be tracked as a Kalman filtering tracking algorithm so as to track the target.
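The matching rule in the example above (the worst-quality target gets the best first tracking mode) can be sketched as follows; the scores and mode names mirror the example and are otherwise assumptions:

```python
def match_modes(target_scores, modes_best_first):
    # Pair targets sorted worst-quality-first (higher score = worse) with
    # the first tracking modes sorted best-effect-first.
    worst_first = sorted(target_scores, key=target_scores.get, reverse=True)
    return dict(zip(worst_first, modes_best_first))

# Vehicle C tracks worse than B, so C is paired with KCF and B with CSK.
assignment = match_modes({"B": 4.005, "C": 4.01}, ["KCF", "CSK"])
```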
In addition, after the adjustment of the tracking mode of the object to be tracked in the current frame image is completed, the method 100 further includes: and determining and acquiring the positions of a plurality of objects to be tracked in the current frame image.
For example, after determining the tracking mode of each vehicle in the 5 th frame of vehicle image, the server tracks the coordinates of each vehicle according to the determined tracking mode, and obtains the tracking result, such as the coordinates of a bounding rectangle of each vehicle. The coordinates may be the center coordinates of a bounding rectangle.
In addition, when performing object tracking for each frame among the plurality of related images, in order to further improve the tracking effect or tracking quality, the method 100 further includes: determining whether the current frame image is an image to be corrected according to a preset time or a preset number of images; if so, each object to be tracked in the image to be corrected is tracked directly through an image recognition algorithm. In this way, the tracking effect or tracking quality of every object to be tracked in the current frame image is brought to the same level, the quality of objects with poor tracking quality is effectively adjusted, and a tracking basis is provided for object tracking in subsequent related images.
For example, as described above, the server may run the image recognition algorithm every five frames or every minute to determine the positions of the objects to be tracked, e.g., performing recognition on the corresponding images such as the 6th frame and the 11th frame of a video, and determining the coordinates and size of each vehicle in those frames.
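The correction schedule in the example (recognition on the 1st frame, then on the 6th, the 11th, and so on) can be sketched as follows; the period value comes from the example and is otherwise an assumption:

```python
def needs_correction(frame_index, period=5):
    # Frame 1 is always recognized; afterwards a recognition pass runs
    # every `period` frames (frames 6, 11, ... for period=5).
    return (frame_index - 1) % period == 0
```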
After the image recognition operation is performed on the corresponding image, the adjustment of the tracking mode can be continuously performed on other subsequent images of the corresponding image until the image recognition is performed again.
It should be noted that, when the current frame image is the first frame image, i.e., the initial image, each object to be tracked in the first frame image may be tracked through the above recognition algorithm, and its position in the first frame image determined. In this way, the tracking effect or tracking quality of every object to be tracked is at the same level from the start of tracking, which benefits target tracking in subsequent images. Before target tracking is performed on the subsequent second frame image, the tracking quality may be determined according to formula 2) above. Because each object to be tracked in the first frame image is tracked through the recognition algorithm, which ensures good tracking quality, the term w1 × Ai in formula 2) is the same for every object in the first frame image, namely 0: Ai refers to the number of images tracked using the specified tracking mode, and the first frame image used the recognition algorithm instead. The tracking quality for the first frame image therefore only needs to be determined by w2 × Bi. The tracking algorithms are then adjusted according to the above embodiments, and tracking of each object to be tracked in the second frame image is completed.
It should be further noted that, in the embodiment of the present application, the set of objects determined by image recognition may differ from the previously recognized set, e.g., objects are added or removed; more specifically, on a section of road, a new vehicle enters the shooting range, or a vehicle leaves it. For a newly appearing object, the tracking quality is determined as described above so as to adjust its tracking mode; a disappeared object is no longer tracked.
When tracking is performed by adjusting the tracking mode according to the tracking quality through the above contents, new objects are added or reduced, and the processing mode is the same as that of the above contents, which is not described herein again.
In addition, the tracking algorithms can be continuously updated to further improve their quality, improving the tracking effect again on the basis of the embodiments of the present application, reducing the tracking time, saving the running resources occupied, and so on. When the algorithms are updated, the method 100 further includes: after the tracking modes are updated, determining the first tracking mode and the second tracking mode according to the tracking quality corresponding to each updated tracking mode.
For example, after the tracking algorithms are updated, the server may re-determine the high-level and low-level tracking algorithms according to the tracking quality corresponding to each updated tracking algorithm, so as to perform multi-target tracking. The server may also re-determine the numbers of updated tracking algorithm instances according to the number-determination embodiments described above, and provide the updated tracking algorithms when performing multi-target tracking.
According to the embodiment of the application, the accuracy and the real-time performance of different tracking algorithms are utilized, various different tracking algorithms are fused, under the condition that the tracking quality is guaranteed, the multi-target tracking can be completed within a limited time, and meanwhile the current computing resources are met.
In a data simulation experiment, a multi-target tracking data set collected in a traffic monitoring scene was used, containing 25 videos, each about 1800 frames long. The data set consists of real traffic road scenes shot at different locations in a city; the average number of target objects per frame in the test set is 4, and the maximum is 14.
The experiment compared the results of two algorithms: (1) using entirely the tracking algorithm based on image feature prediction (the original version, "origin version"); (2) fusing the image-feature-prediction tracking algorithm with the spatial-feature-prediction tracking algorithm (the optimized version, i.e., this embodiment of the present application), where for one image the upper limit on the number of target objects using the image-feature-prediction tracking algorithm (e.g., the KCF tracking algorithm) is 4, and the remaining targets use the spatial-feature-prediction tracking algorithm (e.g., the Kalman filter tracking algorithm).
The experiment compares the origin version and the improved version, as shown in Table 2. Table 2 compares the average accuracy and precision before and after the improvement. The accuracy is the Intersection-over-Union (IoU) between the ground-truth target object box and the tracked result box, the precision is the recall rate of the target objects, and the maximum time (max time cost) is the maximum time to complete multi-target tracking for each frame.
TABLE 2
accuracy precision max time cost
origin version 82.94% 96.05% 29ms
improved version 82.32% 95.79% 8.5ms
Through comparison of the evaluation results before and after optimization in Table 2, it can be seen that, compared with the original algorithm, the embodiment of the present application optimizes the tracking time consumption: (1) tracking accuracy drops by only 0.62%; (2) precision drops by only 0.26%; (3) max time cost drops from 29 ms to 8.5 ms. A good technical effect is achieved, consistent with the foregoing content.
In order to enable the multi-target tracking method provided by the embodiment of the application to be suitable for more scenes and reduce the complexity of executing the multi-target tracking method at a single end, the application also provides a multi-target tracking system.
Fig. 2 is a schematic structural diagram of a multi-target tracking system according to an exemplary embodiment of the present application. As shown in fig. 2, the tracking system 200 may include: an image acquisition device 201 and an image processing device 202.
The image capturing device 201 is mainly responsible for capturing images. In addition, the image capturing device 201 has a network transmission function and is responsible for transmitting the captured images to the image processing device. The image capturing device 201 may also provide computing processing services as a networked device serving tracking information. In physical implementation, the image capturing device 201 may be any device capable of providing a computing service, responding to a service request, and performing processing; for example, it may be an image capturing device disposed at a roadside, indoors, in an office area, or in a public place, such as a camera or an electronic eye. The image capturing device 201 mainly includes a processor, a hard disk, a memory, a system bus, and the like, similar to a general computer architecture.
The image processing device 202 may be a device with certain computing, communication, and image processing capabilities. The image processing device 202 may be responsible for tracking the object to be tracked in the image. In terms of physical implementation, the image processing device 202 may also be any device capable of providing a computing service, responding to a service request, and performing processing, and may be, for example, a conventional server, a cloud server, a smart terminal, an in-vehicle terminal, an unmanned aerial vehicle, or the like.
In the present embodiment, the image capturing device 201 may have a network connection with the image processing device 202, and the network connection may be wireless. If the image capturing device 201 is communicatively connected to the image processing device 202 through a mobile network, the network format of the mobile network may be any one of 2G (GSM), 2.5G (GPRS), 3G (WCDMA, TD-SCDMA, CDMA2000, UMTS), 4G (LTE), 4G+ (LTE+), WiMax, and 5G.
In the embodiment of the present application, the image capturing device 201 acquires a plurality of frames of images, where each frame of image includes a plurality of objects to be tracked, and sends the plurality of frames of images to the image processing device 202.
The image processing device 202 is used for providing at least two tracking modes, each corresponding to one tracking quality; and for determining the tracking modes of the plurality of objects to be tracked according to their tracking quality and performing tracking.
It should be understood that, since the system 200 is implemented based on the multi-target tracking method 100, the system 200 can also implement the technical effects of the multi-target tracking method 100 to solve the technical problems thereof, and thus, the detailed description thereof is omitted here.
Specifically, for the image processing device 202, the at least two tracking modes include a first tracking mode corresponding to a first tracking quality and a second tracking mode corresponding to a second tracking quality, the first tracking quality being better than the second tracking quality. The image processing device screens out target objects that do not meet the tracking quality requirement from the plurality of objects to be tracked, tracks the screened target objects in the first tracking mode, and tracks the other objects to be tracked in the second tracking mode.
In addition, the first tracking mode corresponds to a first tracking number, and the image processing device screens out, from the plurality of objects to be tracked, a first tracking number of target objects that do not meet the tracking quality requirement. The first tracking number and the second tracking number are determined according to the preset running resources of the execution device and the preset tracking time.
It should be noted that, since the embodiments of the multi-target tracking method have been described in detail in the foregoing, the embodiments of the system 200 are similar to those of the multi-target tracking method, and thus will not be described in detail herein.
It should be further noted that, for other parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in the foregoing multi-target tracking method, and details are not repeated herein.
In order to further improve the multi-target tracking time again, save the occupation of running resources, improve the tracking quality and optimize the tracking effect on the basis of a multi-target tracking algorithm, the application also provides an updating method of a tracking mode, and further tracking optimization is realized through the updated tracking mode.
Fig. 3 is a flowchart illustrating an updating method of a tracking manner according to an exemplary embodiment of the present application. The method 300 provided by the embodiment of the present application is executed by an image processing device, such as an image processing device, an indoor server, a virtual server, and the like, which are disposed on a road, and the method 300 includes the following steps:
301: and acquiring the tracking modes to be added, updating the original tracking modes, wherein each updated tracking mode corresponds to one tracking quality.
302: and selecting at least two tracking modes from the updated tracking modes according to the tracking quality.
303: under the condition of acquiring a current frame image and tracking a plurality of objects to be tracked in the current frame image, at least two tracking modes are provided, and each tracking mode corresponds to one tracking quality.
The following details the above steps 301-303:
301: and acquiring the tracking modes to be added, updating the original tracking modes, wherein each updated tracking mode corresponds to one tracking quality.
The original tracking mode refers to a tracking algorithm which is already used for multi-target tracking.
For example, the server for multi-target tracking acquires the tracking algorithm to be added from an algorithm server; the tracking algorithm to be added can be a newly developed tracking algorithm, or an existing algorithm that has not yet been used for multi-target tracking. It is then stored among the original tracking algorithms.
302: and selecting at least two tracking modes from the updated tracking modes according to the tracking quality.
Specifically, selecting at least two tracking modes from the updated tracking modes according to the tracking quality includes: selecting the tracking mode with the highest tracking quality as the first tracking mode; and selecting a tracking mode with lower tracking quality as the second tracking mode.
For example, as described above, the server determines, according to characteristics of the algorithms such as tracking effect, tracking time, and occupation of running resources, whether the tracking algorithm to be added belongs to the advanced tracking algorithms, i.e., candidates for the first tracking mode. The server may then select the tracking algorithm with the best tracking quality from the updated advanced tracking algorithms as the first tracking mode, while the second tracking mode remains unchanged, and vice versa.
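The registry update and selection of steps 301-302 can be sketched as follows. This is a minimal illustration assuming tracking quality is a single comparable score per algorithm; the mode names and scores below are hypothetical, not values prescribed by this application.

```python
def update_and_select(original_modes, modes_to_add):
    """Merge newly added tracking modes into the registry (step 301), then
    pick the highest-quality mode as the first tracking mode and a
    lower-quality mode as the second tracking mode (step 302)."""
    updated = dict(original_modes)
    updated.update(modes_to_add)  # store the added algorithms with the originals
    ranked = sorted(updated, key=updated.get, reverse=True)  # best quality first
    first_mode = ranked[0]    # highest tracking quality
    second_mode = ranked[-1]  # a lower-quality (cheaper) mode
    return updated, first_mode, second_mode

# Hypothetical quality scores for illustration only.
original = {"kalman": 0.6, "kcf": 0.8}
to_add = {"siamese": 0.9}  # a newly developed tracking algorithm
updated, first, second = update_and_select(original, to_add)
# first -> "siamese", second -> "kalman"
```

The unselected intermediate modes could then be pruned from `updated`, as the deletion step below describes.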
303: under the condition of acquiring a current frame image and tracking a plurality of objects to be tracked in the current frame image, at least two tracking modes are provided, and each tracking mode corresponds to one tracking quality.
For example, as described above, when performing multi-target tracking, the server may provide at least two tracking modes. It should be noted that the specific implementation of step 303 is similar to that of the tracking algorithm in the multi-target tracking described above, and therefore details are not repeated here.
It should be noted that, for other parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in the foregoing multi-target tracking method, and details are not repeated herein.
To optimize the tracking quality while reducing the storage space, the method 300 further comprises: the unselected tracking mode is deleted from the updated tracking mode.
For example, as described above, after selecting at least two tracking methods, the server deletes the non-selected tracking method, such as deleting the low-level tracking algorithm with the worst tracking quality.
In the field of unmanned vehicle driving, the driving of unmanned vehicles on roads and their driving safety are of great public concern. To maintain the driving speed of an unmanned vehicle, the positions of nearby vehicles need to be determined quickly during driving so that the unmanned vehicle can avoid them and drive smoothly. How to determine vehicle positions quickly, efficiently, and accurately is therefore key to ensuring both the smooth driving and the driving safety of unmanned vehicles. Accordingly, the present application also provides an automatic driving method for a vehicle.
Fig. 4A is a schematic flowchart of an automatic driving method for a vehicle according to another exemplary embodiment of the present application. The method 400A provided in the embodiment of the present application is executed by an image processing apparatus with an image capturing function, for example, an image processing apparatus or a server with an image capturing function disposed on a road, and the method 400A includes the following steps:
401: at least two tracking modes are provided, and each tracking mode corresponds to one tracking quality.
402: the method comprises the steps of obtaining a current frame image of road condition vehicles through an image acquisition device, wherein the current frame image comprises a plurality of vehicles to be tracked.
403: and determining the tracking modes of the vehicles to be tracked according to the tracking qualities of the vehicles to be tracked and tracking the vehicles to be tracked.
404: and determining the geographical positions of vehicles nearby the vehicles running on the road according to the positions of the vehicles to be tracked in the current frame image in the tracking result.
It should be noted that, since steps 401-403 are similar to the specific implementation process of the multi-target tracking method in the foregoing, detailed description is omitted here. In the following description, the road condition refers to the condition of the vehicles traveling on the road. The image acquisition device may be a camera.
Specifically, the at least two tracking modes include a first tracking mode corresponding to the first tracking quality and a second tracking mode corresponding to the second tracking quality; the tracking method of the plurality of vehicles to be tracked is determined and tracked according to the tracking quality of the plurality of vehicles to be tracked, and the tracking method comprises the following steps: screening out target vehicles which do not meet the tracking quality requirement from a plurality of vehicles to be tracked; and tracking the screened target vehicle in a first tracking mode, and tracking other vehicles to be tracked in a second tracking mode. The first tracking quality is better than the second tracking quality.
It should be noted that, since the specific implementation process of selecting the target vehicle is similar to the specific implementation process of selecting the target object in the foregoing, detailed description is omitted here.
In order to select the target vehicles more accurately, screening out the target vehicles which do not meet the tracking quality requirement from the plurality of vehicles to be tracked includes: screening out, from the plurality of vehicles to be tracked, a first tracking number of target vehicles which do not meet the tracking quality requirement. The first tracking mode corresponds to the first tracking number.
It should be noted that, since the specific implementation process of selecting the target vehicle is similar to the specific implementation process of selecting the target object in the foregoing, detailed description is omitted here.
In order to ensure that the tracking algorithm saves running resources and tracking time, the first tracking number is determined according to preset running resources and a preset tracking time of the execution device. It should be noted that, since the specific implementation process of determining this number is similar to the corresponding process in the foregoing, detailed description is omitted here.
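The screening step above can be sketched as follows: pick at most a first tracking number of vehicles whose tracking quality falls below the required threshold and assign them the higher-quality first tracking mode, while the rest keep the cheaper second mode. The vehicle identifiers, quality scores, and threshold are illustrative assumptions.

```python
def assign_tracking_modes(qualities, quality_required, first_tracking_number):
    """qualities: {vehicle_id: tracking_quality score}.
    Returns {vehicle_id: "first" | "second"} mode assignments."""
    # Vehicles failing the quality requirement, worst quality first.
    failing = sorted((v for v, q in qualities.items() if q < quality_required),
                     key=qualities.get)
    # The first tracking mode can only handle `first_tracking_number` vehicles.
    upgraded = set(failing[:first_tracking_number])
    return {v: ("first" if v in upgraded else "second") for v in qualities}

modes = assign_tracking_modes(
    {"car_a": 0.9, "car_b": 0.3, "car_c": 0.5, "car_d": 0.4},
    quality_required=0.6,
    first_tracking_number=2)
# car_b and car_d (the two worst) get the first mode; car_a and car_c keep the second.
```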
It should be noted that the method 400A may also be executed directly by a device that ultimately needs the tracking results, such as an in-vehicle device. It should be understood that, when the method is executed by the in-vehicle device, the tracking results are obtained and processed directly by that device.
Step 404 is described in detail below:
404: and determining the geographical positions of vehicles nearby the vehicles running on the road according to the positions of the vehicles to be tracked in the current frame image in the tracking result.
Specifically, determining the geographical position of the vehicle near the vehicle running on the road according to the positions of a plurality of vehicles to be tracked in the current frame image in the tracking result, includes: and mapping the positions of a plurality of vehicles to be tracked in the current frame image to corresponding geographic positions in the vehicle navigation map so as to determine the geographic positions of vehicles in the vicinity of the vehicles running on the road.
By mapping positions in the image to geographic positions on the map, the unmanned vehicle can be accurately helped to locate the vehicles near it.
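One common way to realize this image-to-map mapping, assuming the road surface is approximately planar, is a homography. The sketch below uses a made-up 3x3 matrix `H` that merely scales and shifts; a deployed system would calibrate `H` from known camera-to-road correspondences.

```python
import numpy as np

def image_to_map(H, x, y):
    """Apply homography H to image point (x, y); return map coordinates."""
    p = H @ np.array([x, y, 1.0])  # homogeneous coordinates
    return p[0] / p[2], p[1] / p[2]  # perspective divide

# Illustrative calibration: half-scale plus a translation.
H = np.array([[0.5, 0.0, 10.0],
              [0.0, 0.5, 20.0],
              [0.0, 0.0, 1.0]])
mx, my = image_to_map(H, 100.0, 40.0)
# (mx, my) -> (60.0, 40.0) under this example matrix
```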
For example, as shown in fig. 4B, an unmanned vehicle application scenario includes: a plurality of vehicles traveling on a road 410, and a plurality of image processing apparatuses 406 with cameras disposed on one side of the road, with every two adjacent image processing apparatuses 406 spaced a preset distance apart. The vehicles may include a plurality of unmanned vehicles, such as unmanned vehicle 407, unmanned vehicle 408, and unmanned vehicle 409. As the vehicles travel on the road 410, the camera of each image processing apparatus continuously photographs passing vehicles; the camera can also rotate freely to adjust its shooting angle and direction. When a vehicle passes through a road section, it automatically establishes a communication connection with the image processing apparatus 406 of that section.
As shown in fig. 4C, the camera 411 transmits a plurality of captured consecutive images, for example 10 frames, to the image processing apparatus 406. Since the vehicle tracking algorithm is set in the same manner for each frame, only one frame is taken as an example for explanation. The unmanned vehicle 407, unmanned vehicle 408, and unmanned vehicle 409 in the image may be referred to as target vehicles hereinafter.
Before the image processing apparatus 406 tracks the target vehicles in the 5th frame, it counts the number of frames in which each target vehicle was tracked with the Kalman filter tracking algorithm in the preceding frames 1-4: 3 frames for unmanned vehicle 407, 2 frames for unmanned vehicle 408, and 4 frames for unmanned vehicle 409. The unmanned vehicle 409, which has the largest count, is taken as the target object, and its tracking mode in the 5th frame is set to the KCF tracking algorithm; the tracking mode of unmanned vehicle 407 and unmanned vehicle 408 remains the Kalman filter tracking algorithm. By analogy, the setting of the target vehicle tracking algorithm in each frame is completed. After tracking is completed, the image processing apparatus 406 maps the tracking result, i.e., the coordinates of each target vehicle in each frame, into a corresponding coordinate system, for example a vehicle navigation map, determines the coordinates in the vehicle navigation map, and sends the map coordinates to the vehicles with which it has a communication relationship, such as unmanned vehicle 407, unmanned vehicle 408, and unmanned vehicle 409. After receiving the map coordinates, an unmanned vehicle can determine the positional relationship between itself and the other vehicles traveling on the road, and thereby control its own driving.
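The frame-5 mode assignment in this example can be sketched directly: count, per target vehicle, how many of frames 1-4 used the baseline Kalman filter, then upgrade the vehicle with the largest count to the higher-quality KCF tracker. The counts mirror the example above; the dictionary representation is an illustrative assumption.

```python
# Frames 1-4 tracked with the Kalman filter, per vehicle (from the example).
kalman_frame_counts = {"vehicle_407": 3, "vehicle_408": 2, "vehicle_409": 4}

# The vehicle tracked longest by the cheap algorithm becomes the target object.
target = max(kalman_frame_counts, key=kalman_frame_counts.get)

# In frame 5, the target is upgraded to KCF; the rest stay on the Kalman filter.
modes = {v: ("KCF" if v == target else "Kalman") for v in kalman_frame_counts}
# modes -> vehicle_409 uses KCF; vehicle_407 and vehicle_408 keep Kalman
```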
It should be noted that the camera 411 may also perform video shooting on a passing vehicle, where the shot video is composed of a plurality of continuous images, so that the image processing apparatus 406 may perform subsequent image tracking processing on the plurality of continuous images in the video.
It should be noted that, for the parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in the foregoing multi-target tracking method, and details are not repeated herein.
In the field of unmanned aerial vehicle (drone) tracking, particularly in scenes where tracked objects are followed in real time, a plurality of objects in an outdoor monitoring image, such as people and vehicles, are recognized as the targets being tracked. To enable rapid real-time tracking, and to prevent the situation where multiple tracked objects quickly leave the monitored area so that monitoring and tracking cannot continue, the plurality of tracked targets must be quickly located in the image, so that their current geographic positions can be determined immediately for subsequent drone tracking. Accordingly, the present application also provides a tracking method for an unmanned aerial vehicle.
Fig. 5 is a schematic flowchart of a tracking method for a drone according to another exemplary embodiment of the present application. The method 500 provided by the embodiment of the present application is executed by an outdoor image processing apparatus, which has an image capturing function and can be disposed on a roadside. The method 500 includes the steps of:
501: at least two tracking modes are provided and each tracking mode corresponds to one tracking quality.
502: the method comprises the steps of obtaining a current frame outdoor monitoring image through an image acquisition device, wherein the current frame outdoor monitoring image comprises a plurality of objects to be tracked.
503: and determining the tracking modes of the plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked and tracking.
504: and tracking the plurality of objects to be tracked in the geographical positions according to the positions of the plurality of objects to be tracked in the outdoor monitoring image in the tracking result.
It should be noted that, since the steps 501-503 are similar to the specific implementation process of the multi-target tracking method in the foregoing, detailed description is omitted here. For example, the outdoor monitoring image refers to an image captured by an outdoor monitoring camera. The object to be tracked is the target being tracked. The image acquisition device may be a camera.
Specifically, the at least two tracking modes include a first tracking mode corresponding to the first tracking quality and a second tracking mode corresponding to the second tracking quality; the method for determining and tracking the tracking modes of a plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked comprises the following steps: screening out target objects which do not meet the tracking quality requirement from a plurality of objects to be tracked; and tracking the screened target object in a first tracking mode, and tracking other objects to be tracked in a second tracking mode. The first tracking quality is better than the second tracking quality.
It should be noted that, since the specific implementation process for selecting the target object is similar to the specific implementation process for selecting the target object in the foregoing, detailed description is omitted here.
In order to more accurately select the target object, the first tracking mode corresponds to the first tracking number; the method comprises the following steps of screening out target objects which do not meet the tracking quality requirement from a plurality of objects to be tracked, wherein the method comprises the following steps: and screening out a first tracking number of target objects which do not meet the tracking quality requirement from the plurality of objects to be tracked.
It should be noted that, since the specific implementation process for selecting the target object is similar to the specific implementation process for selecting the target object in the foregoing, detailed description is omitted here.
In order to ensure that the tracking algorithm saves running resources and tracking time, the first tracking number is determined according to preset running resources and a preset tracking time of the execution device.
It should be noted that, since the specific implementation process of the determined number is similar to the specific implementation process of the determined number in the foregoing, detailed description is omitted here.
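One plausible way to derive the first tracking number from the preset budgets is to cap it by both the resource budget and the time budget. The per-object costs and the min-of-budgets formula below are illustrative assumptions; the application does not fix a concrete formula.

```python
def first_tracking_number(resource_budget, time_budget_ms,
                          cost_per_object, ms_per_object):
    """Largest number of objects the expensive first tracking mode can
    handle within both the running-resource and tracking-time budgets."""
    by_resource = resource_budget // cost_per_object   # resource-limited count
    by_time = time_budget_ms // ms_per_object          # time-limited count
    return int(min(by_resource, by_time))

n = first_tracking_number(resource_budget=100, time_budget_ms=40,
                          cost_per_object=30, ms_per_object=10)
# resources allow 3 objects, time allows 4, so n == 3
```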
Step 504 is described in detail below:
504: and tracking the plurality of objects to be tracked in the geographical positions according to the positions of the plurality of objects to be tracked in the outdoor monitoring image in the tracking result.
Specifically, tracking the multiple objects to be tracked in the geographic positions according to the positions of the multiple objects to be tracked in the outdoor monitoring image in the tracking result includes: and mapping the positions of the objects to be tracked in the current frame outdoor monitoring image to corresponding geographic positions in the unmanned aerial vehicle navigation map, thereby determining the geographic positions of the objects to be tracked and tracking the objects to be tracked.
By mapping positions in the image to geographic positions on the map, the unmanned aerial vehicle can be accurately helped to determine the geographic positions of the objects to be tracked.
For example, following the foregoing, fig. 6 shows a schematic diagram of unmanned aerial vehicle tracking. The camera 602 of the outdoor image processing device 601 sends the captured outdoor monitoring image to the outdoor image processing device 601. After recognizing that there are multiple targets to be tracked in the image, the outdoor image processing device 601 tracks them according to the set tracking algorithm to obtain the position of each tracked target in the image, such as its image coordinates. The outdoor image processing device 601 then maps these coordinates into the aerial map coordinate system of the corresponding unmanned aerial vehicles and sends the aerial map coordinates to the plurality of unmanned aerial vehicles 603 that have been found. The unmanned aerial vehicles 603 determine the geographic positions of the tracked targets from the aerial map coordinates and perform aerial tracking of them, so that tracking can continue even if a tracked target leaves the monitoring area.
It should be noted that, for the parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in the foregoing multi-target tracking method, and details are not repeated herein.
In the field of video image processing, and in particular in live-video image processing scenarios, the received live video needs to be delivered quickly to the user side to ensure a good viewing experience and smooth playback. However, when processing the images of a live video, the characteristics of live broadcasting require the video server to process the images quickly so as not to degrade the user's viewing experience. Accordingly, the present application also provides a video processing method.
Fig. 7A is a flowchart illustrating a video processing method according to another exemplary embodiment of the present application. The method 700A provided in the embodiment of the present application is executed by a video server, and the method 700A includes the following steps:
701: at least two tracking modes are provided and each tracking mode corresponds to one tracking quality.
702: the method comprises the steps that a current frame live video image is obtained through a video live broadcast end, and the current frame live video image comprises a plurality of display objects to be tracked.
703: and determining the tracking modes of the plurality of to-be-tracked display objects according to the tracking quality of the plurality of to-be-tracked display objects and tracking.
704: Perform image processing on the plurality of display objects to be tracked according to their positions in the live video image in the tracking result, and display the processed live video image.
It should be noted that, since steps 701-703 are similar to the specific implementation process of the multi-target tracking method in the foregoing, detailed description is omitted here. For example, depending on the shooting scene of the video image, the display objects may be of various kinds, such as goods, articles, and the like.
Specifically, the at least two tracking modes include a first tracking mode corresponding to the first tracking quality and a second tracking mode corresponding to the second tracking quality; the method for tracking the display objects to be tracked comprises the following steps of determining the tracking modes of the display objects to be tracked according to the tracking quality of the display objects to be tracked, wherein the tracking modes comprise: screening out target display objects which do not meet the tracking quality requirement from the plurality of display objects to be tracked; and tracking the screened target display object in a first tracking mode, and tracking other display objects to be tracked in a second tracking mode. The first tracking quality is better than the second tracking quality.
It should be noted that, since the specific implementation process for selecting the target display object is similar to the specific implementation process for selecting the target object in the foregoing, detailed description is omitted here.
In order to more accurately select a target display object, the first tracking mode corresponds to a first tracking number; the method comprises the following steps of screening out target display objects which do not meet the tracking quality requirement from a plurality of display objects to be tracked, wherein the method comprises the following steps: and screening out a first tracking number of target display objects which do not meet the tracking quality requirement from the plurality of display objects to be tracked.
It should be noted that, since the specific implementation process for selecting the target display object is similar to the specific implementation process for selecting the target object in the foregoing, detailed description is omitted here.
In order to ensure that the tracking algorithm saves running resources and tracking time, the first tracking number is determined according to preset running resources and a preset tracking time of the execution device.
It should be noted that, since the specific implementation process of the determined number is similar to the specific implementation process of the determined number in the foregoing, detailed description is omitted here.
Step 704 is described in detail below:
704: Perform image processing on the plurality of display objects to be tracked according to their positions in the live video image in the tracking result, and display the processed live video image.
For example, as described above, fig. 7B shows a schematic diagram of video processing. The computer 706 may upload a live video to the video server 707 in response to a user's upload operation. According to the live-broadcast setting requirements, the video server 707 identifies a plurality of display objects to be processed from the images, such as display object 1 "toy", display object 2 "beverage", and display object 3 "watch", and tracks them in the video images according to the set tracking algorithm to determine their positions in the images, such as image coordinates. The video server 707 then deletes the image of display object 3 or applies mosaic processing to it according to the coordinates, and sends the video containing the processed images to the live-broadcast mobile phone for playback of the live video.
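The mosaic step can be sketched as block-averaging the tracked display object's bounding box. The box coordinates and block size below are illustrative assumptions; a production pipeline would take the box from the tracking result.

```python
import numpy as np

def mosaic(image, x0, y0, x1, y1, block=8):
    """Pixelate image[y0:y1, x0:x1] in place by replacing each
    block x block tile with its average value."""
    region = image[y0:y1, x0:x1]  # a view: edits write through to `image`
    h, w = region.shape[:2]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            tile = region[by:by + block, bx:bx + block]
            tile[...] = tile.mean(axis=(0, 1), keepdims=True)
    return image

# Illustrative single-channel frame; a tracked box covers its top-left quarter.
frame = np.arange(32 * 32, dtype=np.float64).reshape(32, 32)
mosaic(frame, 0, 0, 16, 16, block=8)
# Each 8x8 tile inside the box is now flat; pixels outside are untouched.
```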
It should be noted that the video server 707 may also obtain images of a to-be-processed video from a mobile phone; the to-be-processed video may be used for live broadcast and, after being processed by the video server 707, is sent to a terminal, such as a mobile phone or a computer, for live viewing.
It should be noted that, for the parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in the foregoing multi-target tracking method, and details are not repeated herein.
Fig. 8 is a schematic structural framework diagram of a multi-target tracking device according to an exemplary embodiment of the present application. The apparatus 800 may be applied to a first device, for example, an image tracking device disposed on a road, and the apparatus 800 includes a providing module 801, an obtaining module 802, and a determining module 803, and the functions of the respective modules are described in detail below:
A providing module 801, configured to provide at least two tracking modes, each tracking mode corresponding to one tracking quality.
An obtaining module 802, configured to acquire a current frame image, where the current frame image includes a plurality of objects to be tracked.
The determining module 803 is configured to determine a tracking manner of the multiple objects to be tracked according to the tracking quality of the multiple objects to be tracked, and perform tracking.
Specifically, the at least two tracking modes include a first tracking mode corresponding to the first tracking quality and a second tracking mode corresponding to the second tracking quality; the determining module 803 includes: the screening unit is used for screening out target objects which do not meet the tracking quality requirement from the plurality of objects to be tracked; and the tracking unit is used for tracking the screened target object in a first tracking mode and tracking other objects to be tracked in a second tracking mode.
In particular, the first tracking quality is better than the second tracking quality.
Specifically, the first tracking mode corresponds to a first tracking number; the screening unit is used for screening out a first tracking number of target objects which do not meet the requirement of tracking quality from a plurality of objects to be tracked.
Specifically, the first tracking number is determined according to preset running resources and a preset tracking time of the execution device.
Specifically, the second tracking manner corresponds to a second tracking number, the sum of the first tracking number and the second tracking number is greater than or equal to the number of the plurality of objects to be tracked, and the second tracking number is determined according to a preset running resource and a preset tracking time of the execution device.
Specifically, the tracking quality of the plurality of objects to be tracked is determined from their tracking in the images preceding the current frame image.
In addition, the apparatus 800 further comprises: the statistical module is used for counting the number of images which are used for tracking the plurality of objects to be tracked respectively in the images before the current image by using a second tracking mode; the determining module 803 is further configured to determine the tracking quality of the plurality of objects to be tracked according to the number of the images.
In addition, in the case that the number of images is the same, the determining module 803 is further configured to determine the tracking quality of the plurality of objects to be tracked according to the number of images and the size of the objects to be tracked in the current frame image.
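The tie-breaking rule above can be sketched as a two-key sort: order objects first by how many preceding frames used the cheap second mode, then by object size in the current frame. Treating "size" as bounding-box area, and treating larger boxes as lower quality on a tie, are both illustrative assumptions, as the text does not fix the direction of the size comparison.

```python
def tracking_quality_order(frame_counts, box_areas):
    """Return object ids from lowest to highest tracking quality:
    more cheaply-tracked frames first, larger box breaking ties."""
    return sorted(frame_counts,
                  key=lambda o: (-frame_counts[o], -box_areas[o]))

order = tracking_quality_order(
    frame_counts={"obj_a": 4, "obj_b": 4, "obj_c": 2},
    box_areas={"obj_a": 900, "obj_b": 1500, "obj_c": 400})
# obj_a and obj_b tie on frame count; obj_b's larger box ranks it lower quality,
# so it would be upgraded to the first tracking mode before obj_a.
```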
In addition, the determining module 803 is further configured to determine, after the tracking manner is updated, the first tracking manner and the second tracking manner according to the updated tracking quality corresponding to each tracking manner.
In addition, the apparatus 800 further comprises: and the tracking module is used for tracking each image to be tracked in the first frame image directly through an image recognition algorithm, wherein the current frame image is a first frame image of a plurality of images.
In addition, the tracking module is also used for determining whether the current frame image is an image to be corrected or not according to preset time or preset image quantity; if yes, each image to be tracked in the images to be corrected is directly tracked through an image recognition algorithm.
In addition, the determining module 803 is further configured to determine and obtain positions of a plurality of objects to be tracked in the current frame image.
Specifically, the first tracking mode includes a tracking algorithm for image feature prediction.
Specifically, the tracking mode includes a tracking algorithm for spatial feature prediction.
Fig. 9 is a schematic structural framework diagram of an update apparatus of a tracking manner according to an exemplary embodiment of the present application. The apparatus 900 may be applied to an image processing apparatus, a server, etc. provided on the road. The apparatus 900 includes an obtaining module 901, a selecting module 902, and a providing module 903, and the functions of the modules are described in detail below:
an obtaining module 901, configured to obtain the tracking modes to be added, and update the original tracking modes, where each updated tracking mode corresponds to one tracking quality.
A selecting module 902, configured to select at least two tracking manners from the updated tracking manners according to the tracking quality.
A providing module 903, configured to provide at least two tracking manners under the condition that a current frame image is obtained and a plurality of objects to be tracked in the current frame image are tracked, where each tracking manner corresponds to one tracking quality.
Specifically, according to the tracking quality, at least two tracking modes are selected from the updated tracking modes, including: selecting a tracking mode with the highest tracking quality as a first tracking mode; and selecting the tracking mode with lower tracking quality as the second tracking mode.
In addition, the apparatus 900 further comprises: and the deleting module is used for deleting the unselected tracking modes from the updated tracking modes.
It should be noted that, for the parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in the multi-target tracking apparatus, and further description is omitted here.
Fig. 10 is a schematic structural framework diagram of an automatic driving device of a vehicle according to still another exemplary embodiment of the present application. The apparatus 1000 can be applied to an image processing apparatus and a server, etc. provided on the road; the apparatus 1000 comprises: the providing module 1001, the obtaining module 1002, and the determining module 1003 are described in detail below with respect to functions of the respective modules:
A providing module 1001, configured to provide at least two tracking modes, each tracking mode corresponding to one tracking quality.
The acquiring module 1002 is configured to acquire a current frame image of a road condition vehicle through an image acquisition device, where the current frame image includes a plurality of vehicles to be tracked.
The determining module 1003 is configured to determine a tracking manner of the multiple vehicles to be tracked according to the tracking quality of the multiple vehicles to be tracked and perform tracking.
The determining module 1003 is configured to determine the geographic position of a vehicle near a vehicle traveling on the road according to the positions of the vehicles to be tracked in the current frame image in the tracking result.
Specifically, the determining module 1003 is configured to map the positions of the vehicles to be tracked in the current frame image to corresponding geographic positions in a vehicle navigation map, so as to determine the geographic positions of the vehicles in the vicinity of the vehicles running on the road.
Specifically, the at least two tracking modes include a first tracking mode corresponding to the first tracking quality and a second tracking mode corresponding to the second tracking quality; the determining module 1003 includes: the screening unit is used for screening target vehicles which do not meet the tracking quality requirement from a plurality of vehicles to be tracked; and the tracking unit is used for tracking the screened target vehicle in a first tracking mode and tracking other vehicles to be tracked in a second tracking mode.
In particular, the first tracking quality is better than the second tracking quality.
Specifically, the first tracking mode corresponds to a first tracking number; the screening unit is configured to screen out a first tracking number of target vehicles that do not meet the tracking quality requirement from the plurality of vehicles to be tracked.
Specifically, the first tracking number is determined according to preset running resources of the execution device and a preset tracking time.
It should be noted that, for the parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in the multi-target tracking apparatus, and further description is omitted here.
Fig. 11 is a schematic structural framework diagram of a drone tracking apparatus according to another exemplary embodiment of the present application. The apparatus 1100 may be applied to an image processing device or a server that can be installed outdoors. The apparatus 1100 comprises a providing module 1101, an acquiring module 1102, and a determining module 1103; the functions of the respective modules are described in detail as follows:
The providing module 1101 is configured to provide at least two tracking modes, each tracking mode corresponding to one tracking quality.
The acquiring module 1102 is configured to acquire a current frame outdoor monitoring image through an image acquisition device, where the current frame outdoor monitoring image includes a plurality of objects to be tracked.
The determining module 1103 is configured to determine the tracking modes of the plurality of objects to be tracked according to the tracking qualities of the plurality of objects to be tracked, and to perform tracking.
The determining module 1103 is further configured to track the geographic positions of the plurality of objects to be tracked according to the positions of the plurality of objects to be tracked in the outdoor monitoring image in the tracking result.
Specifically, the determining module 1103 is configured to map the positions of the multiple objects to be tracked in the current frame outdoor monitoring image to corresponding geographic positions in the unmanned aerial vehicle navigation map, so as to determine the geographic positions where the multiple objects to be tracked appear and track the multiple objects to be tracked.
Specifically, the at least two tracking modes include a first tracking mode corresponding to the first tracking quality and a second tracking mode corresponding to the second tracking quality; the determining module 1103 includes a screening unit, configured to screen out, from the multiple objects to be tracked, a target object that does not meet the tracking quality requirement; and the tracking unit is used for tracking the screened target object in a first tracking mode and tracking other objects to be tracked in a second tracking mode.
In particular, the first tracking quality is better than the second tracking quality.
Specifically, the first tracking mode corresponds to a first tracking number; the screening unit is used for screening out a first tracking number of target objects which do not meet the requirement of tracking quality from a plurality of objects to be tracked.
Specifically, the first tracking number is determined according to preset running resources of the execution device and a preset tracking time.
It should be noted that, for the parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in the multi-target tracking apparatus, and further description is omitted here.
Fig. 12 is a schematic structural framework diagram of a video processing apparatus according to still another exemplary embodiment of the present application. The apparatus 1200 may be applied to a video server. The apparatus 1200 comprises a providing module 1201, an obtaining module 1202, a determining module 1203, and a processing module 1204; the functions of the respective modules are described in detail as follows:
The providing module 1201 is configured to provide at least two tracking modes, each tracking mode corresponding to one tracking quality.
The obtaining module 1202 is configured to obtain a current frame live video image through a video live end, where the current frame live video image includes a plurality of display objects to be tracked.
The determining module 1203 is configured to determine a tracking manner of the multiple to-be-tracked display objects according to the tracking quality of the multiple to-be-tracked display objects, and perform tracking.
And the processing module 1204 is configured to perform image processing on the multiple display objects to be tracked according to positions of the multiple display objects to be tracked in the live video image in the tracking result, and display the processed live video image.
Specifically, the at least two tracking modes include a first tracking mode corresponding to the first tracking quality and a second tracking mode corresponding to the second tracking quality; the determining module 1203 includes: the screening unit is used for screening out target display objects which do not meet the tracking quality requirement from the plurality of display objects to be tracked; and the tracking unit is used for tracking the screened target display object in a first tracking mode and tracking other display objects to be tracked in a second tracking mode.
In particular, the first tracking quality is better than the second tracking quality.
Specifically, the first tracking mode corresponds to a first tracking number; the screening unit is used for screening a first tracking number of target display objects which do not meet the tracking quality requirement from a plurality of display objects to be tracked.
Specifically, the first tracking number is determined according to preset running resources of the execution device and a preset tracking time.
It should be noted that, for the parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in the multi-target tracking apparatus, and further description is omitted here.
Having described the internal functions and structure of the tracking device 800 shown in fig. 8, in one possible design, the structure of the tracking device 800 may be implemented as an electronic device, such as an image processing device or a server disposed on a road. As shown in fig. 13, the device 1300 may include: a memory 1301 and a processor 1302;
a memory 1301 for storing a computer program;
a processor 1302 for executing a computer program for: providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality; acquiring a current frame image, wherein the current frame image comprises a plurality of objects to be tracked; and determining the tracking modes of the plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked and tracking.
Specifically, the at least two tracking modes include a first tracking mode corresponding to the first tracking quality and a second tracking mode corresponding to the second tracking quality; the processor 1302 is specifically configured to screen out a target object that does not meet the tracking quality requirement from a plurality of objects to be tracked; and tracking the screened target object in a first tracking mode, and tracking other objects to be tracked in a second tracking mode.
In particular, the first tracking quality is better than the second tracking quality.
Specifically, the first tracking mode corresponds to a first tracking number; the processor 1302 is specifically configured to screen out, from a plurality of objects to be tracked, a first tracking number of target objects that do not meet the tracking quality requirement.
Specifically, the first tracking number is determined according to preset running resources of the execution device and a preset tracking time.
Specifically, the second tracking manner corresponds to a second tracking number, the sum of the first tracking number and the second tracking number is greater than or equal to the number of the plurality of objects to be tracked, and the second tracking number is determined according to a preset running resource and a preset tracking time of the execution device.
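The dispatch the processor 1302 performs under these budgets can be sketched as follows. This is a minimal illustrative sketch, not part of the claimed embodiments: the function name, the [0, 1] quality scale, and the 0.5 quality threshold are all assumptions made for the example.

```python
def assign_tracking_modes(objects, quality, first_budget, threshold=0.5):
    """Split object ids into (first_mode, second_mode) lists.

    objects      -- object ids present in the current frame image
    quality      -- dict: object id -> tracking quality in [0, 1] (assumed scale)
    first_budget -- the first tracking number (capacity of the first mode)
    """
    # Objects that do not meet the tracking-quality requirement, worst first,
    # so the limited first-mode budget goes to the neediest objects.
    candidates = sorted(
        (oid for oid in objects if quality.get(oid, 0.0) < threshold),
        key=lambda oid: quality.get(oid, 0.0),
    )
    first_mode = candidates[:first_budget]
    chosen = set(first_mode)
    second_mode = [oid for oid in objects if oid not in chosen]
    return first_mode, second_mode


quality = {"a": 0.9, "b": 0.2, "c": 0.4, "d": 0.8}
first, second = assign_tracking_modes(["a", "b", "c", "d"], quality, first_budget=1)
```

With a budget of one, only the worst object ("b") receives the higher-quality first mode; every other object, including the below-threshold "c", falls back to the cheaper second mode, consistent with the sum of the two tracking numbers covering all objects.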
Specifically, the tracking quality of the plurality of objects to be tracked corresponds to the tracking quality of the plurality of objects to be tracked in the images preceding the current frame image.
Further, processor 1302 is further configured to: counting the number of images which are used for tracking a plurality of objects to be tracked respectively in a second tracking mode in the images before the current image; and determining the tracking quality of a plurality of objects to be tracked according to the number of the images.
Further, in the case that the number of images is the same, the processor 1302 is further configured to: and determining the tracking quality of the plurality of objects to be tracked according to the number of the images and the size of the objects to be tracked in the current frame image.
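The quality rule just described, where tracking quality degrades with the number of preceding images tracked by the second tracking mode and ties are broken by object size in the current frame image, might be expressed as the following hypothetical ranking helper. The names and the smaller-objects-rank-worse tie-break direction are assumptions of this sketch:

```python
def tracking_quality_rank(second_mode_counts, sizes):
    """Rank object ids worst-first for re-detection: more preceding images
    tracked with the cheap second mode means lower tracking quality; ties are
    broken by object size in the current frame (smaller assumed worse, since
    small objects tend to drift faster)."""
    return sorted(second_mode_counts,
                  key=lambda oid: (-second_mode_counts[oid], sizes[oid]))


counts = {"a": 3, "b": 5, "c": 5}     # images tracked with the second mode
sizes = {"a": 100, "b": 40, "c": 90}  # object area in the current frame image
ranked = tracking_quality_rank(counts, sizes)
```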
Further, processor 1302 is further configured to: and after the tracking modes are updated, determining a first tracking mode and a second tracking mode according to the updated tracking quality corresponding to each tracking mode.
Further, the processor 1302 is further configured to: when the current frame image is a first frame image of a plurality of images, track each object to be tracked in the first frame image directly through an image recognition algorithm.
Further, the processor 1302 is further configured to: determine whether the current frame image is an image to be corrected according to a preset time or a preset number of images; and if so, track each object to be tracked in the image to be corrected directly through an image recognition algorithm.
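One hypothetical scheduling of this correction step (re-running the image recognition algorithm on the first frame and on every image to be corrected, here chosen by a fixed frame period standing in for the "preset number of images") is:

```python
def is_correction_frame(frame_index, period=30):
    """Treat every `period`-th frame as an image to be corrected; the fixed
    period is an assumption standing in for the preset number of images."""
    return frame_index % period == 0


def process_frame(frame_index, detect, track):
    """Run the full image recognition algorithm (`detect`) on the first frame
    and on correction frames; otherwise run the cheaper trackers (`track`)."""
    if frame_index == 0 or is_correction_frame(frame_index):
        return detect()
    return track()
```

Periodic re-detection bounds how far tracker drift can accumulate before every object is re-localized from scratch.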
Further, processor 1302 is further configured to: and determining and acquiring the positions of a plurality of objects to be tracked in the current frame image.
Specifically, the first tracking mode includes a tracking algorithm for image feature prediction.
Specifically, the second tracking mode includes a tracking algorithm for spatial feature prediction.
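The embodiments do not prescribe a concrete spatial-feature-prediction algorithm. As one stand-in example (a deliberate simplification of, e.g., a Kalman filter with a constant-velocity motion model), a predictor can extrapolate an object's next position from its last two observed positions:

```python
class ConstantVelocityPredictor:
    """Extrapolate an object's next center from its last two observed centers.
    A deliberately minimal stand-in for a spatial-feature predictor; real
    systems would typically use a Kalman filter or similar."""

    def __init__(self):
        self.prev = None
        self.curr = None

    def update(self, center):
        # Record a newly observed (x, y) center.
        self.prev, self.curr = self.curr, center

    def predict(self):
        # With fewer than two observations, the best guess is the last one.
        if self.prev is None:
            return self.curr
        (px, py), (cx, cy) = self.prev, self.curr
        return (2 * cx - px, 2 * cy - py)


p = ConstantVelocityPredictor()
p.update((10, 10))
p.update((12, 14))
pred = p.predict()  # the observed velocity (2, 4) applied one more step
```

Such spatial prediction is much cheaper than image-feature matching, which is why it suits the lower-quality second tracking mode.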
Additionally, embodiments of the present invention provide a computer storage medium storing a computer program that, when executed by one or more processors, causes the one or more processors to implement the steps of the multi-target tracking method in the embodiment of the method 100 of fig. 1.
Having described the internal functions and structure of the updating apparatus 900 shown in fig. 9, in one possible design, the structure of the updating apparatus 900 may be implemented as an electronic device, such as an image processing device or a server disposed on a road, as shown in fig. 14, and the device 1400 may include: a memory 1401 and a processor 1402;
a memory 1401 for storing a computer program;
a processor 1402 for executing a computer program for: acquiring tracking modes to be added, updating the original tracking modes, wherein each updated tracking mode corresponds to one tracking quality; selecting at least two tracking modes from the updated tracking modes according to the tracking quality; under the condition of acquiring a current frame image and tracking a plurality of objects to be tracked in the current frame image, at least two tracking modes are provided, and each tracking mode corresponds to one tracking quality.
Specifically, the processor 1402 is specifically configured to select the tracking mode with the highest tracking quality as the first tracking mode, and to select a tracking mode with lower tracking quality as the second tracking mode.
Further, the processor 1402 is further configured to delete the unselected tracking modes from the updated tracking modes.
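The update flow handled by the processor 1402, merging newly added tracking modes, selecting the best two by tracking quality, and deleting the unselected modes, can be sketched as follows. The mode names and the quality scale are illustrative assumptions:

```python
def update_tracking_modes(registry, added, keep=2):
    """Merge newly added tracking modes (name -> tracking quality) into the
    registry, keep the `keep` best by quality, and delete the rest. Returns
    the first tracking mode (highest quality), the second tracking mode, and
    the pruned registry."""
    merged = {**registry, **added}
    ranked = sorted(merged, key=merged.get, reverse=True)
    selected = ranked[:keep]
    return selected[0], selected[1], {name: merged[name] for name in selected}


first_mode, second_mode, registry = update_tracking_modes(
    {"kcf": 0.7, "kalman": 0.4},  # original tracking modes (names assumed)
    {"siamese": 0.9},             # tracking mode to be added
)
```

After the update, the newly added high-quality mode displaces the previous first mode, and the lowest-quality mode is dropped from the registry.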
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in the electronic device 1300, and details are not described herein again.
Additionally, embodiments of the present invention provide a computer storage medium storing a computer program that, when executed by one or more processors, causes the one or more processors to implement the steps of the update method for tracking modes in the embodiment of the method 300 of fig. 3.
Having described the internal functions and structure of the automatic driving apparatus 1000 shown in fig. 10, in one possible design, the structure of the apparatus 1000 may be implemented as an electronic device, such as an image processing device or a server disposed on a road. As shown in fig. 15, the device 1500 may include: a memory 1501 and a processor 1502;
a memory 1501 for storing a computer program;
a processor 1502 for executing a computer program for: providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality; acquiring a current frame image of a road condition vehicle through an image acquisition device, wherein the current frame image comprises a plurality of vehicles to be tracked; determining the tracking modes of a plurality of vehicles to be tracked according to the tracking quality of the plurality of vehicles to be tracked and tracking; and determining the geographical positions of vehicles nearby the vehicles running on the road according to the positions of the vehicles to be tracked in the current frame image in the tracking result.
Specifically, the processor 1502 is specifically configured to map the positions of the plurality of vehicles to be tracked in the current frame image to corresponding geographic positions in a vehicle navigation map, so as to determine the geographic positions of vehicles in the vicinity of the vehicle running on the road.
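Such a pixel-to-map mapping is commonly realized with a planar homography when the camera views a roughly planar road. That choice, and the pre-calibrated 3x3 row-major matrix H below, are assumptions of this sketch rather than something the embodiment specifies:

```python
def apply_homography(H, point):
    """Map an image position (u, v) to map-plane coordinates using a 3x3
    homography H given as row-major nested lists."""
    u, v = point
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)


# Toy calibration: scale by 2 and translate. A real H would be estimated by
# matching known road landmarks in the image to their map coordinates.
H = [[2, 0, 1], [0, 2, 3], [0, 0, 1]]
mapped = apply_homography(H, (5, 5))
```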
Specifically, the at least two tracking modes include a first tracking mode corresponding to the first tracking quality and a second tracking mode corresponding to the second tracking quality; the processor 1502 is specifically configured to: screening out target vehicles which do not meet the tracking quality requirement from a plurality of vehicles to be tracked; and tracking the screened target vehicle in a first tracking mode, and tracking other vehicles to be tracked in a second tracking mode.
In particular, the first tracking quality is better than the second tracking quality.
Specifically, the first tracking mode corresponds to a first tracking number; the processor 1502 is specifically configured to: and screening a first tracking number of target vehicles which do not meet the tracking quality requirement from the plurality of vehicles to be tracked.
Specifically, the first tracking number is determined according to preset running resources of the execution device and a preset tracking time.
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in the electronic device 1300, and details are not described herein again.
Additionally, embodiments of the present invention provide a computer storage medium storing a computer program that, when executed by one or more processors, causes the one or more processors to implement the steps of the method for automatic driving of a vehicle in the embodiment of the method 400A of fig. 4.
Having described the internal functions and structure of the tracking device 1100 shown in fig. 11, in one possible design, the structure of the tracking device 1100 may be implemented as an electronic device, such as an image processing device or a server that can be installed outdoors. As shown in fig. 16, the device 1600 may include: a memory 1601 and a processor 1602;
a memory 1601 for storing a computer program;
a processor 1602 for executing a computer program for: providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality; acquiring a current frame outdoor monitoring image through an image acquisition device, wherein the current frame outdoor monitoring image comprises a plurality of objects to be tracked; determining the tracking modes of the plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked and tracking; and tracking the geographic positions of the plurality of objects to be tracked according to the positions of the plurality of objects to be tracked in the outdoor monitoring image in the tracking result.
Specifically, the processor 1602 is specifically configured to map the positions of the plurality of objects to be tracked in the current frame outdoor monitoring image to corresponding geographic positions in the drone navigation map, thereby determining the geographic positions where the plurality of objects to be tracked appear and tracking them.
Specifically, the at least two tracking modes include a first tracking mode corresponding to the first tracking quality and a second tracking mode corresponding to the second tracking quality; the processor 1602 is specifically configured to: screening out target objects which do not meet the tracking quality requirement from a plurality of objects to be tracked; and tracking the screened target object in a first tracking mode, and tracking other objects to be tracked in a second tracking mode.
In particular, the first tracking quality is better than the second tracking quality.
Specifically, the first tracking mode corresponds to a first tracking number; the processor 1602 is specifically configured to: and screening out a first tracking number of target objects which do not meet the tracking quality requirement from the plurality of objects to be tracked.
Specifically, the first tracking number is determined according to preset running resources of the execution device and a preset tracking time.
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in the electronic device 1300, and details are not described herein again.
Additionally, embodiments of the present invention provide a computer storage medium storing a computer program that, when executed by one or more processors, causes the one or more processors to implement the steps of the tracking method for drones in the embodiment of the method 500 of fig. 5.
Having described the internal functions and structure of the video processing apparatus 1200 shown in fig. 12, in one possible design, the structure of the apparatus 1200 may be implemented as an electronic device, such as a video server. As shown in fig. 17, the device 1700 may include: a memory 1701 and a processor 1702;
a memory 1701 for storing a computer program;
a processor 1702 for executing a computer program for: providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality; acquiring a current frame live video image through a video live end, wherein the current frame live video image comprises a plurality of display objects to be tracked; determining the tracking modes of the plurality of display objects to be tracked according to the tracking quality of the plurality of display objects to be tracked and tracking; and performing image processing on the plurality of display objects to be tracked according to the positions of the plurality of display objects to be tracked in the live video image in the tracking result, and displaying the processed live video image.
Specifically, the at least two tracking modes include a first tracking mode corresponding to the first tracking quality and a second tracking mode corresponding to the second tracking quality; the processor 1702 is specifically configured to: screening out target display objects which do not meet the tracking quality requirement from the plurality of display objects to be tracked; and tracking the screened target display object in a first tracking mode, and tracking other display objects to be tracked in a second tracking mode.
In particular, the first tracking quality is better than the second tracking quality.
Specifically, the first tracking mode corresponds to a first tracking number; the processor 1702 is specifically configured to: and screening out a first tracking number of target display objects which do not meet the tracking quality requirement from the plurality of display objects to be tracked.
Specifically, the first tracking number is determined according to preset running resources of the execution device and a preset tracking time.
It should be noted that, for parts not described in detail in this embodiment, reference may be made to the related description of the embodiment shown in the electronic device 1300, and details are not described herein again.
Additionally, embodiments of the present invention provide a computer storage medium storing a computer program that, when executed by one or more processors, causes the one or more processors to implement the steps of the method for processing video in the embodiment of the method 700A of fig. 7A.
In addition, some of the flows described in the above embodiments and drawings include a plurality of operations in a specific order, but it should be clearly understood that these operations may be executed out of the order presented herein or in parallel. Sequence numbers such as 201, 202, and 203 are merely used to distinguish different operations and do not themselves represent any execution order. Additionally, the flows may include more or fewer operations, and these operations may be executed sequentially or in parallel. It should also be noted that the descriptions of "first", "second", and the like herein are used to distinguish different messages, devices, modules, and so on; they do not represent a sequential order, nor do they require that "first" and "second" be of different types.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by means of software plus a necessary general hardware platform, and of course can also be implemented by a combination of hardware and software. Based on this understanding, the technical solutions above, in essence or in the part contributing to the prior art, may be embodied in the form of a computer program product, which may be embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable multimedia data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable multimedia data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable multimedia data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable multimedia data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (46)

1. A multi-target tracking method is characterized by comprising the following steps:
providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality;
acquiring a current frame image, wherein the current frame image comprises a plurality of objects to be tracked;
and determining the tracking modes of the plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked and tracking.
2. The method of claim 1, wherein the at least two tracking modes comprise a first tracking mode corresponding to a first tracking quality and a second tracking mode corresponding to a second tracking quality;
determining a tracking mode and tracking according to the tracking quality of the plurality of objects to be tracked, wherein the tracking mode comprises the following steps:
screening out target objects which do not meet the tracking quality requirement from the plurality of objects to be tracked;
and tracking the screened target object in the first tracking mode, and tracking other objects to be tracked in the second tracking mode.
3. The method of claim 2, wherein the first tracking quality is better than the second tracking quality.
4. The method of claim 2, wherein the first tracking manner corresponds to a first tracking number;
wherein, the step of screening out the target objects which do not meet the tracking quality requirement from the plurality of objects to be tracked comprises the following steps: and screening out a first tracking number of target objects which do not meet the tracking quality requirement from the plurality of objects to be tracked.
5. The method of claim 4, wherein the first tracking number is determined according to a preset tracking time and preset running resources of an execution device.
6. The method according to claim 2, wherein the second tracking manner corresponds to a second tracking number, a sum of the first tracking number and the second tracking number is greater than or equal to the number of the plurality of objects to be tracked, and the second tracking number is determined according to a preset running resource and a preset tracking time of the execution device.
7. The method of claim 1, wherein the tracking quality of the plurality of objects to be tracked corresponds to the tracking quality of the plurality of objects to be tracked in an image preceding the current image.
8. The method of claim 2, further comprising:
counting the number of images which are used for respectively tracking a plurality of objects to be tracked by using the second tracking mode in the images before the current image;
and determining the tracking quality of the plurality of objects to be tracked according to the number of the images.
9. The method of claim 8, wherein if the number of images is the same, the method further comprises:
and determining the tracking quality of the plurality of objects to be tracked according to the number of the images and the size of the objects to be tracked in the current frame image.
10. The method of claim 2, further comprising:
and after the tracking modes are updated, determining the first tracking mode and the second tracking mode according to the updated tracking quality corresponding to each tracking mode.
11. The method of claim 1, further comprising:
when the current frame image is a first frame image of a plurality of images, tracking each object to be tracked in the first frame image directly through an image recognition algorithm.
12. The method of claim 1, further comprising:
determining whether the current frame image is an image to be corrected or not according to preset time or preset image quantity;
if so, tracking each object to be tracked in the image to be corrected directly through an image recognition algorithm.
13. The method of claim 1, further comprising:
and determining and acquiring the positions of the plurality of objects to be tracked in the current frame image.
14. The method of claim 2, wherein the first tracking mode comprises a tracking algorithm for image feature prediction.
15. The method of claim 2, wherein the second tracking mode comprises a tracking algorithm for spatial feature prediction.
16. A method for updating a tracking mode, comprising:
acquiring tracking modes to be added, updating the original tracking modes, wherein each updated tracking mode corresponds to one tracking quality;
selecting at least two tracking modes from the updated tracking modes according to the tracking quality;
and under the condition of acquiring a current frame image and tracking a plurality of objects to be tracked in the current frame image, providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality.
17. The method of claim 16, wherein selecting at least two tracking modes from the updated tracking modes based on the tracking quality comprises:
selecting the tracking mode with the highest tracking quality as a first tracking mode;
selecting a tracking mode with a lower tracking quality as a second tracking mode.
18. The method of claim 16, further comprising:
deleting the unselected tracking modes from the updated tracking modes.
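A minimal sketch of the update flow of claims 16-18, assuming each tracking mode carries a comparable quality score (all names below are illustrative, not from the patent):

```python
def update_tracking_modes(modes, new_modes):
    """Hypothetical sketch: merge newly added tracking modes into the
    existing pool, select the two highest-quality modes as the first and
    second tracking modes, and delete the unselected ones (claim 18).
    `modes` and `new_modes` map mode name -> tracking quality."""
    pool = {**modes, **new_modes}
    ranked = sorted(pool, key=pool.get, reverse=True)
    first_mode, second_mode = ranked[0], ranked[1]
    # Only the selected modes survive the update.
    selected = {name: pool[name] for name in (first_mode, second_mode)}
    return first_mode, second_mode, selected
```

The pool never grows beyond the selected modes, so adding a stronger tracker later automatically displaces the weakest one.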
19. A method of automatic driving of a vehicle, comprising:
providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality;
acquiring, through an image acquisition device, a current frame image of the road conditions around a vehicle, wherein the current frame image comprises a plurality of vehicles to be tracked;
determining the tracking modes of the plurality of vehicles to be tracked according to the tracking qualities of the plurality of vehicles to be tracked, and tracking the vehicles;
determining the geographical positions of vehicles traveling on the road near the vehicle according to the positions of the plurality of vehicles to be tracked in the current frame image in the tracking result.
20. The method of claim 19, wherein determining the geographical positions of vehicles traveling on the road near the vehicle according to the positions of the plurality of vehicles to be tracked in the current frame image in the tracking result comprises:
mapping the positions of the plurality of vehicles to be tracked in the current frame image to corresponding geographical positions in a vehicle navigation map, so as to determine the geographical positions of vehicles traveling on the road near the vehicle.
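The mapping from image coordinates to navigation-map coordinates in claim 20 could, under a planar-road assumption, be a homography; this sketch assumes a pre-calibrated 3x3 matrix, which the patent does not specify:

```python
import numpy as np

def image_to_geo(point_xy, homography):
    """Hypothetical sketch: project a tracked vehicle's pixel position to
    a geographic (map) position via a planar homography. `homography` is
    an assumed 3x3 calibration matrix; real systems would derive it from
    camera extrinsics or ground-plane calibration."""
    x, y = point_xy
    p = homography @ np.array([x, y, 1.0])  # homogeneous coordinates
    return p[0] / p[2], p[1] / p[2]         # perspective divide
```

With the identity matrix the mapping is a no-op, which makes the projective arithmetic easy to sanity-check.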
21. The method of claim 19, wherein the at least two tracking modes comprise a first tracking mode corresponding to a first tracking quality and a second tracking mode corresponding to a second tracking quality;
wherein determining the tracking modes of the plurality of vehicles to be tracked according to the tracking qualities of the plurality of vehicles to be tracked, and tracking the vehicles, comprises:
screening out target vehicles which do not meet a tracking quality requirement from the plurality of vehicles to be tracked;
tracking the screened target vehicles in the first tracking mode, and tracking the other vehicles to be tracked in the second tracking mode.
22. The method of claim 21, wherein the first tracking quality is better than the second tracking quality.
23. The method of claim 21, wherein the first tracking mode corresponds to a first tracking number;
and screening out target vehicles which do not meet the tracking quality requirement from the plurality of vehicles to be tracked comprises: screening out a first tracking number of target vehicles which do not meet the tracking quality requirement from the plurality of vehicles to be tracked.
24. The method of claim 21, wherein the first tracking number is determined based on a preset tracking time and preset running resources of an execution device.
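One plausible reading of claim 24 (a sketch under stated assumptions, not the patent's definition) derives the first tracking number from a per-frame time budget and the share of device resources available to the tracker; all parameter names are illustrative:

```python
def first_tracking_number(preset_tracking_time_ms, per_object_cost_ms, cpu_share):
    """Hypothetical sketch: bound how many objects the expensive first
    mode may track per frame. The budget is the preset per-frame tracking
    time scaled by the fraction of running resources granted to the
    tracker; at least one object is always admitted."""
    budget = preset_tracking_time_ms * cpu_share
    return max(1, int(budget // per_object_cost_ms))
```

For example, a 33 ms frame budget, a 5 ms per-object cost, and half the CPU yield a first tracking number of 3.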
25. A method for tracking a drone, comprising:
providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality;
acquiring a current frame outdoor monitoring image through an image acquisition device, wherein the current frame outdoor monitoring image comprises a plurality of objects to be tracked;
determining the tracking modes of the plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked, and tracking the objects;
tracking the plurality of objects to be tracked at their geographical positions according to the positions of the plurality of objects to be tracked in the outdoor monitoring image in the tracking result.
26. The method of claim 25, wherein tracking the plurality of objects to be tracked at their geographical positions according to the positions of the plurality of objects to be tracked in the outdoor monitoring image in the tracking result comprises:
mapping the positions of the plurality of objects to be tracked in the current frame outdoor monitoring image to corresponding geographical positions in an unmanned aerial vehicle navigation map, so as to determine the geographical positions of the objects to be tracked and track them.
27. The method of claim 25, wherein the at least two tracking modes comprise a first tracking mode corresponding to a first tracking quality and a second tracking mode corresponding to a second tracking quality;
wherein determining the tracking modes of the plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked, and tracking the objects, comprises:
screening out target objects which do not meet a tracking quality requirement from the plurality of objects to be tracked;
tracking the screened target objects in the first tracking mode, and tracking the other objects to be tracked in the second tracking mode.
28. The method of claim 27, wherein the first tracking quality is better than the second tracking quality.
29. The method of claim 27, wherein the first tracking mode corresponds to a first tracking number;
and screening out target objects which do not meet the tracking quality requirement from the plurality of objects to be tracked comprises: screening out a first tracking number of target objects which do not meet the tracking quality requirement from the plurality of objects to be tracked.
30. The method of claim 27, wherein the first tracking number is determined based on a preset tracking time and preset running resources of an execution device.
31. A method for processing video, comprising:
providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality;
acquiring a current frame live video image through a live video terminal, wherein the current frame live video image comprises a plurality of display objects to be tracked;
determining the tracking modes of the plurality of display objects to be tracked according to the tracking quality of the plurality of display objects to be tracked, and tracking the display objects;
performing image processing on the plurality of display objects to be tracked according to the positions of the plurality of display objects to be tracked in the live video image in the tracking result, and displaying the processed live video image.
32. The method of claim 31, wherein the at least two tracking modes comprise a first tracking mode corresponding to a first tracking quality and a second tracking mode corresponding to a second tracking quality;
wherein determining the tracking modes of the plurality of display objects to be tracked according to the tracking quality of the plurality of display objects to be tracked, and tracking the display objects, comprises:
screening out target display objects which do not meet a tracking quality requirement from the plurality of display objects to be tracked;
tracking the screened target display objects in the first tracking mode, and tracking the other display objects to be tracked in the second tracking mode.
33. The method of claim 32, wherein the first tracking quality is better than the second tracking quality.
34. The method of claim 32, wherein the first tracking mode corresponds to a first tracking number;
and screening out target display objects which do not meet the tracking quality requirement from the plurality of display objects to be tracked comprises: screening out a first tracking number of target display objects which do not meet the tracking quality requirement from the plurality of display objects to be tracked.
35. The method of claim 32, wherein the first tracking number is determined based on a preset tracking time and preset running resources of an execution device.
36. A multi-target tracking system, comprising: an image acquisition device and an image processing device;
wherein the image acquisition device acquires multi-frame images, each frame of which comprises a plurality of objects to be tracked, and sends the multi-frame images to the image processing device;
the image processing device provides at least two tracking modes, wherein each tracking mode corresponds to one tracking quality;
and determines the tracking modes of the plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked and performs tracking.
37. The system of claim 36, wherein the at least two tracking modes include a first tracking mode corresponding to a first tracking quality and a second tracking mode corresponding to a second tracking quality;
the image processing device screens out target objects which do not meet a tracking quality requirement from the plurality of objects to be tracked;
tracks the screened target objects in the first tracking mode, and tracks the other objects to be tracked in the second tracking mode.
38. The system of claim 37, wherein the first tracking quality is better than the second tracking quality.
39. The system of claim 37, wherein the first tracking mode corresponds to a first tracking number;
and the image processing device screens out a first tracking number of target objects which do not meet the tracking quality requirement from the plurality of objects to be tracked.
40. The system of claim 37, wherein the first tracking number and the second tracking number are determined based on preset running resources and a preset tracking time of an execution device.
41. A computing device comprising a memory and a processor;
the memory for storing a computer program;
the processor to execute the computer program to:
providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality;
acquiring a current frame image, wherein the current frame image comprises a plurality of objects to be tracked;
and determining the tracking modes of the plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked, and tracking the objects.
42. A computing device comprising a memory and a processor;
the memory for storing a computer program;
the processor to execute the computer program to:
acquiring a tracking mode to be added and updating the original tracking modes, wherein each updated tracking mode corresponds to one tracking quality;
selecting at least two tracking modes from the updated tracking modes according to the tracking quality;
when a current frame image is acquired and a plurality of objects to be tracked in the current frame image are tracked, providing the selected at least two tracking modes, wherein each tracking mode corresponds to one tracking quality.
43. A computing device comprising a memory and a processor;
the memory for storing a computer program;
the processor to execute the computer program to:
providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality;
acquiring, through an image acquisition device, a current frame image of the road conditions around a vehicle, wherein the current frame image comprises a plurality of vehicles to be tracked;
determining the tracking modes of the plurality of vehicles to be tracked according to the tracking qualities of the plurality of vehicles to be tracked, and tracking the vehicles;
determining the geographical positions of vehicles traveling on the road near the vehicle according to the positions of the plurality of vehicles to be tracked in the current frame image in the tracking result.
44. A computing device comprising a memory and a processor;
the memory for storing a computer program;
the processor to execute the computer program to:
providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality;
acquiring a current frame outdoor monitoring image through an image acquisition device, wherein the current frame outdoor monitoring image comprises a plurality of objects to be tracked;
determining the tracking modes of the plurality of objects to be tracked according to the tracking quality of the plurality of objects to be tracked, and tracking the objects;
tracking the plurality of objects to be tracked at their geographical positions according to the positions of the plurality of objects to be tracked in the outdoor monitoring image in the tracking result.
45. A computing device comprising a memory and a processor;
the memory for storing a computer program;
the processor to execute the computer program to:
providing at least two tracking modes, wherein each tracking mode corresponds to one tracking quality;
acquiring a current frame live video image through a live video terminal, wherein the current frame live video image comprises a plurality of display objects to be tracked;
determining the tracking modes of the plurality of display objects to be tracked according to the tracking quality of the plurality of display objects to be tracked, and tracking the display objects;
performing image processing on the plurality of display objects to be tracked according to the positions of the plurality of display objects to be tracked in the live video image in the tracking result, and displaying the processed live video image.
46. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by one or more processors, causes the one or more processors to perform the steps of the method of any one of claims 1-35.
CN201910945830.4A 2019-09-30 2019-09-30 Multi-target tracking method, system, computing device and storage medium Pending CN112581497A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910945830.4A CN112581497A (en) 2019-09-30 2019-09-30 Multi-target tracking method, system, computing device and storage medium

Publications (1)

Publication Number Publication Date
CN112581497A 2021-03-30

Family

ID=75116943

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910945830.4A Pending CN112581497A (en) 2019-09-30 2019-09-30 Multi-target tracking method, system, computing device and storage medium

Country Status (1)

Country Link
CN (1) CN112581497A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115170615A (en) * 2022-09-02 2022-10-11 环球数科集团有限公司 High-speed visual system based on intelligent camera and target tracking algorithm thereof

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108109107A (en) * 2017-12-18 2018-06-01 北京奇虎科技有限公司 Video data handling procedure and device, computing device
CN109784155A (en) * 2018-12-10 2019-05-21 西安电子科技大学 Visual target tracking method, intelligent robot based on verifying and mechanism for correcting errors

Similar Documents

Publication Publication Date Title
US9104919B2 (en) Multi-cue object association
CN105844256A (en) Panorama video frame image processing method and device
CN110659391A (en) Video detection method and device
US20200336710A1 (en) Systems and methods for image processing
CN110084797B (en) Plane detection method, plane detection device, electronic equipment and storage medium
CN110310301B (en) Method and device for detecting target object
CN111383204A (en) Video image fusion method, fusion device, panoramic monitoring system and storage medium
CN111310727A (en) Object detection method and device, storage medium and electronic device
CN114418861B (en) Camera image splicing processing method and system
CN115134677A (en) Video cover selection method and device, electronic equipment and computer storage medium
CN113191221B (en) Vehicle detection method and device based on panoramic camera and computer storage medium
CN111105351A (en) Video sequence image splicing method and device
CN112581497A (en) Multi-target tracking method, system, computing device and storage medium
CN112818743B (en) Image recognition method and device, electronic equipment and computer storage medium
CN114782496A (en) Object tracking method and device, storage medium and electronic device
CN112184901A (en) Depth map determination method and device
CN111988520A (en) Picture switching method and device, electronic equipment and storage medium
CN111225180B (en) Picture processing method and device
US10223592B2 (en) Method and associated apparatus for performing cooperative counting with aid of multiple cameras
US20200202140A1 (en) Method and device for evaluating images, operating assistance method, and operating device
CN113205144B (en) Model training method and device
US11526966B2 (en) Image processing device, image processing method, and storage medium storing image processing program
CN117575985B (en) Method, device, equipment and medium for supervising casting of automobile parts
JP2019174989A (en) Image compression method and image compression device
WO2021036275A1 (en) Multi-channel video synchronization method, system and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230724

Address after: Room 437, Floor 4, Building 3, No. 969, Wenyi West Road, Wuchang Subdistrict, Yuhang District, Hangzhou City, Zhejiang Province

Applicant after: Wuzhou Online E-Commerce (Beijing) Co.,Ltd.

Address before: Box 847, four, Grand Cayman capital, Cayman Islands, UK

Applicant before: ALIBABA GROUP HOLDING Ltd.