CN110264497B - Method and device for determining tracking duration, storage medium and electronic device


Info

Publication number
CN110264497B
CN110264497B
Authority
CN
China
Prior art keywords
tracking
tracking object
determining
frame image
target area
Prior art date
Legal status
Active
Application number
CN201910502736.1A
Other languages
Chinese (zh)
Other versions
CN110264497A
Inventor
李中振
潘华东
龚磊
彭志蓉
林桥洲
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN201910502736.1A
Publication of CN110264497A (2019-09-20)
Application granted
Publication of CN110264497B (2021-09-17)
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/248 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30232 Surveillance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30242 Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a method and a device for determining a tracking duration, a storage medium and an electronic device. The method includes: determining, in an Nth frame image obtained by video monitoring of a target area, the identifier of a first tracking object located in the target area, the tracking duration of the first tracking object and the shooting time of the Nth frame image; determining, in an Mth frame image obtained by video monitoring of the target area, that the first tracking object does not exist in the target area; determining, in an Oth frame image obtained by video monitoring of the target area, the identifier of a second tracking object appearing in the target area and the shooting time of the Oth frame image; and, when the first tracking object matches the second tracking object, determining the tracking duration of the second tracking object in the Oth frame image as a second duration. The method and the device solve the problem that the tracking duration of an object whose detection is lost is calculated inaccurately, and achieve the effect of accurately determining the tracking duration.

Description

Method and device for determining tracking duration, storage medium and electronic device
Technical Field
The invention relates to the field of computers, in particular to a method and a device for determining a tracking duration, a storage medium and an electronic device.
Background
In the prior art, when an object in a queue is tracked through images, the tracking duration of the tracked object is determined solely from the continuity of tracking.
When a tracking object is no longer detected, whether an object in a later frame is the same tracking object is determined from the overlap ratio of the tracking boxes in consecutive tracking images, or the similarity between a candidate object and the tracking object is compared to determine whether they are the same object, so as to re-locate the tracking object.
As can be seen from the above, the prior art does not accurately calculate the tracking duration of an object whose detection is lost.
No effective solution to this problem has yet been proposed in the related art.
Disclosure of Invention
Embodiments of the invention provide a method and a device for determining a tracking duration, a storage medium and an electronic device, so as to at least solve the problem in the related art that the tracking duration of an object whose detection is lost is calculated inaccurately.
According to an embodiment of the present invention, a method for determining a tracking duration is provided, including: determining, in an Nth frame image obtained by video monitoring of a target area, the identifier of a first tracking object located in the target area, the tracking duration of the first tracking object and the shooting time of the Nth frame image, wherein the tracking duration of the first tracking object is a first duration; determining, in an Mth frame image obtained by video monitoring of the target area, that the first tracking object does not exist in the target area; determining, in an Oth frame image obtained by video monitoring of the target area, the identifier of a second tracking object appearing in the target area and the shooting time of the Oth frame image, wherein O > M > N, and O, M and N are positive integers; and, when the first tracking object matches the second tracking object, determining the tracking duration of the second tracking object in the Oth frame image as a second duration, wherein the second duration is the sum of the first duration and a shooting time interval, and the shooting time interval is the difference between the shooting time of the Oth frame image and the shooting time of the Nth frame image.
According to another embodiment of the present invention, a tracking duration determining apparatus is provided, including: a first determining module, configured to determine, in an Nth frame image obtained by video monitoring of a target area, the identifier of a first tracking object located in the target area, the tracking duration of the first tracking object and the shooting time of the Nth frame image, wherein the tracking duration of the first tracking object is a first duration; a second determining module, configured to determine, in an Mth frame image obtained by video monitoring of the target area, that the first tracking object does not exist in the target area; a third determining module, configured to determine, in an Oth frame image obtained by video monitoring of the target area, the identifier of a second tracking object appearing in the target area and the shooting time of the Oth frame image, wherein O > M > N, and O, M and N are positive integers; and a fourth determining module, configured to determine, when the first tracking object matches the second tracking object, the tracking duration of the second tracking object in the Oth frame image as a second duration, wherein the second duration is the sum of the first duration and a shooting time interval, and the shooting time interval is the difference between the shooting time of the Oth frame image and the shooting time of the Nth frame image.
According to a further embodiment of the present invention, there is also provided a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, in the Nth frame image obtained by video monitoring of the target area, the identifier of the first tracking object in the target area, the tracking duration of the first tracking object and the shooting time of the Nth frame image are determined. When it is determined that the first tracking object no longer exists in the target area in the Mth frame image, that is, when the first tracking object is no longer detected, and the identifier of a second tracking object and the shooting time of the Oth frame image then appear in the Oth frame image, the sum of the first duration and the shooting time interval can be determined as the tracking duration of the second tracking object if the first tracking object matches the second tracking object, where the shooting time interval is the difference between the shooting time of the Oth frame image and the shooting time of the Nth frame image. The tracking duration can thus be determined accurately when the first tracking object and the second tracking object are the same object. This solves the problem in the related art that the tracking duration of an object whose detection is lost is calculated inaccurately, and achieves the effect of accurately determining the tracking duration.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the invention without limiting the invention. In the drawings:
FIG. 1 is a block diagram of the hardware structure of a mobile terminal running the method for determining a tracking duration according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for determining a tracking duration according to an embodiment of the present invention;
FIG. 3 is a flowchart of a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of the 491st frame of a video;
FIG. 5 is a schematic diagram of the 498th frame of the video;
FIG. 6 is a schematic diagram of the 533rd frame of the video;
FIG. 7 is a schematic diagram of the 573rd frame of the video;
FIG. 8 is a schematic diagram of the 592nd frame of the video;
FIG. 9 is a block diagram of the structure of a tracking duration determination apparatus according to an embodiment of the present invention.
Detailed Description
The invention will be described in detail hereinafter with reference to the accompanying drawings in conjunction with embodiments. It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method provided by the embodiments of the present application may be executed on a mobile terminal, a computer terminal or a similar computing device. Taking execution on a mobile terminal as an example, FIG. 1 is a block diagram of the hardware structure of a mobile terminal running the method for determining a tracking duration according to an embodiment of the present invention. As shown in FIG. 1, the mobile terminal 10 may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microcontroller (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and may optionally also include a transmission device 106 for communication functions and an input/output device 108. Those skilled in the art will understand that the structure shown in FIG. 1 is merely illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal 10 may include more or fewer components than shown in FIG. 1, or have a different configuration.
The memory 104 may be used to store a computer program, for example, a software program and a module of an application software, such as a computer program corresponding to the determination method of the tracking duration in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal 10 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used for receiving or transmitting data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal 10. In one example, the transmission device 106 includes a Network adapter (NIC), which can be connected to other Network devices through a base station so as to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used for communicating with the internet in a wireless manner.
In this embodiment, a method for determining a tracking duration is provided. FIG. 2 is a flowchart of a method for determining a tracking duration according to an embodiment of the present invention. As shown in FIG. 2, the flow includes the following steps:
step S202, determining the identification of a first tracking object in a target area, the tracking duration of the first tracking object and the shooting time of an Nth frame image in the Nth frame image obtained by video monitoring of the target area, wherein the tracking duration of the first tracking object is the first duration;
alternatively, in the present embodiment, the method can be applied to queuing scenarios, including, but not limited to, queuing for buying meals at restaurants, queuing for buying tickets at railway stations, queuing for registering at hospitals, queuing for getting on the railway stations, and the like. The target area may be a queued area, such as an entrance to a train station or the like. The identifier of the first tracked object may be a displacement identifier ID number assigned to the first tracked object, and the identifier may further display a first time length, coordinates of a position where the first tracked object is located, feature information of the first tracked object, and the like. In addition, in the tracking process, the shooting time at which each frame image is shot is displayed in each frame image.
Step S204: in an Mth frame image obtained by video monitoring of the target area, determine that the first tracking object does not exist in the target area;
alternatively, in this embodiment, it may be determined that the first tracking object is not detected after the nth frame image or is not detected after the nth frame image and before the mth frame image. The interval between the shooting time of the Nth frame image and the shooting time of the Mth frame image is a first preset time length, and no first tracking object exists in the target area and no tracking object matched with the first tracking object exists in the time period from the shooting time of the Mth frame image to the shooting time of the Oth frame image and before the second preset time length.
Optionally, the first preset duration and the second preset duration may each be 1 second.
It should be noted that, after the first tracking object is no longer detected in the target area, the identifier of the first tracking object may be stored together with the first duration, the coordinates of the position of the first tracking object, the feature information of the first tracking object and the like, to facilitate subsequent comparison.
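For illustration, the following is a minimal Python sketch of the kind of record such an embodiment might keep for a tracking object that is no longer detected. The field names and types are assumptions made for this sketch, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass
class LostTrackRecord:
    """Hypothetical record kept for a tracking object that is no longer detected."""
    track_id: int                        # identifier of the first tracking object
    tracking_duration: float             # the first duration, in seconds
    last_position: Tuple[float, float]   # coordinates in the last frame it was seen
    last_seen_time: float                # shooting time of the Nth frame image
    features: Any                        # e.g. head-shoulder and color-texture features
    max_saved_frames: int = 0            # how many frames the record may be retained
```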
Step S206: in an Oth frame image obtained by video monitoring of the target area, determine the identifier of a second tracking object appearing in the target area and the shooting time of the Oth frame image, wherein O > M > N, and O, M and N are positive integers;
In this embodiment, the identifier of the second tracking object includes, but is not limited to, an ID number. The shooting time of the Oth frame image may be expressed as year-month-day, for example 2019-06-05.
Step S208: when the first tracking object matches the second tracking object, determine the tracking duration of the second tracking object in the Oth frame image as a second duration, wherein the second duration is the sum of the first duration and a shooting time interval, and the shooting time interval is the difference between the shooting time of the Oth frame image and the shooting time of the Nth frame image.
In this embodiment, the tracking duration of the second tracking object may be displayed in the Oth frame image as the second duration, to facilitate observation by a user.
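As a minimal sketch of the computation in step S208, assuming the shooting times are available as numeric timestamps in seconds (the function and variable names are illustrative):

```python
def updated_tracking_duration(first_duration: float,
                              time_frame_n: float,
                              time_frame_o: float) -> float:
    """Second duration = first duration + (shooting time of frame O
    minus shooting time of frame N)."""
    shooting_time_interval = time_frame_o - time_frame_n
    return first_duration + shooting_time_interval

# For example, an object tracked for 20 s up to frame N and re-detected
# 3 s later in frame O is assigned a duration of 23 s:
assert updated_tracking_duration(20.0, 100.0, 103.0) == 23.0
```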
According to the invention, in the Nth frame image obtained by video monitoring of the target area, the identifier of the first tracking object in the target area, the tracking duration of the first tracking object and the shooting time of the Nth frame image are determined. When it is determined that the first tracking object no longer exists in the target area in the Mth frame image, that is, when the first tracking object is no longer detected, and a second tracking object then appears in the Oth frame image, the identifier of the second tracking object and the shooting time of the Oth frame image are obtained. If the first tracking object matches the second tracking object, the sum of the first duration and the shooting time interval can be determined as the tracking duration of the second tracking object, where the shooting time interval is the difference between the shooting time of the Oth frame image and the shooting time of the Nth frame image. The tracking duration can thus be determined accurately when the first tracking object and the second tracking object are the same object. This solves the problem in the related art that the tracking duration of an object whose detection is lost is calculated inaccurately, and achieves the effect of accurately determining the tracking duration.
Optionally, the above steps may be executed by a terminal or the like, but the execution subject is not limited thereto.
In an optional embodiment, after the identifier of the second tracking object and the shooting time of the Oth frame image are determined, first information representing the first tracking object is matched against second information representing the second tracking object. The first information includes the coordinate position of the first tracking object, biometric information of the first tracking object and color-texture information of the first tracking object; the second information includes the coordinate position of the second tracking object, biometric information of the second tracking object and color-texture information of the second tracking object. Whether the first tracking object matches the second tracking object is then determined from the first information and the second information. In this embodiment, the first information may be feature information of the first tracking object, and the second information may be feature information of the second tracking object. The distance between the two objects can be obtained from their coordinate positions, and the color-texture information may be, for example, the color of the clothes or hair of the tracking objects.
In an optional embodiment, the first tracking object and the second tracking object may be matched as follows: the first tracking object is determined to match the second tracking object when the distance between the coordinate position of the first tracking object and the coordinate position of the second tracking object is smaller than a preset distance, the biometric information of the first tracking object matches the biometric information of the second tracking object, and the color-texture information of the first tracking object matches the color-texture information of the second tracking object. As an example, if the distance between the two objects is relatively small, for example 1 meter, they can be regarded as the same object under normal queuing rules. If the distance is relatively large, for example 10 meters, then under normal queuing rules the first tracking object cannot move 10 meters within 1 second, so it can be determined that the two are not the same object.
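A sketch of this three-way matching rule, assuming the biometric and color-texture comparisons have been reduced to similarity scores in [0, 1]; the threshold values and similarity scoring are placeholders for this sketch, not values from the patent:

```python
import math

def objects_match(pos_a, pos_b, bio_similarity, texture_similarity,
                  preset_distance=1.0, similarity_threshold=0.8):
    """Return True when a lost tracking object and a newly appearing object
    are judged to be the same object.

    pos_a, pos_b       : (x, y) coordinate positions of the two objects
    bio_similarity     : biometric (e.g. head-shoulder) similarity in [0, 1]
    texture_similarity : color-texture similarity in [0, 1]
    """
    distance = math.hypot(pos_b[0] - pos_a[0], pos_b[1] - pos_a[1])
    return (distance < preset_distance
            and bio_similarity >= similarity_threshold
            and texture_similarity >= similarity_threshold)
```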
Optionally, in this embodiment, the biometric information includes, but is not limited to, head-and-shoulder features and face information of the tracking object.
In an optional embodiment, when the first information representing the first tracking object does not match the second information representing the second tracking object, the second tracking object may be determined to be a tracking object appearing for the first time in the video monitoring, and its tracking duration is timed from zero. In this embodiment, the lost-track information of the first tracking object may be stored for a certain time; if a second tracking object appearing in an image frame acquired within that time does not match the first tracking object, the second tracking object is considered not to be a previously lost object and is tracked as a new tracking object.
In an optional embodiment, the displacement distance of the first tracking object before the Nth frame image is determined, and the maximum number of frames for which the first tracking object is saved is determined from the relationship between this displacement distance and a preset displacement threshold in the target area. In this embodiment, the queuing path in the target area is fixed; the average displacement of the first tracking object may be determined from the first duration and the displacement distance, and the ratio of the preset displacement threshold to the average displacement (see step S308 below) is determined as the maximum number of frames for which the first tracking object is saved. If the first tracking object is no longer detected in the Mth frame image, any newly appearing second tracking object within that maximum number of frames after the Mth frame image may be compared with the first tracking object.
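A sketch of this retention rule, assuming the displacement threshold and the per-frame displacement are measured in the same units (for example, pixels); the names and example numbers are illustrative:

```python
def max_saved_frames(displacement_distance: float,
                     frames_observed: int,
                     displacement_threshold: float) -> int:
    """Maximum number of frames for which a lost object's record is saved:
    average displacement x = displacement distance / frames observed,
    maximum saved frames  = displacement threshold d / x."""
    average_displacement = displacement_distance / frames_observed
    return max(1, int(displacement_threshold / average_displacement))

# For example, an object that moved 50 px over 100 frames (0.5 px per frame)
# with a threshold d of 30 px is kept for 60 frames after it is lost:
assert max_saved_frames(50.0, 100, 30.0) == 60
```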
In an optional embodiment, the number of tracking objects is determined in an identification box used to identify the target area, and when a second tracking object appearing in the target area is determined in the Oth frame image, the number of tracking objects in the identification box of the target area is updated. In this embodiment, the number of all tracking objects in the target area is displayed in each frame image; when a tracking object is lost, the count is decreased, and when a new tracking object appears, the count is increased.
The invention is explained in detail below with reference to specific examples:
in this embodiment, the second tracked object takes a new target as an example, the target area takes a queuing area as an example, and the O-th frame image takes a current frame as an example for explanation.
When a new target appears in the target area of the current frame, it is determined whether a first tracking target appeared and was then lost within a preset number of frames before the current frame. The position at which the first tracking target was last tracked before being lost is obtained and compared with the position of the new target in the current frame, or the features of the new target are compared with the features of the lost first tracking target. If the positional relationship satisfies a preset constraint, or the feature similarity reaches a threshold, the queuing duration of the new target is updated to the sum of the duration of the first tracking target and the duration of the frame interval between the current frame and the frame in which the target was lost.
FIG. 3 is a flowchart of a preferred embodiment of the invention. As shown in FIG. 3, in a scene in which pedestrians queue in an orderly manner, tracking a tracking object includes the following steps:
s301: the image of lining up in the pedestrian regular lining up area is obtained, and in this embodiment, the image of lining up can be obtained through a monocular camera, a binocular camera, a fish eye camera and the like.
S302: determine a plurality of monitoring areas within the pedestrian queuing area, and acquire the queuing image of each monitoring area.
S303: detect head-and-shoulder information of tracking objects in the queuing images.
S304: track the tracking objects in the monitoring area using the head-and-shoulder information; determine the queuing image in which a tracking object first appears as the first frame image, and display the tracking ID, coordinate position, tracking duration and other information of the tracking object in the first frame image.
S305: compare the tracking IDs of the tracking objects in the second frame image with the tracking IDs in the first frame image.
S306: determine whether any tracking object is lost in the second frame image.
S307: if a tracking object is lost in the second frame, retain the coordinate position and tracking duration of the lost tracking object, and at the same time save the ID, coordinate position, tracking duration and inter-frame pixel displacement of each successfully tracked object; otherwise, output the queuing tracking duration.
The tracking IDs in the third frame image are then compared with those in the second frame image, to determine whether any tracking object is lost in the third frame image and whether any tracking ID appears in the third frame image that was not present in the second frame image. If a tracking object is lost in the third frame image, its coordinates and tracking duration are retained, and the IDs, coordinate positions and tracking durations of the successfully tracked objects are saved. If a new ID appears in the third frame image and a tracking object was lost in the second frame image, the relationship between the coordinate position of each newly appearing ID in the third frame image and the coordinate positions of all lost objects from the second frame image is evaluated to decide whether the tracking duration of the newly appearing object needs to be updated. For example, the coordinates of a new object B in the third frame image are compared with the coordinates of every object lost in the second frame image; if the distance between a lost object A and the new object B satisfies a given threshold, or the distance and the color-texture features both satisfy given thresholds, the queuing duration of the new object B in the third frame image is updated to the tracking duration of the matching lost object A plus the duration of the two-frame interval, and the stored information of the lost object A is then deleted.
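Putting steps S305 to S307 together, the following is an illustrative sketch of this per-frame bookkeeping, reusing the LostTrackRecord idea and a match predicate like the ones sketched above. It is one possible reading of the flow, not the patent's reference implementation:

```python
def process_frame(active_tracks, lost_records, detections,
                  frame_time, frame_interval, match_fn):
    """One illustrative pass of the lost-track re-identification bookkeeping.

    active_tracks : dict mapping track_id -> state of currently detected objects;
                    each state carries .tracking_duration, .last_seen_time, .position
    lost_records  : list of states for objects that are no longer detected
    detections    : detections in this frame, each with .track_id and .position
    match_fn      : predicate deciding whether a lost state matches a detection
    """
    detected_ids = {d.track_id for d in detections}

    # 1. Active tracks missing from this frame join the lost buffer, keeping
    #    their ID, coordinates, features and accumulated tracking duration.
    for track_id in list(active_tracks):
        if track_id not in detected_ids:
            lost_records.append(active_tracks.pop(track_id))

    # 2. Each newly appearing ID is compared against the stored lost records.
    for det in detections:
        if det.track_id in active_tracks:
            track = active_tracks[det.track_id]
            track.tracking_duration += frame_interval  # continuous tracking
            track.last_seen_time = frame_time
            continue
        match = next((r for r in lost_records if match_fn(r, det)), None)
        if match is not None:
            # Re-detected object: stored duration plus the elapsed interval.
            det.tracking_duration = (match.tracking_duration
                                     + (frame_time - match.last_seen_time))
            lost_records.remove(match)
        else:
            det.tracking_duration = 0.0  # genuinely new object, timed from zero
        det.last_seen_time = frame_time
        active_tracks[det.track_id] = det

    # 3. Lost records kept beyond their maximum saved frame count are deleted.
    lost_records[:] = [r for r in lost_records
                       if frame_time - r.last_seen_time
                       <= r.max_saved_frames * frame_interval]
```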
S308: compare the tracking IDs in the mth frame image with those in the (m-1)th frame image. For an object B successfully tracked in both frames, record its displacement between the frames, and over a period of time compute its average displacement x. The maximum number of frames for which object B is saved after it is lost is then computed from a preset target displacement threshold d as: maximum saved frame number = d / x. At the same time, determine whether any tracking object is lost in the mth frame image, and whether any new ID appears in the mth frame image relative to the tracking IDs of the (m-1)th frame image. If a tracking object is lost in the mth frame image, retain the coordinates of the lost object, its tracking duration and its maximum saved frame number, and save the IDs, coordinate positions, tracking durations and pixel displacements of the successfully tracked objects. If a new ID appears in the mth frame image and stored lost targets exist from the preceding m-1 frames, evaluate the relationship between the coordinate position of the newly appearing ID and the coordinate positions of all stored lost targets, and decide whether to update the queuing duration of the new object in the mth frame image; the remaining new objects that do not satisfy the update condition are treated as tracking targets that have normally just entered the area. If a stored lost object has gone n consecutive frames without a matching new object, where n is that object's maximum saved frame number, its information is deleted.
S309: when the coordinate position of a newly appearing ID in the mth frame image matches the stored coordinate position of a lost target from the preceding m-1 frames, update the queuing duration of the new object in the mth frame image.
S310: output the corrected queuing duration of the new target.
FIG. 4 is a schematic diagram of the 491st frame of a video. The large gray area in FIG. 4 is the configured orderly queuing area, and the small gray boxes within the area are tracking targets. The target in the lower right corner of the queuing area is the observed target; its current ID is 425, its ID tracking duration is 20 seconds, and the current time is shown in the upper right corner of FIG. 4.
FIG. 5 is a schematic diagram of the 498th frame of the video. The observed target in the lower right box of the queuing area is lost due to overexposure or similar causes.
FIG. 6 is a schematic diagram of the 533rd frame of the video. The observed target in the lower right box is tracked again as a new target with current ID 477. If the tracking duration of the new target were not updated, its ID tracking duration would restart from 0; with the method described herein, the duration of the new ID is updated, and the ID queuing duration after the update is 23 seconds. Taking the difference between the times in the upper right of FIG. 4 and FIG. 6 shows that the queuing duration before and after the observed target was lost is consistent with the real queuing duration.
FIG. 7 is a schematic diagram of the 573rd frame of the video; the observed target in the lower right box is lost again.
FIG. 8 is a schematic diagram of the 592nd frame of the video. The observed target in the lower right box is tracked again as a new target with current ID 512. If the tracking duration of the new target were not updated, its ID tracking duration would restart from 0; with the method described herein, the duration of the new ID is updated, and the ID queuing duration after the update is 27 seconds. Taking the difference between the times in the upper right of FIG. 6 and FIG. 8 shows that the queuing duration before and after the observed target was lost is consistent with the real queuing duration.
The method thus mitigates the problem that the calculated queuing duration becomes inaccurate after a target is lost during orderly queuing due to occlusion or other causes.
Compared with a strategy that judges a target's duration solely from tracking continuity, this embodiment can, to a considerable extent, resolve the inaccuracy in calculating a target's duration after the target is lost during tracking.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a device for determining a tracking duration is further provided, and the device is used to implement the foregoing embodiments and preferred embodiments, which have already been described and are not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
FIG. 9 is a block diagram of the structure of a tracking duration determination apparatus according to an embodiment of the present invention. As shown in FIG. 9, the apparatus includes a first determining module 92, a second determining module 94, a third determining module 96 and a fourth determining module 98, which are described in detail below:
The first determining module 92 is configured to determine, in an Nth frame image obtained by video monitoring of a target area, the identifier of a first tracking object located in the target area, the tracking duration of the first tracking object and the shooting time of the Nth frame image, wherein the tracking duration of the first tracking object is a first duration.
Optionally, in this embodiment, the apparatus can be applied to queuing scenarios, including, but not limited to, queuing to buy meals at a restaurant, queuing to buy tickets at a railway station, queuing to register at a hospital, queuing to board at a railway station, and the like. The target area may be a queuing area, such as an entrance of a railway station. The identifier of the first tracking object may be an ID number assigned to the first tracking object, and together with the identifier, the first duration, the coordinates of the position of the first tracking object, feature information of the first tracking object and the like may also be displayed. In addition, during tracking, the shooting time of each frame image is displayed in that frame image.
The second determining module 94 is configured to determine, in an Mth frame image obtained by video monitoring of the target area, that the first tracking object does not exist in the target area.
Optionally, in this embodiment, it may be determined that the first tracking object is no longer detected after the Nth frame image, or is no longer detected after the Nth frame image and before the Mth frame image. The interval between the shooting time of the Nth frame image and the shooting time of the Mth frame image is a first preset duration, and during the period from the shooting time of the Mth frame image until a second preset duration before the shooting time of the Oth frame image, the first tracking object does not exist in the target area and no tracking object matching the first tracking object appears.
Optionally, the first preset duration and the second preset duration may each be 1 second.
It should be noted that, after the first tracking object is no longer detected in the target area, the identifier of the first tracking object may be stored together with the first duration, the coordinates of the position of the first tracking object, the feature information of the first tracking object and the like, to facilitate subsequent comparison.
The third determining module 96 is configured to determine, in an Oth frame image obtained by video monitoring of the target area, the identifier of a second tracking object appearing in the target area and the shooting time of the Oth frame image, wherein O > M > N, and O, M and N are positive integers.
In this embodiment, the identifier of the second tracking object includes, but is not limited to, an ID number. The shooting time of the Oth frame image may be expressed as year-month-day, for example 2019-06-05.
The fourth determining module 98 is configured to determine, when the first tracking object matches the second tracking object, the tracking duration of the second tracking object in the Oth frame image as a second duration, wherein the second duration is the sum of the first duration and a shooting time interval, and the shooting time interval is the difference between the shooting time of the Oth frame image and the shooting time of the Nth frame image.
In this embodiment, the tracking duration of the second tracking object may be displayed in the Oth frame image as the second duration, to facilitate observation by a user.
According to the invention, in the Nth frame image obtained by video monitoring of the target area, the identifier of the first tracking object in the target area, the tracking duration of the first tracking object and the shooting time of the Nth frame image are determined. When it is determined that the first tracking object no longer exists in the target area in the Mth frame image, that is, when the first tracking object is no longer detected, and a second tracking object then appears in the Oth frame image, the identifier of the second tracking object and the shooting time of the Oth frame image are obtained. If the first tracking object matches the second tracking object, the sum of the first duration and the shooting time interval can be determined as the tracking duration of the second tracking object, where the shooting time interval is the difference between the shooting time of the Oth frame image and the shooting time of the Nth frame image. The tracking duration can thus be determined accurately when the first tracking object and the second tracking object are the same object. This solves the problem in the related art that the tracking duration of an object whose detection is lost is calculated inaccurately, and achieves the effect of accurately determining the tracking duration.
Optionally, the above operations may be executed by a terminal or the like, but the execution subject is not limited thereto.
In an optional embodiment, after the identifier of the second tracking object and the shooting time of the Oth frame image are determined, first information representing the first tracking object is matched against second information representing the second tracking object. The first information includes the coordinate position of the first tracking object, biometric information of the first tracking object and color-texture information of the first tracking object; the second information includes the coordinate position of the second tracking object, biometric information of the second tracking object and color-texture information of the second tracking object. Whether the first tracking object matches the second tracking object is then determined from the first information and the second information. In this embodiment, the first information may be feature information of the first tracking object, and the second information may be feature information of the second tracking object. The distance between the two objects can be obtained from their coordinate positions, and the color-texture information may be, for example, the color of the clothes or hair of the tracking objects.
In an optional embodiment, the first tracking object and the second tracking object may be matched as follows: the first tracking object is determined to match the second tracking object when the distance between the coordinate position of the first tracking object and the coordinate position of the second tracking object is smaller than a preset distance, the biometric information of the first tracking object matches the biometric information of the second tracking object, and the color-texture information of the first tracking object matches the color-texture information of the second tracking object. As an example, if the distance between the two objects is relatively small, for example 1 meter, they can be regarded as the same object under normal queuing rules. If the distance is relatively large, for example 10 meters, then under normal queuing rules the first tracking object cannot move 10 meters within 1 second, so it can be determined that the two are not the same object.
Optionally, in this embodiment, the biometric information includes, but is not limited to, head-and-shoulder features and face information of the tracking object.
In an optional embodiment, when the first information representing the first tracking object does not match the second information representing the second tracking object, the second tracking object may be determined to be a tracking object appearing for the first time in the video monitoring, and its tracking duration is timed from zero. In this embodiment, the lost-track information of the first tracking object may be stored for a certain time; if a second tracking object appearing in an image frame acquired within that time does not match the first tracking object, the second tracking object is considered not to be a previously lost object and is tracked as a new tracking object.
In an optional embodiment, the displacement distance of the first tracking object before the Nth frame image is determined, and the maximum number of frames for which the first tracking object is saved is determined from the relationship between this displacement distance and a preset displacement threshold in the target area. In this embodiment, the queuing path in the target area is fixed; the average displacement of the first tracking object may be determined from the first duration and the displacement distance, and the ratio of the preset displacement threshold to the average displacement is determined as the maximum number of frames for which the first tracking object is saved. If the first tracking object is no longer detected in the Mth frame image, any newly appearing second tracking object within that maximum number of frames after the Mth frame image may be compared with the first tracking object.
In an optional embodiment, the number of tracking objects is determined in an identification box used to identify the target area, and when a second tracking object appearing in the target area is determined in the Oth frame image, the number of tracking objects in the identification box of the target area is updated. In this embodiment, the number of all tracking objects in the target area is displayed in each frame image; when a tracking object is lost, the count is decreased, and when a new tracking object appears, the count is increased.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the above method embodiments when executed.
Alternatively, in the present embodiment, the storage medium may be configured to store a computer program for executing the above steps.
Optionally, in this embodiment, the storage medium may include, but is not limited to: various media capable of storing computer programs, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in this embodiment, the processor may be configured to execute the above steps through a computer program.
Optionally, the specific examples in this embodiment may refer to the examples described in the above embodiments and optional implementation manners, and this embodiment is not described herein again.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented by program code executable by a computing device, so that they may be stored in a storage device and executed by a computing device, and in some cases the steps shown or described may be performed in an order different from that described herein. Alternatively, they may be separately fabricated into individual integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A method for determining a tracking duration, comprising:
determining, in an Nth frame image obtained by video monitoring of a target area, an identifier of a first tracking object located in the target area, a tracking duration of the first tracking object and a shooting time of the Nth frame image, wherein the tracking duration of the first tracking object is a first duration;
determining, in an Mth frame image obtained by video monitoring of the target area, that the first tracking object does not exist in the target area;
determining, in an Oth frame image obtained by video monitoring of the target area, an identifier of a second tracking object appearing in the target area and a shooting time of the Oth frame image, wherein O > M > N, and O, M and N are positive integers;
and when the first tracking object matches the second tracking object, determining the tracking duration of the second tracking object in the Oth frame image as a second duration, wherein the second duration is the sum of the first duration and a shooting time interval, and the shooting time interval is the difference between the shooting time of the Oth frame image and the shooting time of the Nth frame image.
2. The method according to claim 1, wherein after determining the identifier of the second tracking object appearing in the target area and the shooting time of the Oth frame image obtained by video monitoring of the target area, the method further comprises:
matching first information used for representing the first tracking object with second information used for representing the second tracking object, wherein the first information comprises the coordinate position of the first tracking object, biometric information of the first tracking object and color texture information of the image area where the first tracking object is located, the second information comprises the coordinate position of the second tracking object, biometric information of the second tracking object and color texture information of the image area where the second tracking object is located, and the biometric information comprises head and shoulder features;
and determining whether the first tracking object matches the second tracking object by using the matching result of the first information and the second information.
3. The method of claim 2, wherein determining whether the first tracking object matches the second tracking object by using the matching result of the first information and the second information comprises:
determining that the first tracking object matches the second tracking object when the distance between the coordinate position of the first tracking object and the coordinate position of the second tracking object is smaller than a preset distance, the biometric information of the first tracking object matches the biometric information of the second tracking object, and the color texture information of the first tracking object matches the color texture information of the second tracking object.
4. The method of claim 2, wherein, when the first information representing the first tracking object does not match the second information representing the second tracking object, the method further comprises:
determining the second tracking object as a tracking object appearing for the first time in the video monitoring;
and timing the tracking duration of the second tracking object from zero.
5. The method of claim 1, further comprising:
determining a displacement distance of the first tracking object before the Nth frame image;
and determining, by using the relationship between the displacement distance and a preset displacement threshold in the target area, the maximum number of frames for which the first tracking object is saved.
6. The method of claim 5, wherein determining the maximum number of frames for which the first tracking object is saved by using the relationship between the displacement distance and the preset displacement threshold in the target area comprises:
determining the average displacement of the first tracking object by using the first duration and the displacement distance of the first tracking object;
determining the ratio of the preset displacement threshold to the average displacement as the maximum number of frames for which the first tracking object is saved, wherein the frames for which the first tracking object is saved are saved image frames containing the first tracking object.
7. The method according to any one of claims 1 to 6,
and a first preset time interval is formed between the shooting time of the Nth frame of image and the shooting time of the Mth frame of image, and no first tracking object exists in the target area and no tracking object matched with the first tracking object appears in the time period from the shooting time of the Mth frame of image to a second preset time before the shooting time of the Oth frame of image.
8. The method according to any one of claims 1 to 6, further comprising:
determining the number of tracking objects in an identification frame used to identify the target area;
and updating the number of tracking objects in the identification frame of the target area when the second tracking object appears in the target area in the O-th frame image.
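Claim 8's bookkeeping amounts to refreshing an on-screen count whenever the set of tracked objects in the target area changes. The class and method names below are hypothetical, sketched only to show the update step:

```python
class IdentificationFrame:
    """Hypothetical overlay for the target area's identification frame."""

    def __init__(self):
        self.object_count = 0

    def update(self, tracked_ids_in_area):
        """Recount the tracking objects currently inside the target area."""
        self.object_count = len(tracked_ids_in_area)
        return self.object_count

# When the second tracking object enters in the O-th frame image, the
# displayed count is updated from 1 to 2.
frame = IdentificationFrame()
frame.update({"object_1"})              # -> 1
frame.update({"object_1", "object_2"})  # -> 2
```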
9. An apparatus for determining a tracking duration, comprising:
a first determining module, configured to determine, in an Nth frame image obtained by video monitoring of a target area, an identifier of a first tracking object located in the target area, a tracking duration for tracking the first tracking object, and a shooting time of the Nth frame image, wherein the tracking duration for tracking the first tracking object is a first duration;
a second determining module, configured to determine, in an Mth frame image obtained by video monitoring of the target area, that the first tracking object does not exist in the target area;
a third determining module, configured to determine, in an O-th frame image obtained by video monitoring of the target area, an identifier of a second tracking object appearing in the target area and a shooting time of the O-th frame image, where O > M > N, and O, M, N are positive integers;
and a fourth determining module, configured to determine, in the O-th frame image, that the tracking duration for tracking the second tracking object is a second duration when the first tracking object matches the second tracking object, wherein the second duration is the sum of the first duration and a shooting time interval, and the shooting time interval is the difference between the shooting time of the O-th frame image and the shooting time of the Nth frame image.
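The fourth determining module's computation is a single addition. A minimal sketch, under the assumption that shooting times are numeric timestamps in seconds:

```python
def second_duration(first_duration, shot_time_n, shot_time_o):
    """Claims 1 and 9: second duration = first duration plus the interval
    between the shooting times of the O-th and Nth frame images."""
    return first_duration + (shot_time_o - shot_time_n)

# Example: tracked for 12.0 s up to frame N shot at t = 100.0 s; the matched
# object re-appears in frame O shot at t = 107.5 s, so the tracking duration
# resumes at 12.0 + 7.5 = 19.5 s rather than restarting from zero.
assert second_duration(12.0, 100.0, 107.5) == 19.5
```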
10. A storage medium, in which a computer program is stored, wherein the computer program is arranged to perform the method of any of claims 1 to 8 when executed.
11. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 8.
CN201910502736.1A 2019-06-11 2019-06-11 Method and device for determining tracking duration, storage medium and electronic device Active CN110264497B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910502736.1A CN110264497B (en) 2019-06-11 2019-06-11 Method and device for determining tracking duration, storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN110264497A (en) 2019-09-20
CN110264497B (en) 2021-09-17

Family

ID=67917612

Country Status (1)

Country Link
CN (1) CN110264497B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111126807B (en) * 2019-12-12 2023-10-10 浙江大华技术股份有限公司 Stroke segmentation method and device, storage medium and electronic device
CN111008611B (en) * 2019-12-20 2023-07-14 浙江大华技术股份有限公司 Queuing time length determining method and device, storage medium and electronic device
US11553136B2 (en) * 2020-02-19 2023-01-10 Canon Kabushiki Kaisha Subject tracking device, subject tracking method, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103413295A (en) * 2013-07-12 2013-11-27 Changsha University of Science and Technology Video multi-target long-range tracking method
CN103793921A (en) * 2012-10-29 2014-05-14 Zhejiang Dahua Technology Co Ltd Moving object extraction method and moving object extraction device
CN104867198A (en) * 2015-03-16 2015-08-26 Beijing Capital International Airport Co Ltd Queuing time acquiring method and queuing time acquiring apparatus
CN108764167A (en) * 2018-05-30 2018-11-06 Shanghai Jiao Tong University Spatio-temporal correlation target re-identification method and system
CN109117721A (en) * 2018-07-06 2019-01-01 Jiangxi Hongdu Aviation Industry Group Co Ltd Pedestrian loitering detection method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080073933A (en) * 2007-02-07 2008-08-12 Samsung Electronics Co Ltd Object tracking method and apparatus, and object pose information calculating method and apparatus

Similar Documents

Publication Publication Date Title
CN109886997B (en) Identification frame determining method and device based on target detection and terminal equipment
CN107742100B Examinee identity authentication method and terminal device
CN110264497B (en) Method and device for determining tracking duration, storage medium and electronic device
CN109165645B (en) Image processing method and device and related equipment
US20200005090A1 (en) Target recognition method and apparatus, storage medium, and electronic device
CN109299658B (en) Face detection method, face image rendering device and storage medium
US9031862B2 (en) Advertisement delivery target identifying apparatus, advertisement delivery apparatus, advertisement delivery target identifying method, advertisement delivery method, program, and recording medium
CN105975980A (en) Method of monitoring image mark quality and apparatus thereof
CN107480624B (en) Permanent resident population's acquisition methods, apparatus and system, computer installation and storage medium
CN110738219A (en) Method and device for extracting lines in image, storage medium and electronic device
CN110363176B (en) Image analysis method and device
CN109961472B (en) Method, system, storage medium and electronic device for generating 3D thermodynamic diagram
CN110348519A Financial product fraud clique recognition method and device
EP3846114A1 (en) Animal information management system and animal information management method
CN111047622B (en) Method and device for matching objects in video, storage medium and electronic device
CN111583118A (en) Image splicing method and device, storage medium and electronic equipment
CN112631333B (en) Target tracking method and device of unmanned aerial vehicle and image processing chip
CN110309330A (en) The treating method and apparatus of vision map
CN111950507B (en) Data processing and model training method, device, equipment and medium
CN112365315A (en) Commodity display position recommendation method, device and system and storage medium
WO2021103727A1 (en) Data processing method, device, and storage medium
CN111385527B (en) Method for judging peer and related products
CN112163519A (en) Image mapping processing method, device, storage medium and electronic device
JP5560704B2 (en) Display schedule setting device, content display system, schedule setting method, and program
CN111145212B (en) Target tracking processing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant