CN110838134B - Target object statistical method and device, computer equipment and storage medium

Info

Publication number: CN110838134B
Application number: CN201910960300.7A
Authority: CN (China)
Prior art keywords: target object, current, frame, target, value
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110838134A
Inventor: 苏睿
Current Assignee: Jingdong Shuke Haiyi Information Technology Co Ltd; Jingdong Technology Information Technology Co Ltd
Original Assignee: Beijing Haiyi Tongzhan Information Technology Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/292 Multi-camera tracking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/695 Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a target object statistical method and apparatus, a computer device and a storage medium. The method comprises the following steps: acquiring a current frame from video sequence frames shot at a plurality of different positions in a preset area, wherein the video sequence frames contain target objects; identifying the target objects in the current frame to obtain each current target object and its corresponding position; acquiring the positions of the target objects in the previous frame and the previous statistical value; updating the previous statistical value according to the position of each current target object and the positions of the target objects in the previous frame to obtain the current statistical value; and when the current frame is the last frame of the video sequence frames, taking the current statistical value as the target statistical value. Because statistics are accumulated over multiple frames shot at different positions within the preset area, the area limitation of single-frame counting is avoided and the counting accuracy is improved.

Description

Target object statistical method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a statistical method and apparatus for a target object, a computer device, and a storage medium.
Background
Existing methods for counting moving target objects mainly use a fixed camera, and the overall algorithm consists of three parts: target detection, moving-target tracking and target counting. Target detection is the foundation and directly affects the accuracy of the subsequent processing; moving-target tracking must be accurate and must not lose the target when it is partially occluded; target counting mainly counts the number of a certain kind of object passing through the surveillance video.
Existing target object statistical methods are based on a single-frame image, but a single-frame method cannot accurately count target objects in a large scene.
Disclosure of Invention
In order to solve this technical problem, the present application provides a target object statistical method and apparatus, a computer device and a storage medium.
In a first aspect, the present application provides a statistical method for a target object, including:
acquiring a current frame from video sequence frames shot at a plurality of different positions in a preset area, wherein the video sequence frames comprise target objects;
identifying a target object in a current frame to obtain the current target object and a corresponding position;
acquiring the position of a target object in the previous frame and a previous statistical value;
updating the previous statistical value according to the position of each current target object and the positions of the target objects of the previous frame to obtain the current statistical value; and
when the current frame is the last frame of the video sequence frames, taking the current statistical value as the target statistical value.
In a second aspect, the present application provides a statistical apparatus for a target object, including:
the data acquisition module is used for acquiring a current frame from video sequence frames shot at a plurality of different positions in a preset area, wherein the video sequence frames comprise target objects;
the identification module is used for identifying a target object in the current frame to obtain the current target object and the corresponding position;
the previous frame data acquisition module is used for acquiring the position of the target object in the previous frame and the previous statistical value;
the data updating module is used for updating the previous statistical value according to the position of each current target object and the positions of the target objects of the previous frame to obtain the current statistical value; and
the statistics module is used for taking the current statistical value as the target statistical value when the current frame is the last frame of the video sequence frames.
In a third aspect, the present application provides a computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a current frame from video sequence frames shot at a plurality of different positions in a preset area, wherein the video sequence frames comprise target objects;
identifying a target object in a current frame to obtain the current target object and a corresponding position;
acquiring the position of a target object in the previous frame and a previous statistical value;
updating the previous statistical value according to the position of each current target object and the positions of the target objects of the previous frame to obtain the current statistical value; and
when the current frame is the last frame of the video sequence frames, taking the current statistical value as the target statistical value.
In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored which, when executed by a processor, carries out the following steps:
acquiring a current frame from video sequence frames shot at a plurality of different positions in a preset area, wherein the video sequence frames comprise target objects;
identifying a target object in a current frame to obtain the current target object and a corresponding position;
acquiring the position of a target object in the previous frame and a previous statistical value;
updating the previous statistical value according to the position of each current target object and the positions of the target objects of the previous frame to obtain the current statistical value; and
when the current frame is the last frame of the video sequence frames, taking the current statistical value as the target statistical value.
According to the target object statistical method and apparatus, the computer device and the storage medium, the method comprises: acquiring a current frame from video sequence frames shot at a plurality of different positions in a preset area, wherein the video sequence frames contain target objects; identifying the target objects in the current frame to obtain each current target object and its corresponding position; acquiring the positions of the target objects in the previous frame and the previous statistical value; updating the previous statistical value according to the position of each current target object and the positions of the target objects of the previous frame to obtain the current statistical value; and, when the current frame is the last frame of the video sequence frames, taking the current statistical value as the target statistical value. Because the counted images are shot at different positions within the preset area, the area limitation of single-frame counting is avoided and the counting accuracy is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below; it will be apparent that other drawings can be obtained by those skilled in the art from these drawings without inventive effort.
FIG. 1 is a diagram of an application environment of a statistical method for a target object in one embodiment;
FIG. 2 is a schematic flow chart diagram illustrating a statistical method for a target object according to one embodiment;
FIG. 3 is a diagram illustrating the result of image partitioning in one embodiment;
FIG. 4 is a flow diagram illustrating a statistical method for a target object in an exemplary embodiment;
FIG. 5 is a schematic flow chart diagram illustrating a method for fusion and suppression of a target object in one embodiment;
FIG. 6 is a flowchart illustrating a method for object fusion of a target object according to an embodiment;
FIG. 7 is a flowchart illustrating a false positive suppression method for a target object according to an embodiment;
FIG. 8 is a block diagram of an example statistical apparatus for a target object;
FIG. 9 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
FIG. 1 is a diagram of an application environment of a statistical method for a target object in one embodiment. Referring to fig. 1, the statistical method of the target object is applied to a statistical system of the target object. The statistical system of the target object includes a terminal 110 and a server 120. The terminal 110 or the server 120 obtains a current frame from video sequence frames shot at a plurality of different positions in a preset area, wherein the video sequence frames contain target objects; identifies the target objects in the current frame to obtain each current target object and its corresponding position; acquires the positions of the target objects in the previous frame and the previous statistical value; and updates the previous statistical value according to the position of each current target object and the positions of the target objects of the previous frame to obtain the current statistical value.
The terminal 110 and the server 120 are connected through a network. The terminal 110 may specifically be a desktop terminal or a mobile terminal, and the mobile terminal may specifically be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or a server cluster composed of a plurality of servers.
In one embodiment, as shown in FIG. 2, a statistical method of a target object is provided. The embodiment is mainly illustrated by applying the method to the terminal 110 (or the server 120) in fig. 1. Referring to fig. 2, the statistical method of the target object specifically includes the following steps:
in step S201, a current frame is obtained from video sequence frames captured at a plurality of different positions within a preset area.
In this particular embodiment, the video sequence frames contain the target object.
Specifically, the preset area refers to a preset area range, which is adapted to a shooting area of a camera shooting a video sequence frame. The current frame is any frame of the video sequence that is ready for processing. The video sequence frames can be images shot by shooting devices at different positions, or videos shot by the same shooting device at different positions, and the exposure time of the video frames at different positions is different.
Step S202, identifying a target object in the current frame, and obtaining the current target object and a corresponding position.
Specifically, the current target object refers to a target object identified in the current frame. The position of the current target object refers to the image position of the current target object in the current frame. Identifying the target object and its corresponding position in the current frame may employ at least one of a common target tracking algorithm and a target detection algorithm. Tracking algorithms include the Kalman tracking algorithm, the particle filter, mean-shift, the inter-frame difference method, the Kernel Correlation Filter (KCF) and the like; target detection algorithms include deep learning networks, machine learning models and the like, wherein deep learning networks include the Rotational Region Convolutional Neural Network (R2CNN), the Convolutional Neural Network (CNN) and the like.
In step S203, the position of the target object in the previous frame and the previous statistical value are acquired.
Specifically, the previous frame refers to the video frame before the current frame; the previous frame and the current frame may be two adjacent time-sequence frames after framing the video data, or two adjacent sampling frames obtained by sampling every several frames after framing. The position of a target object in the previous frame refers to the image position of that target object in the previous frame. The previous statistical value is the statistical value corresponding to the previous frame, i.e., the running count of target objects accumulated from the first frame up to the previous frame: each newly appearing target object adds its corresponding value to the count, and each disappearing target object subtracts its corresponding value.
Step S204, updating the previous statistical value according to the position of each current target object and the positions of the target objects of the previous frame to obtain the current statistical value.
Specifically, according to the position of each current target object, it is judged whether that object is a newly appearing object; the numbers of newly appearing target objects and of disappeared target objects are counted, and the previous statistical value is updated with these two values to obtain the current statistical value. Newly appearing target objects increase the count and disappeared target objects decrease it. Here, a disappeared target object is one that vanishes at a particular location in the image.
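As an illustration only, the following sketch shows one way this update could be written, assuming the tracker assigns each target object a stable identifier; the names and data layout are assumptions, not taken from the patent.
```python
# A minimal sketch of step S204, assuming each object carries a stable ID
# assigned by tracking. The dict layout (ID -> center position) is illustrative.

def update_count(prev_count, prev_objects, curr_objects):
    """prev_objects / curr_objects: dicts mapping object ID -> (x, y) center."""
    appeared = set(curr_objects) - set(prev_objects)      # newly appearing IDs
    disappeared = set(prev_objects) - set(curr_objects)   # IDs no longer present
    # Newly appearing objects increase the count; disappeared ones decrease it.
    return prev_count + len(appeared) - len(disappeared)

prev = {0: (120, 80), 1: (300, 200)}
curr = {1: (310, 205), 2: (50, 60)}    # object 0 disappeared, object 2 appeared
print(update_count(2, prev, curr))     # -> 2
```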
In step S205, when the current frame is the last frame of the video sequence frames, the current statistical value is taken as the target statistical value.
Specifically, if the current frame is the last frame of the video sequence frames, that is, the last frame of the effective images shot in the preset area, the current statistical value is the statistical result over the target objects in all the video frames, so the current statistical value is directly taken as the target statistical value. This avoids the area limitation caused by single-frame counting and improves the statistical accuracy.
In the region-division variant, the statistical method of the target object comprises: acquiring a current frame from video sequence frames shot at a plurality of different positions in a preset area, wherein the video sequence frames contain target objects and are divided into a first image area and a second image area by a region division parameter; identifying the target objects in the current frame to obtain each current target object and its corresponding position; acquiring the positions of the target objects in the previous frame and the previous statistical value; updating the previous statistical value according to the position of each current target object, the first image area and the positions of the target objects of the previous frame to obtain the current statistical value; and, when the current frame is the last frame of the video sequence frames, calculating the number of target objects in the preset area according to the current statistical value, the second image area and the positions of all current target objects to obtain the target statistical value. Counting over images shot at different positions of the preset area avoids the area limitation of single-frame counting and improves the counting accuracy.
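To make the overall flow concrete, here is a minimal Python sketch of the frame-by-frame loop (steps S201 through S205), under the simplifying assumption that a hypothetical detect_objects() helper returns stably-identified objects per frame:
```python
# A sketch of the whole method, not the patent's implementation: the count is
# initialized from the first frame and updated as objects appear and disappear.

def count_targets(frames, detect_objects):
    """frames: list of images; detect_objects(frame) -> dict id -> position."""
    prev_ids, count = set(), 0
    for i, frame in enumerate(frames):
        curr_ids = set(detect_objects(frame))    # S202: identify current objects
        if i == 0:
            count = len(curr_ids)                # initialize from the first frame
        else:
            count += len(curr_ids - prev_ids)    # newly appearing objects
            count -= len(prev_ids - curr_ids)    # objects that disappeared
        prev_ids = curr_ids                      # becomes "previous frame" (S203)
    return count                                 # S205: value at the last frame
```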
In an embodiment, the statistical method for the target object further includes:
in step S301, a region division parameter is acquired.
In this embodiment, the region division parameter is used to divide each video sequence frame into a first image region and a second image region.
Specifically, the region division parameter is a parameter for dividing the video sequence frame; it may be a straight-line parameter, and the straight line it represents may or may not be parallel to one of the edges of the image. Together, the first image region and the second image region make up a single video sequence frame. The direction and position of the dividing line between the first image region and the second image region may be determined according to the shooting positions used when the video was shot. The distance between shooting positions is related to the monitoring area of the camera, and images shot at adjacent positions have a certain overlapping area. The target object refers to the object to be counted and includes at least one of an animal, a person, a piece of equipment, an object and the like; for example, the target object may be one or more of pigs, sheep, cattle and the like.
In one embodiment, the video sequence frames are images captured by a camera mounted on a mobile device, and the dividing line between the first image region and the second image region forms a non-zero angle with the running direction of the mobile device, such as 60° or 90°. The angle can be customized as required. If the angle between the dividing line and the moving direction of the mobile device is 90°, the position of the dividing line in the image can also be customized as required; as shown in fig. 3, the dividing line 310 is located at the midline of the Y axis of the image, the dividing line 320 at 2/3 of the Y axis, and so on.
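For a dividing line perpendicular to the direction of travel, the split reduces to choosing a Y coordinate. A minimal sketch, assuming a horizontal line at a configurable fraction of the frame height (the function name and parameters are illustrative):
```python
# A sketch of splitting a frame into first/second image regions at a horizontal
# counting line; line_frac = 0.5 gives the midline, 2/3 matches Fig. 3.

import numpy as np

def split_frame(frame: np.ndarray, line_frac: float = 0.5):
    """Split an HxWxC frame into the two regions at line_frac of its height."""
    y_split = int(frame.shape[0] * line_frac)
    first_region = frame[:y_split]     # region whose objects are being counted
    second_region = frame[y_split:]    # remaining region
    return first_region, second_region, y_split

frame = np.zeros((480, 640, 3), dtype=np.uint8)
first, second, y = split_frame(frame, 2 / 3)
print(first.shape, second.shape, y)    # (320, 640, 3) (160, 640, 3) 320
```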
In this embodiment, step S204 includes: updating the previous statistical value according to the position of each current target object, the first image area and the positions of the target objects of the previous frame to obtain the current statistical value.
Specifically, according to the position of each current target object, it is judged whether that object is located in the first image area; for each current target object in the first image area, it is judged whether it is a new object. The number of new objects and the number of target objects exiting the first image area are counted, and the previous statistical value is updated with these two values to obtain the current statistical value. Newly appearing target objects increase the count and exiting target objects decrease it.
In this embodiment, step S205 includes: when the current frame is the last frame of the video sequence frames, calculating the number of target objects in the preset area according to the current statistical value, the second image area and the position of each current target object to obtain the target statistical value.
Specifically, if the current frame is the last frame of the video sequence frames, that is, the last frame of the effective images shot of the preset area, the current statistical value is the statistical result over the target objects that appeared in the first image area in all video frames; to count the target objects in the whole preset area, the number of target objects in the second image area of the current frame must be added. That number can be determined from the region position of the second image area within the image and the position of each current target object. When current target objects exist in the second image area of the current frame, their number is counted as the statistical value of the second image area, and the sum of that value and the current statistical value gives the target statistical value. By dividing the image into regions, counting the target objects of all frames within the same region to obtain the current statistical value, and summing it with the number of target objects in the remaining region at the last frame, the area limitation caused by single-frame counting is avoided and the statistical accuracy is improved.
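A minimal sketch of this final step, reusing the y_split convention from the earlier region-split sketch (the position format is an assumption):
```python
# A sketch of step S205 in the region-division embodiment: the final result is
# the running count plus the objects still inside the second image region of
# the last frame.

def target_statistic(current_count, curr_positions, y_split):
    """curr_positions: list of (x, y) object centers in the last frame."""
    in_second_region = sum(1 for _, y in curr_positions if y >= y_split)
    return current_count + in_second_region

print(target_statistic(37, [(100, 350), (200, 100)], y_split=320))   # -> 38
```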
In one embodiment, when the current frame is not the last frame of the video sequence frames, the next frame of the current frame is obtained and taken as the current frame, and the target objects in the current frame are identified to obtain the current target objects and their corresponding positions.
Specifically, if the current frame is not the last frame of the video sequence frames, the next frame still has to be processed, so the method returns to step S202 until the statistics of all frames are complete, that is, until the current frame is the last frame of the video sequence frames, at which point step S205 is performed: either the current statistical value is taken as the target statistical value, or the number of target objects in the preset area is calculated from the current statistical value, the second image area and the positions of all current target objects to obtain the target statistical value.
In one embodiment, step S204 includes: screening effective objects out of the current target objects according to the positions of the target objects of the previous frame and the positions of the current target objects, and updating the previous statistical value corresponding to the first image area according to the positions of the effective objects and the first image area of the current frame to obtain the current statistical value.
Specifically, a position includes one or more parameters that can represent location, such as a center position and a region box. If the position is represented only by a center position, the center position alone is used to judge whether the current target object is an effective object; if only by a region box, the region box alone is used; if by both, the center position and the region box are used together. After the effective objects in the current frame are determined, it is judged whether each effective object is located in the first image area, the number of effective objects in the first image area is counted, and the previous statistical value is updated accordingly to obtain the current statistical value. Different position parameters can be chosen as required to represent the position information of the target object, and different parameters describe the target object in different dimensions; judging whether the current target object is an effective object from these position parameters improves the accuracy of the judgment.
In one embodiment, screening effective objects out of the current target objects according to the positions of the target objects of the previous frame and the positions of the current target objects includes: calculating the degree of difference between the position of each target object of the previous frame and the position of the current target object, and taking a current target object whose degree of difference is smaller than a preset degree of difference as an effective object.
Specifically, the degree of difference between the position of each target object in the previous frame and the position of each current target object in the current frame is calculated; when the degree of difference between a current target object and one of the target objects of the previous frame is smaller than the preset degree of difference, that current target object is taken as an effective object. The preset degree of difference can be determined according to requirements or the empirical values of technicians. Judging whether the current target object is an effective object through the degree of difference reduces the false detections produced by the target detection algorithm and improves the statistical accuracy.
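As one plausible reading, the degree of difference can be taken as the Euclidean distance between center positions; the threshold below is a made-up value, not one given by the patent:
```python
# A sketch of effective-object screening by position difference, using the
# distance between object centers as the "degree of difference".

import math

def is_effective(curr_pos, prev_positions, max_diff=50.0):
    """curr_pos: (x, y); prev_positions: (x, y) centers from the previous frame."""
    return any(math.dist(curr_pos, p) < max_diff for p in prev_positions)

prev = [(100, 100), (400, 300)]
print(is_effective((110, 105), prev))   # True: close to a previous object
print(is_effective((250, 250), prev))   # False: no previous object nearby
```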
In one embodiment, the identifier of the target object of the previous frame whose position differs from the current target object by less than the preset degree of difference is used as the identifier of the current target object; that is, the two represent the same target object in different frames.
In one embodiment, when the degree of difference between a current target object and every target object of the previous frame is greater than the preset degree of difference, it is judged whether the position of the current target object is located in a preset image area; if it is, the current target object is an effective object, and otherwise it is a falsely detected, ineffective object.
Specifically, the preset image area refers to the edge area of the image, where the extent of the edge area is determined as required, for example according to one or more of the moving speed of the target object, the running speed of the inspection vehicle, the shooting parameters and the like. Because new target objects enter the frame through the edge area, a single position is not enough to decide whether such an object is effective; since a detection in the edge area has a high probability of being a genuinely new target object, it is treated as an effective object.
In one embodiment, updating the previous statistical value corresponding to the first image area according to the positions of the effective objects and the first image area of the current frame to obtain the current statistical value includes: counting the number of target objects that were located in the second image area in the previous frame image and are located in the first image area in the current frame image to obtain a first updated value; counting the number of target objects that were located in the first image area in the previous frame image and are located in the second image area in the current frame image to obtain a second updated value; and calculating the weighted sum of the previous statistical value, the first updated value and the second updated value to obtain the current statistical value.
Specifically, when the same target object was located in the second image region in the previous frame and is located in the first image region in the current frame, the target object has entered the first image region from the second image region; the number of such target objects gives the first updated value. Symmetrically, a target object that was in the first image region in the previous frame and is in the second image region in the current frame has left the first image region, and the number of such target objects gives the second updated value. Dividing the video frame into regions increases the accuracy of the statistics, because it adds a check on whether a target object is an effective object, making the count more precise.
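A sketch of this crossing update, with +1 and -1 as the natural weights for the two updated values (the data layout is assumed):
```python
# A sketch of the region-crossing update: entering the first image region adds
# to the count (first updated value), leaving it subtracts (second updated value).

def crossing_update(prev_count, prev_pos, curr_pos, y_split):
    """prev_pos / curr_pos: dicts id -> (x, y) center; y_split: dividing line."""
    entered = exited = 0
    for oid, (_, y) in curr_pos.items():
        if oid not in prev_pos:
            continue                               # no previous position to compare
        was_first = prev_pos[oid][1] < y_split     # in first region last frame
        now_first = y < y_split                    # in first region this frame
        if now_first and not was_first:
            entered += 1                           # counts toward first updated value
        elif was_first and not now_first:
            exited += 1                            # counts toward second updated value
    return prev_count + entered - exited           # weighted sum of the three values

print(crossing_update(5, {0: (100, 400)}, {0: (100, 250)}, y_split=320))   # -> 6
```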
In one embodiment, step S202 includes: identifying the target objects in the current frame with a target tracking algorithm to obtain first target objects and their corresponding positions; identifying the target objects in the current frame with a target detection algorithm to obtain second target objects and their corresponding positions; judging whether the first target objects and the second target objects are consistent; and, when they are consistent, taking the first target objects or the second target objects as the current target objects.
Specifically, conventional target tracking and target detection algorithms may be used. Any target tracking algorithm is used to identify the target objects in the current frame, giving the first target objects and their positions; a target detection algorithm is used to detect the target objects in the current frame, giving the second target objects and their positions; and it is judged whether the two are consistent, meaning consistent in both position and number. When they are consistent, the agreed target objects are taken as the current target objects. Using two algorithms of different types to determine the current target objects prevents false detections from affecting the statistical result.
In one embodiment, when the first target objects and the second target objects are inconsistent, it is judged whether the inconsistent target object is located in a preset boundary area of the current frame; if it is, the union of the first target objects and the second target objects is taken as the current target objects, and if it is not, their intersection is taken as the current target objects.
Specifically, for a target object on which the first target objects and the second target objects disagree, its position is examined further to judge whether it lies in the preset boundary area, i.e., a border region of the image, such as the edge area near the edge toward which the inspection vehicle is travelling. If the target object is located in the preset boundary area, the target objects formed by the union of the first and second target objects are taken as the current target objects; otherwise, the target objects formed by their intersection are taken as the current target objects.
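The sketch below shows one way to express this reconciliation; keying both result sets by a shared ID is a simplification for illustration, since a detector alone does not assign IDs:
```python
# A sketch of reconciling tracking and detection results: agreement is kept,
# disagreements are kept only inside the boundary area (union there,
# intersection elsewhere).

def reconcile(tracked, detected, in_boundary):
    """tracked / detected: dicts id -> position; in_boundary(pos) -> bool."""
    current = {}
    for oid in tracked.keys() | detected.keys():
        pos = detected.get(oid, tracked.get(oid))
        if oid in tracked and oid in detected:
            current[oid] = pos             # both algorithms agree: always keep
        elif in_boundary(pos):
            current[oid] = pos             # disagreement in the border: union
        # disagreement elsewhere: intersection, i.e. drop the object
    return current

tracked = {0: (100, 90), 1: (300, 200)}
detected = {0: (102, 91), 2: (630, 400)}   # 1 missed by detector, 2 by tracker
print(reconcile(tracked, detected, lambda p: p[0] > 600))   # keeps 0 and 2
```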
In one embodiment, the method of identifying the target objects in the current frame combines a target tracking algorithm with target detection using an R2CNN model. Target tracking uses the KCF algorithm: the ID and corresponding position information of the objects in the previous frame are combined with the image features of the current frame, and the object information of the current frame is fused into the estimation. R2CNN is relatively insensitive to Non-Maximum Suppression (NMS) in dense scenes, so the target detection model is used to run inference on the current frame to obtain the position information of all objects.
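For concreteness, a sketch of per-object KCF tracking with OpenCV follows; it assumes the opencv-contrib package, and note that the factory function is named cv2.TrackerKCF_create() on older OpenCV builds:
```python
# A sketch of KCF tracking across frames, assuming opencv-contrib-python.

import cv2

def track_objects(frames, init_boxes):
    """frames: list of BGR images; init_boxes: (x, y, w, h) boxes in frames[0]."""
    trackers = []
    for box in init_boxes:
        t = cv2.TrackerKCF.create()       # cv2.TrackerKCF_create() on older builds
        t.init(frames[0], tuple(box))     # seed with the previous frame's objects
        trackers.append(t)
    per_frame = []
    for frame in frames[1:]:
        boxes = []
        for t in trackers:
            ok, box = t.update(frame)     # fuse the current frame's image features
            boxes.append(box if ok else None)   # None: target lost in this frame
        per_frame.append(boxes)
    return per_frame
```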
In one embodiment, step S205 includes: counting, according to the position of each current target object, the number of current target objects located in the second image area of the current frame to obtain a second statistical value; and calculating the sum of the second statistical value and the current statistical value to obtain the target statistical value.
Specifically, the second statistical value is the number of target objects in the second image area of the last frame of the video frames, and the target statistical value is obtained by summing the current statistical value and the second statistical value.
In a specific embodiment, as shown in fig. 4, the statistical method for the target object includes:
and S301, adopting an image acquisition mode of the inspection vehicle, and simultaneously setting a counting line in the center of the video.
Step S302: the target objects of the first frame are initialized. The positions and number of target objects in the current frame are identified through target detection, and a target object tracking information file comprising IDs and coordinates is established.
Step S303: it is determined whether the position of each target object is within the counting line (within the first image region). If yes, proceed to step S304; if no, proceed to step S305.
Step S304: counting is performed to obtain the current statistical value. The target objects within the counting line (on the side opposite to the direction of motion of the view) are counted, and the file is updated, e.g. { 'ID': 0, 'loc': [x0, y0, w0, h0] }, { 'ID': 1, 'loc': [x1, y1, w1, h1] }, …, { 'ID': n, 'loc': [xn, yn, wn, hn] }, where ID distinguishes different objects and loc is the position of the object in the image. Target detection returns a result [loc0, loc1, …, locn], where each loc has the form [[x1, y1], [x2, y2], [x3, y3], [x4, y4]]. Target tracking returns a result [loc0, loc1, …, locn] of the same form; unlike target detection, the tracking result is ordered by tracked target. In the expressions above, x is the abscissa of the center point of a target box, y its ordinate, w the width of the box and h its height; x1 and y1 are the abscissa and ordinate of the upper-left corner of the box, x2 and y2 of the upper-right corner, x3 and y3 of the lower-left corner, and x4 and y4 of the lower-right corner.
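To make the two formats above concrete, here is a small sketch of the file entries and a conversion from the four-corner detection format to the center-based box (values invented for illustration):
```python
# A sketch of the tracking-information file and the detection loc format.

tracking_file = [
    {"ID": 0, "loc": [120.0, 80.0, 40.0, 30.0]},   # center x, center y, w, h
    {"ID": 1, "loc": [310.0, 205.0, 45.0, 28.0]},
]

# One detection entry: four corners (upper-left, upper-right, lower-left,
# lower-right), each as [x, y].
detection_loc = [[100.0, 65.0], [140.0, 65.0], [100.0, 95.0], [140.0, 95.0]]

def corners_to_center_box(corners):
    """Convert a four-corner detection box to the [x, y, w, h] file format."""
    xs, ys = [p[0] for p in corners], [p[1] for p in corners]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    return [min(xs) + w / 2, min(ys) + h / 2, w, h]

print(corners_to_center_box(detection_loc))   # [120.0, 80.0, 40.0, 30.0]
```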
Step S305: the next frame of the video sequence frames is obtained, i.e., the inspection vehicle slides on and the next frame is entered. Two operations are performed on the image: 1. target detection, using an R2CNN model; because the model is relatively insensitive to NMS in dense scenes, the target detection model runs inference on the current frame to obtain the position information of all objects; 2. target tracking, using the KCF algorithm, i.e., the ID and corresponding position information of the objects in the previous frame are combined with the image features of the current frame to fuse in the object information of the current frame.
Step S306: the target object information detected in the current frame and the target object information obtained by tracking are verified against each other. The checking logic computes the IOU and the center-point distance between the detection result and the tracking result; if a false alarm occurs, it is attributed to the detector, and this logic keeps the box information closest to the actual object, achieving a suppression effect. The influence of false scenes on the result is thereby eliminated, and the file is updated.
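A minimal sketch of the two quantities this check relies on, for boxes in the (center x, center y, w, h) format used above; the agreement thresholds are assumptions:
```python
# A sketch of the mutual-verification metrics: IOU and center-point distance
# between a detection box and a tracking box, both as (x, y, w, h) with a
# center-based (x, y).

def iou(a, b):
    ax1, ay1, ax2, ay2 = a[0] - a[2] / 2, a[1] - a[3] / 2, a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1, bx2, by2 = b[0] - b[2] / 2, b[1] - b[3] / 2, b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # width of the intersection
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # height of the intersection
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def center_distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

det, trk = (120, 80, 40, 30), (118, 82, 42, 30)
print(iou(det, trk) > 0.5 and center_distance(det, trk) < 10.0)   # True: agree
```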
As shown in fig. 5, step S306 includes:
step S3061, all target objects of the next frame are obtained by tracking the KCF algorithm, and the first target object is obtained.
Step S3062, all target objects of the next frame are obtained through R2CNN detection, and a second target object is obtained.
Step S3063: it is determined whether the first target objects and the second target objects are consistent. If consistent, proceed to step S3064; if inconsistent, proceed to step S3065.
Step S3064: if they are consistent, the object fusion algorithm is executed, i.e., the updated object information (the target objects and their positions for the next frame) is obtained.
As shown in fig. 6, step S3064 includes:
step S30641 acquires information of an existing target object. The existing information of the target objects includes the counted target objects and corresponding positions, identifications and the like.
Step S30642, obtain the current target object and the corresponding position of the current frame.
Step S30643, calculate the minimum distance and IOU of the existing target object and the current target object.
Step S30644: the target objects are fused according to the minimum distance and the IOU between the existing target objects and the current target objects; that is, identical target objects are merged and their position information is updated.
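A sketch of this fusion step, passing in the iou() helper from the earlier sketch; the distance and IOU thresholds are invented:
```python
# A sketch of object fusion: each current box is matched to the nearest
# existing object; a close match keeps the existing ID and refreshes its
# position, anything else is registered as a new object.

import math

def fuse(existing, current, iou, max_dist=50.0, min_iou=0.3):
    """existing: dict id -> (x, y, w, h); current: list of (x, y, w, h) boxes."""
    next_id = max(existing, default=-1) + 1
    for box in current:
        nearest = min(existing,
                      key=lambda i: math.dist(existing[i][:2], box[:2]),
                      default=None)
        if (nearest is not None
                and math.dist(existing[nearest][:2], box[:2]) < max_dist
                and iou(existing[nearest], box) > min_iou):
            existing[nearest] = box        # same object: update its position
        else:
            existing[next_id] = box        # new object: assign a fresh ID
            next_id += 1
    return existing
```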
Step S3065: if they are inconsistent, the false-positive suppression algorithm is executed.
Referring to fig. 7, step S3065 includes:
step S30651, a first target object and a corresponding location are acquired.
Step S30652, a second target object and corresponding location are acquired.
Step S30653, calculates the minimum distance and IOU between the first target object and the second target object and the existing target object.
Step S30654: it is determined, based on the minimum distance and the IOU, whether the first target object and the second target object are effective objects. If an object is effective, proceed to step S30655; otherwise, proceed to step S30656.
Step S30655: the effective objects are fused. That is, when the distance between a first or second target object and an existing target object is smaller than the preset distance and the IOU is larger than the preset IOU, the effective object is assigned the identifier of the existing target object, and the position of the existing target object is updated according to the positions of the first and second target objects.
Step S30656: the ineffective objects are removed. That is, first and second target objects whose distance to every existing target object is greater than or equal to the preset distance, or whose IOU is smaller than or equal to the preset IOU, are removed.
Step S307: the counter decision. As the counting line moves with the inspection vehicle, the counter is incremented (+1) when a target object is detected crossing the counting line in the direction of the vehicle's travel, and decremented (-1) when a target object crosses back against it.
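A sketch of this counter decision for a horizontal counting line, comparing each tracked object's position against the line between consecutive frames (coordinate conventions assumed):
```python
# A sketch of step S307: +1 for an object crossing the counting line in the
# counted direction, -1 for an object crossing back.

def counter_step(counter, prev_y, curr_y, line_y):
    """prev_y / curr_y: dicts id -> y center; line_y: counting-line position."""
    for oid, y in curr_y.items():
        if oid not in prev_y:
            continue                          # cannot tell whether it crossed
        if prev_y[oid] < line_y <= y:
            counter += 1                      # crossed in the counted direction
        elif y < line_y <= prev_y[oid]:
            counter -= 1                      # crossed back: uncount it
    return counter

print(counter_step(10, {0: 300, 1: 340}, {0: 330, 1: 350}, line_y=320))   # 11
```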
Step S308: the counter changes, and the file is updated accordingly.
Step S309: it is determined whether the current frame is the last frame. If not, return to step S305; if yes, proceed to step S310.
Step S310: the counter value and the number of target objects outside the counting line in the last frame are added; the sum is the final result.
The invention can effectively solve the problem of counting target objects in a borderless scene and prevents large-scale false detections and missed detections.
Fig. 2 and figs. 4-7 are schematic flowcharts of a statistical method for a target object in one embodiment. It should be understood that although the steps in the flowcharts of figs. 2 and 4-7 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in figs. 2 and 4-7 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided a statistical apparatus 200 of a target object, including:
the data obtaining module 201 is configured to obtain a current frame from video sequence frames captured at a plurality of different positions in a preset area, where the video sequence frames include a target object.
The identification module 202 is configured to identify the target objects in the current frame to obtain each current target object and its corresponding position.
The previous frame data obtaining module 203 is configured to obtain a position of the target object in the previous frame and a previous statistical value.
The data updating module 204 is configured to update the previous statistical value according to the position of each current target object and the positions of the target objects of the previous frame to obtain the current statistical value.
The statistics module 205 is configured to take the current statistical value as the target statistical value when the current frame is the last frame of the video sequence frames.
In an embodiment, the statistical apparatus 200 for target objects further includes:
the parameter acquisition module is used for acquiring a region division parameter, and the region division parameter is used for dividing the video sequence frame into a first image region and a second image region.
The data updating module 204 is configured to update the previous statistical value according to the position of each current target object, the first image area, and the position of the target object in the previous frame, so as to obtain a current statistical value.
The statistics module 205 is configured to, when the current frame is the last frame of the video sequence frames, calculate the number of target objects in the preset area according to the current statistical value, the second image area and the position of each current target object to obtain the target statistical value.
In an embodiment, the statistics module 205 is further configured to, when the current frame is not the last frame of the video sequence frames, obtain the next frame of the current frame, take the next frame as the current frame, and return to identifying the target objects in the current frame to obtain the current target objects and their corresponding positions, until the current frame is the last frame of the video sequence frames.
In one embodiment, the data updating module 204 is specifically configured to screen out valid objects from the current target object according to the position of the target object in the previous frame and the position of the current target object; and updating the last statistical value corresponding to the first image area according to the position of the effective object and the first image area of the current frame to obtain the current statistical value.
In one embodiment, the data updating module 204 is further configured to calculate the degree of difference between the positions of the target objects of the previous frame and the position of the current target object, and to take a current target object whose degree of difference is smaller than the preset degree of difference as an effective object.
In one embodiment, the data updating module 204 is further configured to count the number of target objects located in the second image region of the previous frame image and in the first image region of the current frame to obtain a first updated value, count the number of target objects located in the first image region of the previous frame image and in the second image region of the current frame to obtain a second updated value, and calculate the weighted sum of the previous statistical value, the first updated value and the second updated value to obtain the current statistical value.
In an embodiment, the identification module 202 is specifically configured to identify the target objects in the current frame with a target tracking algorithm to obtain first target objects and their corresponding positions, identify the target objects in the current frame with a target detection algorithm to obtain second target objects and their corresponding positions, judge whether the first target objects and the second target objects are consistent, and, if so, take the first target objects or the second target objects as the current target objects.
In one embodiment, the identification module 202 is further configured to, when the first target objects and the second target objects are inconsistent, determine whether the inconsistent target object is located in the preset boundary area of the current frame; if it is, the union of the first target objects and the second target objects is taken as the current target objects, and if it is not, their intersection is taken as the current target objects.
In an embodiment, the statistics module 205 is specifically configured to count, according to the position of each current target object, the number of current target objects located in the second image area of the current frame to obtain a second statistical value, and to calculate the sum of the second statistical value and the current statistical value to obtain the target statistical value.
FIG. 9 is a diagram illustrating an internal structure of a computer device in one embodiment. The computer device may specifically be the terminal 110 (or the server 120) in fig. 1. As shown in fig. 9, the computer apparatus includes a processor, a memory, a network interface, an input device, and a display screen connected via a system bus. Wherein the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement a statistical method for a target object. The internal memory may also have a computer program stored therein, which when executed by the processor, causes the processor to perform a statistical method of the target object. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 9 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, the statistical apparatus of the target object provided in the present application may be implemented in the form of a computer program, and the computer program may be run on a computer device as shown in fig. 9. The memory of the computer device may store the program modules constituting the statistical apparatus of the target object, such as the data acquisition module 201, the identification module 202, the previous frame data acquisition module 203, the data updating module 204 and the statistics module 205 shown in fig. 8. The computer program constituted by these program modules causes the processor to execute the steps of the target object statistical method of the embodiments of the present application described in this specification.
For example, the computer device shown in fig. 9 can perform, through the data acquisition module 201 of the statistical apparatus shown in fig. 8, the acquisition of a current frame from video sequence frames shot at a plurality of different positions within a preset area, the video sequence frames containing the target objects. The computer device can perform, through the identification module 202, the identification of the target objects in the current frame to obtain each current target object and its corresponding position. It can perform, through the previous frame data acquisition module 203, the acquisition of the positions of the target objects in the previous frame and of the previous statistical value. It can, through the data updating module 204, update the previous statistical value according to the position of each current target object and the positions of the target objects of the previous frame to obtain the current statistical value. And it can, through the statistics module 205, take the current statistical value as the target statistical value when the current frame is the last frame of the video sequence frames.
In one embodiment, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program: acquiring a current frame from video sequence frames shot at a plurality of different positions in a preset area, wherein the video sequence frames contain target objects; identifying the target objects in the current frame to obtain each current target object and its corresponding position; acquiring the positions of the target objects in the previous frame and the previous statistical value; updating the previous statistical value according to the position of each current target object and the positions of the target objects of the previous frame to obtain the current statistical value; and, when the current frame is the last frame of the video sequence frames, taking the current statistical value as the target statistical value.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring a region division parameter, wherein the region division parameter is used for dividing the video sequence frames into a first image region and a second image region. In this case, updating the previous statistical value according to the position of each current target object and the positions of the target objects of the previous frame to obtain the current statistical value includes: updating the previous statistical value according to the position of each current target object, the first image area and the positions of the target objects of the previous frame to obtain the current statistical value; and taking the current statistical value as the target statistical value when the current frame is the last frame of the video sequence frames includes: when the current frame is the last frame of the video sequence frames, calculating the number of target objects in the preset area according to the current statistical value, the second image area and the positions of all current target objects to obtain the target statistical value.
In one embodiment, the processor, when executing the computer program, further performs the steps of: when the current frame is not the last frame of the video sequence frames, acquiring the next frame of the current frame, taking the next frame as the current frame, and identifying the target objects in the current frame to obtain the current target objects and their corresponding positions, until the current frame is the last frame of the video sequence frames.
In one embodiment, updating the previous statistical value according to the position of each current target object, the first image area and the positions of the target objects of the previous frame to obtain the current statistical value includes: screening effective objects out of the current target objects according to the positions of the target objects of the previous frame and the positions of the current target objects; and updating the previous statistical value corresponding to the first image area according to the positions of the effective objects and the first image area of the current frame to obtain the current statistical value.
In one embodiment, screening effective objects out of the current target objects according to the positions of the target objects of the previous frame and the positions of the current target objects includes: calculating the degree of difference between the positions of the target objects of the previous frame and the position of the current target object; and taking a current target object whose degree of difference is smaller than the preset degree of difference as an effective object.
In one embodiment, updating the last statistical value corresponding to the first image area according to the position of the effective object and the first image area of the current frame to obtain the current statistical value includes: counting the number of target objects that were located in the second image area in the previous frame and are located in the first image area of the current frame to obtain a first update value; counting the number of target objects that were located in the first image area in the previous frame and are located in the second image area of the current frame to obtain a second update value; and calculating the weighted sum of the last statistical value, the first update value and the second update value to obtain the current statistical value.
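A sketch of this weighted-sum update, assuming the effective objects have already been matched to their previous-frame positions and that the weights are (1, +1, -1), so the statistic counts net entries into the first image area; the pairing and the weight values are assumptions, not fixed by the embodiment:

```python
def update_first_region_count(last_stat, matched_pairs, in_first_region,
                              w_prev=1.0, w_first=1.0, w_second=-1.0):
    """Weighted-sum update of the first-image-area statistic.
    matched_pairs: (prev_pos, curr_pos) for each effective object."""
    # First update value: objects that moved from the second image
    # area into the first image area between the two frames.
    first_update = sum(1 for prev, curr in matched_pairs
                       if not in_first_region(prev) and in_first_region(curr))
    # Second update value: objects that left the first image area.
    second_update = sum(1 for prev, curr in matched_pairs
                        if in_first_region(prev) and not in_first_region(curr))
    return w_prev * last_stat + w_first * first_update + w_second * second_update
```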
In one embodiment, identifying the target object in the current frame to obtain the current target object and the corresponding position includes: identifying the target object in the current frame by adopting a target tracking algorithm to obtain a first target object and a corresponding position; identifying the target object in the current frame by adopting a target detection algorithm to obtain a second target object and a corresponding position; judging whether the number of the first target objects is consistent with that of the second target objects; and when the numbers are consistent, taking the first target object as the current target object.
In one embodiment, the processor, when executing the computer program, further performs the steps of: when the numbers of the first target objects and the second target objects are inconsistent, judging whether the inconsistent target objects between the first target objects and the second target objects are located in a preset boundary area of the current frame; when they are located in the boundary area, taking the union of the first target objects and the second target objects as the current target objects; and when they are not, taking the intersection of the first target objects and the second target objects as the current target objects.
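One hedged reading of this tracking/detection fusion, assuming targets can be compared as sets (for example by track identity or quantized position; the embodiments do not say how the two result lists are aligned):

```python
def fuse_targets(tracked, detected, in_boundary_area):
    """Combine tracker and detector outputs. tracked and detected are
    sets of hashable targets; in_boundary_area(t) is True when t lies
    in the preset boundary area of the current frame."""
    if len(tracked) == len(detected):
        return set(tracked)                  # counts agree: keep the tracking result
    disputed = set(tracked) ^ set(detected)  # targets the two disagree on
    if any(in_boundary_area(t) for t in disputed):
        return set(tracked) | set(detected)  # near the frame edge: be inclusive
    return set(tracked) & set(detected)      # elsewhere: treat as noise, be strict
```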
In one embodiment, calculating the number of target objects in a preset area according to the current statistical value, the second image area and the position of each current target object to obtain a target statistical value, including: counting the number of the current target objects in a second image area of the current frame according to the position of each current target object to obtain a second statistical value; and calculating the sum of the second statistical value and the current statistical value to obtain a target statistical value.
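Under the same assumptions, the last-frame combination reduces to a short sum:

```python
def target_statistic(current_stat, curr_positions, in_first_region):
    """Target statistic on the last frame: the accumulated
    first-image-area statistic plus the targets currently counted
    in the second image area."""
    second_stat = sum(1 for p in curr_positions if not in_first_region(p))
    return current_stat + second_stat
```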
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of: acquiring a current frame and corresponding region division parameters from video sequence frames shot at a plurality of different positions in a preset region, wherein the video sequence frames comprise target objects, and the region division parameters divide the video sequence frames into a first image region and a second image region; identifying a target object in a current frame to obtain the current target object and a corresponding position; acquiring the position of a target object in the previous frame and a previous statistical value; updating the last statistical value according to the position of each current target object, the first image area and the position of the target object of the last frame to obtain a current statistical value; and when the current frame is the last frame in the video sequence frame, calculating the number of target objects in a preset area according to the current statistical value, the second image area and the positions of all current target objects to obtain a target statistical value.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring a region division parameter, wherein the region division parameter is used for dividing the video sequence frames into a first image area and a second image area. Here, updating the last statistical value according to the position of each current target object and the position of the target object of the previous frame to obtain a current statistical value includes: updating the last statistical value according to the position of each current target object, the first image area and the position of the target object of the previous frame to obtain the current statistical value; and taking the current statistical value as a target statistical value when the current frame is the last frame in the video sequence frames includes: calculating the number of target objects in the preset area according to the current statistical value, the second image area and the positions of all current target objects to obtain the target statistical value.
In one embodiment, the computer program when executed by the processor further performs the steps of: and when the current frame is not the last frame in the video sequence frame, acquiring the next frame of the current frame, taking the next frame as the current frame, identifying the target object in the current frame, and obtaining the current target object and the corresponding position until the current frame is the last frame in the video sequence frame.
In one embodiment, updating the previous statistic according to the position of each current target object, the first image area, and the position of the target object in the previous frame, and obtaining the current statistic includes: screening effective objects from the current target object according to the position of the target object of the previous frame and the position of the current target object; and updating the last statistical value corresponding to the first image area according to the position of the effective object and the first image area of the current frame to obtain the current statistical value.
In one embodiment, screening out effective objects from the current target objects according to the position of the target object of the previous frame and the position of the current target object includes: calculating the difference degree between the position of the target object of the previous frame and the position of the current target object; and taking the current target objects whose difference degree is smaller than a preset difference degree as the effective objects.
In one embodiment, updating the last statistical value corresponding to the first image area according to the position of the effective object and the first image area of the current frame to obtain the current statistical value includes: counting the number of target objects that were located in the second image area in the previous frame and are located in the first image area of the current frame to obtain a first update value; counting the number of target objects that were located in the first image area in the previous frame and are located in the second image area of the current frame to obtain a second update value; and calculating the weighted sum of the last statistical value, the first update value and the second update value to obtain the current statistical value.
In one embodiment, identifying the target object in the current frame to obtain the current target object and the corresponding position includes: identifying the target object in the current frame by adopting a target tracking algorithm to obtain a first target object and a corresponding position; identifying the target object in the current frame by adopting a target detection algorithm to obtain a second target object and a corresponding position; judging whether the number of the first target objects is consistent with that of the second target objects; and when the numbers are consistent, taking the first target object as the current target object.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: when the numbers of the first target objects and the second target objects are inconsistent, judging whether the inconsistent target objects between the first target objects and the second target objects are located in a preset boundary area of the current frame; when they are located in the boundary area, taking the union of the first target objects and the second target objects as the current target objects; and when they are not, taking the intersection of the first target objects and the second target objects as the current target objects.
In one embodiment, calculating the number of target objects in a preset area according to the current statistical value, the second image area and the position of each current target object to obtain a target statistical value, including: counting the number of the current target objects in a second image area of the current frame according to the position of each current target object to obtain a second statistical value; and calculating the sum of the second statistical value and the current statistical value to obtain a target statistical value.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It is noted that, in this document, relational terms such as "first" and "second" may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises that element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (11)

1. A statistical method of a target object, the method comprising:
acquiring a current frame from video sequence frames shot at a plurality of different positions in a preset area, wherein the video sequence frames comprise target objects;
identifying a target object in the current frame to obtain a current target object and a corresponding position;
acquiring the position of a target object in the previous frame and a previous statistical value;
acquiring a region division parameter, wherein the region division parameter is used for dividing the video sequence frame into a first image region and a second image region;
updating the previous statistic value according to the position of each current target object and the position of the target object of the previous frame to obtain a current statistic value;
the updating the previous statistic value according to the position of each current target object and the position of the target object of the previous frame to obtain a current statistic value includes: updating the previous statistic value according to the position of each current target object, the first image area and the position of the target object of the previous frame to obtain a current statistic value;
when the current frame is the last frame in the video sequence frames, taking the current statistic value as a target statistic value;
when the current frame is the last frame in the video sequence frames, taking the current statistic value as a target statistic value, including: and when the current frame is the last frame in the video sequence frames, calculating the number of the target objects in the preset area according to the current statistical value, the second image area and the positions of all the current target objects to obtain a target statistical value.
2. The method of claim 1, further comprising:
and when the current frame is not the last frame in the video sequence frames, acquiring the next frame of the current frame, taking the next frame as the current frame, and identifying the target object in the current frame to obtain the current target object and the corresponding position.
3. The method of claim 1, wherein said updating the previous statistic according to the position of each of the current target object, the first image area, and the position of the target object in the previous frame to obtain a current statistic comprises:
screening effective objects from the current target object according to the position of the target object of the previous frame and the position of the current target object;
and updating the last statistical value corresponding to the first image area according to the position of the effective object and the first image area of the current frame to obtain the current statistical value.
4. The method of claim 3, wherein the screening out valid objects from the current target object according to the position of the target object of the previous frame and the position of the current target object comprises:
calculating the difference degree between the position of the target object of the previous frame and the position of the current target object;
and taking the current target object with the difference degree smaller than a preset difference degree as the effective object.
5. The method of claim 3, wherein the updating the last statistic corresponding to the first image area according to the position of the effective object and the first image area of the current frame to obtain the current statistic comprises:
counting the number of target objects which are positioned in the second image area in the previous frame image and are positioned in the first image area of the current frame to obtain a first update value;
counting the number of target objects which were positioned in the first image area in the previous frame image and are positioned in the second image area of the current frame to obtain a second update value;
and calculating the weighted sum of the last statistical value, the first updated value and the second updated value to obtain the current statistical value.
6. The method of claim 1, wherein the identifying the target object in the current frame, and obtaining the current target object and the corresponding position comprises:
identifying a target object in the current frame by adopting a target tracking algorithm to obtain a first target object and a corresponding position;
identifying a target object in the current frame by adopting a target detection algorithm to obtain a second target object and a corresponding position;
judging whether the number of the first target objects is consistent with that of the second target objects;
and when the numbers are consistent, taking the first target object or the second target object as the current target object.
7. The method of claim 6, further comprising:
when the numbers of the first target objects and the second target objects are inconsistent, judging whether the inconsistent target objects between the first target objects and the second target objects are located in a preset boundary area of the current frame;
when they are located in the boundary area, taking the union of the first target objects and the second target objects as the current target objects;
and when they are not, taking the intersection of the first target objects and the second target objects as the current target objects.
8. The method according to claim 1, wherein said calculating the number of the target objects in the preset area according to the current statistical value, the second image area and the position of each of the current target objects to obtain a target statistical value comprises:
counting the number of the current target objects in the second image area of the current frame according to the position of each current target object to obtain a second statistical value;
and calculating the sum of the second statistic value and the current statistic value to obtain the target statistic value.
9. A statistical apparatus of a target object, the apparatus comprising:
the data acquisition module is used for acquiring a current frame from video sequence frames shot at a plurality of different positions in a preset area, wherein the video sequence frames comprise target objects;
the identification module is used for identifying the target object in the current frame to obtain the current target object and the corresponding position;
the previous frame data acquisition module is used for acquiring the position of the target object in the previous frame and the previous statistical value;
the parameter acquisition module is used for acquiring a region division parameter, and the region division parameter is used for dividing the video sequence frame into a first image region and a second image region;
a data updating module, configured to update the previous statistical value according to the position of each current target object and the position of the target object in the previous frame, so as to obtain a current statistical value;
the data updating module is specifically configured to update the previous statistical value according to the position of each current target object, the first image area, and the position of the target object of the previous frame, so as to obtain a current statistical value;
a statistic module, configured to take the current statistic value as a target statistic value when the current frame is a last frame in the video sequence frames;
the counting module is specifically configured to, when the current frame is a last frame in the video sequence frame, calculate the number of the target objects in the preset area according to the current statistical value, the second image area, and the position of each current target object, so as to obtain a target statistical value.
10. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any of claims 1 to 8 are implemented when the computer program is executed by the processor.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 8.
CN201910960300.7A 2019-10-10 2019-10-10 Target object statistical method and device, computer equipment and storage medium Active CN110838134B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910960300.7A CN110838134B (en) 2019-10-10 2019-10-10 Target object statistical method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110838134A CN110838134A (en) 2020-02-25
CN110838134B true CN110838134B (en) 2020-09-29

Family

ID=69575249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910960300.7A Active CN110838134B (en) 2019-10-10 2019-10-10 Target object statistical method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110838134B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111383455A (en) * 2020-03-11 2020-07-07 上海眼控科技股份有限公司 Traffic intersection object flow statistical method, device, computer equipment and medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101980245A (en) * 2010-10-11 2011-02-23 北京航空航天大学 Adaptive template matching-based passenger flow statistical method
CN104637058A (en) * 2015-02-06 2015-05-20 武汉科技大学 Image information-based client flow volume identification statistic method
CN105574499A (en) * 2015-12-15 2016-05-11 东华大学 Method and system for detecting and counting number of people based on SOC
CN109815936A (en) * 2019-02-21 2019-05-28 深圳市商汤科技有限公司 A kind of target object analysis method and device, computer equipment and storage medium
CN110222579A (en) * 2019-05-09 2019-09-10 华南理工大学 A kind of the video object method of counting of the combination characteristics of motion and target detection
CN110287907A (en) * 2019-06-28 2019-09-27 北京海益同展信息科技有限公司 A kind of method for checking object and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6475718B2 (en) * 2013-07-30 2019-02-27 プレジデント アンド フェローズ オブ ハーバード カレッジ Quantitative DNA-based imaging and super-resolution imaging

Also Published As

Publication number Publication date
CN110838134A (en) 2020-02-25

Similar Documents

Publication Publication Date Title
CN109035299B (en) Target tracking method and device, computer equipment and storage medium
US10417503B2 (en) Image processing apparatus and image processing method
CN108985162A (en) Object real-time tracking method, apparatus, computer equipment and storage medium
US8995714B2 (en) Information creation device for estimating object position and information creation method and program for estimating object position
CN109614948B (en) Abnormal behavior detection method, device, equipment and storage medium
CN107633208B (en) Electronic device, the method for face tracking and storage medium
CN109740416B (en) Target tracking method and related product
US20150104067A1 (en) Method and apparatus for tracking object, and method for selecting tracking feature
CN112489090B (en) Method for tracking target, computer readable storage medium and computer device
CN111583118B (en) Image stitching method and device, storage medium and electronic equipment
CN109035287B (en) Foreground image extraction method and device and moving vehicle identification method and device
CN109308704B (en) Background eliminating method, device, computer equipment and storage medium
CN111144398A (en) Target detection method, target detection device, computer equipment and storage medium
WO2022206680A1 (en) Image processing method and apparatus, computer device, and storage medium
CN112989962A (en) Track generation method and device, electronic equipment and storage medium
CN110838134B (en) Target object statistical method and device, computer equipment and storage medium
CN113284167B (en) Face tracking detection method, device, equipment and medium
CN111583159B (en) Image complement method and device and electronic equipment
CN113505643A (en) Violation target detection method and related device
CN113489897A (en) Image processing method and related device
CN113313189A (en) Shielding detection method and device and electronic equipment
CN113052019A (en) Target tracking method and device, intelligent equipment and computer storage medium
WO2022206679A1 (en) Image processing method and apparatus, computer device and storage medium
KR101407394B1 (en) System for abandoned and stolen object detection
CN116266358A (en) Target shielding detection and tracking recovery method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

CP03 Change of name, title or address

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Patentee after: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 100176

Patentee before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.