CN111563479A - Co-pedestrian deduplication method, group analysis method, apparatus, and electronic device - Google Patents


Info

Publication number: CN111563479A
Authority: CN (China)
Prior art keywords: person, time, snapshot, image, determining
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202010455227.0A
Other languages: Chinese (zh)
Other versions: CN111563479B (en)
Inventors: 李晓通, 李蔚琳, 梁栋, 葛飞剑, 付豪
Current assignee: Shenzhen Sensetime Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original assignee: Shenzhen Sensetime Technology Co Ltd
Legal events:
    • Application filed by Shenzhen Sensetime Technology Co Ltd
    • Priority to CN202010455227.0A
    • Publication of CN111563479A
    • Application granted
    • Publication of CN111563479B
    • Legal status: Active
    • Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention provide a co-pedestrian deduplication method, a group analysis method, an apparatus, and an electronic device. The co-pedestrian deduplication method includes: acquiring an image including the face of a first person; acquiring, according to the image, an image class including the face of the first person; determining the trajectory of the first person according to the snapshot time and snapshot camera of each image in the image class, the trajectory including at least one track point; determining the co-pedestrians of the first person according to the trajectory; and, in the case that a second person appears in at least one image snapshotted at a first track point, determining that the second person and the first person travelled together once at the first track point, where the second person is any one of the co-pedestrians and the first track point is any one of the at least one track point. Embodiments of the invention can improve the accuracy of the co-travel count.

Description

Co-pedestrian deduplication method, group analysis method, apparatus, and electronic device
Technical Field
The invention relates to the field of computer technology, and in particular to a co-pedestrian deduplication method, a group analysis method, an apparatus, and an electronic device.
Background
With the continued development of electronic technology, surveillance cameras are used in more and more settings. With the continued development of face recognition technology, a person's trajectory can be obtained by applying face recognition to the images or videos snapshotted by surveillance cameras; the person's co-pedestrians can then be determined from that trajectory, along with the number of times each co-pedestrian travelled together with the person. At present, the co-travel count of each co-pedestrian is determined as the number of times that co-pedestrian appears in the person's trajectory. Because the same companion may be snapshotted many times in quick succession, this inflates the determined co-travel count and reduces its accuracy.
Disclosure of Invention
Embodiments of the invention provide a co-pedestrian deduplication method, a group analysis method, and related apparatus, which are used to improve the accuracy of the co-travel count.
A first aspect provides a co-pedestrian deduplication method, including: acquiring an image including the face of a first person; acquiring, according to the image, an image class including the face of the first person; determining the trajectory of the first person according to the snapshot time and snapshot camera of each image in the image class, the trajectory including at least one track point; determining the co-pedestrians of the first person according to the trajectory; and, in the case that a second person appears in at least one image snapshotted at a first track point, determining that the second person and the first person travelled together once at the first track point, where the second person is any one of the co-pedestrians and the first track point is any one of the at least one track point.
In embodiments of the invention, at any one track point of the target person's trajectory, a co-pedestrian is considered to have travelled together with the target person exactly once at that track point, no matter how many times the co-pedestrian appears there. This avoids counting multiple snapshots of the same person taken within a short time as multiple co-travels, and so improves the accuracy of the co-travel count.
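As an illustration of this counting rule, the following Python sketch counts co-travels with at most one count per track point; the function name and data layout (track-point ids mapped to the person ids seen there) are illustrative assumptions, not taken from the patent.
```python
from collections import defaultdict

def count_co_travels(track_points, appearances):
    """Count co-travels per companion, at most one per track point.

    track_points: iterable of track-point ids in the target person's
        trajectory.
    appearances: dict mapping a track-point id to the list of person
        ids seen in the images snapshotted there (duplicates allowed).
    Returns a dict mapping person id -> number of track points at
    which that person appears.
    """
    counts = defaultdict(int)
    for tp in track_points:
        # set() collapses repeated snapshots of the same person at
        # this track point into a single co-travel event.
        for person in set(appearances.get(tp, [])):
            counts[person] += 1
    return dict(counts)
```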
As a possible implementation, the acquiring, according to the image, of the image class including the face of the first person includes: extracting the face feature of the face from the image; determining the label of the face feature; and acquiring the image class corresponding to the label according to the correspondence between labels and image classes, to obtain the image class including the face of the first person.
In embodiments of the invention, the images including the face of the same person are clustered into one image class in advance, the face features are labelled, and a correspondence is established between each image class and the label of its face feature, so that the image class can be looked up quickly from a face feature; this improves the efficiency of co-pedestrian deduplication.
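A minimal sketch of this lookup path, assuming the feature extractor, the label matcher, and the label-to-class mapping already exist (all names here are hypothetical):
```python
def get_image_class(image, extract_feature, match_label, label_to_class):
    """Fetch the pre-clustered image class for the face in `image`.

    extract_feature: maps an image to a face-feature vector.
    match_label: maps a feature vector to the label of its matching
        stored feature (e.g., by nearest-neighbour comparison).
    label_to_class: precomputed label -> image-class mapping.
    """
    feature = extract_feature(image)
    label = match_label(feature)
    return label_to_class.get(label)  # None if the label is unknown
```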
As a possible implementation, the determining of the trajectory of the first person according to the snapshot time and snapshot camera of each image in the image class includes: acquiring the snapshot time and snapshot camera of each image in the image class; classifying the images in the image class by snapshot camera to obtain M classes of images, where M is the number of snapshot cameras; in the case that the current class among the M classes includes one image, determining that a first snapshot camera together with a first time period is a track point of the first person, where the first snapshot camera is the snapshot camera corresponding to the current class, the first time period runs from a threshold time before the first snapshot time to a threshold time after it, and the first snapshot time is the snapshot time of that image; and determining the trajectory of the first person from the track points of the first person.
In embodiments of the invention, when determining the target person's trajectory, the time interval between snapshot times need not be considered for images in the image class snapshotted by different cameras.
As a possible implementation, in the case that the current class includes two images, a time interval between a second snapshot time and a third snapshot time is calculated to obtain a first time interval, where the second snapshot time is the snapshot time of one of the two images and the third snapshot time is the snapshot time of the other; in the case that the first time interval is greater than a threshold, the first snapshot camera together with a second time period is determined to be one track point of the first person and the first snapshot camera together with a third time period another track point of the first person, where the second time period runs from the threshold time before the second snapshot time to the threshold time after it, and the third time period runs from the threshold time before the third snapshot time to the threshold time after it; and, in the case that the first time interval is not greater than the threshold, the first snapshot camera together with a fourth time period is determined to be one track point of the first person, where the fourth time period runs from the threshold time before the second snapshot time to the threshold time after the third snapshot time, the second snapshot time being earlier than the third snapshot time.
In embodiments of the invention, when the interval between the snapshot times of two images from the same camera is large, two track points of the target person are determined, one per snapshot time; this avoids treating images snapshotted far apart in time as images of a single track point, which would undercount co-travels by merging two counting opportunities into one. When the interval is small, a single track point is determined from the two snapshot times; this keeps track points at the same position from overlapping in time, avoids attributing an image snapshotted at one time to a different track point, and avoids treating several snapshots of the target person taken by the same camera (that is, at the same position) within a short time as snapshots at several track points, which would overcount by counting the same person repeatedly at different track points. Both cases further improve the accuracy of the co-travel count; a sketch follows.
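The sketch below implements the two-image case under these rules; the tuple representation of a track point as (camera, period start, period end) is an assumption for illustration. For example, with threshold=timedelta(seconds=30), two snapshots ten seconds apart merge into one track point, while snapshots hours apart yield two.
```python
def track_points_for_two(camera, t1, t2, threshold):
    """Derive track points from two snapshots by the same camera.

    t1, t2: datetime snapshot times, in either order.
    threshold: timedelta separating "same visit" from "two visits".
    Returns a list of (camera, start, end) track points.
    """
    first, second = sorted((t1, t2))
    if second - first > threshold:
        # Far apart: one track point per snapshot time.
        return [(camera, first - threshold, first + threshold),
                (camera, second - threshold, second + threshold)]
    # Close together: one merged track point spanning both snapshots.
    return [(camera, first - threshold, second + threshold)]
```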
As a possible implementation, in the case that the current class includes N images, the snapshot times of the N images are sorted chronologically to obtain a sorted list, where N is an integer greater than or equal to 3; the time interval between each pair of adjacent snapshot times in the sorted list is calculated; the N snapshot times are divided into K groups according to the intervals, where K is an integer less than or equal to N and the interval between any two of the K groups of snapshot times is greater than the threshold; and K track points of the first person are determined from the K groups of snapshot times.
In embodiments of the invention, when the intervals between adjacent snapshot times of several images from the same camera are all small, one track point of the target person is determined from those snapshot times. As in the two-image case, this keeps track points at the same position from overlapping in time, avoids attributing an image snapshotted at one time to a different track point, and avoids treating several snapshots of the target person taken by the same camera (that is, the same position) within a short time as snapshots at several track points; the same person in one image is therefore not counted repeatedly at different track points, which further improves the accuracy of the co-travel count.
As a possible implementation, the determining of the co-pedestrians of the first person according to the trajectory includes: determining the persons, other than the first person, appearing in the images snapshotted at all track points of the trajectory as the co-pedestrians of the first person.
As a possible implementation, the method further includes: determining, for each co-pedestrian, the number of times the co-pedestrian and the first person travelled together.
A second aspect provides a group analysis method, including: determining the co-travel count of each co-pedestrian of a first person with the first person, the count being determined according to the method provided above; sorting the co-pedestrians in descending order of co-travel count to obtain a first sorted list, or in ascending order of co-travel count to obtain a second sorted list; and determining the first L persons in the first sorted list, or the last L persons in the second sorted list, as the group of the first person, L being an integer greater than 1.
In embodiments of the invention, when counting how many times a co-pedestrian travelled together with the target person, the co-pedestrian is considered to have travelled together with the target person once per track point of the trajectory, no matter how many times the co-pedestrian appears at that track point. This avoids inflating the co-travel count with multiple snapshots taken within a short time, improves the accuracy of the count, and in turn improves the accuracy of the group analysis.
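Combined with the deduplicated counts sketched above, the group determination of the second aspect reduces to a sort and a slice; a sketch under the same assumed data layout:
```python
def group_of(co_travel_counts, L):
    """Return the L companions with the highest co-travel counts.

    co_travel_counts: dict mapping person id -> deduplicated
        co-travel count; L: group size, an integer greater than 1.
    Sorting ascending and taking the last L persons is equivalent.
    """
    ranked = sorted(co_travel_counts.items(),
                    key=lambda item: item[1], reverse=True)
    return [person for person, _ in ranked[:L]]
```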
A third aspect provides a co-pedestrian deduplication apparatus, including: a first acquisition unit configured to acquire an image including the face of a first person; a second acquisition unit configured to acquire, according to the image, an image class including the face of the first person; a first determining unit configured to determine the trajectory of the first person according to the snapshot time and snapshot camera of each image in the image class, the trajectory including at least one track point; a second determining unit configured to determine the co-pedestrians of the first person according to the trajectory; and a third determining unit configured to determine, in the case that a second person appears in at least one image snapshotted at a first track point, that the second person and the first person travelled together once at the first track point, where the second person is any one of the co-pedestrians and the first track point is any one of the at least one track point.
As a possible implementation, the second acquisition unit is specifically configured to: extract the face feature of the face from the image; determine the label of the face feature; and acquire the image class corresponding to the label according to the correspondence between labels and image classes, to obtain the image class including the face of the first person.
As a possible implementation, the first determining unit is specifically configured to: acquire the snapshot time and snapshot camera of each image in the image class; classify the images in the image class by snapshot camera to obtain M classes of images, where M is the number of snapshot cameras; in the case that the current class among the M classes includes one image, determine that the first snapshot camera together with the first time period is a track point of the first person, where the first snapshot camera is the snapshot camera corresponding to the current class, the first time period runs from the threshold time before the first snapshot time to the threshold time after it, and the first snapshot time is the snapshot time of that image; and determine the trajectory of the first person from the track points of the first person.
As a possible implementation, the first determining unit is further specifically configured to: in the case that the current class includes two images, calculate the time interval between the second snapshot time and the third snapshot time to obtain the first time interval, where the second snapshot time is the snapshot time of one of the two images and the third snapshot time is the snapshot time of the other; in the case that the first time interval is greater than the threshold, determine that the first snapshot camera together with the second time period is one track point of the first person and the first snapshot camera together with the third time period another track point of the first person, where the second time period runs from the threshold time before the second snapshot time to the threshold time after it, and the third time period runs from the threshold time before the third snapshot time to the threshold time after it; and, in the case that the first time interval is not greater than the threshold, determine that the first snapshot camera together with the fourth time period is a track point of the first person, where the fourth time period runs from the threshold time before the second snapshot time to the threshold time after the third snapshot time, the second snapshot time being earlier than the third snapshot time.
As a possible implementation, the first determining unit is further specifically configured to: in the case that the current class includes N images, sort the snapshot times of the N images chronologically to obtain a sorted list, where N is an integer greater than or equal to 3; calculate the time interval between each pair of adjacent snapshot times in the sorted list; divide the N snapshot times into K groups according to the intervals, where K is an integer less than or equal to N and the interval between any two of the K groups of snapshot times is greater than the threshold; and determine K track points of the first person from the K groups of snapshot times.
As a possible implementation, the second determining unit is specifically configured to determine the persons, other than the first person, appearing in the images snapshotted at all track points of the trajectory as the co-pedestrians of the first person.
As a possible implementation, the apparatus further includes a fourth determining unit configured to determine, for each co-pedestrian, the number of times the co-pedestrian and the first person travelled together.
A fourth aspect provides a group analysis apparatus, including: a first determining unit configured to determine the co-travel count of each co-pedestrian of a first person with the first person, the count being determined according to the method provided above; a sorting unit configured to sort the co-pedestrians in descending order of co-travel count to obtain a first sorted list, or in ascending order of co-travel count to obtain a second sorted list; and a second determining unit configured to determine the first L persons in the first sorted list, or the last L persons in the second sorted list, as the group of the first person, L being an integer greater than 1.
A fifth aspect provides an electronic device including a processor and a memory, the memory being configured to store a computer program and the processor being configured to call the computer program stored in the memory to perform the co-pedestrian deduplication method provided by the first aspect or any possible implementation of the first aspect.
A sixth aspect provides an electronic device including a processor and a memory, the memory being configured to store a computer program and the processor being configured to call the computer program stored in the memory to perform the group analysis method provided by the second aspect.
A seventh aspect provides a computer-readable storage medium storing a computer program, the computer program including program code which, when executed by a processor, causes the processor to perform the co-pedestrian deduplication method of the first aspect or any possible implementation of the first aspect.
An eighth aspect provides a computer-readable storage medium storing a computer program, the computer program including program code which, when executed by a processor, causes the processor to perform the group analysis method provided by the second aspect.
A ninth aspect provides an application program which, when run, performs the co-pedestrian deduplication method of the first aspect or any possible implementation of the first aspect.
A tenth aspect provides an application program which, when run, performs the group analysis method provided by the second aspect.
Drawings
Fig. 1 is a schematic diagram of a network architecture according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a co-pedestrian deduplication method according to an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a trajectory determination method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of track point determination according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of dividing six snapshot times into three groups of snapshot times according to time intervals, according to an embodiment of the present invention;
Fig. 6 is a schematic flowchart of a group analysis method according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a co-pedestrian deduplication apparatus according to an embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a group analysis apparatus according to an embodiment of the present invention;
Fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Embodiments of the invention provide a co-pedestrian deduplication method, a group analysis method, an apparatus, and an electronic device, which are used to improve the accuracy of the co-travel count. Each is detailed below.
To better understand the co-pedestrian deduplication method, group analysis method, apparatus, and electronic device provided by the embodiments of the present invention, the network architecture used by those embodiments is described first. Referring to fig. 1, fig. 1 is a schematic diagram of a network architecture according to an embodiment of the present invention. As shown in fig. 1, the network architecture may include a camera 101, an analysis server 102, and a processing device 103. The camera 101 is used to acquire images in real time or periodically. The camera 101 may be the camera of a monitoring device; the monitoring device may send the images acquired by the camera 101 to the analysis server 102 in real time or periodically, or after receiving a request for the images from the analysis server 102. The camera 101 may also be a standalone camera; when the camera has no communication capability, the acquired images may be uploaded to the analysis server 102 via the camera's memory card. The camera 101 may also be the camera of a device such as a mobile phone.
The analysis server 102 is configured to label the face features in a static library, extract face features from the images acquired by the camera 101, cluster the extracted face features into a plurality of image classes with a clustering algorithm according to their distribution in a multidimensional space, compare the face feature corresponding to each of the image classes against the face features in the static library, and, for each comparison whose result meets a confidence standard, establish a correspondence between the image class and the label of the matching face feature in the static library. A label may be the name, serial number, or similar identifier of the person to whom the face feature belongs.
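The patent names no particular clustering algorithm; as a stand-in only, the sketch below greedily clusters L2-normalised face features by cosine similarity against running class centroids (the threshold value is likewise an assumption):
```python
import numpy as np

def cluster_features(features, sim_threshold=0.6):
    """Greedily cluster L2-normalised face features into image classes.

    features: (n, d) array, one normalised feature vector per image.
    Returns a list of image-index lists, one list per image class.
    """
    clusters, centroids = [], []
    for i, feature in enumerate(features):
        if centroids:
            sims = np.stack(centroids) @ feature  # cosine similarities
            best = int(np.argmax(sims))
            if sims[best] >= sim_threshold:
                clusters[best].append(i)
                centroid = features[clusters[best]].mean(axis=0)
                centroids[best] = centroid / np.linalg.norm(centroid)
                continue
        clusters.append([i])      # start a new image class
        centroids.append(feature)
    return clusters
```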
The processing device 103 is configured to acquire an image including the face of a person, acquire from the analysis server 102, according to that image, an image class including the face of the person, and determine from the image class the person's co-pedestrians and the number of times each co-pedestrian travelled together with the person.
The processing device 103 is a device with image processing and computing capability, such as a server deployed on a private cloud, an edge device, or the like.
Based on the network architecture shown in fig. 1, refer to fig. 2, which is a schematic flowchart of a co-pedestrian deduplication method according to an embodiment of the present invention. The method is described from the perspective of the processing device 103. As shown in fig. 2, the co-pedestrian deduplication method may include the following steps.
201. An image including a face of a first person is acquired.
When the co-pedestrians of a first person, and the number of times each of them travelled together with the first person, need to be determined, an image including the face of the first person may be acquired according to an instruction input by a user or an instruction generated automatically. The image may be a whole-body image or a half-body image including the face of the first person, and may be obtained locally, from another device, or from a user upload.
202. An image class including the face of the first person is acquired according to the image.
After the image including the face of the first person is acquired, the image class including that face may be acquired according to the image. The processing device may obtain the stored face features, their labels, and the image classes from the analysis server in advance and store them locally for later use. The face feature of the first person's face is extracted from the image to obtain a first face feature, the label of the first face feature is determined, and the image class corresponding to that label is acquired according to the correspondence between labels and image classes, yielding the image class including the face of the first person. When determining the label of the first face feature, each stored face feature obtained from the analysis server may be treated as a class of its own, the first face feature clustered against them with a clustering algorithm, and the label of the class into which the first face feature falls taken as the label of the first face feature.
Alternatively, an information acquisition request for the image class including the face of the first person may be sent to the analysis server, the request carrying the image. After receiving the request, the analysis server may determine the image class including the face of the first person as described above and send it to the processing device, which receives it from the analysis server.
Alternatively, the face feature of the first person's face may be extracted from the image to obtain the first face feature, and an information acquisition request for the image class corresponding to the first face feature sent to the analysis server, the request carrying the feature. After receiving the request, the analysis server may determine the label of the first face feature, acquire the image class corresponding to that label according to the correspondence between labels and image classes to obtain the image class including the face of the first person, and send that image class to the processing device, which receives it from the analysis server.
203. The trajectory of the first person is determined according to the snapshot time and snapshot camera of each image in the image class.
After the image class including the face of the first person is obtained, the trajectory of the first person may be determined according to the snapshot time and snapshot camera of each image in the image class. The trajectory comprises at least one track point, that is, one or more track points. Each track point comprises a time period and a camera; since each camera corresponds to a fixed position, the camera stands in for position information.
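A track point can therefore be represented as a camera identifier plus a time period; a minimal sketch of such a structure (the names are illustrative):
```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class TrackPoint:
    """A camera (standing in for a position) and the time period during
    which the first person is taken to have been at that position."""
    camera_id: str
    start: datetime
    end: datetime

# A trajectory is then simply a chronologically ordered list of
# TrackPoint instances.
```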
204. The co-pedestrians of the first person are determined according to the trajectory of the first person.
After the trajectory of the first person is determined, the co-pedestrians of the first person may be determined from it: the persons other than the first person appearing in the images snapshotted at all track points of the trajectory are determined as the co-pedestrians of the first person. The images snapshotted at each track point of the trajectory may be acquired first, and the persons other than the first person in those images then determined as the co-pedestrians. The acquired images include the images in the image class of the first person's face. The images snapshotted at a track point are all images snapshotted by the track point's camera within the track point's time period.
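A sketch of step 204 under the TrackPoint structure above; the two callables for fetching a track point's images and the persons recognised in an image are assumed to exist:
```python
def co_pedestrians(trajectory, images_at, persons_in, first_person):
    """Collect everyone other than `first_person` appearing in any image
    snapshotted at any track point of the trajectory.

    images_at: maps a TrackPoint to the images snapshotted by its
        camera within its time period.
    persons_in: maps an image to the person ids recognised in it.
    """
    companions = set()
    for track_point in trajectory:
        for image in images_at(track_point):
            companions.update(persons_in(image))
    companions.discard(first_person)  # the target person is not a companion
    return companions
```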
205. In the case that a second person appears in at least one image snapshotted at the first track point, it is determined that the second person and the first person travelled together once at the first track point.
After the co-pedestrians of the first person are determined from the trajectory, co-pedestrians who appear multiple times at any one track point of the trajectory are deduplicated: in the case that a second person appears in at least one image snapshotted at a first track point, the second person and the first person are determined to have travelled together once at that track point. That is, no matter how many times a person appears at one track point, the co-travel count contributed by that track point is one. The second person is any one of the co-pedestrians of the first person, and the first track point is any one of the at least one track point.
After or concurrently with step 205, the co-travel count of each co-pedestrian with the first person may be determined, that is, the number of track points of the first person's trajectory at which each co-pedestrian appears may be counted. The co-travel count of the second person with the first person is therefore at least 0 and at most the number of track points in the trajectory.
In the co-pedestrian deduplication method described in fig. 2, at any track point of the target person's trajectory a co-pedestrian is considered to have travelled together with the target person once, no matter how many times the co-pedestrian appears there. This avoids counting multiple snapshots of the same person taken within a short time as multiple co-travels, and so improves the accuracy of the co-travel count.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a track determination method according to an embodiment of the present invention. As shown in fig. 3, step 203 may specifically include the following steps.
301. The snapshot time and snapshot camera of each image in the image class are acquired.
302. The images in the image class are classified by snapshot camera to obtain M classes of images.
After the snapshot time and snapshot camera of each image in the image class are acquired, the images may be classified by snapshot camera to obtain M classes of images; that is, the images snapshotted by the same camera are grouped into one class. M is the number of snapshot cameras.
303. The number of images included in the current class is determined.
After the images in the image class are classified by snapshot camera into M classes, any one of the M classes may be taken as the current class, and the number of images it includes determined.
304. In the case that the current class includes one image, the first snapshot camera together with the first time period is determined to be a track point of the first person.
The first snapshot camera is the snapshot camera corresponding to the current class; the first time period runs from the threshold time before the first snapshot time to the threshold time after it; and the first snapshot time is the snapshot time of that image. The first time period may also run from a first threshold time before the first snapshot time to a second threshold time after it, where the first threshold may be greater than or smaller than the second threshold.
305. In the case that the current class includes two images, the time interval between the second snapshot time and the third snapshot time is calculated to obtain the first time interval.
The second snapshot time is the snapshot time of one of the two images, and the third snapshot time is the snapshot time of the other.
306. It is determined whether the first time interval is greater than (or greater than or equal to) the threshold; if so, step 307 is performed, and if not, step 308 is performed.
307. The first snapshot camera together with the second time period is determined to be one track point of the first person, and the first snapshot camera together with the third time period another track point of the first person.
The second time period runs from the threshold time before the second snapshot time to the threshold time after it, and the third time period runs from the threshold time before the third snapshot time to the threshold time after it.
308. The first snapshot camera together with the fourth time period is determined to be a track point of the first person.
The fourth time period runs from the threshold time before the second snapshot time to the threshold time after the third snapshot time, the second snapshot time being earlier than the third snapshot time.
Referring to fig. 4, fig. 4 is a schematic diagram of track point determination according to an embodiment of the present invention. As shown in fig. 4, the threshold is X seconds, and the X-second periods before and after the snapshot times of two images snapshotted by the same camera are merged to form the time period of one track point.
309. In the case that the current class includes N images, the snapshot times of the N images are sorted chronologically to obtain a sorted list, the time interval between each pair of adjacent snapshot times in the sorted list is calculated, the N snapshot times are divided into K groups according to the intervals, and K track points of the first person are determined from the K groups of snapshot times.
N is an integer greater than or equal to 3. In the case that the current class includes three or more images, the snapshot times of the N images may first be sorted chronologically to obtain a sorted list, and the time interval between each pair of adjacent snapshot times calculated. The N snapshot times are then divided into K groups by cutting the sorted list at every position where the interval is greater than the threshold. K is an integer less than or equal to N, and the interval between any two of the K groups of snapshot times is greater than the threshold. Referring to fig. 5, fig. 5 is a schematic diagram of dividing six snapshot times into three groups of snapshot times according to time intervals, according to an embodiment of the present invention. As shown in fig. 5, the interval between the second and third snapshot times in the sorted list is 4 hours 10 minutes, which is greater than 30 seconds (the threshold time), and the interval between the fourth and fifth snapshot times is 6 hours 10 minutes 2 seconds, which is also greater than 30 seconds, so cutting between the second and third snapshot times and between the fourth and fifth snapshot times yields three groups of snapshot times. K track points of the first person can then be determined from the K groups of snapshot times, one track point per group. Each of the K track points comprises the current snapshot camera and the time period determined by the corresponding group of snapshot times: the period runs from the threshold time before the earliest snapshot time in the group to the threshold time after the latest snapshot time in the group.
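A sketch of step 309's grouping, cutting the sorted snapshot times wherever the gap between neighbours exceeds the threshold; with a threshold of 30 seconds, the six times of fig. 5 would split into three groups:
```python
def split_into_track_periods(snapshot_times, threshold):
    """Group sorted snapshot times at gaps larger than `threshold` and
    return one (start, end) time period per group.
    """
    times = sorted(snapshot_times)
    if not times:
        return []
    groups = [[times[0]]]
    for prev, cur in zip(times, times[1:]):
        if cur - prev > threshold:
            groups.append([cur])      # gap exceeds threshold: cut here
        else:
            groups[-1].append(cur)
    # Each period runs from the threshold time before the group's
    # earliest snapshot to the threshold time after its latest.
    return [(g[0] - threshold, g[-1] + threshold) for g in groups]
```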
Alternatively, in the case that the current class includes N images, the snapshot times of the N images may be sorted chronologically to obtain a sorted list, the interval between each pair of adjacent snapshot times calculated, and the intervals then examined in order. If the first interval is greater than the threshold, the current camera together with the period from the threshold time before the first snapshot time to the threshold time after the first snapshot time is determined to be a track point of the first person. If the first interval is not greater than the threshold, the second interval is examined; if the second interval is greater than the threshold, the current camera together with the period from the threshold time before the first snapshot time to the threshold time after the second snapshot time is determined to be a track point of the first person. If the second interval is not greater than the threshold, the third interval is examined, and if it is greater than the threshold, the current camera together with the period from the threshold time before the first snapshot time to the threshold time after the third snapshot time is determined to be a track point of the first person; this continues until every calculated interval has been examined and handled accordingly. Here the first snapshot time, the second snapshot time, …, and the Nth snapshot time refer to the order of the snapshot times in the sorted list, and likewise the first interval, …, and the (N-1)th interval refer to the intervals calculated in that order.
After step 304, 307, 308, or 309 is executed, it may be determined whether all M classes of images have been processed; if not, any unprocessed class is taken as the current class and steps 303 to 310 are executed again. When all M classes of images have been processed, the procedure ends.
Based on the network architecture shown in fig. 1, referring to fig. 6, fig. 6 is a schematic flowchart of a group analysis method according to an embodiment of the present invention. The group analysis method is described from the perspective of the processing device 103. As shown in fig. 6, the group analysis method may include the following steps.
601. The co-travel count of each co-pedestrian of the first person with the first person is determined.
For a detailed description of determining the co-travel count of each co-pedestrian with the first person, refer to the description above; it is not repeated here.
602. The co-pedestrians of the first person are sorted in descending order of co-travel count to obtain a first sorted list, or in ascending order of co-travel count to obtain a second sorted list.
After the co-travel count of each co-pedestrian with the first person is determined, the co-pedestrians of the first person may be sorted in descending order of co-travel count to obtain the first sorted list, or in ascending order of co-travel count to obtain the second sorted list.
603. The first L persons in the first sorted list, or the last L persons in the second sorted list, are determined as the group of the first person.
After the first sorted list is obtained by sorting the co-pedestrians of the first person in descending order of co-travel count, the first L persons in the first sorted list may be determined as the group of the first person. After the second sorted list is obtained by sorting in ascending order of co-travel count, the last L persons in the second sorted list may be determined as the group of the first person. In this way, groups such as criminal gangs can be identified. L is an integer greater than 1, and its specific value can be adjusted as desired.
In the group analysis method described in fig. 6, when counting how many times a co-pedestrian travelled together with the target person, the co-pedestrian is considered to have travelled together with the target person once per track point of the trajectory, no matter how many times the co-pedestrian appears at that track point. This avoids inflating the co-travel count with multiple snapshots of the same person taken within a short time, improves the accuracy of the count, and in turn improves the accuracy of the group analysis.
Based on the network architecture shown in fig. 1, referring to fig. 7, fig. 7 is a schematic structural diagram of a co-pedestrian deduplication apparatus according to an embodiment of the present invention. The co-pedestrian deduplication apparatus may be disposed in the processing device 103, or may itself be the processing device 103. As shown in fig. 7, the co-pedestrian deduplication apparatus may include:
a first acquisition unit 701 configured to acquire an image including the face of a first person;
a second acquisition unit 702 configured to acquire, according to the image, an image class including the face of the first person;
a first determining unit 703 configured to determine the trajectory of the first person according to the snapshot time and snapshot camera of each image in the image class, the trajectory including at least one track point;
a second determining unit 704 configured to determine the co-pedestrians of the first person according to the trajectory of the first person;
a third determining unit 705 configured to determine, in the case that a second person appears in at least one image snapshotted at a first track point, that the second person and the first person travelled together once at the first track point, where the second person is any one of the co-pedestrians and the first track point is any one of the at least one track point.
In some embodiments, the second obtaining unit 702 is specifically configured to:
extract the face feature of the first person's face from the image;
determine the label of the face feature;
and acquire the image class corresponding to the label according to the correspondence between labels and image classes, to obtain the image class including the face of the first person.
In some embodiments, the first determining unit 703 is specifically configured to:
acquire the snapshot time and snapshot camera of each image in the image class;
classify the images in the image class by snapshot camera to obtain M classes of images, where M is the number of snapshot cameras;
in the case that the current class among the M classes includes one image, determine that the first snapshot camera together with the first time period is a track point of the first person, where the first snapshot camera is the snapshot camera corresponding to the current class, the first time period runs from the threshold time before the first snapshot time to the threshold time after it, and the first snapshot time is the snapshot time of that image;
and determine the trajectory of the first person from the track points of the first person.
In some embodiments, the first determining unit 703 is further specifically configured to:
in the case that the current class includes two images, calculate the time interval between the second snapshot time and the third snapshot time to obtain the first time interval, where the second snapshot time is the snapshot time of one of the two images and the third snapshot time is the snapshot time of the other;
in the case that the first time interval is greater than the threshold, determine that the first snapshot camera together with the second time period is one track point of the first person and the first snapshot camera together with the third time period another track point of the first person, where the second time period runs from the threshold time before the second snapshot time to the threshold time after it, and the third time period runs from the threshold time before the third snapshot time to the threshold time after it;
and, in the case that the first time interval is not greater than the threshold, determine that the first snapshot camera together with the fourth time period is a track point of the first person, where the fourth time period runs from the threshold time before the second snapshot time to the threshold time after the third snapshot time, the second snapshot time being earlier than the third snapshot time.
In some embodiments, the first determining unit 703 is further specifically configured to:
in the case that the current class includes N images, sort the snapshot times of the N images chronologically to obtain a sorted list, where N is an integer greater than or equal to 3;
calculate the time interval between each pair of adjacent snapshot times in the sorted list;
divide the N snapshot times into K groups according to the calculated intervals, where K is an integer less than or equal to N and the interval between any two of the K groups of snapshot times is greater than the threshold;
and determine K track points of the first person from the K groups of snapshot times.
In some embodiments, the second determining unit 704 is specifically configured to determine the persons, other than the first person, appearing in the images snapshotted at all track points of the first person's trajectory as the co-pedestrians of the first person.
In some embodiments, the co-pedestrian deduplication apparatus may further include:
a fourth determining unit 706 configured to determine, for each co-pedestrian of the first person, the number of times the co-pedestrian and the first person travelled together.
This embodiment may correspond to the description of the method embodiments of the present application; the above and other operations and/or functions of the units implement the corresponding flows of the methods in fig. 2 to fig. 3 and, for brevity, are not described again here.
Based on the network architecture shown in fig. 1, referring to fig. 8, fig. 8 is a schematic structural diagram of a group analysis apparatus according to an embodiment of the present invention. The group analysis apparatus may be disposed in the processing device 103, or may itself be the processing device 103. As shown in fig. 8, the group analysis apparatus may include:
a first determining unit 801 configured to determine the co-travel count of each co-pedestrian of a first person with the first person, the count being determined according to the method described above;
a sorting unit 802 configured to sort the co-pedestrians of the first person in descending order of co-travel count to obtain a first sorted list, or in ascending order of co-travel count to obtain a second sorted list;
a second determining unit 803 configured to determine the first L persons in the first sorted list, or the last L persons in the second sorted list, as the group of the first person, where L is an integer greater than 1.
This embodiment may correspond to the description of the method embodiments of the present application; the above and other operations and/or functions of the units implement the corresponding flows of the methods in fig. 2 to fig. 6 and, for brevity, are not described again here.
Based on the network architecture shown in fig. 1, referring to fig. 9, fig. 9 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 9, the electronic device may include at least one processor 901 (for example, a CPU), a memory 902, a transceiver 903, and at least one bus 904. The bus 904 is used for connection and communication between these components. Wherein:
in one case, a set of computer programs is stored in the memory 902, and the processor 901 is configured to call the computer programs stored in the memory 902 to perform the following operations:
acquiring an image including a face of a first person;
acquiring an image class including a face of a first person from an image including the face of the first person;
determining the track of the first person according to the snapshot time and snapshot camera of each image in the image class, wherein the track of the first person comprises at least one track point;
determining the co-pedestrians of the first person according to the track of the first person;
and under the condition that a second person appears in at least one image captured at a first track point, determining that the second person and the first person co-occurred once at the first track point, wherein the second person is any one of the co-pedestrians, and the first track point is any one of the at least one track point.
In some embodiments, the processor 901 obtaining an image class including a face of the first person from an image including the face of the first person includes:
extracting face features of a face of a first person from an image including the face of the first person;
determining a label of the face feature;
and acquiring the image class corresponding to the label according to the correspondence between labels and image classes, so as to obtain the image class including the face of the first person (a lookup sketch follows below).
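As an illustrative sketch of the label-based lookup, assuming the correspondence is held in an in-memory dictionary; the label value and file names below are made up for the example.

    from typing import Dict, List

    # Hypothetical label -> image-class mapping, e.g. produced by face clustering.
    label_to_image_class: Dict[str, List[str]] = {
        "label_0042": ["img_001.jpg", "img_007.jpg", "img_019.jpg"],
    }

    def image_class_for(label: str) -> List[str]:
        """Return the image class corresponding to a face-feature label,
        or an empty class if the label is unknown."""
        return label_to_image_class.get(label, [])

    print(image_class_for("label_0042"))  # ['img_001.jpg', 'img_007.jpg', 'img_019.jpg']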
In some embodiments, the determining, by the processor 901, the trajectory of the first person according to the snapshot time and the snapshot camera of each image in the image class includes:
acquiring the snapshot time and a snapshot camera of each image in the image class;
classifying the images in the image class according to the snapshot cameras to obtain M classes of images, wherein M is the number of the snapshot cameras;
under the condition that the current class among the M classes of images includes one image, determining that a first snapshot camera and a first time period constitute one track point of the first person, wherein the first snapshot camera is the snapshot camera corresponding to the current class, the first time period is the time period from a threshold time before a first snapshot time to a threshold time after the first snapshot time, and the first snapshot time is the snapshot time of that image;
and determining the track of the first person according to the track points of the first person (a per-camera grouping sketch follows below).
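A sketch of the per-camera classification step above, assuming each image is reduced to a (camera ID, snapshot time) pair; the names Snapshot and classify_by_camera are illustrative assumptions.

    from collections import defaultdict
    from typing import Dict, List, Tuple

    Snapshot = Tuple[str, float]   # (camera ID, snapshot time) of one image

    def classify_by_camera(snapshots: List[Snapshot]) -> Dict[str, List[float]]:
        """Group the snapshot times of an image class by snapshot camera;
        the result has M entries, one per distinct camera."""
        per_camera: Dict[str, List[float]] = defaultdict(list)
        for camera, snapshot_time in snapshots:
            per_camera[camera].append(snapshot_time)
        return dict(per_camera)

    print(classify_by_camera([("cam1", 10.0), ("cam2", 20.0), ("cam1", 30.0)]))
    # {'cam1': [10.0, 30.0], 'cam2': [20.0]}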
In some embodiments, the determining, by the processor 901, the trajectory of the first person according to the capturing time and the capturing camera of each image in the image class further includes:
under the condition that the current class of images includes two images, calculating the time interval between a second snapshot time and a third snapshot time to obtain a first time interval, wherein the second snapshot time is the snapshot time of one of the two images, and the third snapshot time is the snapshot time of the other of the two images;
under the condition that the first time interval is greater than the threshold, determining that the first snapshot camera and a second time period constitute one track point of the first person and the first snapshot camera and a third time period constitute another track point of the first person, wherein the second time period is the time period from the threshold time before the second snapshot time to the threshold time after the second snapshot time, and the third time period is the time period from the threshold time before the third snapshot time to the threshold time after the third snapshot time;
and under the condition that the first time interval is not greater than the threshold, determining that the first snapshot camera and a fourth time period constitute one track point of the first person, wherein the fourth time period is the time period from the threshold time before the second snapshot time to the threshold time after the third snapshot time, and the second snapshot time is earlier than the third snapshot time (a windowing sketch follows below).
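The one- and two-image cases might be sketched as follows, with time windows as (start, end) pairs padded by the threshold on both sides; two snapshots within the threshold of each other merge into a single window. The function and type names are assumptions for this sketch.

    from typing import List, Tuple

    Window = Tuple[float, float]   # (start, end) of a track point's time period

    def track_point_windows(times: List[float], threshold: float) -> List[Window]:
        """Build track-point time windows for a camera with one or two
        snapshot times."""
        if len(times) == 1:
            t = times[0]
            return [(t - threshold, t + threshold)]            # first time period
        t_early, t_late = sorted(times)
        if t_late - t_early > threshold:                       # first time interval > threshold
            return [(t_early - threshold, t_early + threshold),
                    (t_late - threshold, t_late + threshold)]  # two track points
        return [(t_early - threshold, t_late + threshold)]     # merged fourth time period

    print(track_point_windows([100.0], 60.0))          # [(40.0, 160.0)]
    print(track_point_windows([100.0, 500.0], 60.0))   # [(40.0, 160.0), (440.0, 560.0)]
    print(track_point_windows([100.0, 140.0], 60.0))   # [(40.0, 200.0)]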
In some embodiments, the determining, by the processor 901, the trajectory of the first person according to the capturing time and the capturing camera of each image in the image class further includes:
under the condition that the current class of images includes N images, sorting the snapshot times of the N images in chronological order to obtain a sorted list, wherein N is an integer greater than or equal to 3;
calculating the time interval between every two adjacent snapshot times in the sorted list;
dividing the N snapshot times into K groups of snapshot times according to the calculated time intervals, wherein K is an integer less than or equal to N, and the time interval between any two of the K groups of snapshot times is greater than the threshold;
and determining K track points of the first person according to the K groups of snapshot times.
In some embodiments, the processor 901 determining the co-pedestrian of the first person from the trajectory of the first person comprises:
and determining persons other than the first person in the images captured at all track points in the track of the first person as the co-pedestrians of the first person (see the sketch below).
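A one-function sketch of this determination, again assuming each track point is represented by the set of person identifiers captured there; the function name co_pedestrians is illustrative.

    from typing import Iterable, Set

    def co_pedestrians(first_person: str,
                       track_points: Iterable[Set[str]]) -> Set[str]:
        """Union of all persons captured at any track point of the first
        person's track, with the first person removed."""
        others: Set[str] = set()
        for persons_at_point in track_points:
            others |= persons_at_point
        return others - {first_person}

    print(co_pedestrians("A", [{"A", "B"}, {"A", "C"}]))  # {'B', 'C'} (set order may vary)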
In some embodiments, the processor 901 is further configured to invoke a computer program stored in the memory 902 to perform the following operations:
and determining, for each co-pedestrian of the first person, the number of co-occurrences of that co-pedestrian with the first person.
In some embodiments, a transceiver 903 is used to transmit and receive information.
In another case, a set of computer programs is stored in the memory 902, and the processor 901 is configured to call the computer programs stored in the memory 902 to perform the following operations:
determining the number of co-occurrences of each co-pedestrian of the first person with the first person, wherein the number of co-occurrences is determined according to the foregoing method;
sorting the co-pedestrians of the first person by number of co-occurrences from high to low to obtain a first sorted list, or sorting the co-pedestrians of the first person by number of co-occurrences from low to high to obtain a second sorted list;
and determining the top-ranked L persons in the first sorted list as the group of the first person, or determining the last-ranked L persons in the second sorted list as the group of the first person, wherein L is an integer greater than 1.
In some embodiments, a transceiver 903 is used to transmit and receive information.
The electronic device may also be configured to execute the methods of the foregoing method embodiments, and details are not repeated here.
In some embodiments, a computer-readable storage medium is provided for storing an application program which, when executed, performs the co-pedestrian deduplication method of fig. 2 or the group analysis method of fig. 6.
In some embodiments, an application program is provided which, when executed, performs the co-pedestrian deduplication method of fig. 2 or the group analysis method of fig. 6.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable memory, which may include a flash disk, a ROM, a RAM, a magnetic disk, an optical disk, and the like.
The above embodiments of the present invention are described in detail; the principle and implementation of the present invention are explained herein through specific examples, and the above description of the embodiments is only intended to help in understanding the method of the present invention and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, vary the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (14)

1. A co-pedestrian deduplication method, comprising:
acquiring an image including a face of a first person;
acquiring an image class including a face of the first person according to the image;
determining the track of the first person according to the capturing time of each image in the image class and the capturing camera, wherein the track of the first person comprises at least one track point;
determining the co-pedestrians of the first person according to the track;
and under the condition that a second person appears in at least one image captured at a first track point, determining that the second person and the first person co-occurred once at the first track point, wherein the second person is any one of the co-pedestrians, and the first track point is any one of the at least one track point.
2. The method of claim 1, wherein the obtaining an image class including a face of the first person from the image comprises:
extracting the face features of the face from the image;
determining a label of the face feature;
and acquiring the image class corresponding to the label according to the corresponding relation between the label and the image class to obtain the image class comprising the face of the first person.
3. The method of claim 1 or 2, wherein the determining the trajectory of the first person from the snapshot time and the snapshot camera of each image of the class of images comprises:
acquiring the snapshot time and a snapshot camera of each image in the image class;
classifying the images in the image class according to the snapshot cameras to obtain M classes of images, wherein M is the number of the snapshot cameras;
determining that a first snapshot camera and a first time period constitute a track point of the first person under the condition that the current class among the M classes of images includes one image, wherein the first snapshot camera is the snapshot camera corresponding to the current class, the first time period is the time period from a threshold time before a first snapshot time to a threshold time after the first snapshot time, and the first snapshot time is the snapshot time of the one image;
and determining the track of the first person according to the track points of the first person.
4. The method of claim 3, wherein determining the trajectory of the first person from the snapshot time and the snapshot camera for each of the images in the class of images further comprises:
under the condition that the current class of images includes two images, calculating a time interval between a second snapshot time and a third snapshot time to obtain a first time interval, wherein the second snapshot time is the snapshot time of one of the two images, and the third snapshot time is the snapshot time of the other of the two images;
determining that the first snapshot camera and the second time period are one track point of the first person and the first snapshot camera and the third time period are the other track point of the first person under the condition that the first time interval is greater than a threshold value, wherein the second time period is a time period between a threshold value time before the second snapshot time and a threshold value time after the second snapshot time, and the third time period is a time period between the threshold value time before the third snapshot time and the threshold value time after the third snapshot time;
and under the condition that the first time interval is not greater than the threshold, determining that the first snapshot camera and a fourth time period are a track point of the first person, wherein the fourth time period is a time period between the threshold time before the second snapshot time and the threshold time after the third snapshot time, and the second snapshot time is earlier than the third snapshot time.
5. The method of claim 4, wherein determining the trajectory of the first person from the snapshot time and the snapshot camera for each of the images in the class of images further comprises:
when the current class of images includes N images, sorting the snapshot times of the N images in chronological order to obtain a sorted list, wherein N is an integer greater than or equal to 3;
calculating the time interval between every two adjacent snapshot times in the sorted list;
dividing the N snapshot times into K groups of snapshot times according to the time intervals, wherein K is an integer less than or equal to N, and the time interval between any two of the K groups of snapshot times is greater than the threshold;
and determining K track points of the first person according to the K groups of snapshot times.
6. The method of any one of claims 1-5, wherein the determining the co-pedestrians of the first person according to the track comprises:
and determining persons other than the first person in the images captured at all track points in the track as the co-pedestrians of the first person.
7. The method according to any one of claims 1-6, further comprising:
and determining the number of co-occurrences of each of the co-pedestrians with the first person.
8. A method of group analysis, comprising:
determining a number of co-occurrences of each of the first person's co-pedestrians with the first person, the number of co-occurrences being determined according to the method of claim 7;
sorting the co-pedestrians in descending order of the number of co-occurrences to obtain a first sorted list, or sorting the co-pedestrians in ascending order of the number of co-occurrences to obtain a second sorted list;
determining the top-ranked L persons in the first sorted list as a group of the first person, or determining the last-ranked L persons in the second sorted list as a group of the first person, L being an integer greater than 1.
9. A co-pedestrian deduplication apparatus, comprising:
a first acquisition unit configured to acquire an image including a face of a first person;
a second acquisition unit configured to acquire an image class including a face of the first person from the image;
the first determining unit is used for determining the track of the first person according to the capturing time of each image in the image class and the capturing camera, and the track of the first person comprises at least one track point;
a second determining unit, configured to determine a co-pedestrian of the first person according to the trajectory;
a third determining unit, configured to determine, under the condition that a second person appears in at least one image captured at a first track point, that the second person and the first person co-occurred once at the first track point, wherein the second person is any one of the co-pedestrians, and the first track point is any one of the at least one track point.
10. A group analysis apparatus, comprising:
a first determining unit, configured to determine a number of co-occurrences of each co-pedestrian of the first person with the first person, the number of co-occurrences being determined according to the method of claim 7;
a sorting unit, configured to sort the co-pedestrians in descending order of the number of co-occurrences to obtain a first sorted list, or to sort the co-pedestrians in ascending order of the number of co-occurrences to obtain a second sorted list;
a second determining unit, configured to determine the top-ranked L persons in the first sorted list as a group of the first person, or determine the last-ranked L persons in the second sorted list as a group of the first person, wherein L is an integer greater than 1.
11. An electronic device comprising a processor and a memory, the memory being configured to store a computer program, the processor being configured to invoke the computer program to perform the co-pedestrian deduplication method of any one of claims 1-7.
12. An electronic device comprising a processor and a memory, the memory storing a computer program, the processor being configured to invoke the computer program to perform the group analysis method of claim 8.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the co-pedestrian deduplication method of any one of claims 1-7.
14. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements a group analysis method as claimed in claim 8.
CN202010455227.0A 2020-05-26 2020-05-26 Same-pedestrian duplicate removal method, group analysis method, device and electronic equipment Active CN111563479B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010455227.0A CN111563479B (en) Same-pedestrian duplicate removal method, group analysis method, device and electronic equipment


Publications (2)

Publication Number Publication Date
CN111563479A (en) 2020-08-21
CN111563479B CN111563479B (en) 2023-11-03

Family

ID=72073773

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010455227.0A Active CN111563479B (en) Same-pedestrian duplicate removal method, group analysis method, device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111563479B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114511816A (en) * 2020-11-16 2022-05-17 Hangzhou Hikvision *** Technology Co., Ltd. Data processing method and device, electronic equipment and machine-readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016322A (en) * 2016-01-28 2017-08-04 浙江宇视科技有限公司 A kind of method and device of trailing personnel analysis
CN108229335A (en) * 2017-12-12 2018-06-29 深圳市商汤科技有限公司 It is associated with face identification method and device, electronic equipment, storage medium, program
CN109117714A (en) * 2018-06-27 2019-01-01 北京旷视科技有限公司 A kind of colleague's personal identification method, apparatus, system and computer storage medium
CN110276272A (en) * 2019-05-30 2019-09-24 罗普特科技集团股份有限公司 Confirm method, apparatus, the storage medium of same administrative staff's relationship of label personnel

Also Published As

Publication number Publication date
CN111563479B (en) 2023-11-03

Similar Documents

Publication Publication Date Title
Beery et al. Efficient pipeline for camera trap image review
CN110175549B (en) Face image processing method, device, equipment and storage medium
CN109117714B (en) Method, device and system for identifying fellow persons and computer storage medium
CN110235138A (en) System and method for appearance search
CN109740004B (en) Filing method and device
CN111860318A (en) Construction site pedestrian loitering detection method, device, equipment and storage medium
JP2022518469A (en) Information processing methods and devices, storage media
JP2017033547A (en) Information processing apparatus, control method therefor, and program
JP2022518459A (en) Information processing methods and devices, storage media
CN108563651B (en) Multi-video target searching method, device and equipment
CN109800664B (en) Method and device for determining passersby track
CN113570635B (en) Target motion trail restoration method and device, electronic equipment and storage medium
CN112818149A (en) Face clustering method and device based on space-time trajectory data and storage medium
CN109784220B (en) Method and device for determining passerby track
CN110245268A (en) A kind of route determination, the method and device of displaying
CN115062186B (en) Video content retrieval method, device, equipment and storage medium
CN112084812A (en) Image processing method, image processing device, computer equipment and storage medium
CN116244609A (en) Passenger flow volume statistics method and device, computer equipment and storage medium
CN114902299A (en) Method, device, equipment and storage medium for detecting associated object in image
CN111445442A (en) Crowd counting method and device based on neural network, server and storage medium
CN111563479B (en) Same-pedestrian duplicate removal method, group analysis method, device and electronic equipment
CN109241316B (en) Image retrieval method, image retrieval device, electronic equipment and storage medium
CN105678333B (en) Method and device for determining crowded area
CN113962199A (en) Text recognition method, text recognition device, text recognition equipment, storage medium and program product
CN112580616A (en) Crowd quantity determination method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant