CN111460977B - Cross-view personnel re-identification method, device, terminal and storage medium - Google Patents


Info

Publication number
CN111460977B
CN111460977B (application CN202010237294.5A)
Authority
CN
China
Prior art keywords: monitored, target, identification, image, monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010237294.5A
Other languages: Chinese (zh)
Other versions: CN111460977A
Inventor
杨英仪
吴昊
王伟
黄炎
陈辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of Guangdong Power Grid Co Ltd
Original Assignee
Electric Power Research Institute of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of Guangdong Power Grid Co Ltd
Priority to CN202010237294.5A
Publication of CN111460977A
Application granted
Publication of CN111460977B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/50: Context or environment of the image
    • G06V20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; salient regional features
    • G06V10/462: Salient features, e.g. scale invariant feature transforms [SIFT]

Abstract

The application provides a cross-view person re-identification method, device, terminal and storage medium. Based on the correspondence between the longitude and latitude coordinates of a target to be monitored and the monitoring areas, the target is identified in different monitoring images through its real-time longitude and latitude coordinates and historical moving path. This solves the technical problem in the prior art that re-identifying a target to be monitored purely by image recognition is easily affected by environmental factors and suffers a high identification error rate.

Description

Cross-view personnel re-identification method, device, terminal and storage medium
Technical Field
The present disclosure relates to the field of video monitoring technologies, and in particular, to a cross-view person re-identification method, apparatus, terminal and storage medium.
Background
Person re-identification (Person Re-Identification, abbreviated ReID) is the process of re-identifying a person and establishing correspondences between pedestrian images captured by different cameras whose fields of view do not overlap. When the shooting ranges of the cameras do not overlap, the lack of continuous information greatly increases the difficulty of the search.
At present, owing to the maturity of image recognition technology, cross-view person recognition is applied more and more widely in video monitoring; however, in real and complex monitoring environments, a high recognition error rate still exists.
Disclosure of Invention
The application provides a cross-view person re-identification method, device, terminal and storage medium, which are used for solving the technical problem that existing cross-view person re-identification techniques have a high identification error rate.
A first aspect of the application provides a cross-view person re-identification method, which comprises the following steps:
acquiring longitude and latitude coordinates of a target to be monitored, and determining a moving path of the target to be monitored according to the longitude and latitude coordinates;
determining a monitoring area where the target to be monitored is currently located and a previous monitoring area passing by according to the longitude and latitude coordinates and the moving path, and acquiring an identification image and a comparison image of the target to be monitored, wherein the identification image is a monitoring image of the target to be monitored in the current monitoring area, and the comparison image is a monitoring image of the target to be monitored in the previous monitoring area;
respectively extracting features of the target to be monitored in the identification image and the comparison image through a preset feature recognition model to obtain a feature vector to be identified and a comparison feature vector;
and comparing the similarity of the feature vector to be identified and the comparison feature vector to obtain a cross-view re-identification result of the target to be monitored.
Optionally, the method further comprises:
and acquiring a feature sample data set of workers, and inputting the feature sample data set into a preset initial deep learning model for training to obtain the feature recognition model.
Optionally, the method further comprises:
and determining the position of the target to be monitored in the identification image and the comparison image according to a preset mapping relationship between each monitoring lens coordinate system and longitude and latitude coordinates, wherein the monitoring lens coordinates are coordinate values in the lens coordinate system of the monitoring camera in the monitoring area.
Optionally, the comparing the similarity between the feature vector to be identified and the comparison feature vector to obtain a cross-view re-identification result of the target to be monitored specifically includes:
carrying out similarity comparison between the feature vector to be identified and the comparison feature vector by means of cosine similarity to obtain a similarity score, and comparing the similarity score with a preset similarity threshold to obtain a cross-view re-identification result of the target to be monitored.
A second aspect of the present application provides a cross-view person re-identification apparatus, comprising:
the longitude and latitude coordinate processing unit is used for acquiring longitude and latitude coordinates of the target to be monitored and determining a moving path of the target to be monitored according to the longitude and latitude coordinates;
the monitoring image acquisition unit is used for determining a monitoring area where the target to be monitored is currently located and a previous monitoring area passing by according to the longitude and latitude coordinates and the moving path, and acquiring an identification image and a comparison image of the target to be monitored, wherein the identification image is a monitoring image of the target to be monitored in the monitoring area where the target to be monitored is currently located, and the comparison image is a monitoring image of the target to be monitored in the previous monitoring area;
the feature extraction unit is used for extracting features of the target to be monitored in the identification image and the comparison image through a preset feature identification model to obtain a feature vector to be identified and a comparison feature vector;
and the feature comparison unit is used for comparing the similarity of the feature vector to be identified and the comparison feature vector to obtain a cross-view re-identification result of the target to be monitored.
Optionally, the method further comprises:
the feature recognition model construction unit is used for acquiring a feature sample data set of workers, inputting the feature sample data set into a preset initial deep learning model for training, and obtaining the feature recognition model.
Optionally, the method further comprises:
and the coordinate conversion unit is used for determining the position of the target to be monitored in the identification image and the comparison image according to a preset mapping relationship between each monitoring lens coordinate system and longitude and latitude coordinates, wherein the monitoring lens coordinates are coordinate values in the lens coordinate system of the monitoring camera in the monitoring area.
Optionally, the feature comparison unit is specifically configured to:
and carrying out similarity comparison between the feature vector to be identified and the comparison feature vector by means of cosine similarity to obtain a similarity score, and comparing the similarity score with a preset similarity threshold to obtain a cross-view re-identification result of the target to be monitored.
A third aspect of the present application provides a terminal, including: a memory and a processor;
the memory is used for storing program code corresponding to the cross-view person re-identification method in the first aspect of the application;
the processor is configured to execute the program code.
A fourth aspect of the present application provides a storage medium having stored therein program code corresponding to the cross-view person re-identification method described in the first aspect of the present application.
From the above technical solutions, the embodiments of the present application have the following advantages:
the application provides a method for re-identifying personnel across vision, which comprises the following steps: acquiring longitude and latitude coordinates of a target to be monitored, and determining a moving path of the target to be monitored according to the longitude and latitude coordinates; determining a monitoring area where the target to be monitored is currently located and a previous monitoring area passing by according to the longitude and latitude coordinates and the moving path, and acquiring an identification image and a comparison image of the target to be monitored, wherein the identification image is a monitoring image of the target to be monitored in the current monitoring area, and the comparison image is a monitoring image of the target to be monitored in the previous monitoring area; respectively extracting features of the objects to be monitored in the identification image and the comparison image through a preset feature identification model to obtain a feature vector to be identified and a comparison feature vector; and comparing the similarity of the feature vector to be identified and the contrast feature vector to obtain a cross-view re-identification result of the object to be monitored.
In this way, based on the correspondence between the longitude and latitude coordinates of the target to be monitored and the monitoring areas, the target to be monitored is identified in different monitoring images through its real-time longitude and latitude coordinates and historical moving path, which solves the technical problem in the prior art that re-identifying a target purely by image recognition is easily affected by environmental factors and has a high identification error rate.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present application; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart of a first embodiment of a cross-view person re-identification method provided in the present application;
FIG. 2 is a flowchart of a second embodiment of a cross-view person re-identification method provided in the present application;
fig. 3 is a schematic structural diagram of a first embodiment of a cross-vision person re-identifying device provided in the present application.
Detailed Description
In recent years, video image recognition technology based on deep learning has made great progress and has promoted the application of video monitoring technology in various industries. In practical applications, however, severe appearance variations such as illumination changes, viewpoint differences between cameras, occlusion and blur, similar clothing, and walking posture greatly reduce the image similarity of the same target across different views, which is why existing cross-view person re-identification suffers a high error rate in real and complex monitoring environments.
The embodiments of the application provide a cross-view person re-identification method, device, terminal and storage medium, which are used for solving the technical problem that existing cross-view person re-identification techniques have a high identification error rate.
In order to make the objects, features and advantages of the present application more obvious and understandable, the technical solutions of the embodiments of the present application will be described clearly and completely below with reference to the drawings. It is apparent that the embodiments described below are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the present disclosure without inventive effort fall within the scope of protection of the present application.
Referring to fig. 1, a first embodiment of the present application provides a cross-view person re-identification method, including:
step 101, acquiring longitude and latitude coordinates of a target to be monitored, and determining a moving path of the target to be monitored according to the longitude and latitude coordinates.
It should be noted that the longitude and latitude coordinates and the moving path of the target to be monitored are obtained through a satellite positioning device carried by the target to be monitored.
Step 102, determining a monitoring area where a target to be monitored is currently located and a previous monitoring area passing by according to longitude and latitude coordinates and a moving path, and acquiring an identification image and a comparison image of the target to be monitored.
The identification image is a monitoring image of the target to be monitored in the current monitoring area, and the comparison image is a monitoring image of the target to be monitored in the previous monitoring area.
It should be noted that the monitoring area where the target to be monitored is currently located is determined according to the longitude and latitude coordinates, and the identification image of the target is acquired there; the previous monitoring area that the target passed through before entering the current one is determined according to the moving path, and the comparison image of the target is acquired there.
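As a rough illustration of this area lookup (not the patent's implementation), each monitoring area can be modeled as a latitude/longitude bounding box, and the current and previous areas recovered from the ordered positioning fixes; all names and coordinates below are invented:

```python
from dataclasses import dataclass

@dataclass
class Region:
    """Axis-aligned latitude/longitude bounding box for one monitoring area."""
    name: str
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        return self.lat_min <= lat <= self.lat_max and self.lon_min <= lon <= self.lon_max

def locate_regions(path, regions):
    """Map an ordered list of (lat, lon) fixes to the deduplicated sequence
    of monitoring areas the target passed through."""
    visited = []
    for lat, lon in path:
        for r in regions:
            if r.contains(lat, lon):
                if not visited or visited[-1] != r.name:
                    visited.append(r.name)
                break
    return visited

regions = [
    Region("area_A", 23.100, 23.105, 113.200, 113.205),
    Region("area_B", 23.105, 23.110, 113.200, 113.205),
]
# Simulated path: the target walks from area_A into area_B.
path = [(23.101, 113.201), (23.104, 113.202), (23.107, 113.203)]
seq = locate_regions(path, regions)
current_area, previous_area = seq[-1], seq[-2]
```

The identification image would then be taken from the camera covering `current_area` and the comparison image from the camera covering `previous_area`.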
Step 103, respectively extracting features of the target to be monitored in the identification image and the comparison image through a preset feature recognition model to obtain a feature vector to be identified and a comparison feature vector.
It should be noted that image feature extraction is performed, through the pre-trained feature recognition model, on the target to be monitored in the identification image and in the comparison image respectively, yielding the feature vector to be identified and the comparison feature vector.
Step 104, comparing the similarity of the feature vector to be identified and the comparison feature vector to obtain a cross-view re-identification result of the target to be monitored.
It should be noted that the feature vector to be identified obtained in step 103 is then compared, in terms of feature similarity, with the comparison feature vector, and the cross-view re-identification result of the target to be monitored is obtained from the comparison result.
According to the method and the device, the target to be monitored in different monitoring images is identified through the real-time longitude and latitude coordinates and the historical moving paths based on the corresponding relation between the longitude and latitude coordinates of the target to be monitored and the monitoring area, and the technical problem that the identification error rate is high due to the fact that the target to be monitored is easily affected by environmental factors when the target to be monitored is identified again in an image identification mode in the prior art is solved.
The foregoing is a detailed description of a first embodiment of a method for cross-view person re-identification provided herein, and the following is a detailed description of a second embodiment of a method for cross-view person re-identification provided herein.
Referring to fig. 2, a second embodiment of the present application provides a cross-view person re-identification method, including:
step 200, acquiring a characteristic sample data set of the staff, and inputting the characteristic sample data set of the staff into a preset initial deep learning model for training to obtain a characteristic recognition model.
In order to construct the worker data set, a dome camera is used to capture images of substation workers at different angles and in different postures in several non-overlapping areas; image enhancement operations such as horizontal flipping and random noise are applied; the images are uniformly scaled to the same size, for example 128×256 pixels; and finally the images are manually annotated, with each worker assigned a unique class number.
Then, an SE-ResNet50 deep learning network is built and trained on the constructed worker data set to obtain the feature recognition model.
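The augmentation and labeling operations described for step 200 can be sketched in plain Python; the helper names and pixel values are illustrative, and a real pipeline would use an image library plus an SE-ResNet50 implementation for the training itself:

```python
import random

def horizontal_flip(image):
    """Flip a 2D pixel grid (list of rows) left-to-right."""
    return [row[::-1] for row in image]

def add_random_noise(image, amplitude=5, seed=0):
    """Add bounded integer noise to every pixel (toy stand-in for noise augmentation)."""
    rng = random.Random(seed)
    return [[px + rng.randint(-amplitude, amplitude) for px in row] for row in image]

def build_labels(worker_ids):
    """Assign each distinct worker a unique class number, as described in the text."""
    labels = {}
    for wid in worker_ids:
        if wid not in labels:
            labels[wid] = len(labels)
    return labels

image = [[10, 20, 30],
         [40, 50, 60]]            # tiny stand-in for a 128x256 crop
flipped = horizontal_flip(image)
noisy = add_random_noise(image)
labels = build_labels(["worker_zhang", "worker_li", "worker_zhang", "worker_wang"])
```

Each augmented copy keeps the class number of the worker it shows, so the network is trained to map different views of the same worker to the same class.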
Step 201, acquiring longitude and latitude coordinates of a target to be monitored, and determining a moving path of the target to be monitored according to the longitude and latitude coordinates.
It should be noted that the longitude and latitude coordinates and the moving path of the target to be monitored are obtained through a satellite positioning device carried by the target to be monitored.
Step 202, determining a monitoring area where a target to be monitored is currently located and a previous monitoring area passing by according to longitude and latitude coordinates and a moving path, and acquiring an identification image and a comparison image of the target to be monitored.
The identification image is a monitoring image of the target to be monitored in the current monitoring area, and the comparison image is a monitoring image of the target to be monitored in the previous monitoring area.
It should be noted that the monitoring area where the target to be monitored is currently located is determined according to the longitude and latitude coordinates, and the identification image of the target is acquired there; the previous monitoring area that the target passed through before entering the current one is determined according to the moving path, and the comparison image of the target is acquired there.
Step 203, determining the position of the target to be monitored in the identification image and the comparison image according to the preset mapping relationship between each monitoring lens coordinate system and longitude and latitude coordinates.
The monitoring lens coordinates are coordinate values of the monitoring camera in the monitoring area under a lens coordinate system.
It should be noted that, according to the preset mapping relationship between each monitoring lens coordinate system and longitude and latitude coordinates, the longitude and latitude coordinates are converted into coordinate values in the monitoring image, thereby determining the position of the target to be monitored in the identification image and in the comparison image.
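A minimal sketch of such a coordinate conversion, assuming the preset mapping for one camera has been calibrated as an affine transform; every coefficient below is invented for illustration:

```python
def latlon_to_pixel(lat, lon, calib):
    """Map longitude/latitude to pixel (u, v) with a precomputed affine calibration."""
    a, b, c, d, e, f = calib
    u = a * lon + b * lat + c   # horizontal pixel coordinate
    v = d * lon + e * lat + f   # vertical pixel coordinate (grows downward)
    return u, v

# Hypothetical calibration for one monitoring camera in one monitoring area.
calib = (100000.0, 0.0, -11320000.0,
         0.0, -100000.0, 2311000.0)
u, v = latlon_to_pixel(23.105, 113.203, calib)
```

In practice such a calibration would be fitted per camera from known ground points; a flat-ground affine model is only a local approximation, and a full lens model (homography plus distortion) would replace it where precision matters.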
Step 204, respectively extracting features of the target to be monitored in the identification image and the comparison image through a preset feature recognition model to obtain a feature vector to be identified and a comparison feature vector.
It should be noted that image feature extraction is performed, through the pre-trained feature recognition model, on the target to be monitored in the identification image and in the comparison image respectively, yielding the feature vector to be identified and the comparison feature vector.
Step 205, comparing the similarity of the feature vector to be identified and the contrast feature vector to obtain a cross-view re-identification result of the object to be monitored.
It should be noted that the feature vector to be identified obtained in step 204 is then compared, in terms of feature similarity, with the comparison feature vector, and the cross-view re-identification result of the target to be monitored is obtained from the comparison result.
The similarity of the two feature vectors is calculated with cosine similarity; when the maximum similarity exceeds the threshold of 0.8, the two images are considered to show the same worker.
Let the feature vectors of the worker in two different monitoring areas be A and B respectively, both of dimension d. The cosine similarity of the two feature vectors is calculated as:

cos(A, B) = ( Σ_{i=1}^{d} A_i B_i ) / ( sqrt(Σ_{i=1}^{d} A_i^2) · sqrt(Σ_{i=1}^{d} B_i^2) )

where A_i and B_i are the i-th components of A and B.
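This cosine similarity and the 0.8 decision threshold are straightforward to transcribe; the example feature vectors below are invented:

```python
import math

def cosine_similarity(a, b):
    """cos(A, B) = sum(A_i * B_i) / (||A|| * ||B||)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def same_worker(a, b, threshold=0.8):
    """Decision rule from the text: same worker iff similarity exceeds 0.8."""
    return cosine_similarity(a, b) > threshold

# Illustrative feature vectors from two different monitoring areas.
A = [0.9, 0.1, 0.4]
B = [0.8, 0.2, 0.5]
score = cosine_similarity(A, B)
```

Because cosine similarity depends only on vector direction, it is insensitive to the overall magnitude of the feature activations, which is one reason it is a common choice for re-identification embeddings.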
The foregoing is a detailed description of a second embodiment of a cross-view person re-identification method provided in the present application, and the following is a detailed description of a first embodiment of a cross-view person re-identification device provided in the present application.
Referring to fig. 3, a third embodiment of the present application provides a cross-view person re-identifying apparatus, including:
the longitude and latitude coordinate processing unit 301 is configured to obtain longitude and latitude coordinates of an object to be monitored, and determine a movement path of the object to be monitored according to the longitude and latitude coordinates;
the monitoring image obtaining unit 302 is configured to determine, according to the latitude and longitude coordinates and the moving path, a monitoring area where a target to be monitored is currently located and a previous monitoring area where the target to be monitored passes through, and obtain an identification image and a comparison image of the target to be monitored, where the identification image is a monitoring image of the target to be monitored in the current monitoring area, and the comparison image is a monitoring image of the target to be monitored in the previous monitoring area;
the feature extraction unit 303 is configured to perform feature extraction on the target to be monitored in the identification image and the comparison image through a preset feature identification model, so as to obtain a feature vector to be identified and a comparison feature vector;
the feature comparison unit 304 is configured to compare the similarity between the feature vector to be identified and the comparison feature vector, so as to obtain a cross-view re-identification result of the target to be monitored.
Optionally, the method further comprises:
the feature recognition model construction unit 300 is configured to obtain a feature sample dataset of a worker, input the feature sample dataset of the worker to a preset initial deep learning model for training, and obtain a feature recognition model.
Optionally, the method further comprises:
the coordinate conversion unit 305 is configured to determine a position of a target to be monitored in the identification image and the comparison image according to a preset mapping relationship between each monitoring lens coordinate and longitude and latitude coordinates, where the monitoring lens coordinate is a coordinate value under a lens coordinate system of a monitoring camera in the monitoring area.
Optionally, the feature comparison unit 304 is specifically configured to:
and performing similarity comparison between the feature vector to be identified and the comparison feature vector by means of cosine similarity to obtain a similarity score, and comparing the similarity score with a preset similarity threshold to obtain a cross-view re-identification result of the target to be monitored.
The foregoing is a detailed description of a first embodiment of a cross-view person re-identification apparatus provided herein, and the following is a detailed description of embodiments of a terminal and a storage medium provided herein.
A fourth embodiment of the present application provides a terminal, including: a memory and a processor;
the memory is used for storing program code corresponding to the cross-view person re-identification method in the first and second embodiments of the application;
the processor is configured to execute the program code.
A fifth embodiment of the present application provides a storage medium having program code stored therein, the program code corresponding to the cross-view person re-identification method described in the first and second embodiments of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the present application described herein can, for example, be carried out in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied essentially or in part or all of the technical solution or in part in the form of a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The above embodiments are merely for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A cross-view person re-identification method, comprising:
acquiring longitude and latitude coordinates of a target to be monitored through a satellite positioning device mounted on the target to be monitored, and determining a moving path of the target to be monitored according to the longitude and latitude coordinates;
determining a monitoring area where the target to be monitored is currently located and a previous monitoring area passing by according to the longitude and latitude coordinates and the moving path, and acquiring an identification image and a comparison image of the target to be monitored, wherein the identification image is a monitoring image of the target to be monitored in the current monitoring area, and the comparison image is a monitoring image of the target to be monitored in the previous monitoring area;
extracting, through a preset feature identification model, features of the target to be monitored in the identification image and the comparison image respectively, to obtain a feature vector to be identified and a comparison feature vector;
and comparing the similarity between the feature vector to be identified and the comparison feature vector to obtain a cross-view re-identification result for the target to be monitored.
2. The cross-view person re-identification method of claim 1, further comprising:
acquiring a feature sample data set of staff members, and inputting the feature sample data set into a preset initial deep learning model for training to obtain the feature identification model.
3. The cross-view person re-identification method of claim 1, further comprising:
determining positions of the target to be monitored in the identification image and the comparison image according to a preset mapping relationship between each monitoring-lens coordinate and the longitude and latitude coordinates, wherein the monitoring-lens coordinates are coordinate values in a lens coordinate system of a monitoring camera in the monitoring area.
4. The cross-view person re-identification method of claim 1, wherein comparing the similarity between the feature vector to be identified and the comparison feature vector to obtain the cross-view re-identification result for the target to be monitored specifically comprises:
comparing the feature vector to be identified with the comparison feature vector by means of cosine similarity to obtain a similarity score, and comparing the similarity score with a preset similarity threshold to obtain the cross-view re-identification result for the target to be monitored.
5. A cross-view person re-identification apparatus, comprising:
a longitude and latitude coordinate processing unit, configured to acquire longitude and latitude coordinates of a target to be monitored through a satellite positioning device mounted on the target to be monitored, and determine a moving path of the target to be monitored according to the longitude and latitude coordinates;
a monitoring image acquisition unit, configured to determine, according to the longitude and latitude coordinates and the moving path, a monitoring area where the target to be monitored is currently located and a previous monitoring area it has passed through, and acquire an identification image and a comparison image of the target to be monitored, wherein the identification image is a monitoring image of the target to be monitored in the current monitoring area, and the comparison image is a monitoring image of the target to be monitored in the previous monitoring area;
a feature extraction unit, configured to extract, through a preset feature identification model, features of the target to be monitored in the identification image and the comparison image respectively, to obtain a feature vector to be identified and a comparison feature vector;
and a feature comparison unit, configured to compare the similarity between the feature vector to be identified and the comparison feature vector to obtain a cross-view re-identification result for the target to be monitored.
6. The cross-view person re-identification apparatus as in claim 5, further comprising:
a feature identification model construction unit, configured to acquire a feature sample data set of staff members and input the feature sample data set into a preset initial deep learning model for training, to obtain the feature identification model.
7. The cross-view person re-identification apparatus as in claim 5, further comprising:
a coordinate conversion unit, configured to determine positions of the target to be monitored in the identification image and the comparison image according to a preset mapping relationship between each monitoring-lens coordinate and the longitude and latitude coordinates, wherein the monitoring-lens coordinates are coordinate values in a lens coordinate system of a monitoring camera in the monitoring area.
8. The cross-view person re-identification apparatus of claim 5, wherein the feature comparison unit is specifically configured to:
compare the feature vector to be identified with the comparison feature vector by means of cosine similarity to obtain a similarity score, and compare the similarity score with a preset similarity threshold to obtain the cross-view re-identification result for the target to be monitored.
9. A terminal, comprising: a memory and a processor;
the memory is used for storing program code corresponding to the cross-view person re-identification method according to any one of claims 1 to 4;
the processor is configured to execute the program code.
10. A storage medium having stored therein program code corresponding to the cross-view person re-identification method according to any one of claims 1 to 4.
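The area-determination step of claim 1 — mapping longitude/latitude fixes onto preset monitoring areas to recover the current and previous areas along the moving path — can be sketched as follows. This is a minimal illustration only: the bounding-box area definitions, the area names, and the coordinates are invented for the example and are not specified by the patent.

```python
# Hypothetical monitoring areas as lat/lon bounding boxes:
# name -> (min_lat, min_lon, max_lat, max_lon). Real deployments would
# use the camera coverage polygons configured for each monitoring area.
AREAS = {
    "gate": (23.100, 113.200, 23.102, 113.203),
    "yard": (23.102, 113.200, 23.105, 113.203),
}


def locate_area(lat: float, lon: float):
    """Return the name of the monitoring area containing the fix, or None."""
    for name, (lat0, lon0, lat1, lon1) in AREAS.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return name
    return None


def traversed_areas(path):
    """Map a moving path (ordered lat/lon fixes) to the ordered list of
    distinct monitoring areas it passes through; the last entry is the
    current area, the one before it the previous area."""
    areas = []
    for lat, lon in path:
        area = locate_area(lat, lon)
        if area is not None and (not areas or areas[-1] != area):
            areas.append(area)
    return areas
```

Given a path of satellite fixes, the last two entries returned by `traversed_areas` identify which cameras to query for the comparison image and the identification image, respectively.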
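The cosine-similarity comparison of claim 4 can be sketched as below; the vector values and the 0.7 threshold are assumptions chosen for illustration, not values given in the patent.

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def re_identify(query_vec: np.ndarray, gallery_vec: np.ndarray,
                threshold: float = 0.7):
    """Compare the feature vector to be identified against the comparison
    feature vector; return (similarity score, same-person decision)."""
    score = cosine_similarity(query_vec, gallery_vec)
    return score, score >= threshold


# Toy example: vectors pointing in the same direction score 1.0
v1 = np.array([0.2, 0.5, 0.8])
v2 = np.array([0.4, 1.0, 1.6])  # same direction, different magnitude
score, match = re_identify(v1, v2)
```

Because cosine similarity ignores vector magnitude, the decision depends only on the direction of the feature vectors, which is why L2-normalized deep features are commonly compared this way.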
CN202010237294.5A 2020-03-30 2020-03-30 Cross-view personnel re-identification method, device, terminal and storage medium Active CN111460977B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010237294.5A CN111460977B (en) 2020-03-30 2020-03-30 Cross-view personnel re-identification method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010237294.5A CN111460977B (en) 2020-03-30 2020-03-30 Cross-view personnel re-identification method, device, terminal and storage medium

Publications (2)

Publication Number Publication Date
CN111460977A CN111460977A (en) 2020-07-28
CN111460977B true CN111460977B (en) 2024-02-20

Family

ID=71685067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010237294.5A Active CN111460977B (en) 2020-03-30 2020-03-30 Cross-view personnel re-identification method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN111460977B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018223955A1 (en) * 2017-06-09 2018-12-13 北京深瞐科技有限公司 Target monitoring method, target monitoring device, camera and computer readable medium
CN109409250A (en) * 2018-10-08 2019-03-01 高新兴科技集团股份有限公司 A deep-learning-based pedestrian re-identification method across cameras with non-overlapping fields of view
CN110147471A (en) * 2019-04-04 2019-08-20 平安科技(深圳)有限公司 Trace tracking method, device, computer equipment and storage medium based on video
CN110674746A (en) * 2019-09-24 2020-01-10 视云融聚(广州)科技有限公司 Method and device for realizing high-precision cross-mirror tracking by using video spatial relationship assistance, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9824624B2 (en) * 2014-07-31 2017-11-21 Cloverleaf Media, LLC Dynamic merchandising communication system


Also Published As

Publication number Publication date
CN111460977A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
US11205276B2 (en) Object tracking method, object tracking device, electronic device and storage medium
CN103093198B (en) A kind of crowd density monitoring method and device
KR101409810B1 (en) Real-time object tracking method in moving camera by using particle filter
CN111160243A (en) Passenger flow volume statistical method and related product
CN111402294A (en) Target tracking method, target tracking device, computer-readable storage medium and computer equipment
CN108647587B (en) People counting method, device, terminal and storage medium
KR101645959B1 (en) The Apparatus and Method for Tracking Objects Based on Multiple Overhead Cameras and a Site Map
CN110675426B (en) Human body tracking method, device, equipment and storage medium
CN110458198B (en) Multi-resolution target identification method and device
Tian et al. Scene Text Detection in Video by Learning Locally and Globally.
CN111263955A (en) Method and device for determining movement track of target object
CN106471440A (en) Eye tracking based on efficient forest sensing
CN109636828A (en) Object tracking methods and device based on video image
CN110084830A (en) A kind of detection of video frequency motion target and tracking
CN112101195A (en) Crowd density estimation method and device, computer equipment and storage medium
CN111291612A (en) Pedestrian re-identification method and device based on multi-person multi-camera tracking
CN113743177A (en) Key point detection method, system, intelligent terminal and storage medium
CN115345906A (en) Human body posture tracking method based on millimeter wave radar
Ali et al. Deep Learning Algorithms for Human Fighting Action Recognition.
CN111353429A (en) Interest degree method and system based on eyeball turning
CN114821430A (en) Cross-camera target object tracking method, device, equipment and storage medium
CN113297963A (en) Multi-person posture estimation method and device, electronic equipment and readable storage medium
CN112949539A (en) Pedestrian re-identification interactive retrieval method and system based on camera position
CN111460977B (en) Cross-view personnel re-identification method, device, terminal and storage medium
CN115018886B (en) Motion trajectory identification method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant