CN111814690A - Target re-identification method and device and computer readable storage medium - Google Patents

Target re-identification method and device and computer readable storage medium

Publication number: CN111814690A (granted as CN111814690B)
Application number: CN202010659246.5A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 吴翠玲
Current assignee: Zhejiang Dahua Technology Co Ltd
Legal status: Active (granted)
Prior art keywords: retrieved, target, video, similarity, image


Classifications

    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06F 16/735: Querying of video data; filtering based on additional data, e.g. user or group profiles
    • G06F 16/784: Retrieval of video data using metadata automatically derived from the content, the detected or recognised objects being people
    • G06F 18/22: Pattern recognition; matching criteria, e.g. proximity measures
    • G06N 3/08: Neural networks; learning methods
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 2201/07: Target detection (indexing scheme)


Abstract

The application discloses a target re-identification method, a target re-identification device and a computer-readable storage medium. The method comprises: acquiring a target image and a video to be retrieved, wherein the target image comprises a retrieval target; performing target detection on the video to be retrieved to obtain an illumination intensity value corresponding to the video to be retrieved; determining a similarity threshold matched with the illumination intensity value; calculating the similarity between the retrieval target and each target in the video to be retrieved, and judging whether the similarity is greater than the similarity threshold; and if the similarity between the retrieval target and a target in the video to be retrieved is greater than the similarity threshold, determining that the retrieval target is present in the video to be retrieved. In this way, the accuracy of target identification can be improved.

Description

Target re-identification method and device and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a method and an apparatus for re-identifying a target, and a computer-readable storage medium.
Background
In an actual monitoring scene, a clear face is difficult to obtain due to the complex environment and the low resolution of cameras, so the effect of face recognition technology is limited. Pedestrian re-identification, as a supplement to face recognition, can effectively solve cross-device and cross-scene problems in practical applications. Pedestrian re-identification needs to determine whether pedestrians captured in different environments and illumination conditions are the same person; the scenes to be recognized involve different illumination environments such as day, night, cloudy weather, dim light or bright light, which greatly affects the recognition effect. Existing pedestrian re-identification methods do not consider that retrieval images may come from different environments, and differing illumination conditions reduce the retrieval effect, so a re-identification method that addresses environmental influence is urgently needed.
Disclosure of Invention
The application provides a target re-identification method, a target re-identification device and a computer readable storage medium, which can improve the accuracy of target identification.
In order to solve the above technical problem, the present application adopts a technical solution of providing a target re-identification method, including: acquiring a target image and a video to be retrieved, wherein the target image comprises a retrieval target; carrying out target detection on a video to be retrieved to obtain an illumination intensity value corresponding to the video to be retrieved; determining a similarity threshold matched with the illumination intensity value by using the illumination intensity value; calculating the similarity between the retrieval target and each target in the video to be retrieved, and judging whether the similarity is greater than a similarity threshold value or not; and if the similarity between the retrieval target and the target in the video to be retrieved is greater than the similarity threshold, determining that the retrieval target is in the video to be retrieved.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide an object re-identification apparatus, which includes a memory and a processor connected to each other, wherein the memory is used for storing a computer program, and the computer program is used for implementing the object re-identification method when being executed by the processor.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide a computer-readable storage medium for storing a computer program, wherein the computer program is configured to implement the above object re-identification method when executed by a processor.
Through the above scheme, the beneficial effects of the present application are as follows: a target image and a video to be retrieved are first acquired, and target detection is performed on the video to be retrieved to obtain a corresponding illumination intensity value; a matched similarity threshold is then determined using the illumination intensity value; the similarity between the retrieval target in the target image and each target in the video to be retrieved is then calculated and compared against the similarity threshold; if the similarity between the retrieval target and a target in the video to be retrieved is greater than the similarity threshold, it is determined that the retrieval target exists in the video to be retrieved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those skilled in the art can obtain other drawings based on these drawings without creative effort. Wherein:
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a target re-identification method provided in the present application;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a target re-identification method provided in the present application;
FIG. 3 is a schematic diagram of the structure of the neural network in the embodiment shown in FIG. 2;
FIG. 4 is a schematic structural diagram of an embodiment of a target re-identification apparatus provided in the present application;
FIG. 5 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a target re-identification method provided in the present application, where the method includes:
step 11: and acquiring a target image and a video to be retrieved.
The target image and the video to be retrieved may be received from other devices, read from a storage device, or obtained by controlling a camera device to shoot.
Further, the target image comprises a retrieval target, which is the object the user is interested in, and the video to be retrieved is a video that may contain the retrieval target. Whether the retrieval target exists in the video to be retrieved can be judged in order to track and identify the retrieval target; for example, if the retrieval target is a criminal, the criminal's route can be determined by analyzing video captured near the scene, accelerating case detection.
Step 12: and carrying out target detection on the video to be retrieved to obtain the illumination intensity value corresponding to the video to be retrieved.
After the video to be retrieved is acquired, it can be analyzed to generate an illumination intensity value describing the lighting when the video was shot, so that the influence of illumination can be eliminated when judging whether the retrieval target exists in the video, improving recognition accuracy.
Step 13: and determining a similarity threshold value matched with the illumination intensity value by using the illumination intensity value.
For example, the similarity between two images of the same person taken under sunny and cloudy conditions, respectively, may differ from the similarity between two images both taken under sunny conditions. When identifying whether the pedestrians in two images are the same person, the environmental factor, namely the illumination intensity value, therefore needs to be considered: the threshold for deciding whether two pedestrians are the same person differs under different illumination conditions, so different similarity thresholds may be set for different illumination intensities. The similarity threshold thus needs to be determined from the illumination intensity value; it serves as the reference for judging whether the retrieval target is the same as a target in a certain frame of the video to be retrieved.
Step 14: and calculating the similarity between the retrieval target and each target in the video to be retrieved, and judging whether the similarity is greater than a similarity threshold value.
The retrieval target can be matched against the targets in the video to be retrieved to judge whether their similarity exceeds the set similarity threshold. The category of the targets in the video to be retrieved is the same as that of the retrieval target: if the retrieval target is a person, the targets in the video to be retrieved are also people, and if the retrieval target is a dog, the targets are also dogs. For example, if the retrieval target is pedestrian A and the video to be retrieved contains pedestrians A to C, then for a certain frame of the video it is judged, in turn, whether the similarity between the retrieval target and each of pedestrians A, B and C exceeds the similarity threshold.
Step 15: and if the similarity between the retrieval target and the target in the video to be retrieved is greater than the similarity threshold, determining that the retrieval target is in the video to be retrieved.
If the similarity between the retrieval target and some target in the video to be retrieved is greater than the similarity threshold, the retrieval target is similar to that target, and it can be determined that the retrieval target exists in the video to be retrieved. If the similarities between the retrieval target and all targets in the video to be retrieved are smaller than the similarity threshold, the retrieval target differs considerably from the targets in the video, the probability that they are the same target is small, and the retrieval target is judged not to exist in the video to be retrieved.
This embodiment provides a method for re-identifying a target in a complex environment: target detection is performed on the acquired video to be retrieved to obtain a corresponding illumination intensity value; a similarity threshold is then determined from the illumination intensity value; the similarity between the retrieval target in the target image and each target in the video to be retrieved is calculated and compared against the similarity threshold; and when the similarity between the retrieval target and a target in the video to be retrieved is greater than the similarity threshold, the retrieval target is determined to be in the video to be retrieved. Because the influence of the environment's illumination intensity on recognition is taken into account, interference from illumination is reduced and recognition accuracy is improved.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another embodiment of a target re-identification method provided in the present application, the method including:
step 201: and acquiring a target image and a video to be retrieved.
This step is the same as step 11 and is not described herein again.
Step 202: and acquiring training images under different illumination intensities, and training by using the training images to obtain the neural network model.
Images under various environments can be obtained in advance and used to train a neural network model whose input is an image and whose output is the illumination intensity value of that image. Specifically, the illumination intensity value output by the neural network model can range from 0 to 4, with a larger value indicating stronger illumination.
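The patent trains a neural network for this mapping; neither the model nor its training data is given in the text, so as a hypothetical stand-in, the 0 to 4 output scale can be illustrated by quantizing mean frame brightness into five bins:

```python
import numpy as np

def illumination_intensity(frame: np.ndarray) -> int:
    """Map a frame's mean gray level (0-255) onto the 0-4 scale.

    Stand-in heuristic only: the patent uses a trained neural
    network here; this merely quantizes brightness into five bins.
    """
    return int(min(4, frame.mean() // 51))  # 51 = 256 // 5 bins

dark = np.full((8, 8), 10, dtype=np.uint8)     # dim frame
bright = np.full((8, 8), 240, dtype=np.uint8)  # bright frame
```

A real model would be trained on labelled images under day, night, cloudy, dim and bright conditions as step 202 describes.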
Step 203: and inputting each image to be retrieved into the neural network model to obtain the illumination intensity value of the image to be retrieved.
The video to be retrieved comprises a plurality of images to be retrieved. In actual use, each image to be retrieved can be input into the trained neural network model, which processes it and generates the corresponding illumination intensity value.
Step 204: and averaging the illumination intensity values of all the images to be retrieved to obtain the illumination intensity value of the video to be retrieved.
And processing each image to be retrieved in the video to be retrieved according to the manner provided in step 203 to obtain an illumination intensity value corresponding to each image to be retrieved, averaging the illumination intensity values of all the images to be retrieved, and taking the average value as the illumination intensity value of the whole video to be retrieved.
It can be understood that, in other embodiments, images to be retrieved may instead be sampled at certain intervals; the illumination intensity values of the sampled images are then computed and averaged, and the average is used as the illumination intensity value of the whole video to be retrieved.
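Steps 203 and 204 reduce to a simple average; a minimal sketch, assuming per-frame intensity values are already available, with an optional `step` parameter for the interval-sampling variant:

```python
def video_intensity(frame_values, step=1):
    """Average per-frame illumination values for a whole video.

    Sampling every `step`-th frame implements the interval-sampling
    variant mentioned in the text (step=1 uses every frame)."""
    sampled = frame_values[::step]
    return sum(sampled) / len(sampled)
```

For example, per-frame values [0, 1, 2, 3] give a video-level intensity of 1.5.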
Step 205: and calculating the similarity between the first image to be retrieved under the first preset illumination intensity and at least one second image to be retrieved under the second preset illumination intensity, and taking the average value of all the similarities as the similarity threshold under the second preset illumination intensity.
The first image to be retrieved and the second image to be retrieved contain the same target; for example, if the first image to be retrieved contains pedestrian A, the second image to be retrieved also contains pedestrian A. After the illumination intensity value of the video to be retrieved is determined, the corresponding similarity threshold under that illumination can be determined: images to be retrieved under different illumination conditions are selected, the targets in them are detected, the similarities between a target under the first preset illumination intensity and the corresponding target under the second preset illumination intensity are computed, and the average of all these similarities is taken as the similarity threshold under the second preset illumination intensity.
For example, if the illumination intensity value at the first preset illumination intensity is 0, the illumination intensity value at the second preset illumination intensity is 1, the target is a pedestrian C, and the target image is denoted as image a, a plurality of images including the pedestrian C at the illumination intensity value of 1 may be obtained and denoted as image B, the similarity between the image a and each image B is calculated and averaged, and the average value is used as the similarity threshold value at the illumination intensity value of 1.
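The calibration in step 205 can be sketched as below. Cosine similarity is assumed as the metric, consistent with the formula used later in the description; the feature vectors are placeholders for real extracted features:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def calibrate_threshold(anchor_feature, same_target_features):
    """Average the similarities between one image of a target under
    the first illumination (anchor, e.g. image a) and images of the
    same target under the second illumination (e.g. images b); the
    mean becomes the similarity threshold for that illumination."""
    sims = [cosine(anchor_feature, f) for f in same_target_features]
    return sum(sims) / len(sims)
```

In practice this would be repeated per illumination level (0 to 4) so that each level gets its own threshold.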
Step 206: and extracting the features in the retrieval target to obtain the feature value of the retrieval target.
The target image may be processed by a feature extraction method to extract features in the search target to obtain corresponding feature values, for example, feature extraction may be performed by a deep learning based method.
Step 207: and screening the image to be retrieved to obtain an identification reference image, and extracting the features in the identification reference image to obtain the feature value of the identification reference image.
For a given target image and a library of videos to be retrieved, target detection can be performed on each video in the library using a target detection method, and the detections tracked using a target tracking method to obtain a tracking image sequence containing the tracking target.
To ensure the recognition effect, screening can be performed according to the postures of the tracking targets in the tracking image sequence to obtain a preferred image sequence, whose images are marked as identification reference images.
Further, taking a pedestrian as the retrieval target, for each tracking image sequence the images that participate in recognition, that is, the identification reference images, can be selected according to the pedestrian's posture (front, back or side). Specifically, a preference algorithm may be used to select a number of satisfactory images from images with different postures to form the preferred image sequence, for example by screening according to image sharpness or completeness. If no image of the pedestrian in a certain posture exists, that posture is not selected; for example, if only the front of the pedestrian was captured, no back or side images can be screened.
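The patent does not specify the preference algorithm; a minimal sketch, assuming each tracked detection carries a pose label and a sharpness score from upstream processing, could group by pose and keep the sharpest images:

```python
def select_preferred(tracked_detections, per_pose=1):
    """Keep the `per_pose` sharpest images for each observed pose
    (front / back / side). Poses never observed are simply absent,
    matching the "posture is not preferred" case in the text.

    tracked_detections: iterable of (pose, sharpness, image_id)."""
    by_pose = {}
    for pose, sharpness, image_id in tracked_detections:
        by_pose.setdefault(pose, []).append((sharpness, image_id))
    preferred = []
    for items in by_pose.values():
        items.sort(reverse=True)               # sharpest first
        preferred.extend(i for _, i in items[:per_pose])
    return preferred
```

Screening by completeness instead of sharpness would only change the score passed in; the grouping logic stays the same.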
For the selected identification reference images, features can be extracted using a neural network. Specifically, taking pedestrian identification as an example, pedestrian features may be extracted through a convolutional neural network whose structure is shown in fig. 3: the backbone network comprises a plurality of convolutional layers, pooling layers and so on; the feature map extracted by the backbone is pooled and divided into upper and lower parts, that is, upper-body features and lower-body features, and the final features are then obtained through a dimension-reduction operation.
It can be understood that the features actually used are the concatenation of the upper-body and lower-body features. Compared with directly extracting whole-body features, splitting into upper-body and lower-body features allows different loss values to be set and the two parts to be trained separately, so the extracted features are more accurate and recognition is more robust.
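The split-pool-concatenate step can be illustrated with a plain NumPy sketch. The real network of fig. 3 uses learned convolutional layers; here the backbone feature map is a placeholder array and pooling is a simple global average:

```python
import numpy as np

def split_pool_concat(feature_map):
    """Split a C x H x W backbone feature map into upper and lower
    halves along the height axis, global-average-pool each half, and
    concatenate into one 2C-dimensional descriptor (upper-body
    features followed by lower-body features)."""
    c, h, w = feature_map.shape
    upper = feature_map[:, : h // 2, :].mean(axis=(1, 2))
    lower = feature_map[:, h // 2 :, :].mean(axis=(1, 2))
    return np.concatenate([upper, lower])

fm = np.zeros((2, 4, 3))   # toy 2-channel, 4x3 feature map
fm[:, :2, :] = 1.0         # only the "upper body" region is active
```

A dimension-reduction layer (e.g. a learned linear projection) would then map this 2C vector to the final feature.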
Step 208: and calculating the similarity between the characteristic value of the retrieval target and the characteristic value of each recognition reference image, counting the maximum value of the similarities corresponding to all the recognition reference images, and taking the recognition reference image corresponding to the maximum similarity as a recognition result.
The dot product of the feature value of the retrieval target and the feature value of each identification reference image can be calculated, together with the product of the modulus of the feature value of the retrieval target and the modulus of the feature value of the corresponding identification reference image; dividing the former by the latter gives the corresponding similarity, that is, the similarity is calculated with the following formula:
S_i = (f_q · f_gi) / (‖f_q‖ · ‖f_gi‖)

where S_i is the i-th similarity (i = 1, 2, …, n), f_q is the feature value of the retrieval target, f_gi is the feature value of the i-th identification reference image, and n is the number of identification reference images in the preferred image sequence.
After calculating the similarity between the retrieval target and each recognition reference image in the preferred image sequence, all the similarities can be arranged from large to small, and the recognition reference image with the largest similarity is selected to represent the result of recognizing the preferred image sequence, that is, the similarity between the retrieval target and the video to be retrieved is as follows:
S = max(S_1, S_2, …, S_n)
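The computation above, cosine similarity against each identification reference image followed by taking the maximum, can be sketched as:

```python
import numpy as np

def best_match(query_feature, reference_features):
    """Implement S_i and S = max_i S_i: return the maximum cosine
    similarity over the identification reference images, together
    with the index of the matching reference image."""
    q = np.asarray(query_feature, dtype=float)
    sims = []
    for g in reference_features:
        g = np.asarray(g, dtype=float)
        sims.append(float(np.dot(q, g) /
                          (np.linalg.norm(q) * np.linalg.norm(g))))
    i = int(np.argmax(sims))
    return sims[i], i
```

The returned maximum similarity is what step 209 compares against the illumination-dependent threshold.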
step 209: and judging whether the similarity between the characteristic value of the identification result and the characteristic value of the retrieval target is greater than a similarity threshold value.
Whether a retrieval target exists in the video to be retrieved can be judged according to the size relation between the maximum similarity and the similarity threshold.
Step 210: and if the similarity between the characteristic value of the identification result and the characteristic value of the retrieval target is greater than the similarity threshold, determining that the retrieval target is in the video to be retrieved.
The identification result is the identification reference image in the preferred image sequence with the maximum similarity to the retrieval target. If the maximum similarity is greater than the similarity threshold, the retrieval target is judged to be the same as the target in that identification reference image, that is, the retrieval target exists in the video to be retrieved; otherwise, the retrieval target is judged to differ from the targets in the identification reference images, that is, the retrieval target does not exist in the video to be retrieved. For example, if the retrieval target is pedestrian A and the similarity threshold is 90%, then when the maximum similarity is 92%, pedestrian A is determined to appear in the video to be retrieved; if the maximum similarity is smaller than 90%, it is determined that pedestrian A does not appear in the video to be retrieved.
The similarity threshold for target recognition changes under the influence of the environment, and setting different similarity thresholds for different illumination environments improves the target recognition effect and realizes target recognition in complex environments. In addition, targets in multiple postures can participate in recognition, further improving accuracy, while the number of similarity comparisons and feature value computations is reduced, improving recognition speed.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of the object re-identification apparatus provided in the present application, the object re-identification apparatus 40 includes a memory 41 and a processor 42 connected to each other, the memory 41 is used for storing a computer program, and the computer program is used for implementing the object re-identification method in the foregoing embodiment when being executed by the processor 42.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of a computer-readable storage medium 50 provided in the present application, where the computer-readable storage medium 50 is used for storing a computer program 51, and the computer program 51 is used for implementing the object re-identification method in the foregoing embodiment when being executed by a processor.
The computer readable storage medium 50 may be a server, a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of modules or units is merely a logical division, and an actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are intended to be included within the scope of the present application.

Claims (10)

1. A target re-identification method is characterized by comprising the following steps:
acquiring a target image and a video to be retrieved, wherein the target image comprises a retrieval target;
performing target detection on the video to be retrieved to obtain an illumination intensity value corresponding to the video to be retrieved;
determining a similarity threshold value matched with the illumination intensity value by using the illumination intensity value;
calculating the similarity between the retrieval target and each target in the video to be retrieved, and judging whether the similarity is greater than the similarity threshold value;
and if so, determining that the retrieval target is in the video to be retrieved.
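As an illustrative, non-authoritative sketch of the decision in claim 1 (the threshold mapping below is purely hypothetical; the claim only requires that the threshold be matched to the illumination intensity value):

```python
def pick_threshold(illumination):
    # Hypothetical mapping from an illumination intensity value (here
    # normalized to [0, 1]) to a similarity threshold; claim 1 only
    # requires that the threshold match the illumination value.
    clipped = min(max(illumination, 0.0), 1.0)
    return 0.5 + 0.3 * clipped

def target_in_video(similarities, illumination):
    # Claim 1: the retrieval target is determined to be in the video
    # to be retrieved if any computed similarity exceeds the
    # illumination-matched threshold.
    threshold = pick_threshold(illumination)
    return any(s > threshold for s in similarities)
```
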
2. The object re-identification method according to claim 1, wherein the video to be retrieved comprises a plurality of images to be retrieved, and before the step of calculating the similarity between the retrieval target and each target in the video to be retrieved, the method comprises:
extracting features in the retrieval target to obtain a feature value of the retrieval target;
screening the image to be retrieved to obtain an identification reference image;
and extracting the features in the identification reference image to obtain the feature value of the identification reference image.
3. The object re-identification method according to claim 2, wherein the step of screening the image to be retrieved to obtain an identification reference image comprises:
performing target detection on the video to be retrieved by using a target detection method, and tracking the video to be retrieved by using a target tracking method to obtain a tracking image sequence containing a tracking target;
and screening according to the postures of the tracking targets in the tracking image sequence to obtain a preferred image sequence, wherein the images in the preferred image sequence are marked as the identification reference images.
4. The object re-identification method according to claim 2, wherein the step of calculating the similarity between the retrieval target and each target in the video to be retrieved comprises:
calculating the similarity between the characteristic value of the retrieval target and the characteristic value of each identification reference image;
and determining the maximum of the similarities corresponding to all the identification reference images, and taking the identification reference image corresponding to the maximum similarity as an identification result.
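A minimal sketch of the maximum-similarity selection in claim 4 (all names are illustrative, and the dot-product similarity is only a stand-in for the similarity of the claims):

```python
def best_match(target_feature, reference_features, similarity):
    # Claim 4: score the retrieval target's feature value against each
    # identification reference image's feature value, then take the
    # reference image with the maximum similarity as the result.
    scores = [similarity(target_feature, f) for f in reference_features]
    best_index = max(range(len(scores)), key=scores.__getitem__)
    return best_index, scores[best_index]

# Example with a plain dot product as the similarity function:
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
index, score = best_match([1.0, 0.0], [[0.0, 1.0], [1.0, 0.0]], dot)
```
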
5. The object re-identification method according to claim 4, wherein the step of determining whether the similarity is greater than the similarity threshold comprises:
and judging whether the similarity between the characteristic value of the identification result and the characteristic value of the retrieval target is greater than the similarity threshold value.
6. The object re-identification method according to claim 4, wherein the method further comprises:
calculating a dot product of the characteristic value of the retrieval target and the characteristic value of each identification reference image;
and calculating a product of the modulus of the characteristic value of the retrieval target and the modulus of the characteristic value of the corresponding identification reference image, and dividing the dot product by this modulus product to obtain the corresponding similarity.
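The computation described in claim 6 is cosine similarity; a minimal sketch:

```python
import math

def cosine_similarity(a, b):
    # Claim 6: the dot product of the two feature vectors, divided by
    # the product of their moduli.
    dot = sum(x * y for x, y in zip(a, b))
    modulus_product = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / modulus_product
```
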
7. The method of claim 2, wherein the step of performing the target detection on the video to be retrieved to obtain the illumination intensity value corresponding to the video to be retrieved comprises:
acquiring training images under different illumination intensities, and training by using the training images to obtain a neural network model;
inputting each image to be retrieved into the neural network model to obtain the illumination intensity value of the image to be retrieved;
and averaging the illumination intensity values of all the images to be retrieved to obtain the illumination intensity value of the video to be retrieved.
8. The object re-identification method according to claim 2, wherein the step of determining a similarity threshold matching the illumination intensity value using the illumination intensity value comprises:
calculating the similarity between a first image to be retrieved under a first preset illumination intensity and at least one second image to be retrieved under a second preset illumination intensity, wherein the first image to be retrieved and the second image to be retrieved comprise the same target;
and taking the average value of all the similarities as the similarity threshold under the second preset illumination intensity.
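A minimal sketch of the threshold calibration in claim 8, assuming the same-target similarities across the two illumination levels have already been computed:

```python
def calibrate_threshold(same_target_similarities):
    # Claim 8: the similarity threshold under the second preset
    # illumination intensity is the average of the similarities between
    # the first image to be retrieved and each second image to be
    # retrieved containing the same target.
    return sum(same_target_similarities) / len(same_target_similarities)
```
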
9. An object re-identification apparatus, comprising a memory and a processor connected to each other, wherein the memory is used for storing a computer program which, when executed by the processor, implements the object re-identification method according to any one of claims 1-8.
10. A computer-readable storage medium for storing a computer program, wherein the computer program, when executed by a processor, is adapted to carry out the object re-identification method of any one of claims 1-8.
CN202010659246.5A 2020-07-09 2020-07-09 Target re-identification method, device and computer readable storage medium Active CN111814690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010659246.5A CN111814690B (en) 2020-07-09 2020-07-09 Target re-identification method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111814690A true CN111814690A (en) 2020-10-23
CN111814690B CN111814690B (en) 2023-09-01

Family

ID=72842157

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010659246.5A Active CN111814690B (en) 2020-07-09 2020-07-09 Target re-identification method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111814690B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09293082A (en) * 1996-04-26 1997-11-11 Toshiba Corp Device for retrieving picture and its method
JP2006350465A (en) * 2005-06-13 2006-12-28 Nec Corp Image matching device, image matching method, and program for image matching
JP2010020602A (en) * 2008-07-11 2010-01-28 Nikon Corp Image matching device and camera
JP2010146099A (en) * 2008-12-16 2010-07-01 Fujitsu Ltd Image retrieval program, image retrieval method, and recording medium
CN103136533A (en) * 2011-11-28 2013-06-05 汉王科技股份有限公司 Face recognition method and device based on dynamic threshold value
CN105320710A (en) * 2014-08-05 2016-02-10 北京大学 Illumination variation resistant vehicle retrieval method and device
CN106295571A (en) * 2016-08-11 2017-01-04 深圳市赛为智能股份有限公司 The face identification method of illumination adaptive and system
WO2017113692A1 (en) * 2015-12-29 2017-07-06 乐视控股(北京)有限公司 Method and device for image matching
CN106933861A (en) * 2015-12-30 2017-07-07 北京大唐高鸿数据网络技术有限公司 A kind of customized across camera lens target retrieval method of supported feature
CN107729930A (en) * 2017-10-09 2018-02-23 济南大学 A kind of method and system of the width same scene image irradiation similarity of Quick two
CN108804549A (en) * 2018-05-21 2018-11-13 电子科技大学 Eyeground contrastographic picture search method based on the adjustment of medical image features weight
CN110019895A (en) * 2017-07-27 2019-07-16 杭州海康威视数字技术股份有限公司 A kind of image search method, device and electronic equipment
CN110659391A (en) * 2019-08-29 2020-01-07 苏州千视通视觉科技股份有限公司 Video detection method and device

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344027A (en) * 2021-05-10 2021-09-03 北京迈格威科技有限公司 Retrieval method, device, equipment and storage medium for object in image
CN113344027B (en) * 2021-05-10 2024-04-23 北京迈格威科技有限公司 Method, device, equipment and storage medium for retrieving objects in image
CN113269177A (en) * 2021-07-21 2021-08-17 广州乐盈信息科技股份有限公司 Target capturing system based on monitoring equipment
CN113269177B (en) * 2021-07-21 2021-09-14 广州乐盈信息科技股份有限公司 Target capturing system based on monitoring equipment
CN113743387A (en) * 2021-11-05 2021-12-03 中电科新型智慧城市研究院有限公司 Video pedestrian re-identification method and device, electronic equipment and readable storage medium
CN113743387B (en) * 2021-11-05 2022-03-22 中电科新型智慧城市研究院有限公司 Video pedestrian re-identification method and device, electronic equipment and readable storage medium
CN114419491A (en) * 2021-12-28 2022-04-29 云从科技集团股份有限公司 Video identification method and device and computer storage medium

Also Published As

Publication number Publication date
CN111814690B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN106778595B (en) Method for detecting abnormal behaviors in crowd based on Gaussian mixture model
CN111814690B (en) Target re-identification method, device and computer readable storage medium
CN107153817B (en) Pedestrian re-identification data labeling method and device
US8340420B2 (en) Method for recognizing objects in images
CN110569731B (en) Face recognition method and device and electronic equipment
CN109472191B (en) Pedestrian re-identification and tracking method based on space-time context
US8855363B2 (en) Efficient method for tracking people
CN111582126B (en) Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion
CN112800967B (en) Posture-driven shielded pedestrian re-recognition method
CN113033523B (en) Method and system for constructing falling judgment model and falling judgment method and system
CN110796074A (en) Pedestrian re-identification method based on space-time data fusion
CN111091025A (en) Image processing method, device and equipment
CN113435355A (en) Multi-target cow identity identification method and system
CN112232140A (en) Crowd counting method and device, electronic equipment and computer storage medium
CN114581709A (en) Model training, method, apparatus, and medium for recognizing target in medical image
CN116311063A (en) Personnel fine granularity tracking method and system based on face recognition under monitoring video
KR20190050551A (en) Apparatus and method for recognizing body motion based on depth map information
CN111353385A (en) Pedestrian re-identification method and device based on mask alignment and attention mechanism
CN112699842A (en) Pet identification method, device, equipment and computer readable storage medium
CN117475353A (en) Video-based abnormal smoke identification method and system
Stentiford Attention-based vanishing point detection
CN111444816A (en) Multi-scale dense pedestrian detection method based on fast RCNN
CN113657169B (en) Gait recognition method, device and system and computer readable storage medium
CN113255549B (en) Intelligent recognition method and system for behavior state of wolf-swarm hunting
CN115311680A (en) Human body image quality detection method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant