CN111814690B - Target re-identification method, device and computer readable storage medium - Google Patents


Info

Publication number
CN111814690B
CN111814690B (application CN202010659246.5A)
Authority
CN
China
Prior art keywords
target
searched
video
similarity
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010659246.5A
Other languages
Chinese (zh)
Other versions
CN111814690A (en)
Inventor
吴翠玲
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202010659246.5A priority Critical patent/CN111814690B/en
Publication of CN111814690A publication Critical patent/CN111814690A/en
Application granted granted Critical
Publication of CN111814690B publication Critical patent/CN111814690B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/735 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • G06F16/784 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content the detected or recognised objects being people
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a target re-identification method, a target re-identification device, and a computer-readable storage medium. The method comprises the following steps: acquiring a target image and a video to be retrieved, wherein the target image contains a retrieval target; performing target detection on the video to be retrieved to obtain an illumination intensity value corresponding to the video; determining a similarity threshold matched to the illumination intensity value; calculating the similarity between the retrieval target and each target in the video to be retrieved, and judging whether the similarity is greater than the similarity threshold; and, if the similarity between the retrieval target and a target in the video to be retrieved is greater than the similarity threshold, determining that the retrieval target appears in the video to be retrieved. In this manner, the accuracy of target identification can be improved.

Description

Target re-identification method, device and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a target re-identification method, apparatus, and computer readable storage medium.
Background
In practical surveillance scenes, complex environments and low camera resolution make it difficult to capture a clear face, so face recognition alone is often ineffective. Pedestrian re-identification, as a complement to face recognition, can effectively handle cross-device and cross-scene matching in practical applications, and can recognize whether pedestrians captured in different environments and under different illumination conditions are the same person. However, the scenes to be recognized may involve very different illumination environments, such as daytime, night, cloudy weather, weak light, or strong light, which greatly affect the recognition effect. Existing pedestrian re-identification methods do not consider that query images may come from different environments, so differing illumination conditions degrade retrieval performance. A re-identification method that accounts for this environmental influence is therefore needed.
Disclosure of Invention
The application provides a target re-identification method, a target re-identification device and a computer readable storage medium, which can improve the accuracy of target identification.
In order to solve the above technical problem, one technical solution adopted by the present application is to provide a target re-identification method, which comprises the following steps: acquiring a target image and a video to be retrieved, wherein the target image contains a retrieval target; performing target detection on the video to be retrieved to obtain an illumination intensity value corresponding to the video; determining a similarity threshold matched to the illumination intensity value; calculating the similarity between the retrieval target and each target in the video to be retrieved, and judging whether the similarity is greater than the similarity threshold; and, if the similarity between the retrieval target and a target in the video to be retrieved is greater than the similarity threshold, determining that the retrieval target appears in the video to be retrieved.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide a target re-recognition device, which includes a memory and a processor connected to each other, wherein the memory is used for storing a computer program, and the computer program is used for implementing the target re-recognition method when being executed by the processor.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide a computer readable storage medium, where the computer readable storage medium is used for storing a computer program, and the computer program is used for implementing the above target re-identification method when being executed by a processor.
The beneficial effects of the application achieved through the above scheme are as follows: a target image and a video to be retrieved are first acquired, and target detection is performed on the video to obtain a corresponding illumination intensity value; a matching similarity threshold is then determined from the illumination intensity value; the similarity between the retrieval target in the target image and each target in the video is calculated and compared with the similarity threshold; and if the similarity between the retrieval target and a target in the video exceeds the threshold, the retrieval target is determined to appear in the video. By setting different similarity thresholds for different illumination environments, the influence of illumination on recognition is reduced and the effect of target recognition is improved.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that a person skilled in the art may obtain other drawings from them without inventive effort. Wherein:
FIG. 1 is a flowchart of an embodiment of a target re-identification method according to the present application;
FIG. 2 is a flowchart illustrating another embodiment of a target re-identification method according to the present application;
FIG. 3 is a schematic diagram of the neural network in the embodiment shown in FIG. 2;
FIG. 4 is a schematic diagram of an embodiment of a target re-recognition device according to the present application;
fig. 5 is a schematic structural diagram of an embodiment of a computer readable storage medium according to the present application.
Detailed Description
The embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the present application.
Referring to fig. 1, fig. 1 is a flowchart of an embodiment of a target re-identification method according to the present application, where the method includes:
step 11: and acquiring the target image and the video to be retrieved.
The target image and the video to be retrieved may be received from another device, read from a storage device, or obtained by controlling an imaging device to capture them.
Further, the target image contains a retrieval target, which is the object the user is interested in, and the video to be retrieved is a video that may contain the retrieval target. Whether the retrieval target appears in the video can then be judged in order to track and identify it. For example, if the retrieval target is a criminal suspect, the suspect's route can be determined by analyzing videos captured near relevant locations, thereby accelerating the investigation of a case.
Step 12: and carrying out target detection on the video to be searched to obtain an illumination intensity value corresponding to the video to be searched.
After the video to be retrieved is obtained, it can be analyzed to estimate the illumination intensity at the time the video was captured. This allows the influence of illumination to be removed when judging whether the retrieval target appears in the video, improving identification accuracy.
Step 13: and determining a similarity threshold matched with the illumination intensity value by utilizing the illumination intensity value.
The similarity computed for the same pedestrian may differ across environments; for example, the similarity between an image of a person taken in sunny conditions and an image taken in cloudy conditions may differ from the similarity between two images both taken in sunny conditions. Therefore, when identifying whether the pedestrians in two images are the same person, the environmental factor, namely the illumination intensity value, should be taken into account, and different similarity thresholds can be set for different illumination conditions, i.e., different illumination intensities. The similarity threshold thus needs to be determined from the illumination intensity value; it serves as the reference for deciding whether the retrieval target is the same as a target in a given frame of the video to be retrieved.
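As an illustrative sketch (not part of the patent), the matching in this step can be as simple as a lookup from a discretized illumination level (the embodiment later uses integer levels 0 to 4) to a pre-calibrated threshold. The threshold values below are invented placeholders:

```python
# Hypothetical mapping from illumination level (0-4) to a pre-calibrated
# similarity threshold; the numeric values are illustrative placeholders,
# not values taken from the patent.
ILLUMINATION_THRESHOLDS = {0: 0.80, 1: 0.84, 2: 0.90, 3: 0.86, 4: 0.82}

def threshold_for_illumination(level: int) -> float:
    """Return the similarity threshold matched to an illumination level."""
    return ILLUMINATION_THRESHOLDS[level]
```

How these thresholds would be calibrated per illumination level is described in step 205 of the second embodiment.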
Step 14: and calculating the similarity between the search target and each target in the video to be searched, and judging whether the similarity is larger than a similarity threshold value.
The retrieval target is matched against the targets in the video to be retrieved, and it is judged whether their similarity exceeds the set similarity threshold. The category of the targets in the video is the same as the category of the retrieval target: if the retrieval target is a person, the targets in the video are also people; if the retrieval target is a dog, the targets are also dogs. For example, suppose the retrieval target is pedestrian A and the video to be retrieved contains pedestrians A to C. For a frame containing pedestrian A and pedestrian B, it is judged whether the similarity between the retrieval target and pedestrian A exceeds the similarity threshold, and likewise whether the similarity between the retrieval target and pedestrian B exceeds it.
Step 15: and if the similarity between the retrieval target and the target in the video to be retrieved is greater than a similarity threshold, determining that the retrieval target is in the video to be retrieved.
If the similarity between the retrieval target and some target in the video to be retrieved is greater than the similarity threshold, the retrieval target is similar to that target, and it can be determined that the retrieval target appears in the video. If the similarities between the retrieval target and all targets in the video are below the threshold, the retrieval target differs significantly from every target in the video and is unlikely to be any of them, so it can be judged that the retrieval target does not appear in the video.
This embodiment provides a target re-identification method for complex environments: target detection is performed on the acquired video to be retrieved to obtain a corresponding illumination intensity value; a similarity threshold is then determined from that value; the similarity between the retrieval target in the target image and each target in the video is calculated and compared against the threshold; and when the similarity between the retrieval target and a target in the video exceeds the threshold, the retrieval target is determined to appear in the video. Because the influence of environmental illumination on recognition is taken into account, interference from illumination intensity is reduced and identification accuracy is improved.
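The flow of Fig. 1 can be sketched as follows. All names here are hypothetical: `thresholds` stands in for the illumination-to-threshold matching of step 13, and `similarity` stands in for the similarity computation of step 14:

```python
def re_identify(target_feature, video_target_features, illumination_level,
                thresholds, similarity):
    """Sketch of the Fig. 1 flow: pick the threshold matched to the video's
    illumination level, then report whether the similarity between the
    retrieval target and any target in the video exceeds that threshold."""
    thr = thresholds[illumination_level]
    return any(similarity(target_feature, g) > thr
               for g in video_target_features)
```

For instance, with a threshold of 0.9 at illumination level 0, a video target whose similarity to the retrieval target is 0.95 would make `re_identify` return `True`.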
Referring to fig. 2, fig. 2 is a flowchart of another embodiment of a target re-identification method according to the present application, where the method includes:
step 201: and acquiring the target image and the video to be retrieved.
This step is the same as step 11 and will not be described here again.
Step 202: and acquiring training images under different illumination intensities, and training by using the training images to obtain a neural network model.
Images may be collected in advance under various environments and used to train a neural network model whose input is an image and whose output is the illumination intensity value of that image. Specifically, the illumination intensity value output by the neural network model may be an integer from 0 to 4, with larger values indicating stronger illumination.
Step 203: inputting each image to be searched into the neural network model to obtain the illumination intensity value of the image to be searched.
The video to be retrieved comprises a plurality of images to be retrieved. In actual use, each image to be retrieved can be input into the trained neural network model, which processes it and produces the corresponding illumination intensity value.
Step 204: and averaging the illumination intensity values of all the images to be searched to obtain the illumination intensity value of the video to be searched.
Each image to be retrieved in the video is processed in the manner provided in step 203, yielding an illumination intensity value for each image; the illumination intensity values of all the images are then averaged, and the average is taken as the illumination intensity value of the whole video to be retrieved.
It can be appreciated that, in other embodiments, images to be retrieved may instead be sampled at a certain interval, the illumination intensity values of only the sampled images obtained, and their average taken as the illumination intensity value of the whole video to be retrieved.
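Steps 203 and 204, including the optional sampling shortcut, can be sketched as below. `predict_level` stands in for the trained neural network of step 202 (an assumption for illustration):

```python
def video_illumination(frames, predict_level, interval=1):
    """Average the per-frame illumination levels predicted by a model.

    `frames` is the sequence of images to be retrieved; `interval` > 1
    selects every interval-th frame, the sampling variant mentioned above.
    """
    sampled = frames[::interval]
    levels = [predict_level(f) for f in sampled]
    return sum(levels) / len(levels)
```

With `interval=1` this reproduces step 204 exactly; larger intervals trade a small amount of accuracy for fewer model evaluations.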
Step 205: and calculating the similarity between the first image to be searched under the first preset illumination intensity and at least one second image to be searched under the second preset illumination intensity, and taking the average value of all the similarities as a similarity threshold under the second preset illumination intensity.
The first image to be retrieved and the second image to be retrieved contain the same target; for example, both contain pedestrian A. After the illumination intensity value of the video to be retrieved is determined, the corresponding similarity threshold for that illumination can be determined as follows: images to be retrieved under different illumination conditions are selected and the targets in them detected; the similarities between each target under the first preset illumination intensity and the corresponding target under the second preset illumination intensity are computed; and the average of all these similarities is taken as the similarity threshold under the second preset illumination intensity.
For example, suppose the illumination intensity value under the first preset illumination intensity is 0 and under the second preset illumination intensity is 1, the target is pedestrian C, and the target image is denoted image A. Several images containing pedestrian C at illumination intensity value 1 can then be acquired and denoted images B; the similarity between image A and each image B is calculated, and the average is taken as the similarity threshold at illumination intensity value 1.
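A minimal sketch of this calibration, using cosine similarity on feature vectors (the feature representation is an assumption; the patent does not fix one at this step):

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def calibrate_threshold(ref_feature, same_target_features):
    """Average similarity between one target's image under the first
    illumination (image A) and its images under the second illumination
    (images B); this mean becomes the threshold for that illumination level."""
    sims = [cosine(ref_feature, g) for g in same_target_features]
    return sum(sims) / len(sims)
```

In practice this would be repeated over many targets and averaged again, but the per-target mean above conveys the idea.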
Step 206: extracting features in the search target to obtain a feature value of the search target.
A feature extraction method can be used to process the target image and extract the features of the retrieval target, yielding the corresponding feature value; for example, a deep-learning-based method can be used for feature extraction.
Step 207: screening the images to be retrieved to obtain identification reference images, and extracting features in the identification reference images to obtain feature values of the identification reference images.
Given a target image and a library of videos to be retrieved, a target detection method can be used to detect targets in each video in the library, and a target tracking method can be used to track them, obtaining a tracking image sequence containing a tracked target.
To ensure the recognition effect, the images are screened according to the posture of the tracked target in the tracking image sequence to obtain a preferred image sequence, and the images in the preferred image sequence are marked as identification reference images.
Further, taking a pedestrian as the retrieval target for illustration: for each tracking image sequence, the pedestrian images participating in recognition, i.e., the identification reference images, can be selected according to the pedestrian's posture (front, back, or side). Specifically, a selection algorithm may pick a number of images of different postures to form the preferred image sequence, for example by filtering according to the sharpness or completeness of the images. If no image of a certain posture exists, that posture is simply skipped; for example, if only the front of a pedestrian was captured, no back- or side-posture images can be selected.
For the selected identification reference images, a neural network can be used to extract their features. Specifically, taking pedestrian recognition as an example, pedestrian features can be extracted with a convolutional neural network whose structure is shown in Fig. 3: the backbone network comprises several convolutional layers, pooling layers, and the like; the feature map extracted by the backbone can be split by a pooling operation into an upper part and a lower part, i.e., an upper-body feature and a lower-body feature; and the final features are obtained through a dimensionality-reduction operation.
It can be understood that the feature actually used is the concatenation of the upper-body and lower-body features. Compared with directly extracting a whole-body feature, decomposing it into upper-body and lower-body features allows different loss values to be set for training each part separately, so the extracted features are more accurate and the recognition is more robust.
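The upper/lower split and concatenation described above can be sketched with NumPy standing in for the real convolutional backbone; the pooling choice (global average) and the half-height split point are assumptions for illustration, as Fig. 3 does not fix them here:

```python
import numpy as np

def split_pool_features(feature_map):
    """Split a backbone feature map of shape (C, H, W) into upper and lower
    halves, global-average-pool each half, and concatenate the results,
    mirroring the upper-body / lower-body features of the embodiment."""
    c, h, w = feature_map.shape
    upper = feature_map[:, : h // 2, :].mean(axis=(1, 2))  # upper-body feature
    lower = feature_map[:, h // 2 :, :].mean(axis=(1, 2))  # lower-body feature
    return np.concatenate([upper, lower])                  # final descriptor
```

A dimensionality-reduction layer (e.g., a linear projection) would follow in the real network; it is omitted here to keep the sketch minimal.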
Step 208: and calculating the similarity between the characteristic value of the retrieval target and the characteristic value of each identification reference image, counting the maximum value in the similarity corresponding to all the identification reference images, and taking the identification reference image corresponding to the maximum similarity as an identification result.
The similarity can be computed as the inner product of the feature value of the retrieval target and the feature value of the corresponding identification reference image, divided by the product of their norms, i.e., the cosine similarity:

S_i = (f_q · f_{g_i}) / (‖f_q‖ ‖f_{g_i}‖), i = 1, 2, …, n

where S_i is the i-th similarity, f_q is the feature value of the retrieval target, f_{g_i} is the feature value of the i-th identification reference image, and n is the number of identification reference images in the preferred image sequence.
After the similarity between the retrieval target and each identification reference image in the preferred image sequence is calculated, all the similarities can be sorted in descending order and the identification reference image with the largest similarity selected to represent the recognition result for the preferred image sequence; that is, the similarity between the retrieval target and the video to be retrieved is:

S = max{S_1, S_2, …, S_n}
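Steps 206 to 208 can be sketched as below; the function names are hypothetical, but the computation follows the formulas above (cosine similarity per reference image, then the maximum):

```python
import math

def cosine(a, b):
    """S_i: inner product of the two feature vectors over the product of
    their norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_match(query_feature, reference_features):
    """Compute S_i against each identification reference image and return
    the index and value of the maximum similarity, which represents the
    recognition result for the preferred image sequence."""
    sims = [cosine(query_feature, g) for g in reference_features]
    best = max(range(len(sims)), key=sims.__getitem__)
    return best, sims[best]
```

The returned maximum similarity is then compared against the illumination-matched threshold in step 209.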
step 209: and judging whether the similarity between the characteristic value of the identification result and the characteristic value of the retrieval target is larger than a similarity threshold value.
Whether a retrieval target exists in the video to be retrieved can be determined according to the magnitude relation between the maximum similarity and the similarity threshold.
Step 210: and if the similarity between the characteristic value of the identification result and the characteristic value of the retrieval target is greater than a similarity threshold value, determining that the retrieval target is in the video to be retrieved.
The recognition result is the identification reference image in the preferred image sequence with the largest similarity to the retrieval target. If this maximum similarity is greater than the similarity threshold, the retrieval target is judged to be the same as the target in that identification reference image, i.e., the retrieval target appears in the video to be retrieved; otherwise, the retrieval target is judged to differ from the targets in the identification reference images, i.e., it does not appear in the video. For example, if the retrieval target is pedestrian A and the similarity threshold is 90%, then a maximum similarity of 92% allows the determination that pedestrian A appears in the video, while a maximum similarity below 90% allows the determination that pedestrian A does not appear in it.
This embodiment takes into account that the similarity threshold for target identification changes with the environment and sets different thresholds for different illumination environments, improving the effect of target identification and realizing target identification in complex environments. In addition, targets in multiple postures can participate in recognition, further improving accuracy, while the number of similarity comparisons and feature-value computations is reduced, improving recognition speed.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an embodiment of an object re-recognition device according to the present application, where the object re-recognition device 40 includes a memory 41 and a processor 42 connected to each other, and the memory 41 is used for storing a computer program, and the computer program is used for implementing the object re-recognition method in the above embodiment when executed by the processor 42.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of a computer readable storage medium provided by the present application, where the computer readable storage medium 50 is used to store a computer program 51, and the computer program 51, when executed by a processor, is used to implement the target re-recognition method in the above embodiment.
The computer readable storage medium 50 may be a server, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or any other medium capable of storing program code.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The foregoing description is only illustrative of the present application and is not intended to limit the scope of the application, and all equivalent structures or equivalent processes or direct or indirect application in other related technical fields are included in the scope of the present application.

Claims (9)

1. A method of target re-identification, comprising:
acquiring a target image and a video to be searched, wherein the target image comprises a search target;
performing target detection on the video to be searched to obtain an illumination intensity value corresponding to the video to be searched;
calculating the similarity between a first image to be searched captured under a first preset illumination intensity and each of at least one second image to be searched captured under a second preset illumination intensity, and taking the average of all such similarities as the similarity threshold for the second preset illumination intensity, so as to determine the similarity threshold matching the illumination intensity value, wherein the first image to be searched and each second image to be searched contain the same target;
calculating the similarity between the search target and each target in the video to be searched, and judging whether the similarity is greater than the similarity threshold;
and if so, determining that the search target is present in the video to be searched.
2. The target re-identification method according to claim 1, wherein the video to be searched comprises a plurality of images to be searched, and before the step of calculating the similarity between the search target and each target in the video to be searched, the method further comprises:
extracting features in the search target to obtain a feature value of the search target;
screening the images to be searched to obtain identification reference images;
and extracting the characteristics in the identification reference image to obtain the characteristic value of the identification reference image.
3. The target re-identification method according to claim 2, wherein the step of screening the images to be searched to obtain identification reference images comprises:
performing target detection on the video to be searched by using a target detection method, and tracking the video to be searched by using a target tracking method to obtain a tracking image sequence containing a tracking target;
and screening according to the pose of the tracked target in the tracking image sequence to obtain a preferred image sequence, wherein the images in the preferred image sequence are taken as the identification reference images.
4. The target re-identification method according to claim 2, wherein the step of calculating the similarity between the search target and each target in the video to be searched comprises:
calculating the similarity between the feature value of the search target and the feature value of each identification reference image;
and determining the maximum value among the similarities corresponding to all the identification reference images, and taking the identification reference image with the maximum similarity as the identification result.
5. The target re-identification method according to claim 4, wherein the step of judging whether the similarity is greater than the similarity threshold comprises:
judging whether the similarity between the feature value of the identification result and the feature value of the search target is greater than the similarity threshold.
6. The target re-identification method according to claim 4, further comprising:
calculating the dot product of the feature value of the search target and the feature value of each identification reference image;
and calculating the product of the modulus of the feature value of the search target and the modulus of the feature value of the corresponding identification reference image, and dividing the dot product by the product of the moduli to obtain the corresponding similarity.
7. The target re-identification method according to claim 2, wherein the step of performing target detection on the video to be searched to obtain the illumination intensity value corresponding to the video to be searched comprises:
acquiring training images under different illumination intensities, and training by using the training images to obtain a neural network model;
inputting each image to be searched into the neural network model to obtain an illumination intensity value of the image to be searched;
and averaging the illumination intensity values of all the images to be searched to obtain the illumination intensity value of the video to be searched.
8. A target re-identification device, comprising a memory and a processor connected to each other, wherein the memory is configured to store a computer program which, when executed by the processor, implements the target re-identification method according to any one of claims 1-7.
9. A computer readable storage medium storing a computer program which, when executed by a processor, implements the target re-identification method according to any one of claims 1-7.
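The similarity computation recited in claims 1, 4 and 6 amounts to a cosine similarity between feature vectors, a threshold obtained by averaging cross-illumination similarities, and a maximum-similarity match check. A minimal sketch follows; the function names and the plain-list vector representation are illustrative and not part of the patent, and the feature vectors would in practice come from a trained extractor.

```python
import math

def cosine_similarity(a, b):
    # Claim 6: dot product of the two feature vectors divided by
    # the product of their moduli.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def adaptive_threshold(first_feats, second_feats):
    # Claim 1: pair each first image (first preset illumination) with a
    # second image of the same target (second preset illumination), and
    # average the pairwise similarities to get the threshold for that
    # illumination level.
    sims = [cosine_similarity(f, s) for f, s in zip(first_feats, second_feats)]
    return sum(sims) / len(sims)

def is_target_in_video(target_feat, reference_feats, threshold):
    # Claims 4-5: take the identification reference image with the maximum
    # similarity as the identification result, then compare that maximum
    # against the illumination-matched threshold.
    best = max(cosine_similarity(target_feat, r) for r in reference_feats)
    return best > threshold, best
```

Making the threshold a function of measured illumination, rather than a fixed constant, is the point of claim 1: the same target yields systematically lower similarities under poor lighting, so the acceptance bar is lowered to match.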
CN202010659246.5A 2020-07-09 2020-07-09 Target re-identification method, device and computer readable storage medium Active CN111814690B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010659246.5A CN111814690B (en) 2020-07-09 2020-07-09 Target re-identification method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111814690A CN111814690A (en) 2020-10-23
CN111814690B true CN111814690B (en) 2023-09-01

Family

ID=72842157

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010659246.5A Active CN111814690B (en) 2020-07-09 2020-07-09 Target re-identification method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111814690B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113344027B (en) * 2021-05-10 2024-04-23 北京迈格威科技有限公司 Method, device, equipment and storage medium for retrieving objects in image
CN113269177B (en) * 2021-07-21 2021-09-14 广州乐盈信息科技股份有限公司 Target capturing system based on monitoring equipment
CN113743387B (en) * 2021-11-05 2022-03-22 中电科新型智慧城市研究院有限公司 Video pedestrian re-identification method and device, electronic equipment and readable storage medium
CN114419491A (en) * 2021-12-28 2022-04-29 云从科技集团股份有限公司 Video identification method and device and computer storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09293082A (en) * 1996-04-26 1997-11-11 Toshiba Corp Device for retrieving picture and its method
JP2006350465A (en) * 2005-06-13 2006-12-28 Nec Corp Image matching device, image matching method, and program for image matching
JP2010020602A (en) * 2008-07-11 2010-01-28 Nikon Corp Image matching device and camera
JP2010146099A (en) * 2008-12-16 2010-07-01 Fujitsu Ltd Image retrieval program, image retrieval method, and recording medium
CN103136533A (en) * 2011-11-28 2013-06-05 汉王科技股份有限公司 Face recognition method and device based on dynamic threshold value
CN105320710A (en) * 2014-08-05 2016-02-10 北京大学 Illumination variation resistant vehicle retrieval method and device
CN106295571A (en) * 2016-08-11 2017-01-04 深圳市赛为智能股份有限公司 Illumination-adaptive face recognition method and system
WO2017113692A1 (en) * 2015-12-29 2017-07-06 乐视控股(北京)有限公司 Method and device for image matching
CN106933861A (en) * 2015-12-30 2017-07-07 北京大唐高鸿数据网络技术有限公司 Feature-customizable cross-camera target retrieval method
CN107729930A (en) * 2017-10-09 2018-02-23 济南大学 Method and system for quickly computing illumination similarity between two images of the same scene
CN108804549A (en) * 2018-05-21 2018-11-13 电子科技大学 Eyeground contrastographic picture search method based on the adjustment of medical image features weight
CN110019895A (en) * 2017-07-27 2019-07-16 杭州海康威视数字技术股份有限公司 Image retrieval method, apparatus and electronic equipment
CN110659391A (en) * 2019-08-29 2020-01-07 苏州千视通视觉科技股份有限公司 Video detection method and device

Similar Documents

Publication Publication Date Title
CN111814690B (en) Target re-identification method, device and computer readable storage medium
US11017215B2 (en) Two-stage person searching method combining face and appearance features
CN107153817B (en) Pedestrian re-identification data labeling method and device
CN107273832B (en) License plate recognition method and system based on integral channel characteristics and convolutional neural network
CN111582126B (en) Pedestrian re-recognition method based on multi-scale pedestrian contour segmentation fusion
CN111881741B (en) License plate recognition method, license plate recognition device, computer equipment and computer readable storage medium
CN112800967B (en) Posture-driven shielded pedestrian re-recognition method
CN111401308B (en) Fish behavior video identification method based on optical flow effect
CN110796074A (en) Pedestrian re-identification method based on space-time data fusion
Gualdi et al. Contextual information and covariance descriptors for people surveillance: an application for safety of construction workers
CN116311063A (en) Personnel fine granularity tracking method and system based on face recognition under monitoring video
Najibi et al. Towards the success rate of one: Real-time unconstrained salient object detection
CN112686122B (en) Human body and shadow detection method and device, electronic equipment and storage medium
Bi et al. A hierarchical salient-region based algorithm for ship detection in remote sensing images
CN106934339B (en) Target tracking and tracking target identification feature extraction method and device
CN112699842A (en) Pet identification method, device, equipment and computer readable storage medium
JP2024516642A (en) Behavior detection method, electronic device and computer-readable storage medium
Dutra et al. Re-identifying people based on indexing structure and manifold appearance modeling
Shehnaz et al. An object recognition algorithm with structure-guided saliency detection and SVM classifier
Tsechpenakis et al. Image analysis techniques to accompany a new in situ ichthyoplankton imaging system
Olaode et al. Unsupervised region of intrest detection using fast and surf
Matuska et al. A novel system for non-invasive method of animal tracking and classification in designated area using intelligent camera system
Asnani et al. Unconcealed Gun Detection using Haar-like and HOG Features-A Comparative Approach.
Wang et al. Pedestrian detection in highly crowded scenes using “online” dictionary learning for occlusion handling
CN117746477B (en) Outdoor face recognition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant