WO2018121287A1 - Method and device for target re-identification - Google Patents

Method and device for target re-identification

Info

Publication number
WO2018121287A1
WO2018121287A1 (PCT/CN2017/116330, CN2017116330W)
Authority
WO
WIPO (PCT)
Prior art keywords
feature information
feature
tracking
image
model
Prior art date
Application number
PCT/CN2017/116330
Other languages
English (en)
Chinese (zh)
Inventor
唐矗
孙晓路
Original Assignee
纳恩博(北京)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 纳恩博(北京)科技有限公司
Publication of WO2018121287A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/56: Extraction of image or video features relating to colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Definitions

  • The present invention relates to the field of video image processing, and in particular to a method and apparatus for re-identifying a target.
  • Image-based target re-identification generally refers to identifying a given target from different images and videos. Such techniques are generally used in the field of target tracking, content-based image retrieval, and the like.
  • The traditional method uses paired image data of the same target in different scenes and image data of pairs of different targets, extracts specified features from each, such as color histograms, as feature vectors, and then learns a similarity metric function by metric learning. In application, the similarity metric function is used to calculate the similarity of two targets, thereby determining whether they are the same target.
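  • As a minimal sketch of this traditional pipeline (illustrative, not from the patent): a color histogram serves as the feature vector, and a metric matrix M stands in for whatever offline metric learning would produce; the identity matrix reduces it to a plain Euclidean metric.

```python
import numpy as np

def color_histogram(image_rgb, bins=8):
    """Normalized joint RGB histogram of an HxWx3 uint8 image."""
    q = (image_rgb.reshape(-1, 3) // (256 // bins)).astype(np.int64)
    idx = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(np.float64)
    return hist / max(hist.sum(), 1.0)

def metric_similarity(h1, h2, M):
    """Map a (learned) squared metric distance to a similarity in (0, 1]."""
    d = h1 - h2
    return float(np.exp(-d @ M @ d))

# Example: with no learned metric available, fall back to the identity matrix.
# M = np.eye(8 ** 3); metric_similarity(color_histogram(a), color_histogram(b), M)
```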
  • A tracking-by-detection tracking system can be regarded as performing target re-identification when it judges whether two detections are the same target.
  • In such systems, a similarity measure function is obtained offline and then used directly to judge whether two images show the same target. Because the appearance of the tracking target changes with the environment and illumination during tracking, directly applying this target re-identification method to a tracking system, judging from two images whether the target is the same, leaves the tracking system strongly limited by the environment.
  • The target re-identification problem in visual tracking differs somewhat from pure target re-identification.
  • In tracking, re-identification must judge whether the target in a subsequent video frame is the same as the initially set target; it is not re-identification in the broad sense of finding the same target from an open set.
  • Visual tracking maintains an online-updated template and uses the template to find the tracking target in each new frame.
  • This method is affected by appearance changes of the tracking target caused by environment and illumination changes, so tracking errors occur, are continually amplified, and are difficult to correct. One disadvantage of such methods is that it is difficult to accurately determine whether the tracking target is lost, and difficult to retrieve the initial tracking target after it is lost. In addition, because lighting and environment change during tracking, the appearance of the target also changes significantly between frames, making it very difficult to accurately retrieve the target by appearance alone.
  • At least some embodiments of the present invention provide a target re-identification method and apparatus to at least solve the technical problem of poor robustness of a re-identification technique for tracking targets in existing tracking technologies.
  • A method for re-identifying a target includes: acquiring a tracking target and an image region of the tracking target; extracting feature information from the image region of the tracking target, and constructing a feature model according to the feature information; determining a tracking state of the tracking target according to the credibility of the tracking result of the current frame image, wherein the credibility of the tracking result is determined by the similarity between the feature information of a preset image region of the current frame and the feature model; and, in the case that it is determined according to the tracking state that the tracking target is not lost, updating the feature model according to the tracking result of the current frame image.
  • The feature model is obtained by replacing the original feature information in a preset model with the feature information extracted from the image region of the tracking target.
  • Preset feature information among the feature information corresponding to the current frame image replaces, with a preset probability, any one piece of feature information in the feature model, to update the feature model.
  • Acquiring the Bhattacharyya distance between the feature information of the current frame image and the most recently updated feature model, and obtaining the preset probability by using the following formula, where p is the preset probability, d_median is the Bhattacharyya distance between the feature information and the most recently updated feature model, and λ is a preset constant.
  • Determining the median of the Bhattacharyya distances between the feature information of the current frame image and the plurality of pieces of feature information in the most recently updated model as the Bhattacharyya distance between the feature information and the most recently updated model.
  • The background image in the image area of the tracking target is removed; the image area with the background removed is divided into a plurality of images in a preset direction; the feature information of the plurality of divided images is acquired; and the feature information of the divided images is concatenated in the order of division, obtaining the feature information of the image of the tracking target.
  • The feature information is image color feature information, wherein the image color feature information includes color name information and/or hue information.
  • A target re-identification device comprising: an acquisition module configured to acquire a tracking target and an image region of the tracking target; a constructing module configured to extract feature information from the image region of the tracking target and construct a feature model according to the feature information; a determining module configured to determine a tracking state of the tracking target according to the credibility of the tracking result of the current frame image, wherein the credibility of the tracking result is determined by the similarity between the feature information of a preset region of the current frame image and the feature model; and an update module configured to update the feature model according to the tracking result of the current frame image if it is determined according to the tracking state that the tracking target is not lost.
  • The constructing module includes: an initializing submodule configured to replace the original feature information in the preset model with the feature information extracted from the image region of the tracking target to obtain the feature model.
  • The updating module includes: an extracting submodule configured to perform feature extraction on the image region given by the tracking result of the current frame image and perform normalization to obtain the corresponding feature information; a first acquiring submodule configured to acquire a preset probability; and a replacing submodule configured to replace, with the preset probability, any one piece of feature information in the feature model by preset feature information among the feature information corresponding to the current frame image, to update the feature model.
  • The first obtaining submodule includes: an acquiring unit configured to acquire the Bhattacharyya distance between the feature information of the current frame image and the most recently updated feature model; and a calculating unit configured to obtain the preset probability by using the following formula, where p is the preset probability, d_median is the Bhattacharyya distance between the feature information and the most recently updated feature model, and λ is a preset constant.
  • The obtaining unit includes: a determining subunit configured to determine the median of the Bhattacharyya distances between the feature information of the current frame image and the plurality of pieces of feature information in the most recently updated model as the Bhattacharyya distance between the feature information and the most recently updated model.
  • The constructing module includes: a background removing submodule configured to remove the background image in the image area of the tracking target; a dividing submodule configured to divide the image area with the background removed into a plurality of images in a preset direction; an acquiring submodule configured to acquire feature information of the plurality of divided images; and a connecting submodule configured to concatenate the feature information of the divided images in the order of division to obtain the feature information of the image of the tracking target.
  • The feature information is image color feature information, wherein the image color feature information includes color name information and/or hue information.
  • In at least some embodiments of the present invention, a tracking target and an image region of the tracking target are acquired, feature information is extracted from the image region of the tracking target, a feature model is constructed according to the feature information, and the tracking state of the tracking target is determined according to the credibility of the tracking result of the current frame image.
  • The above scheme constructs a feature model according to the feature information of the image region of the tracking target, continuously updates the feature model according to the tracking results during tracking, and uses the feature model as the tracking model, thereby improving the robustness of the tracking model and solving the technical problem that re-identification of tracking targets in existing tracking technologies is poorly robust.
  • FIG. 1 is a flow chart of a target re-identification method according to an embodiment of the present invention;
  • FIG. 2 is a schematic diagram of a target re-identification device in accordance with an embodiment of the present invention.
  • An embodiment of a target re-identification method is provided. It should be noted that the steps illustrated in the flowchart of the figures may be performed in a computer system, for example as a set of computer-executable instructions; and although a logical order is shown in the flowchart, in some cases the steps shown or described may be performed in a different order than described herein.
  • FIG. 1 is a flow chart of a method for re-identifying a target according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps:
  • Step S102: acquiring a tracking target and an image area of the tracking target.
  • The tracking target may be a target specified manually or determined by a pedestrian detector. The image area of the tracking target may be a manually marked area containing the tracking target in a certain frame of the video, or an image area determined by the pedestrian detector in a certain frame of the video.
  • Step S104: extracting feature information from the image region of the tracking target, and constructing a feature model according to the feature information.
  • The extracted feature may be a color feature, an edge feature, or the like of the image. Since the tracking target is usually dynamic in the video, tracking only the shape of the tracking target is difficult and inaccurate. For consecutive images in a video, the shape of the tracking target changes continuously over time, but the image features remain generally consistent, so the above step constructs the model from the extracted image features.
  • Step S106: determining a tracking state of the tracking target according to the credibility of the tracking result of the current frame image, wherein the credibility of the tracking result is determined by the similarity between the feature information of a preset region of the current frame image and the feature model.
  • The foregoing tracking result includes the area of the tracking target in the image and a credibility; the tracking state of the tracking target may be one of three states: not lost, low credibility, and lost.
  • A credibility threshold may be set, and if the credibility of the tracking result exceeds the preset credibility threshold, the tracking target is determined not to be lost.
  • The determination may be made using a single piece of feature information of the image, or using a plurality of pieces of feature information of the image.
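  • A minimal sketch of the three-state decision, assuming a two-threshold scheme on the credibility score (the text above specifies only the "not lost" threshold; the lower threshold separating "low credibility" from "lost" is an added assumption):

```python
def tracking_state(credibility, not_lost_thresh=0.6, lost_thresh=0.3):
    """Return 'not_lost', 'low_credibility', or 'lost' from a credibility score."""
    if credibility >= not_lost_thresh:
        return "not_lost"
    if credibility >= lost_thresh:
        return "low_credibility"
    return "lost"
```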
  • Step S108: in the case that it is determined according to the tracking state that the tracking target is not lost, updating the feature model according to the tracking result of the current frame image.
  • The foregoing solution may be used in the loss-determination process, that is, the process of determining whether the tracking target is lost.
  • Take, for example, a tracking target that is a specific person. The tracking task is to track this person across multiple videos; after each frame is analyzed, a determination of whether the target is lost is obtained. If the similarity between the feature information of the preset area of the current frame image and the feature model is higher than a preset value, it is determined that the target is not lost in that frame; the feature model is then updated with the tracking result of the current frame image and tracking continues. If the similarity between the feature information of the preset image region of the current frame and the feature model is lower than the preset value, it is determined that the target has been lost in the current frame image, and the step of retrieving the original tracking target is entered.
  • The foregoing solution may also be used to retrieve the original tracking target after it is lost. For example, when it is determined that the target has been lost in the current frame image, the most recently updated feature model, i.e., the model updated after the last frame in which the target was still tracked, is used to retrieve the tracking target. Since the feature model is updated only after a tracking result is obtained in a frame where the target is not lost, the model used for retrieval is the feature model closest to the target's latest appearance.
  • A specific target, such as a person, may be in a dynamic state in the video, and other environmental information in the video also changes with time; that is, the shape of the specific target is constantly changing, and the illumination and environment in the video are also changing. Therefore, it is very difficult to track or retrieve the target by its shape alone, and tracking with only the feature model of the initially determined tracking target also gives inaccurate results. The feature model of the tracking target introduced by the above scheme can effectively remove the influence of environment changes or changes of the target's shape during tracking or retrieval, thereby improving the robustness of the tracking model.
  • The above steps of the present application acquire the tracking target and the image region of the tracking target, extract the feature information from the image region of the tracking target, construct the feature model according to the feature information, determine the tracking state of the tracking target according to the credibility of the tracking result of the current frame image, and, in the case where it is determined according to the tracking state that the tracking target is not lost, update the feature model according to the tracking result of the current frame image.
  • The above scheme constructs a feature model according to the feature information of the image region of the tracking target, continuously updates the feature model according to the tracking results during tracking, and uses the feature model as the tracking model, thereby improving the robustness of the tracking model and solving the technical problem that re-identification of tracking targets in existing tracking technologies is poorly robust.
  • Step S102, constructing a feature model according to the feature information, includes:
  • Step S1021: the feature model is obtained by replacing the original feature information in a preset model with the feature information extracted from the image region of the tracking target.
  • When the extracted feature is a color feature, the corresponding feature model is a color feature model. The tracking target may be selected, and color features are extracted from the selected target image region, using a color histogram as the color feature information. The original model consists of N color histograms, and the initialization stage replaces all N histograms in the original model with the normalized feature histogram h0 extracted from the selected tracking-target image.
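  • A minimal sketch of this initialization (illustrative, not the patent's implementation), assuming the model is stored as an N x K array whose N rows are histograms and N is a design parameter:

```python
import numpy as np

def init_feature_model(h0, n_slots=10):
    """All N slots start as copies of the normalized histogram h0."""
    h0 = np.asarray(h0, dtype=np.float64)
    h0 = h0 / max(h0.sum(), 1e-12)      # ensure h0 is normalized
    return np.tile(h0, (n_slots, 1))    # N identical rows at initialization
```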
  • Step S108, updating the feature model according to the tracking result of the current frame image, includes:
  • Step S1081: performing feature extraction on the image region given by the tracking result of the current frame image, and performing normalization to obtain the corresponding feature information.
  • Step S1083: acquiring a preset probability.
  • Step S1085: replacing, with the preset probability, any one piece of feature information in the feature model by preset feature information among the feature information corresponding to the current frame image, to update the feature model.
  • The above solution of the present application updates the feature model by replacing, with the preset probability, any one piece of feature information in the feature model with the feature information of the current frame image, so that the feature model changes as the tracking target changes. Introducing the latest target features into the model while preserving features from each moment of the tracking history ensures the diversity of feature information in the model, thereby improving the robustness of the model and minimizing the impact of environment, lighting, and the like on tracking in the video.
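  • A minimal sketch of this replacement step (illustrative, not the patent's implementation), reading "replace any one piece of feature information" as overwriting one uniformly random slot:

```python
import numpy as np

def update_feature_model(model, h_new, p, rng=None):
    """With probability p, overwrite one random slot of the model with h_new."""
    rng = rng or np.random.default_rng()
    if rng.random() < p:                       # update happens with probability p
        slot = int(rng.integers(model.shape[0]))
        model[slot] = h_new                    # overwrite one random slot
    return model
```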
  • Step S1083, obtaining a preset probability, includes:
  • Step S1083a: acquiring the Bhattacharyya distance between the feature information of the current frame image and the most recently updated feature model.
  • Step S1083b: obtaining the preset probability by the following formula, where p is the preset probability, d_median is the Bhattacharyya distance between the feature information and the most recently updated feature model, and λ is a preset constant.
  • The preset constant λ controls the probability of updating.
  • Step S1083a, acquiring the Bhattacharyya distance between the feature information of the current frame image and the most recently updated feature model, includes: determining the median of the Bhattacharyya distances between the feature information of the current frame image and the plurality of pieces of feature information in the most recently updated model as the Bhattacharyya distance between the feature information and the most recently updated model.
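  • A sketch of this computation: the Bhattacharyya distance and the median over the N model slots follow the text above, while the exponential mapping from d_median and λ (lam) to p is only an illustrative monotone function, since the publication's formula is not reproduced in this text:

```python
import numpy as np

def bhattacharyya_distance(h1, h2):
    """Distance between two normalized histograms, in [0, 1]."""
    bc = float(np.sum(np.sqrt(h1 * h2)))        # Bhattacharyya coefficient
    return float(np.sqrt(max(1.0 - bc, 0.0)))   # a common distance form

def median_distance(h_new, model):
    """d_median against all N histograms stored in the model."""
    return float(np.median([bhattacharyya_distance(h_new, h) for h in model]))

def preset_probability(d_median, lam=0.2):
    return float(np.exp(-d_median / lam))       # illustrative mapping only
```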
  • The value 1 - d_median obtained from the above calculation may be used as the color confidence; in the tracking-target retrieval step, the same method may be used to calculate the similarity between a candidate target and the tracking target so as to select among candidate targets. The above scheme accounts for the changes in environment and illumination during long-term target tracking that cause the target's appearance to change: N feature vectors express the characteristics of the target in different environments, and random replacement with a certain probability both keeps the N feature vectors in the model distinct and preserves the target's historical information, effectively improving the robustness of a long-term tracking system.
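  • A sketch of this retrieval use, under the same assumptions as above: 1 - d_median acts as the color confidence of each candidate region, and the best candidate is accepted only above a threshold (the threshold value is an assumption; median_distance is reused from the previous sketch):

```python
def retrieve_target(candidate_hists, model, accept_thresh=0.7):
    """Return (index, confidence) of the best candidate, or (None, best score)."""
    best_idx, best_conf = None, -1.0
    for i, h in enumerate(candidate_hists):
        conf = 1.0 - median_distance(h, model)   # color confidence of candidate i
        if conf > best_conf:
            best_idx, best_conf = i, conf
    return (best_idx, best_conf) if best_conf >= accept_thresh else (None, best_conf)
```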
  • The model does not prescribe a specific color feature: the simplest color histogram can be used, as can more complex feature vector computations.
  • Step S104, extracting feature information from the image area of the tracking target, includes:
  • Step S1041: removing the background image in the image area of the tracking target.
  • Step S1043: dividing the image area with the background removed into a plurality of images in a preset direction.
  • The preset direction may be determined according to the preset tracking target; for example, when the tracking target is a walking person, the preset direction may be the vertical direction.
  • Step S1045: acquiring feature information of the plurality of divided images.
  • Step S1047: concatenating the feature information of the plurality of divided images in the order of division to obtain the feature information of the image of the tracking target.
  • The image features used in the modeling process are color features, specifically a Color Name histogram. In addition, before the Color Name histogram is calculated, saliency segmentation is performed on the image to remove background interference. Taking a walking person as the tracking object as an example, a pedestrian is mostly in an upright state, so before calculating the histogram, the image after saliency segmentation is divided into M equal parts in the vertical direction and a histogram is computed separately for each block; the M block histograms are concatenated in order and normalized to give the color feature information.
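  • As an illustrative sketch only (not the patent's implementation), this pipeline might look as follows, assuming a binary foreground mask from the saliency segmentation is already available and using a quantized-RGB histogram as a stand-in for the Color Name histogram:

```python
import numpy as np

def block_color_feature(image_rgb, fg_mask, m_blocks=4, bins=8):
    """image_rgb: HxWx3 uint8; fg_mask: HxW bool; returns a 1-D feature.

    Cuts the masked region into M equal top-to-bottom bands (division along
    the vertical direction), computes one histogram per band over foreground
    pixels only, concatenates in order, and normalizes.
    """
    h = image_rgb.shape[0]
    bounds = np.linspace(0, h, m_blocks + 1, dtype=int)
    parts = []
    for top, bot in zip(bounds[:-1], bounds[1:]):
        pixels = image_rgb[top:bot][fg_mask[top:bot]]       # foreground only
        q = (pixels // (256 // bins)).astype(np.int64)
        idx = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]
        parts.append(np.bincount(idx, minlength=bins ** 3).astype(np.float64))
    feat = np.concatenate(parts)
    return feat / max(feat.sum(), 1.0)                      # normalized feature
```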
  • The feature information is image color feature information, wherein the image color feature information includes color name information and/or hue information.
  • FIG. 2 is a schematic diagram of a target re-identification device according to an embodiment of the present invention. As shown in FIG. 2, the device includes:
  • The acquisition module 10 is configured to acquire a tracking target and an image area of the tracking target.
  • The tracking target may be a target specified manually or determined by a pedestrian detector; the image area of the tracking target may be a manually marked area containing the tracking target in a certain frame of the video, or an image area determined by the pedestrian detector.
  • The constructing module 20 is configured to extract feature information from the image region of the tracking target and construct a feature model based on the feature information.
  • The extracted feature may be a color feature, an edge feature, or the like of the image. Since the tracking target is usually dynamic in the video, tracking only the shape of the tracking target is difficult and inaccurate. For consecutive images in a video, the shape of the tracking target changes continuously over time, but the image features remain generally consistent, so the module constructs the model from the extracted image features.
  • The determining module 30 is configured to determine a tracking state of the tracking target according to the credibility of the tracking result of the current frame image, wherein the credibility of the tracking result is determined by the similarity between the feature information of a preset region of the current frame image and the feature model.
  • The foregoing tracking result includes the area of the tracking target in the image and a credibility; the tracking state of the tracking target may be one of three states: not lost, low credibility, and lost.
  • A credibility threshold may be set, and if the credibility of the tracking result exceeds the preset credibility threshold, the tracking target is determined not to be lost.
  • The determination may be made using a single piece of feature information of the image, or using a plurality of pieces of feature information of the image.
  • The update module 40 is configured to update the feature model according to the tracking result of the current frame image if it is determined according to the tracking state that the tracking target is not lost.
  • A specific target, such as a person, may be in a dynamic state in the video, and other environmental information in the video also changes with time; that is, the shape of the specific target is constantly changing, and the lighting and environment in the video are also changing. Therefore, it is very difficult to track or retrieve the target by its shape alone, and tracking with only the feature model of the initially determined tracking target cannot give accurate results. The feature model of the tracking target introduced by the above scheme can effectively remove the influence of changes of the environment or of the target's shape during tracking or retrieval, thereby improving the robustness of the tracking model.
  • The above solution of the present application acquires the tracking target and the image region of the tracking target through the acquiring module, extracts the feature information from the image region of the tracking target and constructs the feature model according to the feature information through the constructing module, determines the tracking state of the tracking target according to the credibility of the tracking result of the current frame image through the determining module, and updates the feature model according to the tracking result of the current frame image through the update module in the case that the tracking target is not lost according to the tracking state.
  • The above scheme constructs a feature model according to the feature information of the image region of the tracking target, continuously updates the feature model according to the tracking results during tracking, and uses the feature model as the tracking model, thereby improving the robustness of the tracking model and solving the technical problem that re-identification of tracking targets in existing tracking technologies is poorly robust.
  • The foregoing constructing module includes:
  • an initialization submodule configured to replace the original feature information in the preset model with the feature information extracted from the image region of the tracking target, to obtain the feature model.
  • The foregoing update module includes:
  • an extracting submodule configured to perform feature extraction on the image region given by the tracking result of the current frame image and to perform normalization to obtain the corresponding feature information;
  • a first obtaining submodule configured to acquire a preset probability;
  • a replacement submodule configured to replace, with the preset probability, any one piece of feature information in the feature model by preset feature information among the feature information corresponding to the current frame image, to update the feature model.
  • The foregoing first obtaining submodule includes:
  • an obtaining unit configured to acquire the Bhattacharyya distance between the feature information of the current frame image and the most recently updated feature model;
  • a calculation unit configured to obtain the preset probability by the following formula, where p is the preset probability, d_median is the Bhattacharyya distance between the feature information and the most recently updated feature model, and λ is a preset constant.
  • The acquiring unit includes: a determining subunit configured to determine the median of the Bhattacharyya distances between the feature information of the current frame image and the plurality of pieces of feature information in the most recently updated model as the Bhattacharyya distance between the feature information and the most recently updated model.
  • The foregoing constructing module includes:
  • a dividing submodule configured to divide the image area with the background removed into a plurality of images in a preset direction;
  • a second obtaining submodule configured to acquire feature information of the plurality of divided images;
  • a connection submodule configured to concatenate the feature information of the plurality of divided images in the order of division to obtain the feature information of the image of the tracking target.
  • The feature information is image color feature information, wherein the image color feature information includes color name information and/or hue information.
  • A storage medium is also provided, the storage medium comprising a stored program, wherein the device in which the storage medium is located is controlled to execute the above target re-identification method when the program runs.
  • The above storage medium may include, but is not limited to, media that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • A processor is also provided, configured to run a program, wherein the above target re-identification method is executed when the program runs.
  • The above processor may include, but is not limited to, a processing device such as a microcontroller unit (MCU) or a field-programmable gate array (FPGA).
  • The disclosed technical content may be implemented in other manners. The device embodiments described above are only illustrative. The division into units may be a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
  • The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • Each functional unit in each embodiment of the present invention may be integrated into one processing unit, each unit may exist physically separately, or two or more units may be integrated into one unit.
  • The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • The integrated unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer-readable storage medium.
  • Based on this understanding, the part of the technical solution of the present invention that is essential or that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • The software product includes a number of instructions causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • At least some embodiments of the present invention provide a target re-identification method and apparatus with the following beneficial effects: a feature model is constructed according to the feature information of the image region of a tracking target, the feature model is continuously updated according to the tracking results during tracking, and the feature model is used as the tracking model, thus improving the robustness of the tracking model.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention concerns a method and device for target re-identification. The method comprises the steps of: acquiring a tracked target and an image area of the tracked target (S102); extracting feature information from the image area of the tracked target, and constructing a feature model from the feature information (S104); determining the tracking state of the tracked target according to the credibility of a tracking result of a current frame image (S106), the credibility of the tracking result being determined by the similarity between the feature information of the preset area of the current frame image and the feature model; and, when it is determined from the tracking state that the tracked target is not lost, updating the feature model according to the tracking result of the current frame image (S108). This solves the technical problem of the poor robustness of tracked-target re-identification in existing tracking technology.
PCT/CN2017/116330 2016-12-30 2017-12-15 Method and device for target re-identification WO2018121287A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611265859.0A CN108268823B (zh) 2016-12-30 2021-07-20 Target re-identification method and device
CN201611265859.0 2016-12-30

Publications (1)

Publication Number Publication Date
WO2018121287A1 true WO2018121287A1 (fr) 2018-07-05

Family

ID=62710168

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/116330 WO2018121287A1 (fr) 2016-12-30 2017-12-15 Method and device for target re-identification

Country Status (2)

Country Link
CN (1) CN108268823B (fr)
WO (1) WO2018121287A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034109A (zh) * 2018-08-16 2018-12-18 新智数字科技有限公司 Pedestrian re-identification method and device based on a clustering algorithm
CN110907909A (zh) * 2019-10-30 2020-03-24 南京市德赛西威汽车电子有限公司 Radar target recognition method based on probability statistics
CN111709978A (zh) * 2020-05-06 2020-09-25 广东康云科技有限公司 Cross-screen target tracking method, system, device and storage medium
CN111738063A (zh) * 2020-05-08 2020-10-02 华南理工大学 Ship target tracking method, system, computer equipment and storage medium
CN111860190A (zh) * 2020-06-24 2020-10-30 国汽(北京)智能网联汽车研究院有限公司 Target tracking method, device, equipment and storage medium
CN112784648A (zh) * 2019-11-07 2021-05-11 中国科学技术大学 Method and device for optimizing feature extraction in a video pedestrian re-identification system

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111127508B (zh) * 2018-10-31 2023-05-02 杭州海康威视数字技术股份有限公司 Video-based target tracking method and device
CN109635657B (zh) * 2018-11-12 2023-01-06 平安科技(深圳)有限公司 Target tracking method, device, equipment and storage medium
CN111753601B (zh) * 2019-03-29 2024-04-12 华为技术有限公司 Image processing method, device and storage medium
CN113536914A (zh) * 2021-06-09 2021-10-22 重庆中科云从科技有限公司 Object tracking and recognition method, system, equipment and medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104240266A (zh) * 2014-09-04 2014-12-24 成都理想境界科技有限公司 Target object tracking method based on color-structure features
CN105335986A (zh) * 2015-09-10 2016-02-17 西安电子科技大学 Target tracking method based on feature matching and the MeanShift algorithm
CN106023155A (zh) * 2016-05-10 2016-10-12 电子科技大学 Online target contour tracking method based on level sets
CN106056627A (zh) * 2016-05-30 2016-10-26 河海大学 Robust target tracking method based on locally discriminative sparse representation
CN106097385A (zh) * 2016-05-31 2016-11-09 海信集团有限公司 Target tracking method and device

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6590999B1 (en) * 2000-02-14 2003-07-08 Siemens Corporate Research, Inc. Real-time tracking of non-rigid objects using mean shift
CN101888479B (zh) * 2009-05-14 2012-05-02 汉王科技股份有限公司 Method and device for detecting and tracking a target image
CN101901486B (zh) * 2009-11-17 2012-10-03 华为技术有限公司 Moving target detection method and device
CN102339390B (zh) * 2011-09-16 2014-06-04 天津市亚安科技股份有限公司 Method and system for updating target templates in a video surveillance system
CN104020466B (zh) * 2014-06-17 2016-05-25 西安电子科技大学 Maneuvering target tracking method based on variable-structure multiple models
CN104036523A (zh) * 2014-06-18 2014-09-10 哈尔滨工程大学 Improved Mean Shift target tracking method based on SURF features
CN104252178B (zh) * 2014-09-12 2017-11-03 西安电子科技大学 Target tracking method for strongly maneuvering targets
CN105654508B (zh) * 2015-12-24 2018-06-01 武汉大学 Method and system for tracking moving targets in surveillance video based on adaptive background segmentation
CN105931269A (zh) * 2016-04-22 2016-09-07 海信集团有限公司 Method and device for target tracking in video
CN105930833B (zh) * 2016-05-19 2019-01-22 重庆邮电大学 Vehicle tracking and segmentation method based on video surveillance

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104240266A (zh) * 2014-09-04 2014-12-24 成都理想境界科技有限公司 Target object tracking method based on color-structure features
CN105335986A (zh) * 2015-09-10 2016-02-17 西安电子科技大学 Target tracking method based on feature matching and the MeanShift algorithm
CN106023155A (zh) * 2016-05-10 2016-10-12 电子科技大学 Online target contour tracking method based on level sets
CN106056627A (zh) * 2016-05-30 2016-10-26 河海大学 Robust target tracking method based on locally discriminative sparse representation
CN106097385A (zh) * 2016-05-31 2016-11-09 海信集团有限公司 Target tracking method and device

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034109A (zh) * 2018-08-16 2018-12-18 新智数字科技有限公司 Pedestrian re-identification method and device based on a clustering algorithm
CN109034109B (zh) * 2018-08-16 2021-03-23 新智数字科技有限公司 Pedestrian re-identification method and device based on a clustering algorithm
CN110907909A (zh) * 2019-10-30 2020-03-24 南京市德赛西威汽车电子有限公司 Radar target recognition method based on probability statistics
CN110907909B (zh) * 2019-10-30 2023-09-12 南京市德赛西威汽车电子有限公司 Radar target recognition method based on probability statistics
CN112784648A (zh) * 2019-11-07 2021-05-11 中国科学技术大学 Method and device for optimizing feature extraction in a video pedestrian re-identification system
CN112784648B (zh) * 2019-11-07 2022-09-06 中国科学技术大学 Method and device for optimizing feature extraction in a video pedestrian re-identification system
CN111709978A (zh) * 2020-05-06 2020-09-25 广东康云科技有限公司 Cross-screen target tracking method, system, device and storage medium
CN111738063A (zh) * 2020-05-08 2020-10-02 华南理工大学 Ship target tracking method, system, computer equipment and storage medium
CN111738063B (zh) * 2020-05-08 2023-04-18 华南理工大学 Ship target tracking method, system, computer equipment and storage medium
CN111860190A (zh) * 2020-06-24 2020-10-30 国汽(北京)智能网联汽车研究院有限公司 Target tracking method, device, equipment and storage medium
CN111860190B (zh) * 2020-06-24 2024-04-12 国汽(北京)智能网联汽车研究院有限公司 Target tracking method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN108268823B (zh) 2021-07-20
CN108268823A (zh) 2018-07-10

Similar Documents

Publication Publication Date Title
WO2018121287A1 (fr) Method and device for target re-identification
CN109145766B (zh) Model training method and device, recognition method, electronic device, and storage medium
CN109977262B (zh) Method, device and processing equipment for obtaining candidate segments from a video
CN107346409B (zh) Pedestrian re-identification method and device
WO2018121286A1 (fr) Method and device for target tracking
CN108229322B (zh) Video-based face recognition method and device, electronic device and storage medium
Noh et al. A new framework for background subtraction using multiple cues
US9129191B2 (en) Semantic object selection
CN108288051B (zh) Pedestrian re-identification model training method and device, electronic device and storage medium
US9129192B2 (en) Semantic object proposal generation and validation
US20160307057A1 (en) Fully Automatic Tattoo Image Processing And Retrieval
WO2019128254A1 (fr) Image analysis method and apparatus, and electronic device and readable storage medium
WO2017181892A1 (fr) Foreground segmentation method and device
CN110019895B (zh) Image retrieval method and device, and electronic device
CN112016531A (zh) Model training method, object recognition method, device, equipment and storage medium
WO2016139964A1 (fr) Region-of-interest extraction device and region-of-interest extraction method
WO2019197021A1 (fr) Device and method for instance-level segmentation of an image
CN106127222B (zh) Vision-based string similarity calculation and similarity judgment method
US10110846B2 (en) Computationally efficient frame rate conversion system
Banerjee et al. Iris segmentation using geodesic active contours and grabcut
CN112633221A (zh) Face orientation detection method and related device
CN112036362A (zh) Image processing method and device, computer equipment and readable storage medium
CN112270204A (zh) Target recognition method and device, storage medium and electronic device
KR102166117B1 (ko) Semantic matching device and method
CN111079757A (zh) Clothing attribute recognition method and device, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17887160

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17887160

Country of ref document: EP

Kind code of ref document: A1