CN116912664A - Gait recognition method and device based on pre-training large model - Google Patents

Gait recognition method and device based on pre-training large model

Info

Publication number
CN116912664A
Authority
CN
China
Prior art keywords
gait
text
training
gait recognition
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310967164.0A
Other languages
Chinese (zh)
Inventor
Feng Bin (冯镔)
Xiong Haijun (熊海军)
Liu Wenyu (刘文予)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN202310967164.0A
Publication of CN116912664A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/09: Supervised learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/62: Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/23: Recognition of whole body movements, e.g. for sport training
    • G06V 40/25: Recognition of walking or running movements, e.g. gait recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a gait recognition method based on a pre-trained large model, comprising the following steps: inputting a video sequence into a pre-trained gait recognition model to extract gait features; selecting an equal number of positive and negative samples for each sample according to the gait features to construct positive and negative sample pairs; generating a text description for each constructed sample pair from a prompt; tokenizing the text description, embedding learnable prompt tokens, and generating text features through the Transformer of the text encoder of a pre-trained large model; concatenating the features of each constructed sample pair to generate visual features; computing the similarity between the text features and the visual features; and performing supervised fine-tuning of the network on the gait features and the similarity. The invention aims to learn the similarity between sequences through the rich semantic relations contained in a pre-trained large model, so that the gait recognition model learns richer high-level semantic features and the recognition performance is improved. The invention also provides a corresponding gait recognition device based on a pre-trained large model.

Description

Gait recognition method and device based on pre-training large model
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a gait recognition method and device based on a pre-trained large model.
Background
Gait is a person's walking pattern. Compared with other biometric features such as the face, iris, voice, fingerprint, or veins, gait can be easily captured at long range, even with a low-resolution camera. Gait recognition is therefore widely used in many fields, for example intelligent security and human-computer interaction.
Most existing gait recognition methods are based on convolutional neural networks (Convolutional Neural Network, CNN). However, because these methods do not exploit the high-level semantic information of gait, their recognition accuracy is limited, a shortcoming that is especially evident in real-world scenarios involving clothing changes.
Therefore, a gait recognition method based on a pre-trained large model needs to be designed, in which the similarity between sequences (sample pairs) is learned through the rich semantic relations contained in the CLIP pre-trained model, so that the gait recognition model learns richer high-level semantic features, its performance is improved, and it becomes more robust in practical applications such as clothing changes.
Disclosure of Invention
Aiming at the defects of the existing methods, the gait recognition method based on a pre-trained large model provided by the invention exploits the high-level semantic information of pedestrians in gait data as much as possible, thereby improving the performance of the gait recognition model and making it more robust in practical applications such as clothing changes.
To achieve the above object, according to one aspect of the present invention, there is provided a gait recognition method based on a pre-trained large model, comprising the following steps:
step one: for each input gait sequence, cropping each image to a resolution of 64 × 44 and sampling 30 consecutive frames for training;
step two: inputting the 30-frame gait sequence with resolution 64 × 44 obtained in step one into a gait recognition model and extracting the gait features of the sequence;
step three: selecting an equal number of positive and negative samples for each sample according to the gait features obtained in step two to construct positive and negative sample pairs;
step four: generating a text description for each sample pair constructed in step three according to the prompt;
step five: inputting the text description obtained in step four into a pre-trained CLIP text encoder, embedding learnable prompt tokens after the text tokens, generating text features through the Transformer in the CLIP text encoder, and mapping the text features with a multi-layer perceptron;
step six: concatenating the features of the sample pairs constructed in step three to generate visual features, and mapping them with a multi-layer perceptron so that they are aligned with the text features mapped in step five;
step seven: computing the similarity between the text features mapped in step five and the visual features mapped in step six;
step eight: performing supervised training on the gait features obtained in step two and the similarity obtained in step seven, so that the similarity between sequences is learned through the rich semantic relations contained in the CLIP pre-trained model and the gait recognition model learns richer high-level semantic features, thereby obtaining a trained gait recognition model;
step nine: predicting the identity of the pedestrian in the gait sequence under test with the trained gait recognition model.
In one embodiment of the present invention, the gait recognition model in step two is GaitGL, and the CLIP text encoder in step five is that of CLIP ViT-B/32.
In one embodiment of the present invention, a CLIP pre-trained model is introduced; in step three, the number of negative samples selected when constructing sample pairs equals the number of positive samples, both being the number of samples in the current training batch that share the same label as the given sample.
In one embodiment of the present invention, the sample pairs in step three are constructed as follows: for each sample, the samples with the same label in the current training batch are selected as positive samples, and the closest samples with different labels, i.e., hard samples, are selected as negative samples.
In one embodiment of the present invention, the distance used when constructing sample pairs in step three is the Euclidean distance $D_{mn} = \sum (a_m - a_n)^2$, where $a_m$ and $a_n$ are the gait features of the m-th and n-th samples generated in step two.
In one embodiment of the present invention, the text description in step four includes whether the two samples in a pair come from the same person and whether their viewing angles are the same.
In one embodiment of the present invention, the text features in step five are obtained as follows: the text description generated in step four is first tokenized, a certain number of learnable prompt tokens are embedded after the text tokens, and the embedded tokens are input into the Transformer in the CLIP text encoder to generate text features.
In one embodiment of the present invention, the specific expression of the multi-layer perceptron used in steps five and six is:
y=σ(FC(σ(FC(x))))
where σ is the LeakyReLU activation function, FC denotes a fully connected layer, the dimension of the MLP hidden layer is 512, and the dimension of the final output layer is 256.
In one embodiment of the present invention, the similarity computed in step seven is the cosine similarity, calculated as

$$\mathrm{sim}(x, y) = \frac{x \cdot y}{\|x\|_2 \, \|y\|_2}$$

where x is the visual feature mapped in step six, y is the text feature mapped in step five, and $\|\cdot\|_2$ is the L2 norm.
In general, compared with the prior art, the technical solutions conceived by the present invention have the following advantages:
(1) Novelty: no existing gait recognition technique incorporates the CLIP pre-trained model into the gait recognition field. By applying it, the similarity between sequences (sample pairs) is learned through the rich semantic relations contained in the CLIP pre-trained model, so that the gait recognition model learns richer high-level semantic features and the recognition performance is improved;
(2) Flexibility: the invention does not modify the original gait recognition model; only the final gait features used for recognition are passed to the proposed subsequent operations, so the invention can be combined with various existing gait recognition models;
(3) Robustness: the method exploits the high-level semantic information of pedestrians in gait data as much as possible, improving the performance of the gait recognition model and making it more robust in practical applications such as clothing changes.
Drawings
FIG. 1 is a schematic overview of the gait recognition network structure based on a pre-trained large model according to the present invention;
FIG. 2 is a flow chart of the gait recognition method based on a pre-trained large model according to the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the drawings and embodiments, in order to make its objects, technical solutions and advantages more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. In addition, the technical features of the embodiments of the present invention described below may be combined with each other as long as they do not conflict with one another.
The technical terms used in the present invention are explained below:
CASIA-B dataset: a large-scale multi-view human gait dataset established by the Institute of Automation, Chinese Academy of Sciences (CASIA). It consists of gait video sequences of 124 subjects (numbered 001-124); each subject is recorded from 11 viewing angles ranging from 0° to 180° in increments of 18°, with 10 gait video sequences per viewing angle. These 10 sequences cover three walking conditions: 6 normal walking (NM) sequences, 2 walking-with-backpack (BG) sequences, and 2 walking-in-coat (CL) sequences. The present invention uses the first 74 subjects as the training set and the remaining 50 subjects as the test set. In the test phase, the first four NM sequences (NM#1-4) are used as the gallery set, and the remaining six sequences, namely the last two NM sequences (NM#5-6), the two BG sequences (BG#1-2) and the two CL sequences (CL#1-2), are used as the probe set.
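For clarity, the split described above can be expressed as a small configuration sketch (Python; the dictionary layout and identifier names are ours, not part of the CASIA-B distribution):

```python
# CASIA-B protocol as used in this patent (illustrative sketch; names are ours).
CASIA_B_PROTOCOL = {
    "train_subjects": [f"{i:03d}" for i in range(1, 75)],    # subjects 001-074
    "test_subjects": [f"{i:03d}" for i in range(75, 125)],   # subjects 075-124
    "views_deg": list(range(0, 181, 18)),                    # 11 views: 0, 18, ..., 180
    "gallery": ["nm-01", "nm-02", "nm-03", "nm-04"],         # NM#1-4
    "probe": ["nm-05", "nm-06", "bg-01", "bg-02", "cl-01", "cl-02"],  # NM#5-6, BG#1-2, CL#1-2
}
```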
Fig. 1 is a schematic diagram of the gait recognition network structure based on a pre-trained large model in an embodiment of the present invention, where the network model includes: a pre-trained gait recognition model, GaitGL; a sample pair construction module for generating positive and negative sample pairs; a text description generation module, whose text includes whether the two samples in a pair come from the same person and whether their viewing angles are the same; a CLIP text encoder using CLIP ViT-B/32, where the prompt consists of learnable tokens embedded after the text tokens; multi-layer perceptrons that map both modalities to a unified dimension of 256; and a loss function comprising the triplet loss $L_{tri}$, the cross-entropy loss $L_{ce}$, and the mean squared error loss $L_{MSE}$.
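The patent does not specify how the three losses are weighted or how the similarity target is constructed; the following PyTorch sketch combines them under our own assumptions (equal weights, margin 0.2, batch-hard triplet mining, and a precomputed sim_target):

```python
import torch
import torch.nn.functional as F

def total_loss(gait_feat, labels, logits, sim_pred, sim_target,
               margin=0.2, w_tri=1.0, w_ce=1.0, w_mse=1.0):
    """Sketch of L = w_tri*L_tri + w_ce*L_ce + w_mse*L_MSE; the weights, margin,
    and construction of sim_target are assumptions, not from the patent."""
    dist = torch.cdist(gait_feat, gait_feat)                # pairwise distances (B, B)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)       # same-identity mask (B, B)
    hardest_pos = (dist * same.float()).max(dim=1).values   # farthest same-label sample
    hardest_neg = dist.masked_fill(same, float("inf")).min(dim=1).values
    l_tri = F.relu(hardest_pos - hardest_neg + margin).mean()   # triplet loss L_tri
    l_ce = F.cross_entropy(logits, labels)                      # identity loss L_ce
    l_mse = F.mse_loss(sim_pred, sim_target)                    # similarity loss L_MSE
    return w_tri * l_tri + w_ce * l_ce + w_mse * l_mse
```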
As shown in fig. 2, the present invention provides a gait recognition method based on a pre-training large model, comprising the steps of:
step one: for each input gait sequence, cropping each image to a resolution of 64 × 44 and sampling 30 consecutive frames for training;
step two: inputting the 30 consecutive frames with resolution 64 × 44 obtained in step one into the gait recognition model GaitGL and extracting the gait features of the sequence;
step three: selecting an equal number of positive and negative samples for each sample according to the gait features obtained in step two to construct positive and negative sample pairs; the number of selected negative samples equals the number of positive samples, both being the number of samples in the current training batch that share the same label as the given sample. Specifically: for each sample, the samples with the same label in the current training batch are selected as positive samples, and the closest samples with different labels (hard samples) are selected as negative samples, where the distance is computed as $D_{mn} = \sum (a_m - a_n)^2$, with $a_m$ and $a_n$ the gait features of the m-th and n-th samples generated in step two, as sketched below;
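A minimal PyTorch sketch of this pair-construction step, assuming a batch of gait feature vectors and integer identity labels (function and variable names are ours):

```python
import torch

def build_pairs(feats, labels):
    """Pair each anchor with all same-label samples in the batch (positives)
    and with an equal number of nearest different-label samples (hard
    negatives). Illustrative sketch of step three."""
    diff = feats.unsqueeze(1) - feats.unsqueeze(0)
    dist = (diff ** 2).sum(-1)                           # D_mn = sum((a_m - a_n)^2)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos_pairs, neg_pairs = [], []
    for m in range(feats.size(0)):
        pos_idx = torch.nonzero(same[m], as_tuple=True)[0]
        pos_idx = pos_idx[pos_idx != m]                  # exclude the anchor itself
        neg_dist = dist[m].masked_fill(same[m], float("inf"))
        neg_idx = neg_dist.argsort()[: len(pos_idx)]     # as many negatives as positives
        pos_pairs += [(m, int(j)) for j in pos_idx]
        neg_pairs += [(m, int(j)) for j in neg_idx]
    return pos_pairs, neg_pairs
```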
step four: generating a text description for each sample pair constructed in step three according to the prompt, where the description states whether the two samples come from the same person and whether their viewing angles are the same, for example as sketched below;
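The patent does not disclose the exact prompt wording; one plausible template for these descriptions might be (illustrative only):

```python
def describe_pair(same_person: bool, same_view: bool) -> str:
    """Generate a text description for a sample pair; this sentence pattern
    is an assumption, not the patent's actual prompt."""
    person = "the same person" if same_person else "two different persons"
    view = "the same viewing angle" if same_view else "different viewing angles"
    return f"A pair of gait sequences from {person}, captured from {view}."
```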
step five: inputting the text description obtained in step four into a pre-trained CLIP text encoder: the text description is first tokenized, a certain number of learnable prompt tokens are embedded after the text tokens, the embedded tokens are input into the Transformer in the CLIP text encoder to generate text features, and the features are then mapped with a multi-layer perceptron (Multi-layer Perceptron, MLP), whose specific expression is:
y=σ(FC(σ(FC(x))))
where σ is the LeakyReLU activation function, FC denotes a fully connected layer, the dimension of the MLP hidden layer is 512, and the dimension of the final output layer is 256; a sketch of this text branch follows;
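A hedged sketch of this text branch built on the open-source OpenAI CLIP package (assumptions: fp32 weights, 4 prompt tokens inserted just before the EOT token so that they can influence the pooled feature under CLIP's causal attention mask, and descriptions short enough to leave room in the 77-token context; class and variable names are ours):

```python
import torch
import torch.nn as nn
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

class PromptedTextEncoder(nn.Module):
    """Step five as a sketch: tokenize, embed learnable prompt tokens next to
    the text tokens, run CLIP's text Transformer, then apply the MLP mapping
    y = sigma(FC(sigma(FC(x))))."""
    def __init__(self, clip_model, n_prompt=4, out_dim=256):
        super().__init__()
        self.clip = clip_model                        # assumed loaded in fp32 (CPU)
        width = clip_model.ln_final.weight.shape[0]   # 512 for ViT-B/32
        self.n_prompt = n_prompt
        self.prompt = nn.Parameter(0.02 * torch.randn(n_prompt, width))
        embed_dim = clip_model.text_projection.shape[1]
        self.mlp = nn.Sequential(nn.Linear(embed_dim, 512), nn.LeakyReLU(),
                                 nn.Linear(512, out_dim), nn.LeakyReLU())

    def forward(self, texts):
        tok = clip.tokenize(texts).to(self.prompt.device)   # (B, 77)
        x = self.clip.token_embedding(tok)                  # (B, 77, width)
        eot = tok.argmax(dim=-1)                            # EOT has the largest token id
        # Insert the learnable prompts just before EOT (assumes the text is
        # short enough that eot + n_prompt stays within the 77-token context).
        for b in range(x.size(0)):
            e = int(eot[b])
            x[b, e + self.n_prompt] = x[b, e].clone()       # shift the EOT embedding back
            x[b, e : e + self.n_prompt] = self.prompt
        eot = eot + self.n_prompt
        x = x + self.clip.positional_embedding
        x = self.clip.transformer(x.permute(1, 0, 2)).permute(1, 0, 2)
        x = self.clip.ln_final(x)
        feat = x[torch.arange(x.size(0)), eot] @ self.clip.text_projection
        return self.mlp(feat)                               # (B, 256) mapped text feature
```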
step six: concatenating the features of the sample pairs constructed in step three to generate visual features, and mapping them with a multi-layer perceptron (MLP), whose specific expression is:
y=σ(FC(σ(FC(x))))
where σ is the LeakyReLU activation function, FC denotes a fully connected layer, the dimension of the MLP hidden layer is 512, and the dimension of the final output layer is 256, so that the visual features are aligned with the text features mapped in step five; a corresponding sketch is given below;
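A corresponding sketch of the visual branch; the per-sample feature dimension below is an assumption (GaitGL outputs part-based features that would need pooling or flattening first):

```python
import torch
import torch.nn as nn

class VisualPairHead(nn.Module):
    """Step six as a sketch: concatenate the gait features of the two samples
    in a pair, then map into the shared 256-d space with the same MLP form."""
    def __init__(self, feat_dim=256, out_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * feat_dim, 512), nn.LeakyReLU(),
                                 nn.Linear(512, out_dim), nn.LeakyReLU())

    def forward(self, feat_a, feat_b):
        # Feature stitching: concatenate the pair along the channel dimension.
        return self.mlp(torch.cat([feat_a, feat_b], dim=-1))   # (B, 256)
```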
step seven: computing the similarity between the text features mapped in step five and the visual features mapped in step six, with the formula

$$\mathrm{sim}(x, y) = \frac{x \cdot y}{\|x\|_2 \, \|y\|_2}$$

where x is the visual feature mapped in step six, y is the text feature mapped in step five, and $\|\cdot\|_2$ is the L2 norm;
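In PyTorch this reduces to a single call (trivial sketch; variable names are ours):

```python
import torch.nn.functional as F

def pair_similarity(visual_feat, text_feat):
    """Step-seven cosine similarity: x.y / (||x||_2 * ||y||_2)."""
    return F.cosine_similarity(visual_feat, text_feat, dim=-1)
```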
step eight: performing supervised training on the gait features obtained in step two and the similarity obtained in step seven, with the aim of learning the similarity between sequences through the rich semantic relations contained in the CLIP pre-trained model, so that the gait recognition model learns richer high-level semantic features, yielding a trained gait recognition model;
step nine: predicting the identity of the pedestrian in the gait sequence under test with the trained gait recognition model.
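At test time, identification reduces to nearest-neighbor retrieval against the gallery features (a sketch; using the training-time Euclidean distance at inference is our assumption):

```python
import torch

def identify(probe_feat, gallery_feats, gallery_ids):
    """Step-nine sketch: rank the gallery by Euclidean distance to the probe
    feature and return the identity of the nearest sequence. Assumes features
    extracted by the trained model; the metric choice is an assumption."""
    dist = ((gallery_feats - probe_feat.unsqueeze(0)) ** 2).sum(dim=-1)  # (N,)
    return gallery_ids[int(dist.argmin())]
```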
The invention further provides a gait recognition device based on a pre-trained large model, comprising at least one processor and a memory connected through a data bus, the memory storing instructions to be executed by the at least one processor, which, when executed by the processor, carry out the gait recognition method based on a pre-trained large model described above.
The effectiveness of the invention is demonstrated experimentally below; the results show that the invention improves the accuracy of gait recognition.
The present invention was tested on the CASIA-B dataset against 5 existing gait recognition methods. Table 1 compares the results under three conditions: NM (normal walking), BG (walking with a backpack) and CL (walking in a coat). Larger values indicate higher gait recognition accuracy. The table shows a clear performance improvement, indicating that learning the similarity between sequences (sample pairs) through the rich semantic relations contained in the CLIP pre-trained model enables the gait recognition model to learn richer high-level semantic features.
TABLE 1. Accuracy (%) of different methods on the CASIA-B dataset

Method        NM     BG     CL     Average
GaitSet       95.0   87.2   70.4   84.2
GaitPart      96.2   91.5   78.7   88.8
GaitGL        97.4   94.5   83.6   91.8
CSTL          97.8   93.6   84.2   91.9
3DLocal       97.5   94.3   83.7   91.8
Ours          97.8   94.7   84.9   92.5
It will be readily appreciated by those skilled in the art that the foregoing is merely a preferred embodiment of the invention and is not intended to limit it; any modifications, equivalents, improvements or alternatives falling within the spirit and principles of the invention are intended to be included within its scope.

Claims (10)

1. A gait recognition method based on a pre-trained large model, characterized in that the method comprises the following steps:
step one: for each input gait sequence, cropping each image to a resolution of 64 × 44 and sampling 30 consecutive frames for training;
step two: inputting the 30-frame gait sequence with resolution 64 × 44 obtained in step one into a gait recognition model and extracting the gait features of the sequence;
step three: selecting an equal number of positive and negative samples for each sample according to the gait features obtained in step two to construct positive and negative sample pairs;
step four: generating a text description for each sample pair constructed in step three according to the prompt;
step five: inputting the text description obtained in step four into a pre-trained CLIP text encoder, embedding learnable prompt tokens after tokenizing the text, generating text features through the Transformer in the CLIP text encoder, and mapping the text features with a multi-layer perceptron;
step six: concatenating the features of the sample pairs constructed in step three to generate visual features, and mapping them with a multi-layer perceptron so that they are aligned with the text features mapped in step five;
step seven: computing the similarity between the text features mapped in step five and the visual features mapped in step six;
step eight: performing supervised training on the gait features obtained in step two and the similarity obtained in step seven, so that the similarity between sequences is learned through the rich semantic relations contained in the CLIP pre-trained model and the gait recognition model learns richer high-level semantic features, thereby obtaining a trained gait recognition model;
step nine: predicting the identity of the pedestrian in the gait sequence under test with the trained gait recognition model.
2. The gait recognition method based on a pre-trained large model according to claim 1, wherein the gait recognition model in step two is GaitGL and the CLIP text encoder in step five is that of CLIP ViT-B/32.
3. The gait recognition method based on a pre-trained large model according to claim 1, wherein the number of negative samples selected when constructing sample pairs in step three equals the number of positive samples, both being the number of samples in the current training batch that share the same label as the given sample.
4. The gait recognition method based on a pre-trained large model according to any one of claims 1 to 3, wherein the sample pairs in step three are constructed as follows: for each sample, the samples with the same label in the current training batch are selected as positive samples, and the closest samples with different labels, i.e., hard samples, are selected as negative samples.
5. The gait recognition method based on a pre-trained large model according to claim 4, wherein the distance is computed as the Euclidean distance $D_{mn} = \sum (a_m - a_n)^2$, where $a_m$ and $a_n$ are the gait features of the m-th and n-th samples generated in step two.
6. The gait recognition method based on a pre-trained large model according to claim 1, wherein the text description in step four includes whether the two samples in a pair come from the same person and whether their viewing angles are the same.
7. The gait recognition method based on a pre-trained large model according to claim 1, wherein the text features in step five are obtained as follows:
the text description generated in step four is first tokenized, a certain number of learnable prompt tokens are embedded after the text tokens, and the embedded tokens are input into the Transformer in the CLIP text encoder to generate text features.
8. The gait recognition method based on a pre-trained large model according to claim 1, wherein the specific expression of the multi-layer perceptron used in steps five and six is:
y=σ(FC(σ(FC(x))))
where σ is the LeakyReLU activation function, FC denotes a fully connected layer, the dimension of the MLP hidden layer is 512, and the dimension of the final output layer is 256.
9. The gait recognition method based on a pre-trained large model according to claim 1, wherein the similarity computed in step seven is the cosine similarity, calculated as

$$\mathrm{sim}(x, y) = \frac{x \cdot y}{\|x\|_2 \, \|y\|_2}$$

where x is the visual feature mapped in step six, y is the text feature mapped in step five, and $\|\cdot\|_2$ is the L2 norm.
10. A gait recognition device based on a pre-trained large model, characterized in that:
it comprises at least one processor and a memory connected through a data bus, the memory storing instructions to be executed by the at least one processor, which, when executed by the processor, carry out the gait recognition method based on a pre-trained large model according to any one of claims 1-9.
CN202310967164.0A 2023-08-02 2023-08-02 Gait recognition method and device based on pre-training large model Pending CN116912664A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310967164.0A CN116912664A (en) 2023-08-02 2023-08-02 Gait recognition method and device based on pre-training large model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310967164.0A CN116912664A (en) 2023-08-02 2023-08-02 Gait recognition method and device based on pre-training large model

Publications (1)

Publication Number Publication Date
CN116912664A 2023-10-20

Family

ID=88358137

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310967164.0A Pending CN116912664A (en) 2023-08-02 2023-08-02 Gait recognition method and device based on pre-training large model

Country Status (1)

Country Link
CN (1) CN116912664A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination