CN111898431B - Pedestrian re-identification method based on attention mechanism part shielding - Google Patents

Pedestrian re-identification method based on attention mechanism part shielding

Info

Publication number
CN111898431B
Authority
CN
China
Prior art keywords
feature
pedestrian
pedestrian target
attention
global
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010587406.XA
Other languages
Chinese (zh)
Other versions
CN111898431A (en)
Inventor
Han Guang
Ai Yuechuan
Zhu Mengcheng
Liu Yaoming
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications filed Critical Nanjing University of Posts and Telecommunications
Priority to CN202010587406.XA priority Critical patent/CN111898431B/en
Publication of CN111898431A publication Critical patent/CN111898431A/en
Application granted granted Critical
Publication of CN111898431B publication Critical patent/CN111898431B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pedestrian re-identification method based on attention mechanism component shielding, which comprises: extracting basic features of a target through a ResNet-50 network; extracting global features, component shielding features and attention features of the target through a global feature extractor, a component shielding feature extractor and an attention feature extractor respectively; obtaining predicted pedestrian feature vectors from the global features, the component shielding features and the attention features; and training the whole network end to end to obtain a pedestrian re-identification model. The method makes full use of the global features and component shielding features of pedestrian images while fusing a spatial attention mechanism and a channel attention mechanism, thereby effectively improving the discriminability of the predicted features.

Description

Pedestrian re-identification method based on attention mechanism part shielding
Technical Field
The invention belongs to the technical field of pattern recognition, and particularly relates to a pedestrian re-identification method based on attention mechanism component shielding.
Background
Pedestrian re-identification has been a hot research topic in computer vision in recent years. It identifies pedestrians through characteristics such as clothing, body shape and hair style, and is mainly oriented to the identification and retrieval of pedestrians across cameras and scenes. The technology has wide application prospects in fields such as video surveillance and intelligent security, and its development is of great significance for building safe cities.
Pedestrian re-identification is a very challenging computer vision task whose goal is to retrieve the same pedestrian under different cameras; its difficulties include varying backgrounds and lighting, image blur, different pedestrian poses, and occlusion by clutter. Early approaches mainly relied on hand-crafted features for retrieval. With the application of deep learning, research on pedestrian re-identification has advanced greatly, but problems such as image blur and occlusion are still not well solved.
Most existing pedestrian re-identification algorithms perform identification based on the global features of pedestrian images. Because the backgrounds of pedestrian images are very complex, the detailed information of the images is not fully utilized, robustness to occlusion is insufficient, and the accuracy of pedestrian matching is therefore not high.
Disclosure of Invention
The invention aims to provide a pedestrian re-identification method based on attention mechanism component shielding, so as to solve the problems that existing pedestrian re-identification models are insensitive to the local detail features of a target and have poor robustness to occlusion.
In order to solve the technical problems, the invention adopts the technical scheme that:
a pedestrian re-identification method based on attention mechanism component shielding comprises the following steps:
inputting the complete pedestrian target map into a pre-trained pedestrian re-identification model;
the pedestrian re-identification model outputs a result indicating whether the complete pedestrian target map contains a specific pedestrian;
the pedestrian re-identification model comprises a global feature extractor, a component shielding feature extractor, an attention feature extractor and a basic feature extraction model, wherein the output end of the basic feature extraction model is connected with the input ends of the global feature extractor, the component shielding feature extractor and the attention feature extractor respectively.
Further, the training method of the pedestrian re-identification model comprises the following steps:
acquiring a complete pedestrian target map carrying tag data;
according to the complete pedestrian target image, the basic feature extraction model extracts a complete pedestrian target feature image;
according to the complete pedestrian target feature map, the global feature extractor extracts a global feature vector of a pedestrian target;
according to the complete pedestrian target feature map, the component shielding feature extractor extracts component shielding feature vectors of the pedestrian target;
according to the complete pedestrian target feature map, the attention feature extractor extracts an attention feature vector of a pedestrian target;
and based on the global feature vector of the pedestrian target, the component shielding feature vector of the pedestrian target, the attention feature vector of the pedestrian target and the pedestrian label vector of the complete pedestrian target map, taking the linear sum of a cross-entropy loss function and a triplet loss function as the total loss function, and training the pedestrian re-identification model with the objective of reducing the total loss function value.
Further, the basic feature extraction model is based on a ResNet-50 convolutional neural network pre-trained on ImageNet, with the final down-sampling layer and fully connected layer removed; before extracting image features, the basic feature extraction model resizes the input image to a predetermined size.
Further, the extracting, by the global feature extractor, a global feature vector of a pedestrian target according to the complete pedestrian target feature map includes:
inputting the complete pedestrian target feature map into a global feature extractor;
and the global feature extractor performs global average pooling processing and convolution dimension reduction processing on the input complete pedestrian target feature map to obtain a global feature vector of the pedestrian target.
Further, the extracting, according to the complete pedestrian target feature map, a component occlusion feature vector of the pedestrian target by the component occlusion feature extractor includes:
inputting a complete pedestrian target feature map into a component occlusion feature extractor;
the component occlusion feature extractor horizontally and uniformly divides an input complete pedestrian target feature map into four component blocks;
randomly selecting one of the four component blocks through mask operation, setting the response value of the selected block to be zero, keeping the response values of the rest component blocks unchanged, and acquiring a pedestrian target characteristic diagram shielded by the components;
and obtaining an intermediate feature vector by performing global maximum pooling on the pedestrian target feature map shielded by the component, and then performing dimension reduction on the intermediate feature vector through convolution, batch normalization and ReLU operation to obtain the component shielding feature vector of the pedestrian target.
Further, the extracting, by the attention feature extractor, an attention feature vector of a pedestrian target according to the complete pedestrian target feature map includes:
inputting the complete pedestrian target feature map into an attention feature extractor;
the attention feature extractor includes an attention module including a spatial attention module and a channel attention module;
the channel attention module is used for respectively carrying out global maximum pooling and global average pooling on the complete pedestrian target feature map, then respectively carrying out convolution dimension reduction, ReLU and convolution dimension increase processing, then carrying out element addition and combination on the feature maps obtained after the respective processing, and multiplying the combined feature map obtained after Sigmoid operation with the complete pedestrian target feature map to obtain a channel attention weighted feature map;
the spatial attention module is used for respectively averaging and maximizing complete pedestrian target feature maps in channel dimensions, then respectively passing through a full connection layer, a ReLU and a full connection layer, then carrying out element addition and combination on the feature maps obtained after respective processing, multiplying the combined feature map obtained after Sigmoid operation with the complete pedestrian target feature map, and finally obtaining a spatial attention weighted feature map;
carrying out element addition and combination on the space attention weighted feature map, the channel attention weighted feature map and the complete pedestrian target feature map to obtain an attention feature map;
and carrying out global average pooling on the attention feature map to obtain an intermediate feature vector, and then carrying out dimension reduction processing on the intermediate feature vector through convolution, batch normalization and ReLU operation to obtain the attention feature vector of the pedestrian target.
Compared with the prior art, the invention has the following beneficial effects and advantages:
the pedestrian re-identification method based on attention mechanism part shielding is reasonable in design, global features, part shielding features and attention features are combined in the feature extraction process, the overall information of a pedestrian image can be obtained through the global features, and most visual clues are used for identifying pedestrians; the component occlusion feature has better robustness to occlusion; the attention characteristic comprises space attention and channel attention, the space attention mechanism captures context dependency information of each position of the target characteristic, and the channel attention mechanism captures channel dependency among characteristic graphs, so that better characteristic representation and recognition effects are obtained.
Drawings
FIG. 1 is a schematic flow chart of a pedestrian re-identification method based on attention mechanism component occlusion according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a spatial attention module according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a channel attention module according to an embodiment of the invention.
Detailed Description
The invention is further described with reference to specific examples. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
To address the poor occlusion robustness of existing pedestrian re-identification models, the method uses the mask of the component shielding branch to randomly occlude parts on the feature map and improves occlusion robustness through this occlusion training; the global feature branch acquires the global information of the pedestrian image; and to address the interference caused by complex backgrounds in pedestrian images, the attention feature branch captures the context dependencies of each position of the pedestrian image and the interdependencies among feature-map channels, so as to enhance the discriminability of the output features.
Fig. 1 is a schematic flow chart of a pedestrian re-identification method based on attention mechanism component occlusion according to an embodiment of the present invention, where the method includes the following steps:
step 1, acquiring a complete pedestrian target map carrying tag data as a training sample.
Step 2, extracting a complete pedestrian target feature map by the basic feature extraction model according to the complete pedestrian target map.
The specific implementation method of the step is as follows:
and (3) scaling the obtained complete pedestrian target image into a fixed size of 384 multiplied by 128, taking a ResNet-50 convolutional neural network pre-trained by ImageNet as a basic feature extraction model, removing a final down sampling layer and a full connection layer, and outputting a pedestrian target feature image with the size of 24 multiplied by 8.
Step 3, extracting the global feature vector of the pedestrian target by the global feature extractor according to the complete pedestrian target feature map.
The specific implementation method of the step is as follows:
The complete pedestrian target feature map generated by the basic feature extraction model is input into the global feature extractor, down-sampled into a 2048 × 1 feature vector through global average pooling, and then reduced in dimension by a 1 × 1 convolution, batch normalization and a ReLU operation to generate a 512 × 1 feature vector.
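A minimal sketch of this global branch follows; the class and layer names are illustrative assumptions.

```python
import torch.nn as nn

class GlobalBranch(nn.Module):
    """Global average pooling followed by 1x1 conv + BN + ReLU (2048-d -> 512-d)."""
    def __init__(self, in_dim=2048, out_dim=512):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.reduce = nn.Sequential(
            nn.Conv2d(in_dim, out_dim, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_dim),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat):        # feat: (B, 2048, 24, 8)
        v = self.gap(feat)          # (B, 2048, 1, 1)
        v = self.reduce(v)          # (B, 512, 1, 1)
        return v.flatten(1)         # (B, 512) global feature vector
```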
Step 4, extracting the component shielding feature vector of the pedestrian target by the component shielding feature extractor according to the complete pedestrian target feature map.
The specific implementation method of the step is as follows:
step 1), inputting a pedestrian target feature map generated by a basic feature extraction model into a component occlusion feature extractor;
step 2), the component occlusion feature extractor horizontally and uniformly divides the input pedestrian target feature map into four component blocks;
step 3), randomly selecting one of the four component blocks through mask operation, setting the response value of the selected component block as zero, keeping the response values of the other component blocks unchanged, and acquiring a pedestrian target characteristic diagram shielded by the components;
and 4) generating a characteristic vector with the size of 2048 multiplied by 1 by using the pedestrian target characteristic diagram shielded by the mask through global maximum pooling, and then reducing the dimension of the characteristic vector by using convolution with the size of 1 multiplied by 1, batch normalization and ReLU operation to generate a characteristic vector with the size of 1024 multiplied by 1.
Step 5, extracting the attention feature vector of the pedestrian target by the attention feature extractor according to the complete pedestrian target feature map.
The specific implementation mode of the step is as follows:
step 1), inputting a pedestrian target feature map generated by a basic feature extraction model into an attention feature extractor; the attention feature extractor includes an attention module including a spatial attention module and a channel attention module;
step 2), the channel attention module carries out global maximum pooling and global average pooling on the input pedestrian target feature map, then carries out convolution dimension reduction, ReLU and convolution dimension increase processing, then carries out element addition and combination on the feature maps obtained after respective processing, carries out Sigmoid operation, multiplies the obtained feature map by the original pedestrian target feature map, and finally obtains a channel attention weighted feature map;
step 3), the spatial attention module respectively averages and maximizes the input pedestrian target characteristic graphs on channel dimensions, then respectively passes through a full connection layer, a ReLU and a full connection layer, then performs element addition and combination on the characteristic graphs obtained after respective processing, performs Sigmoid operation, multiplies the obtained characteristic graph by the original pedestrian target characteristic graph, and finally obtains a spatial attention weighted characteristic graph;
step 4), carrying out element addition and combination on the space attention weighted feature map, the channel attention weighted feature map and the original pedestrian target feature map to obtain an attention feature map;
and 5) generating 2048 multiplied by 1 feature vector by using the attention feature map through global average pooling, and then generating 512 multiplied by 1 attention feature vector by reducing dimensions of the feature vector through 1 multiplied by 1 convolution, batch normalization and ReLU operation.
Step 6, based on the global feature vector of the pedestrian target, the component shielding feature vector of the pedestrian target, the attention feature vector of the pedestrian target and the pedestrian label vector of the complete pedestrian target map, taking the linear sum of the cross-entropy loss function and the triplet loss function as the total objective function of network training, and training with the objective of reducing the total objective function value to obtain the pedestrian re-identification model based on attention mechanism component shielding.
The specific implementation mode of the step is as follows:
step 1), calculating the triple loss and the cross entropy loss of a global feature vector and a pedestrian label vector of a pedestrian target, the triple loss and the cross entropy loss of a component shielding feature vector and a pedestrian label vector, and the triple loss and the cross entropy loss of an attention feature vector and a pedestrian label vector respectively, linearly adding the three triple losses and the three cross entropy losses, and updating model parameters by using a back propagation algorithm;
and step 2), setting the batch size to be 64 and the training epoch to be 400, optimizing the network by adopting an Adam algorithm, and evaluating the model performance by using Rank1 and mAP after the network training is finished.
Step 7, inputting a new complete pedestrian target image into the trained pedestrian re-identification model to obtain a result indicating whether the image contains a specific pedestrian.
It can be seen from the above embodiments that the present invention follows the idea of the PCB (Part-based Convolutional Baseline) algorithm and includes: extracting the basic features of the target through a ResNet-50 network; extracting the global features, component shielding features and attention features of the target through a global feature extractor, a component shielding feature extractor and an attention feature extractor respectively; in the component shielding feature branch, horizontally and uniformly dividing the last feature layer into four component blocks, randomly shielding one component block, and obtaining the component shielding features from the remaining unshielded components; in the global feature branch, inputting the feature map of the whole target into the global feature extractor to obtain the global features; in the attention feature branch, inputting the complete feature map into a spatial attention module and a channel attention module to obtain a spatial attention feature and a channel attention feature respectively, and then fusing the two attention features into one feature; and obtaining predicted pedestrian feature vectors from the global features, the component shielding features and the attention features respectively, and training the whole network end to end to obtain the pedestrian re-identification model.
The method fully utilizes the global features and component shielding features of pedestrian images while fusing the spatial attention mechanism and the channel attention mechanism, thereby effectively improving the discriminability of the predicted features.
The present invention has been disclosed in terms of a preferred embodiment, but it is not limited to that embodiment; all technical solutions obtained by equivalent substitution or transformation fall within the protection scope of the present invention.

Claims (5)

1. A pedestrian re-identification method based on attention mechanism part shielding is characterized by comprising the following steps:
inputting the complete pedestrian target map into a pre-trained pedestrian re-identification model;
the pedestrian re-identification model outputs a result indicating whether the complete pedestrian target map contains a specific pedestrian;
the pedestrian re-identification model comprises a global feature extractor, a component shielding feature extractor, an attention feature extractor and a basic feature extraction model, wherein the output end of the basic feature extraction model is respectively connected with the input ends of the global feature extractor, the component shielding feature extractor and the attention feature extractor;
the training method of the pedestrian re-identification model comprises the following steps:
acquiring a complete pedestrian target map carrying tag data;
according to the complete pedestrian target image, the basic feature extraction model extracts a complete pedestrian target feature image;
according to the complete pedestrian target feature map, the global feature extractor extracts a global feature vector of a pedestrian target;
according to the complete pedestrian target feature map, the component shielding feature extractor extracts component shielding feature vectors of the pedestrian target;
according to the complete pedestrian target feature map, the attention feature extractor extracts an attention feature vector of a pedestrian target;
and based on the global feature vector of the pedestrian target, the component shielding feature vector of the pedestrian target, the attention feature vector of the pedestrian target and the pedestrian label vector of the complete pedestrian target map, taking the linear sum of a cross-entropy loss function and a triplet loss function as the total loss function, and training the pedestrian re-identification model with the objective of reducing the total loss function value.
2. The method of claim 1, wherein the basic feature extraction model is based on a ResNet-50 convolutional neural network pre-trained by ImageNet and with terminal downsampling and full connectivity layers removed; the basic feature extraction model compresses an input image into an image of a predetermined size before extracting image features.
3. The method of claim 1, wherein the global feature extractor extracts a global feature vector of a pedestrian target according to the complete pedestrian target feature map, and comprises:
inputting the complete pedestrian target feature map into a global feature extractor;
and the global feature extractor performs global average pooling processing and convolution dimension reduction processing on the input complete pedestrian target feature map to obtain a global feature vector of the pedestrian target.
4. The method according to claim 1, wherein the extracting, by the component occlusion feature extractor, component occlusion feature vectors of the pedestrian target according to the complete pedestrian target feature map comprises:
inputting a complete pedestrian target feature map into a component occlusion feature extractor;
the component shielding characteristic extractor horizontally and uniformly divides the input complete pedestrian target characteristic graph into four component blocks;
randomly selecting one of the four component blocks through mask operation, setting the response value of the selected block to be zero, keeping the response values of the rest component blocks unchanged, and acquiring a pedestrian target characteristic diagram shielded by the components;
and obtaining an intermediate feature vector by performing global maximum pooling on the pedestrian target feature map shielded by the component, and then performing dimension reduction on the intermediate feature vector through convolution, batch normalization and ReLU operation to obtain the component shielding feature vector of the pedestrian target.
5. The method of claim 1, wherein the extracting, by the attention feature extractor, attention feature vectors of the pedestrian target according to the complete pedestrian target feature map comprises:
inputting the complete pedestrian target feature map into an attention feature extractor;
the attention feature extractor includes an attention module including a spatial attention module and a channel attention module;
the channel attention module is used for respectively carrying out global maximum pooling and global average pooling on the complete pedestrian target feature map, then respectively carrying out convolution dimension reduction, ReLU and convolution dimension increase processing, then carrying out element addition and combination on the feature maps obtained after the respective processing, and multiplying the combined feature map obtained after Sigmoid operation with the complete pedestrian target feature map to obtain a channel attention weighted feature map;
the spatial attention module is used for respectively averaging and maximizing complete pedestrian target feature maps in channel dimensions, then respectively passing through a full connection layer, a ReLU and a full connection layer, then carrying out element addition and combination on feature maps obtained after respective processing, multiplying a combined feature map obtained after Sigmoid operation by the complete pedestrian target feature map, and finally obtaining a spatial attention weighted feature map;
carrying out element addition and combination on the space attention weighted feature map, the channel attention weighted feature map and the complete pedestrian target feature map to obtain an attention feature map;
and carrying out global average pooling on the attention feature map to obtain an intermediate feature vector, and then carrying out dimension reduction processing on the intermediate feature vector through convolution, batch normalization and ReLU operation to obtain the attention feature vector of the pedestrian target.
CN202010587406.XA 2020-06-24 2020-06-24 Pedestrian re-identification method based on attention mechanism part shielding Active CN111898431B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010587406.XA CN111898431B (en) 2020-06-24 2020-06-24 Pedestrian re-identification method based on attention mechanism part shielding

Publications (2)

Publication Number Publication Date
CN111898431A CN111898431A (en) 2020-11-06
CN111898431B (en) 2022-07-26

Family

ID=73207861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010587406.XA Active CN111898431B (en) 2020-06-24 2020-06-24 Pedestrian re-identification method based on attention mechanism part shielding

Country Status (1)

Country Link
CN (1) CN111898431B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112232300B (en) * 2020-11-11 2024-01-19 汇纳科技股份有限公司 Global occlusion self-adaptive pedestrian training/identifying method, system, equipment and medium
CN112560662B (en) * 2020-12-11 2022-10-21 湖北科技学院 Occlusion image identification method based on multi-example attention mechanism
CN112766353B (en) * 2021-01-13 2023-07-21 南京信息工程大学 Double-branch vehicle re-identification method for strengthening local attention
CN113255821B (en) * 2021-06-15 2021-10-29 中国人民解放军国防科技大学 Attention-based image recognition method, attention-based image recognition system, electronic device and storage medium
CN113627477A (en) * 2021-07-07 2021-11-09 武汉魅瞳科技有限公司 Vehicle multi-attribute identification method and system
CN113947782B (en) * 2021-10-14 2024-06-07 哈尔滨工程大学 Pedestrian target alignment method based on attention mechanism
CN114926886B (en) * 2022-05-30 2023-04-25 山东大学 Micro-expression action unit identification method and system
CN116311105B (en) * 2023-05-15 2023-09-19 山东交通学院 Vehicle re-identification method based on inter-sample context guidance network
CN116912635B (en) * 2023-09-12 2024-06-07 深圳须弥云图空间科技有限公司 Target tracking method and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110070073A (en) * 2019-05-07 2019-07-30 国家广播电视总局广播电视科学研究院 Pedestrian's recognition methods again of global characteristics and local feature based on attention mechanism
CN110659589A (en) * 2019-09-06 2020-01-07 中国科学院自动化研究所 Pedestrian re-identification method, system and device based on attitude and attention mechanism

Also Published As

Publication number Publication date
CN111898431A (en) 2020-11-06

Similar Documents

Publication Publication Date Title
CN111898431B (en) Pedestrian re-identification method based on attention mechanism part shielding
CN108764308B (en) Pedestrian re-identification method based on convolution cycle network
CN111126379B (en) Target detection method and device
CN112801008B (en) Pedestrian re-recognition method and device, electronic equipment and readable storage medium
CN111639564B (en) Video pedestrian re-identification method based on multi-attention heterogeneous network
CN112163498B (en) Method for establishing pedestrian re-identification model with foreground guiding and texture focusing functions and application of method
Kang et al. Deep learning-based weather image recognition
CN112836646A (en) Video pedestrian re-identification method based on channel attention mechanism and application
CN115841683B (en) Lightweight pedestrian re-identification method combining multi-level features
CN111597978B (en) Method for automatically generating pedestrian re-identification picture based on StarGAN network model
CN112183240A (en) Double-current convolution behavior identification method based on 3D time stream and parallel space stream
CN112464730A (en) Pedestrian re-identification method based on domain-independent foreground feature learning
CN115482508A (en) Reloading pedestrian re-identification method, reloading pedestrian re-identification device, reloading pedestrian re-identification equipment and computer-storable medium
CN114255456A (en) Natural scene text detection method and system based on attention mechanism feature fusion and enhancement
CN111340758A (en) Novel efficient iris image quality evaluation method based on deep neural network
CN110222568B (en) Cross-visual-angle gait recognition method based on space-time diagram
CN116363358A (en) Road scene image real-time semantic segmentation method based on improved U-Net
CN117409476A (en) Gait recognition method based on event camera
CN115100684A (en) Clothes-changing pedestrian re-identification method based on attitude and style normalization
Gan et al. Multilevel image dehazing algorithm using conditional generative adversarial networks
CN114333062A (en) Pedestrian re-recognition model training method based on heterogeneous dual networks and feature consistency
CN112085680B (en) Image processing method and device, electronic equipment and storage medium
CN110348395B (en) Skeleton behavior identification method based on space-time relationship
CN111079585B (en) Pedestrian re-identification method combining image enhancement with pseudo-twin convolutional neural network
CN116229580A (en) Pedestrian re-identification method based on multi-granularity pyramid intersection network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant