CN113887358B - Gait recognition method based on partial learning decoupling characterization - Google Patents


Info

Publication number
CN113887358B
Authority
CN
China
Prior art keywords: features, gait, feature, typical, pooling
Prior art date: 2021-09-23
Legal status
Active (granted)
Application number
CN202111113754.4A
Other languages
Chinese (zh)
Other versions
CN113887358A (en)
Inventor
Wen Xuezhi (文学志)
Gong Yizheng (龚逸正)
Current Assignee
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date: 2021-09-23
Filing date: 2021-09-23
Publication date: 2024-05-31
Application filed by Nanjing University of Information Science and Technology
Priority to CN202111113754.4A
Publication of CN113887358A
Application granted
Publication of CN113887358B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a gait recognition method based on partial learning decoupling characterization, which comprises the following steps: (1) decomposing the original image into appearance features, typical features and pose features with an auto-encoder; (2) extracting the typical features as static gait features using horizontal pyramid matching (HPM); (3) accumulating the pose features into dynamic gait features using the micro-motion capture module (MCM); (4) concatenating the static and dynamic gait features to realize the feature embedding operation. By extracting static and dynamic features separately with horizontal pyramid matching and the micro-motion capture module, the invention markedly improves the model's recognition performance at extreme viewing angles.

Description

Gait recognition method based on partial learning decoupling characterization
Technical Field
The invention relates to the technical field of gait recognition, in particular to a gait recognition method based on partial learning decoupling characterization.
Background
Gait recognition identifies a person's identity from his or her individual walking pattern by analyzing and processing moving image sequences that contain the person. Compared with existing biometric methods such as fingerprint, retina scanning and face recognition, it has notable advantages: it is contactless and largely unaffected by distance and environment. Model-free gait recognition methods are becoming increasingly popular thanks to the feature extraction mechanism of deep learning algorithms. Compared with model-based methods, model-free methods need no hand-designed feature extraction algorithm and thus reduce the computational cost such algorithms bring, but model-free representations such as the gait energy image (GEI) are sensitive to variations in irrelevant features such as clothing, carried objects and viewing angle.
To remedy these shortcomings of model-free methods, the auto-encoding framework GaitNet adopts disentangled representation learning, using the model's self-learning capacity to separate pose features from appearance features and thereby eliminate irrelevant features. However, the LSTM model in GaitNet carries many temporal features, which increases training difficulty and easily leads to over-learning, so it is not well suited to aggregating periodic gait features.
Disclosure of Invention
Purpose of the invention: in view of the above problems, the invention aims to provide a gait recognition method based on partial learning decoupling characterization.
Technical scheme: the gait recognition method based on partial learning decoupling characterization of the invention comprises the following steps:
(1) Decomposing the original image into appearance features, typical features and pose features with an auto-encoder;
(2) Extracting the typical features as static gait features using horizontal pyramid matching HPM;
(3) Accumulating the pose features into dynamic gait features using the micro-motion capture module MCM;
(4) Concatenating the static gait features and the dynamic gait features to realize the feature embedding operation.
Further, in step (2), extracting the typical features as static gait features using horizontal pyramid matching includes:
(201) Inputting the typical features into the HPM for pooling and mapping the feature maps of the different parts into fixed-length vectors with horizontal pyramids. Denote the typical feature by c and the number of pyramid scales by S; at the m-th scale the typical feature map f_c is divided into 2^{m-1} horizontal strips, for a total of $\sum_{m=1}^{S} 2^{m-1} = 2^{S} - 1$ parts. The (i, g)-th partial typical feature f_c^{i,g} is obtained using global average pooling GAP and global max pooling GMP:

$$ f_c^{i,g} = \operatorname{avgpool}(f_c^{i,g}) + \operatorname{maxpool}(f_c^{i,g}) $$

where i denotes the i-th pyramid scale and g the g-th strip of that scale; avgpool denotes global average pooling and maxpool denotes global max pooling;
(202) Defining a typical-consistency loss function L_{cons} to obtain typical features unique to the subject:

$$ L_{cons} = \frac{1}{2^{S}-1} \sum_{p=1}^{2^{S}-1} \left( \left\| c_{t_1,con}^{p} - c_{t_2,con}^{p} \right\|_2^{2} + \left\| \bar{c}_{con_1}^{p} - \bar{c}_{con_2}^{p} \right\|_2^{2} \right), \qquad \bar{c}_{con}^{p} = \frac{1}{h} \sum_{t=1}^{h} c_{t,con}^{p} $$

where c_{t,con}^{p} denotes the typical feature of the p-th part under condition con at time t, and h is the length of the gait sequence; the first term in the brackets ensures consistency of the typical features within the same sequence, and the second ensures consistency of the same subject's typical features under different conditions;
(203) Computing the mean of the typical feature c obtained by horizontal pyramid pooling, then applying a 1×1 convolution to reduce the dimension of c from D to d, giving the static gait feature F_{sta}:

$$ F_{sta} = \operatorname{conv}_{1\times1}\!\left( \frac{1}{h} \sum_{t=1}^{h} c_{t} \right) $$
Further, in step (3), accumulating the pose features into dynamic gait features using the micro-motion capture module MCM includes:
The pose feature matrix p contains the part-level pose features at each time t; p_j denotes the pose features of the j-th part over n time slices, i.e. p_j = \{ p_j^{1}, p_j^{2}, \dots, p_j^{n} \}.
(301) The micro-motion capture module MCM uses the micro-motion template builder MTB to extract frames from the pose features p_j, recorded as

$$ R_j(p_j, r) = \{ p_j^{t-r}, \dots, p_j^{t}, \dots, p_j^{t+r} \} $$

where t is the frame currently being processed, r is the distance from the farthest frame to the current frame, and k = 2r + 1 denotes the range of frames processed;
(302) The extracted frames R_j(p_j, r) are squeezed by the template function and re-weighted by an inner product to obtain the micro-motion feature M_j; the template function is the sum of one-dimensional average pooling and one-dimensional max pooling, and a channel attention mechanism is introduced through a one-dimensional convolution followed by a Sigmoid activation;
(303) The micro-motion features M_j are aggregated into the partial pose feature vector v_j by temporal pooling:
Suppose a gait cycle spans N video frames. A function f is needed that, acting on the features of the n time slices, gives the same result as on the N video frames; both the mean and the max statistics meet this requirement, but because the length of a gait video is uncertain and the gait cycles within a video differ, the max is used as the temporal pooling function, i.e.

$$ f(M_j(n)) = f(M_j(N)), \qquad v_j = \max_{1 \le t \le n} M_j(t) $$

where M_j(n) denotes the features over the n time slices and M_j(N) the features over the N video frames;
(304) The m partial pose feature vectors v_j are concatenated, and each part is mapped through a fully connected layer FC to another, higher-resolution feature space, forming the dynamic gait feature matrix F_{dyn}:

$$ F_{dyn} = \operatorname{FC}\left( [v_1, v_2, \dots, v_m] \right) $$
Further, in step (4), the concatenation formula is F = [F_{sta}, F_{dyn}].
Beneficial effects: compared with the prior art, the invention has notable advantages. It builds on and optimizes the GaitPart framework so that the model is better suited to RGB datasets, and it extracts static and dynamic features separately with horizontal pyramid matching and the micro-motion capture module, which markedly improves the model's recognition performance at extreme angles such as 0° and 180° and overcomes the over-learning problem of the LSTM.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of HPM structure;
FIG. 3 is a schematic flow chart of the process for acquiring the dynamic gait features.
Detailed Description
A flow chart of the gait recognition method based on partial learning decoupling characterization according to this embodiment is shown in FIG. 1; the method includes the following steps:
(1) Decompose the original image into appearance features, typical features and pose features with an auto-encoder, where the appearance features refer to clothing, backpacks, carried articles and the like, and the typical features refer to body shape, height, limb length and the like.
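As an illustration only, the following PyTorch sketch shows one way such a decomposition could be wired up: a shared convolutional backbone produces a feature map whose channels are split into appearance, typical and pose chunks. The class name, layer sizes and channel split are assumptions made for this example, not the patented network.

```python
import torch
import torch.nn as nn

class DisentanglingEncoder(nn.Module):
    """Sketch: split one feature map into appearance / typical / pose chunks."""
    def __init__(self, app_ch=32, typ_ch=32, pose_ch=32):
        super().__init__()
        total = app_ch + typ_ch + pose_ch
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, total, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.split = (app_ch, typ_ch, pose_ch)

    def forward(self, frame):                    # frame: (B, 3, H, W) RGB image
        fmap = self.backbone(frame)              # (B, total, H/4, W/4)
        a, c, p = torch.split(fmap, self.split, dim=1)
        return a, c, p                           # appearance, typical, pose features

a, c, p = DisentanglingEncoder()(torch.randn(2, 3, 64, 64))
```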
(2) Typical features are extracted as static gait features using horizontal pyramid matching HPM:
(201) Input the typical features into the HPM for pooling, mapping the feature maps of the different parts into fixed-length vectors with horizontal pyramids. Denote the typical feature by c and the number of pyramid scales by S; as shown in the HPM structure diagram of FIG. 2, H, W and D denote the height, width and depth of the feature map, respectively. At the m-th scale the typical feature map f_c is divided into 2^{m-1} horizontal strips, for a total of $\sum_{m=1}^{S} 2^{m-1} = 2^{S} - 1$ parts. The (i, g)-th partial typical feature f_c^{i,g} is obtained using global average pooling GAP and global max pooling GMP:

$$ f_c^{i,g} = \operatorname{avgpool}(f_c^{i,g}) + \operatorname{maxpool}(f_c^{i,g}) $$

where i denotes the i-th pyramid scale and g the g-th strip of that scale; avgpool denotes global average pooling and maxpool denotes global max pooling.
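The pooling of step (201) can be sketched as follows, assuming the typical feature map f_c is laid out as B × D × H × W: scale m slices the map into 2^{m-1} horizontal strips along H, and each strip is reduced by GAP + GMP exactly as in the formula above.

```python
import torch

def horizontal_pyramid_pool(f_c, S=4):
    """HPM pooling sketch: f_c is (B, D, H, W); returns (B, 2**S - 1, D)."""
    parts = []
    for m in range(1, S + 1):
        # split the height axis into 2**(m-1) horizontal strips
        for strip in torch.chunk(f_c, 2 ** (m - 1), dim=2):
            gap = strip.mean(dim=(2, 3))   # global average pooling
            gmp = strip.amax(dim=(2, 3))   # global max pooling
            parts.append(gap + gmp)
    return torch.stack(parts, dim=1)

parts = horizontal_pyramid_pool(torch.randn(2, 128, 64, 44))  # 15 parts for S=4
```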
(202) The typical features are body characteristics of the subject that do not change across frames or conditions; that is, the typical feature of subject k is consistent at any two times t_1 and t_2 of the same sequence, and consistent between two sequences under conditions con_1 and con_2. The 2^S - 1 parts must also be constrained separately, i.e. the final loss is the average over all parts. Define the typical-consistency loss function L_{cons} to obtain the subject's unique typical features:

$$ L_{cons} = \frac{1}{2^{S}-1} \sum_{p=1}^{2^{S}-1} \left( \left\| c_{t_1,con}^{p} - c_{t_2,con}^{p} \right\|_2^{2} + \left\| \bar{c}_{con_1}^{p} - \bar{c}_{con_2}^{p} \right\|_2^{2} \right), \qquad \bar{c}_{con}^{p} = \frac{1}{h} \sum_{t=1}^{h} c_{t,con}^{p} $$

where c_{t,con}^{p} denotes the typical feature of the p-th part under condition con at time t, and h is the length of the gait sequence; the first term in the brackets ensures consistency of the typical features within the same sequence, and the second ensures consistency of the same subject's typical features under different conditions.
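A small sketch of this loss follows; the tensor layout (h frames × P parts × D dimensions per condition) is an assumption, and the within-sequence term is implemented as the deviation of each frame from the sequence mean, a common variant of comparing pairs of frames.

```python
import torch

def typical_consistency_loss(c1, c2):
    """c1, c2: (h, P, D) part-level typical features of one subject
    under conditions con_1 and con_2 (shapes are assumptions)."""
    m1, m2 = c1.mean(dim=0), c2.mean(dim=0)   # sequence means, shape (P, D)
    # first term: typical features should not drift within a sequence
    within = ((c1 - m1) ** 2).sum(-1).mean() + ((c2 - m2) ** 2).sum(-1).mean()
    # second term: sequence means should agree across conditions
    across = ((m1 - m2) ** 2).sum(-1).mean()
    return within + across
```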
(203) The typical feature c obtained by horizontal pyramid pooling is nominally a fixed value over a sequence, so compute its mean over the sequence, then apply a 1×1 convolution to reduce its dimension from D to d, giving the static gait feature F_{sta}:

$$ F_{sta} = \operatorname{conv}_{1\times1}\!\left( \frac{1}{h} \sum_{t=1}^{h} c_{t} \right) $$
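Step (203) reduces to a mean over time followed by a 1×1 convolution; the sketch below assumes part features stored as (B, h, P, D) and illustrative sizes D = 128, d = 64.

```python
import torch
import torch.nn as nn

D, d = 128, 64                                 # assumed feature dimensions
reduce_dim = nn.Conv1d(D, d, kernel_size=1)    # the 1x1 convolution

def static_feature(c_seq):
    """c_seq: (B, h, P, D) typical part features over h frames -> F_sta: (B, P, d)."""
    c = c_seq.mean(dim=1)                      # mean over the gait sequence: (B, P, D)
    return reduce_dim(c.transpose(1, 2)).transpose(1, 2)

F_sta = static_feature(torch.randn(2, 30, 15, D))
```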
(3) Accumulate the pose features into dynamic gait features with the micro-motion capture module MCM:
The pose feature matrix p contains the part-level pose features at each time t; p_j denotes the pose features of the j-th part over n time slices, i.e. p_j = \{ p_j^{1}, p_j^{2}, \dots, p_j^{n} \}.
(301) The micro-motion capture module MCM uses the micro-motion template builder MTB to extract frames from the pose features p_j, recorded as

$$ R_j(p_j, r) = \{ p_j^{t-r}, \dots, p_j^{t}, \dots, p_j^{t+r} \} $$

where t is the frame currently being processed, r is the distance from the farthest frame to the current frame, and k = 2r + 1 denotes the range of frames processed. As shown in FIG. 3, the j-th MTB is processing the second frame of the feature sequence with r = 1, and the blank squares represent padding with zero vectors.
(302) The extracted frames R_j(p_j, r) are squeezed by the template function and re-weighted by an inner product to obtain the micro-motion feature M_j; the template function is the sum of one-dimensional average pooling and one-dimensional max pooling, and a channel attention mechanism is introduced through a one-dimensional convolution followed by a Sigmoid activation.
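Steps (301) and (302) together amount to a windowed temporal squeeze plus channel attention. A hedged PyTorch sketch of one MTB follows, with p_j assumed to be (B, n, C); note that torch's max pooling pads with negative infinity rather than the zero vectors of FIG. 3, a simplification accepted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MicroMotionTemplateBuilder(nn.Module):
    """MTB sketch: window radius r, template = 1-D avg + max pooling,
    channel attention from a 1-D convolution and a Sigmoid."""
    def __init__(self, channels, r=1):
        super().__init__()
        self.r = r
        self.attn = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, p_j):                   # p_j: (B, n, C)
        x = p_j.transpose(1, 2)               # (B, C, n)
        k = 2 * self.r + 1                    # processed frame range
        template = (F.avg_pool1d(x, k, stride=1, padding=self.r)
                    + F.max_pool1d(x, k, stride=1, padding=self.r))
        m_j = template * self.attn(template)  # channel re-weighting (inner product)
        return m_j.transpose(1, 2)            # micro-motion feature M_j: (B, n, C)

m_j = MicroMotionTemplateBuilder(channels=64, r=1)(torch.randn(2, 30, 64))
```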
(303) The micro-motion features M_j are aggregated into the partial pose feature vector v_j by temporal pooling:
Suppose a gait cycle spans N video frames. A function f is needed that, acting on the features of the n time slices, gives the same result as on the N video frames; both the mean and the max statistics meet this requirement, but because the length of a gait video is uncertain and the gait cycles within a video may differ, the max is used as the temporal pooling function, i.e.

$$ f(M_j(n)) = f(M_j(N)), \qquad v_j = \max_{1 \le t \le n} M_j(t) $$

where M_j(n) denotes the features over the n time slices and M_j(N) the features over the N video frames.
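Continuing the sketch, step (303) is a single max over the time axis, which makes the pooled vector independent of the clip length n:

```python
v_j = m_j.max(dim=1).values   # (B, n, C) -> (B, C), the partial pose vector
```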
(304) The m partial pose feature vectors v_j are concatenated, and each part is mapped through a fully connected layer FC to another, higher-resolution feature space, forming the dynamic gait feature matrix F_{dyn}:

$$ F_{dyn} = \operatorname{FC}\left( [v_1, v_2, \dots, v_m] \right) $$
(4) Concatenate the static gait features and the dynamic gait features to realize the feature embedding operation; the concatenation formula is F = [F_{sta}, F_{dyn}].
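To close the loop, a hedged sketch of steps (304) and (4): one fully connected layer per part (an assumption; a single shared layer would also fit the text) produces F_dyn, which is then concatenated with the flattened F_sta. All sizes are illustrative.

```python
import torch
import torch.nn as nn

m_parts, C, out_dim = 15, 64, 128              # assumed part count and dimensions
fcs = nn.ModuleList(nn.Linear(C, out_dim) for _ in range(m_parts))

def embed(v_list, F_sta):
    """v_list: m tensors of shape (B, C); F_sta: (B, P, d). Returns F = [F_sta, F_dyn]."""
    F_dyn = torch.stack([fc(v) for fc, v in zip(fcs, v_list)], dim=1)  # (B, m, out_dim)
    return torch.cat([F_sta.flatten(1), F_dyn.flatten(1)], dim=1)

F = embed([torch.randn(2, C) for _ in range(m_parts)], torch.randn(2, 15, 64))
```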

Claims (2)

1. A gait recognition method based on partial learning decoupling characterization, characterized by comprising the following steps:
(1) decomposing the original image into appearance features, typical features and pose features with an auto-encoder;
(2) extracting the typical features as static gait features using horizontal pyramid matching HPM;
(3) accumulating the pose features into dynamic gait features using the micro-motion capture module MCM;
(4) concatenating the static gait features and the dynamic gait features to realize a feature embedding operation;
In step (2), extracting the typical features as static gait features using horizontal pyramid matching includes:
(201) Inputting the typical features into the HPM for pooling and mapping the feature maps of the different parts into fixed-length vectors with horizontal pyramids. Denote the typical feature by c and the number of pyramid scales by S; at the m-th scale the typical feature map f_c is divided into 2^{m-1} horizontal strips, for a total of $\sum_{m=1}^{S} 2^{m-1} = 2^{S} - 1$ parts. The (i, g)-th partial typical feature f_c^{i,g} is obtained using global average pooling GAP and global max pooling GMP:

$$ f_c^{i,g} = \operatorname{avgpool}(f_c^{i,g}) + \operatorname{maxpool}(f_c^{i,g}) $$

where i denotes the i-th pyramid scale and g the g-th strip of that scale; avgpool denotes global average pooling and maxpool denotes global max pooling;
(202) Defining a typical-consistency loss function L_{cons} to obtain typical features unique to the subject:

$$ L_{cons} = \frac{1}{2^{S}-1} \sum_{p=1}^{2^{S}-1} \left( \left\| c_{t_1,con}^{p} - c_{t_2,con}^{p} \right\|_2^{2} + \left\| \bar{c}_{con_1}^{p} - \bar{c}_{con_2}^{p} \right\|_2^{2} \right), \qquad \bar{c}_{con}^{p} = \frac{1}{h} \sum_{t=1}^{h} c_{t,con}^{p} $$

where c_{t,con}^{p} denotes the typical feature of the p-th part under condition con at time t, and h is the length of the gait sequence;
(203) Computing the mean of the typical feature c obtained by horizontal pyramid pooling, then applying a 1×1 convolution to reduce the dimension of c from D to d, giving the static gait feature F_{sta}:

$$ F_{sta} = \operatorname{conv}_{1\times1}\!\left( \frac{1}{h} \sum_{t=1}^{h} c_{t} \right) $$
In step (3), accumulating the pose features into dynamic gait features using the micro-motion capture module MCM includes:
the pose feature matrix p contains the part-level pose features at each time t, and p_j denotes the pose features of the j-th part over n time slices, i.e. p_j = \{ p_j^{1}, p_j^{2}, \dots, p_j^{n} \};
(301) The micro-motion capture module MCM uses the micro-motion template builder MTB to extract frames from the pose features p_j, recorded as

$$ R_j(p_j, r) = \{ p_j^{t-r}, \dots, p_j^{t}, \dots, p_j^{t+r} \} $$

where t is the frame currently being processed, r is the distance from the farthest frame to the current frame, and k = 2r + 1 denotes the range of frames processed;
(302) The extracted frames R_j(p_j, r) are squeezed by the template function and re-weighted by an inner product to obtain the micro-motion feature M_j; the template function is the sum of one-dimensional average pooling and one-dimensional max pooling, and a channel attention mechanism is introduced through a one-dimensional convolution followed by a Sigmoid activation;
(303) The micro-motion features M_j are aggregated into the partial pose feature vector v_j by temporal pooling:
supposing a gait cycle spans N video frames, a function f is needed that, acting on the features of the n time slices, gives the same result as on the N video frames; both the mean and the max statistics meet this requirement, but because the length of a gait video is uncertain and the gait cycles within a video differ, the max is used as the temporal pooling function, i.e.

$$ f(M_j(n)) = f(M_j(N)), \qquad v_j = \max_{1 \le t \le n} M_j(t) $$

where M_j(n) denotes the features over the n time slices and M_j(N) the features over the N video frames;
(304) The m partial pose feature vectors v_j are concatenated, and each part is mapped through a fully connected layer FC to another, higher-resolution feature space, forming the dynamic gait feature matrix F_{dyn}:

$$ F_{dyn} = \operatorname{FC}\left( [v_1, v_2, \dots, v_m] \right) $$
2. The gait recognition method according to claim 1, characterized in that, in step (4), the concatenation formula is F = [F_{sta}, F_{dyn}].

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202111113754.4A | 2021-09-23 | 2021-09-23 | Gait recognition method based on partial learning decoupling characterization


Publications (2)

Publication Number | Publication Date
CN113887358A | 2022-01-04
CN113887358B | 2024-05-31

Family

Family ID: 79010231

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202111113754.4A (Active) | Gait recognition method based on partial learning decoupling characterization | 2021-09-23 | 2021-09-23

Country Status (1)

Country | Link
CN | CN113887358B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN115205983B * | 2022-09-14 | 2022-12-02 | Wuhan University | Cross-perspective gait recognition method, system and equipment based on multi-feature aggregation

Citations (5)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN110414426A * | 2019-07-26 | 2019-11-05 | Xidian University | A pedestrian gait classification method based on PC-IRNN
CN110969087A * | 2019-10-31 | 2020-04-07 | Advanced Institute of Information Technology, Peking University (Zhejiang) | Gait recognition method and system
CN111539320A * | 2020-04-22 | 2020-08-14 | Shandong University | Multi-view gait recognition method and system based on a mutual learning network strategy
CN112861605A * | 2020-12-26 | 2021-05-28 | Jiangsu University | Multi-person gait recognition method based on spatio-temporal mixed features
CN113177464A * | 2021-04-27 | 2021-07-27 | Zhejiang Gongshang University | End-to-end multi-modal gait recognition method based on deep learning

Family Cites Families (2)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
GB201113143D0 * | 2011-07-29 | 2011-09-14 | Univ Ulster | Gait recognition methods and systems
US11315363B2 * | 2020-01-22 | 2022-04-26 | Board of Trustees of Michigan State University | Systems and methods for gait recognition via disentangled representation learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Gait recognition based on image spatial pyramid SURF-BoW (基于图像空间金字塔SURF-BoW的步态识别); Shi Dongcheng, Jia Lingyao, Liang Chao, Wang Xinying; Computer Engineering (计算机工程); 2017-09-15, No. 09; full text *


Similar Documents

Publication Publication Date Title
WO2022036777A1 (en) Method and device for intelligent estimation of human body movement posture based on convolutional neural network
JP6873600B2 (en) Image recognition device, image recognition method and program
CN110348330B (en) Face pose virtual view generation method based on VAE-ACGAN
CN107341452B (en) Human behavior identification method based on quaternion space-time convolution neural network
CN111639692A (en) Shadow detection method based on attention mechanism
Ou et al. Automatic facial expression recognition using Gabor filter and expression analysis
CN112364757B (en) Human body action recognition method based on space-time attention mechanism
CN111797683A (en) Video expression recognition method based on depth residual error attention network
Caroppo et al. Comparison between deep learning models and traditional machine learning approaches for facial expression recognition in ageing adults
Jalal et al. Daily human activity recognition using depth silhouettes and transformation for smart home
CN107767416B (en) Method for identifying pedestrian orientation in low-resolution image
Shirke et al. Literature review: Model free human gait recognition
CN109376787B (en) Manifold learning network and computer vision image set classification method based on manifold learning network
CN111523377A (en) Multi-task human body posture estimation and behavior recognition method
Ali et al. Object recognition for dental instruments using SSD-MobileNet
CN113887358B (en) Gait recognition method based on partial learning decoupling characterization
CN111967358B (en) Neural network gait recognition method based on attention mechanism
Bose et al. In-situ recognition of hand gesture via Enhanced Xception based single-stage deep convolutional neural network
CN111062308A (en) Face recognition method based on sparse expression and neural network
Abdu-Aguye et al. VersaTL: Versatile Transfer Learning for IMU-based Activity Recognition using Convolutional Neural Networks.
CN116895098A (en) Video human body action recognition system and method based on deep learning and privacy protection
CN116363535A (en) Ship detection method in unmanned aerial vehicle aerial image based on convolutional neural network
CN108764233B (en) Scene character recognition method based on continuous convolution activation
Özbay et al. 3D Human Activity Classification with 3D Zernike Moment Based Convolutional, LSTM-Deep Neural Networks.
Khan et al. Feature extraction and dimensions reduction using R transform and principal component analysis for abnormal human activity recognition

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant