CN110110668B - Gait recognition method based on feedback weight convolutional neural network and capsule neural network - Google Patents

Gait recognition method based on feedback weight convolutional neural network and capsule neural network

Info

Publication number
CN110110668B
Authority
CN
China
Prior art keywords
neural network
vector
input image
gait
formula
Prior art date
Legal status
Active
Application number
CN201910380823.4A
Other languages
Chinese (zh)
Other versions
CN110110668A (en)
Inventor
吴亚联
侯健
苏永新
吴呈呈
黄盟标
赵鑫
朱紫琦
Current Assignee
Xiangtan University
Original Assignee
Xiangtan University
Priority date
Filing date
Publication date
Application filed by Xiangtan University filed Critical Xiangtan University
Priority to CN201910380823.4A priority Critical patent/CN110110668B/en
Publication of CN110110668A publication Critical patent/CN110110668A/en
Application granted granted Critical
Publication of CN110110668B publication Critical patent/CN110110668B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • G06V40/25Recognition of walking or running movements, e.g. gait recognition


Abstract

The invention discloses a gait recognition method based on a feedback weight convolutional neural network and a capsule neural network, which comprises the following steps: taking a pair of gait energy maps as the input of the network; matching features of the input images from the bottom layer; extracting gait features of the input images with a convolutional neural network; updating the input images by pixel-level feedback weights; reshaping the data with a primary capsule neural network; and outputting the similarity of the images using an improved capsule neural network. The method is robust on small data sets, effectively reflects the importance of different body parts to gait recognition accuracy, represents entities as vectors so as to preserve the equivariance of gait features, and effectively improves the accuracy of cross-view gait recognition.

Description

Gait recognition method based on feedback weight convolutional neural network and capsule neural network
Technical Field
The invention relates to the field of deep learning, computer vision and pattern recognition, in particular to a gait recognition method based on a feedback weight convolution neural network and a capsule neural network.
Background
Scientific research on identity recognition has attracted considerable attention due to rising crime rates and serious security concerns. Gait recognition, like iris recognition, face recognition and fingerprint recognition, is a biometric identification technology: it recognizes the identity of a person from gait information. Gait recognition has the advantages of working at long range, requiring no cooperation from the recognition target, and being difficult to disguise. This gives it a wide range of applications, including scientific research, traffic, and criminal surveillance.
Deep learning has been highly successful in the field of visual recognition. Some gait recognition methods directly use the output of a fully connected layer in a convolutional neural network as the feature representation, which hardly highlights the importance of different body parts to gait recognition accuracy. Moreover, a convolutional neural network is inherently a lossy process: max pooling retains only the most prominent features, which avoids overfitting but discards information that may play an important role in later layers of the network.
Deep learning methods in gait recognition face three problems. First, deep learning requires a large amount of training data per class to train a high-precision classifier, but in practice the available training data is usually small. Second, the training data must cover all covariates to ensure robustness, a condition it rarely meets. Finally, due to the influence of covariates in gait recognition, intra-class variance is sometimes greater than inter-class variance.
Disclosure of Invention
In order to overcome the problems in the related art, the invention provides a gait recognition method based on a feedback weight convolutional neural network and a capsule neural network.
The invention aims to provide a robust cross-perspective gait recognition model trained under the condition of limited data set.
The invention adopts the technical scheme that the gait recognition method based on the feedback weight convolution neural network and the capsule neural network comprises the following steps:
inputting a gait energy map;
matching features of the input image from the bottom layer;
extracting gait features of an input image by a convolutional neural network;
updating the input image by pixel-level feedback weights;
reshaping the data shape using a primary capsule neural network;
outputting a similarity of images using the improved capsule neural network;
the method for extracting the gait features of the input image by the convolutional neural network comprises the following steps:
performing a convolution operation on the input image to obtain a feature map m;
carrying out batch normalization;
performing pooling on the convolved data;
wherein updating the input image by pixel-level feedback weights comprises the steps of:
extracting a feature vector v from the feature map m;
training a vector function f according to the feature vector v;
converting the feature vector v into a weighting matrix F according to the vector function f;
taking the matrix F as the weight of each receptive field of the input image;
updating the input image by the weighted receptive fields;
retraining the network with the updated input image;
wherein the reshaping of the data shape using the primary capsule neural network comprises the steps of:
continuously extracting gait features of the image using 8 different 2D convolutional neural networks;
reshaping the data and linking it into capsule neurons with a vector dimension of 8;
wherein outputting the similarity of the images using the improved capsule neural network comprises:
performing an affine transformation on the input vectors, with the formula:
û_(j|i) = W_ij · u_i
in the formula, u_i is the input vector representing a feature i in the image, j is an entity in the image, û_(j|i) is the position vector of the entity j derived from the feature i, and W_ij is the affine matrix that accomplishes this process;
computing a weighted sum of the affine-transformed vectors û_(j|i), with the formula:
s_j = Σ_i c_ij · û_(j|i)
in the formula, c_ij is a weight; the position vectors of different features corresponding to the same entity are multiplied by the weights c_ij and then summed to form the weighted output vector s_j;
performing a scaling operation on the vector s_j, with the formula:
v_j = s_j / (||s_j||² + ε)^A
in the formula, ε is a constant, usually taking a value in [0, 1]; to prevent the output from being 0, ε takes the value 10⁻⁸; A is a constant that amplifies the vector norm and takes the value 0.5; the capsule neural network measures the probability that an entity is present by the modulus of the vector: the larger the modulus of v_j, the larger the probability that the entity j is present; with this formula as the compression function, the sum of the moduli of the output vectors may be greater than 1, which can improve the recognition accuracy of an entity in gait recognition;
updating the weights c_ij by a dynamic routing algorithm, with the formulas:
c_ij = exp(b_ij) / Σ_k exp(b_ik)
b_ij ← b_ij + û_(j|i) · v_j
in the formula, b_ij is a coupling coefficient; when û_(j|i) and v_j are similar, their dot product is positive, the corresponding b_ij becomes larger, and the weight c_ij of û_(j|i) becomes larger; the number of iterations is set to 3;
outputting the similarity of the images according to the modulus of the vector v_j;
the loss function for training the capsule neural network is as follows:
L_k = T_k · max(0, m⁺ − ||v_k||)² + λ(1 − T_k) · max(0, ||v_k|| − m⁻)²
in the formula, k is a class; T_k is the indicator function of the class (if class k is present, T_k is 1, otherwise T_k is 0); m⁺ is the upper margin penalty, penalizing false negatives (the case where an existing class is not predicted); m⁻ is the lower margin penalty, penalizing false positives (the case where a non-existent class is predicted); λ is a scaling factor.
The invention relates to a gait recognition method based on a feedback weight convolutional neural network and a capsule neural network. It updates the input image with a feedback weight method, which to some extent alleviates the problems of small training data sets, many covariates, and intra-class variance exceeding inter-class variance, and effectively reflects the importance of different body parts to gait recognition accuracy. At the same time, it incorporates an improved capsule neural network that represents entities as vectors: it not only learns the attributes of an entity, as a convolutional neural network does, but also preserves the relationships among those attributes, maintaining the equivariance of gait features and effectively improving the accuracy of cross-view gait recognition.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, serve to explain the invention and not to limit the invention.
FIG. 1 illustrates a flow chart of a method of gait recognition based on feedback weight convolutional neural network and capsule neural network according to an embodiment of the invention;
fig. 2 is a flowchart illustrating a gait recognition method based on a feedback weight convolutional neural network and a capsule neural network according to a preferred embodiment of the present invention.
Detailed Description
The present invention will now be described in detail with reference to the drawings, which are given by way of illustration and explanation only and should not be construed to limit the scope of the present invention in any way. Furthermore, features from embodiments in this document and from different embodiments may be combined accordingly by a person skilled in the art from the description in this document.
Fig. 1 is a flowchart of a gait recognition method based on a feedback weight convolutional neural network and a capsule neural network according to an embodiment of the invention, which includes the following steps:
s10, inputting a gait energy map;
s20, matching the characteristics of the input image from the bottom layer;
s30, extracting the gait characteristics of the input image by the convolutional neural network;
s40, updating the input image by the pixel-level feedback weight;
s50, reshaping the data shape by the primary capsule neural network;
s60, similarity of the output images of the improved capsule neural network.
Deep learning has been highly successful in the field of visual recognition, and some gait recognition works directly use the output of the fully connected layer as the feature representation, which hardly highlights the importance of different body parts to gait recognition accuracy. Moreover, a convolutional neural network is inherently a lossy process: max pooling retains only the most prominent features, which avoids overfitting but discards information that may play a more important role elsewhere in the network.
The embodiment is a gait recognition method based on a feedback weight convolutional neural network and a capsule neural network, solves the problems of small training data set, more covariates and intra-class variance larger than inter-class variance to a certain extent, and can effectively reflect the importance of different body parts to gait recognition. Meanwhile, the improved capsule neural network is combined, so that not only can the attributes of the entity be learned like a convolutional neural network, but also the relationship between each attribute in the entity is reserved.
Fig. 2 is a flowchart illustrating a gait recognition method based on a feedback weight convolutional neural network and a capsule neural network according to a preferred embodiment of the present invention.
Preferably, as shown in fig. 2, S30 includes the steps of:
A. The convolutional neural network extracts gait features of the input image, comprising the following steps:
a) performing a convolution operation on the input image to obtain a feature map m:
the input gait energy map is I, with width w and height h; the convolution kernel K has width k_w and height k_h, and the stride is 1; the response at position (x, y) of the feature map is:
m(x, y) = Σ_i Σ_j I(x + i, y + j) · K(i, j) + b
in the formula, I(x + i, y + j) denotes the pixel value of the input image at position (x + i, y + j), K(i, j) denotes the value of the convolution kernel K at position (i, j), and b denotes the bias;
b) batch normalization:
reducing the shift in the distribution of internal activations and the differences between the value ranges of different samples, which accelerates the convergence of the network and improves its generalization ability;
c) pooling:
applying max pooling to compress the feature map m, reducing its size and the computational complexity of the network, and compressing the features to extract the main features.
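The convolution response and the max-pooling step above can be sketched as follows; the 4×4 input and the 2×2 averaging kernel are toy values chosen for illustration, not the patent's actual network dimensions:

```python
import numpy as np

def conv2d_valid(img, kernel, bias=0.0):
    """Stride-1 valid convolution: m(x, y) = sum_ij img[x+i, y+j] * kernel[i, j] + bias."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            out[x, y] = np.sum(img[x:x + kh, y:y + kw] * kernel) + bias
    return out

def max_pool2x2(m):
    """2x2 max pooling with stride 2, trimming odd edges."""
    h, w = m.shape
    return m[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)  # toy stand-in for a gait energy map
kernel = np.ones((2, 2)) / 4.0                  # averaging kernel, for illustration
m = conv2d_valid(img, kernel)                   # feature map, shape (3, 3)
pooled = max_pool2x2(m)                         # shape (1, 1)
```

With this input, `m[0, 0]` is (0 + 1 + 4 + 5) / 4 = 2.5, and the pooled value is the maximum over the trimmed 2×2 region of m.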
Preferably, as shown in fig. 2, S40 includes the steps of:
B. Updating the input image by pixel-level feedback weights, comprising:
a) extracting a feature vector v from the feature map m:
the extracted feature vector is v = (v_1, v_2, …, v_n), and the corresponding receptive fields are r_1, r_2, …, r_n, where n is the number of pixels;
b) training a vector function f according to the feature vector, approximating f by a linear function:
f(v) ≈ A·v + b
where A is a linear matrix and b is a bias;
c) converting the feature vector v into a weighting matrix F according to the vector function f;
d) taking the matrix F as the weight of each receptive field of the input image;
e) updating the input image by the weighted receptive fields;
f) retraining the network with the updated input image.
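Steps a)–f) can be sketched as below. The patent approximates the vector function f by a linear map with matrix A and bias b but does not specify how they are trained, so random values stand in for trained parameters here; the 16-pixel image size is also an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 16                                  # number of pixels / receptive fields (assumed)
v = rng.standard_normal(n)              # feature vector extracted from feature map m

# Linear approximation of the vector function f: f(v) ~ A v + b.
# A and b stand in for trained parameters in this sketch.
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Convert the feature vector into a pixel-level weighting matrix F.
F = (A @ v + b).reshape(4, 4)

# Use F as the weight of each receptive field (one pixel per field here)
# and update the input image with the weighted receptive fields.
image = rng.standard_normal((4, 4))
updated_image = F * image
```

The updated image would then be fed back into the network for retraining, as in step f).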
Preferably, as shown in fig. 2, S60 includes the steps of:
C. The improved capsule neural network outputs the similarity of the images, comprising:
a) performing an affine transformation on the input vectors, with the formula:
û_(j|i) = W_ij · u_i
in the formula, u_i is the input vector representing a feature i in the image, j is an entity in the image, û_(j|i) is the position vector of the entity j derived from the feature i, and W_ij is the affine matrix that accomplishes this process;
b) computing a weighted sum of the affine-transformed vectors û_(j|i), with the formula:
s_j = Σ_i c_ij · û_(j|i)
in the formula, c_ij is a weight; the position vectors of different features corresponding to the same entity are multiplied by the weights c_ij and then summed to form the weighted output vector s_j;
c) performing a scaling operation on the vector s_j, with the formula:
v_j = s_j / (||s_j||² + ε)^A
in the formula, ε is a constant, usually taking a value in [0, 1]; to prevent the output from being 0, ε takes the value 10⁻⁸; A is a constant that amplifies the vector norm and takes the value 0.5; the capsule neural network measures the probability that an entity is present by the modulus of the vector: the larger the modulus of v_j, the larger the probability that the entity j is present; with this formula as the compression function, the sum of the moduli of the output vectors may be greater than 1, which can improve the recognition accuracy of an entity in gait recognition;
d) updating the weights c_ij by a dynamic routing algorithm, with the formulas:
c_ij = exp(b_ij) / Σ_k exp(b_ik)
b_ij ← b_ij + û_(j|i) · v_j
in the formula, b_ij is a coupling coefficient; when û_(j|i) and v_j are similar, their dot product is positive, the corresponding b_ij becomes larger, and the weight c_ij of û_(j|i) increases; the number of iterations is set to 3;
e) outputting the similarity of the images according to the modulus of the vector v_j;
f) the loss function for training the capsule neural network is as follows:
L_k = T_k · max(0, m⁺ − ||v_k||)² + λ(1 − T_k) · max(0, ||v_k|| − m⁻)²
in the formula, k is a class; T_k is the indicator function of the class (if class k is present, T_k is 1, otherwise T_k is 0); m⁺ is the upper margin penalty, penalizing false negatives (the case where an existing class is not predicted); m⁻ is the lower margin penalty, penalizing false positives (the case where a non-existent class is predicted); λ is a scaling factor.
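A numpy sketch of the scaling, dynamic routing, and loss computation described above. The scaling function is written as v = s / (||s||² + ε)^A, one reconstruction consistent with the stated constants ε = 10⁻⁸ and A = 0.5; the margin values m⁺ = 0.9, m⁻ = 0.1 and λ = 0.5 follow common capsule-network practice and are assumptions, since the document does not fix them:

```python
import numpy as np

def squash(s, eps=1e-8, A=0.5):
    """Scaling step: v = s / (||s||^2 + eps)^A, with eps preventing a zero output."""
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return s / (norm_sq + eps) ** A

def dynamic_routing(u_hat, iterations=3):
    """u_hat: (num_in, num_out, dim) affine-transformed prediction vectors û_(j|i)."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))        # coupling coefficients b_ij
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # c_ij: softmax over j
        s = np.einsum('ij,ijd->jd', c, u_hat)                 # weighted sum s_j
        v = squash(s)                                         # scaled output v_j
        b = b + np.einsum('ijd,jd->ij', u_hat, v)             # agreement update
    return v

def margin_loss(v, T, m_plus=0.9, m_minus=0.1, lam=0.5):
    """L_k = T_k max(0, m+ - ||v_k||)^2 + lam (1 - T_k) max(0, ||v_k|| - m-)^2."""
    norms = np.linalg.norm(v, axis=-1)
    return (T * np.maximum(0, m_plus - norms) ** 2
            + lam * (1 - T) * np.maximum(0, norms - m_minus) ** 2).sum()

rng = np.random.default_rng(0)
u_hat = rng.standard_normal((6, 2, 8))  # 6 input capsules, 2 output entities, dim 8
v = dynamic_routing(u_hat)              # three routing iterations, as in the patent
loss = margin_loss(v, T=np.array([1.0, 0.0]))
```

With A = 0.5, each output vector's modulus approaches but never reaches 1, so the modulus can be read directly as an entity-presence score.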
Compared with the prior art, the embodiment of the invention has the advantages that:
the invention relates to a gait recognition method based on a feedback weight convolutional neural network and a capsule neural network, which adopts a feedback weight method to update an input image, solves the problems of smaller training data set, more covariates and larger intra-class variance than inter-class variance to a certain extent, can effectively reflect the importance of different parts of a body to the gait recognition accuracy, simultaneously combines an improved capsule neural network and represents an entity in a vector mode, not only learns the attributes of the entity like the convolutional neural network, but also maintains the relationship among each attribute in the entity, maintains the isovariability of gait characteristics, and effectively improves the accuracy of cross-view gait recognition.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It will be understood that the invention is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the invention is limited only by the appended claims.

Claims (1)

1. A gait recognition method based on a feedback weight convolution neural network and a capsule neural network is characterized by comprising the following steps:
inputting a gait energy map;
matching features of the input image from the bottom layer;
extracting gait features of an input image by a convolutional neural network;
updating the input image by pixel-level feedback weights;
reshaping the data shape using a primary capsule neural network;
outputting the similarity of the images using an improved capsule neural network;
the method for extracting the gait features of the input image by the convolutional neural network comprises the following steps:
performing convolution operation on an input image to obtain a feature map m;
carrying out batch standardization;
performing pooling treatment on the convolved data;
wherein updating the input image by pixel-level feedback weights comprises the steps of:
extracting a feature vector v from the feature map m;
training a vector function f according to the feature vector v;
converting the eigenvector v into a weighting matrix F according to a vector function F;
taking the matrix F as the weight of each receptive field of the input image;
updating the input image by the weighted receptive field;
retraining the network with the updated input image;
wherein the reshaping of the data shape using the primary capsule neural network comprises the steps of:
continuously extracting gait features of the image by using 8 different 2D convolutional neural networks;
remodeling the data shape and linking the data shape into a capsule neuron with a vector dimension of 8;
wherein outputting the similarity of the images using the improved capsule neural network comprises:
affine transformation is performed on the input vector, and the formula is as follows:
û_(j|i) = W_ij · u_i
in the formula, u_i is an input vector representing a feature i in an image, j is an entity in the image, û_(j|i) is the position vector of the entity j derived from the feature i, and W_ij is the affine matrix that accomplishes this process;
the affine-transformed vectors û_(j|i) are weighted and summed, and the formula is as follows:
s_j = Σ_i c_ij · û_(j|i)
in the formula, c_ij is a weight; the position vectors of different features corresponding to the same entity are multiplied by the weight c_ij and then summed to form a weighted output vector s_j;
a scaling operation is performed on the vector s_j, and the formula is as follows:
v_j = s_j / (||s_j||² + ε)^A
in the formula, ε is a constant, typically taking a value in [0, 1]; to prevent the output from being 0, ε takes the value 10⁻⁸; A is a constant that amplifies the vector norm and takes the value 0.5; the capsule neural network measures the probability of an entity by the modulus of the vector: the larger the modulus of v_j, the larger the probability that the entity j is present; with this formula as the compression function, the sum of the moduli of the output vectors may be greater than 1, which can improve the recognition accuracy of an entity in gait recognition;
the weight c_ij is updated by a dynamic routing algorithm, and the formulas are as follows:
c_ij = exp(b_ij) / Σ_k exp(b_ik)
b_ij ← b_ij + û_(j|i) · v_j
in the formula, b_ij is a coupling coefficient; when û_(j|i) and v_j are similar, their dot product is positive, the corresponding b_ij becomes larger, and the weight c_ij of û_(j|i) becomes larger; the number of iterations is selected to be 3;
the similarity of the images is output according to the modulus of the vector v_j;
the loss function for training the capsule neural network is as follows:
L_k = T_k · max(0, m⁺ − ||v_k||)² + λ(1 − T_k) · max(0, ||v_k|| − m⁻)²
in the formula, k is a class; T_k is the indicator function of the class: if class k exists, T_k is 1, otherwise T_k is 0; m⁺ is the upper margin penalty, penalizing false negatives (the case where an existing class is not predicted); m⁻ is the lower margin penalty, penalizing false positives (the case where a non-existent class is predicted); λ is a scaling factor.
CN201910380823.4A 2019-05-08 2019-05-08 Gait recognition method based on feedback weight convolutional neural network and capsule neural network Active CN110110668B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910380823.4A CN110110668B (en) 2019-05-08 2019-05-08 Gait recognition method based on feedback weight convolutional neural network and capsule neural network


Publications (2)

Publication Number Publication Date
CN110110668A CN110110668A (en) 2019-08-09
CN110110668B true CN110110668B (en) 2022-05-17

Family

ID=67488858





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant