CN111985332A - Gait recognition method for improving loss function based on deep learning - Google Patents

Gait recognition method for improving loss function based on deep learning

Info

Publication number
CN111985332A
CN111985332A (application CN202010696163.3A)
Authority
CN
China
Prior art keywords
loss function
gait
training
network
image
Prior art date
Legal status
Granted
Application number
CN202010696163.3A
Other languages
Chinese (zh)
Other versions
CN111985332B (en)
Inventor
胡海根
汪鹏飞
吴泽成
周乾伟
李小薪
钱汉望
Current Assignee
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT
Priority to CN202010696163.3A
Publication of CN111985332A
Application granted
Publication of CN111985332B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G06V40/25 Recognition of walking or running movements, e.g. gait recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

A gait recognition method based on deep learning with an improved loss function comprises the following steps: step 1, acquiring a pedestrian gait data set; step 2, preprocessing the training data obtained in step 1 and cropping each image to 64 × 64 using the center-line principle; step 3, building a deep convolutional neural network; step 4, designing a loss function; step 5, initializing the neural network parameters; step 6, training the constructed network by feeding the training samples obtained in step 2 into the network in batches, with the corresponding true identity labels as targets, and after calculating the loss, adjusting the network parameters and the weights of the loss function through a back-propagation algorithm; and step 7, recognizing unknown data with the trained network, in two stages of enrollment and recognition. The method better retains motion information in the temporal and spatial dimensions and achieves a better recognition effect in complex scenarios such as carrying a backpack or wearing a coat.

Description

Gait recognition method for improving loss function based on deep learning
Technical Field
The invention belongs to the technical field of computer vision and relates to a gait recognition method based on deep learning with an improved loss function.
Background Art
Gait recognition identifies a person by his or her walking posture. Compared with other biometric recognition technologies, it is contactless, works at long distance and is difficult to disguise, and it is therefore widely applied in crime prevention, forensic identification and social security.
Currently, gait recognition methods fall into two categories: image-based recognition and video-sequence-based recognition. The former compresses all gait silhouettes into a single image and treats gait recognition as an image matching problem; this obviously discards the temporal information in the gait and cannot model fine spatial details. The latter extracts features from the silhouette sequence and can model both the temporal and spatial information in gait recognition well using LSTM, 3D-CNN or two-stream methods, but the computational cost is high and training is difficult. At present, gait recognition is basically performed on background-free binary images, and its accuracy is affected by factors such as the target's clothing, carried objects and the camera angle.
Disclosure of Invention
In order to overcome the defects of the prior art, that is, to remain easy to train without losing temporal and spatial information, and at the same time to improve accuracy in complex scenarios where the target wears an overcoat or carries a backpack, the invention provides a gait recognition method based on deep learning with an improved loss function, in which the gait images are treated as an image set and the loss function is improved.
In order to solve the above technical problems, the present invention provides the following technical solution:
a gait recognition method of an improved loss function based on deep learning, the method comprising the steps of:
step 1, using an existing gait recognition data set, such as CASIA-B or OU-MVLP, or building one, and preprocessing the data set, wherein the process is as follows:
1.1) if an image acquisition device is used for acquiring gait images of pedestrians, extracting the human body silhouette from each acquired image by DeepLabv3+ and converting it into a binary image;
1.2) cutting each image into 64 × 64 by using the center-line principle;
1.3) dividing the data set into a training set and a testing set;
step 2, training stage, namely training the deep convolutional neural network on the training set, wherein the process is as follows:
2.1) constructing a deep convolutional neural network, wherein a CNN module extracts frame-level features from each image, an SP (Set Pooling) module extracts sequence-level features from the frame-level features, an MGP module extracts sequence information at different levels, and an HPM module extracts local and global features simultaneously;
2.2) designing a loss function, and defining the loss function as follows:
[The loss-function formulas are given in the original publication only as equation images and are not reproduced here.]
wherein an represents the original (anchor) sample, po represents a sample of the same class as an, ne represents a sample of a different class from an, d(x, y) represents the Euclidean distance between x and y in the embedding space, margin is a positive integer used to enlarge the distance between samples with different labels, N represents the number of samples in a batch, M represents the number of classes, P represents the number of people in a batch, K represents the number of pictures of each person in a batch, P(X) represents the true sample distribution, Q(X) represents the distribution predicted by the network, and L_BCE and L_BF are the improved loss functions;
2.3) taking the weights σ1 and σ2 of the loss function as parameters of the network;
2.4) initializing neural network parameters;
2.5) feeding the training samples obtained in step 1 into the network in batches, with the corresponding true identity labels as the targets, and after calculating the loss, adjusting the network parameters and the weights of the loss function through the back-propagation algorithm;
2.6) repeating 2.5) until the training is finished;
step 3, in the testing stage, the testing data is a testing set or collected data, and the process is as follows:
3.1) enrollment: inputting a gait image sequence set G, calculating a feature vector for each image sequence G_i in G through forward propagation of the network, obtaining a feature vector set F_g, and storing it in the gait database;
3.2) recognition: inputting a gait image sequence Q, whose identity label is to be found among the sequences of the image sequence set G; obtaining the feature vector F_q of Q through forward propagation of the network, calculating the Euclidean distance between F_q and each feature vector in the gait database F_g, and taking the identity label corresponding to the feature vector with the minimum distance as the label of Q.
Further, in step 2, the training phase is configured as follows: the optimizer is Adam with a learning rate of 1e-4, the total number of iterations is 80K, and the batch size is (8, 8), meaning that each batch contains 8 people with 8 images per person; the margin of L_BA+ is set to 2, and the loss-function weights σ1 and σ2 are both initialized to 0.5.
The technical conception of the invention is as follows: first, a convolutional neural network is used to extract the spatial information of the gait, and an attention mechanism is used to extract its temporal information; second, the loss function is improved, and its weights are trained as parameters of the network so that the weights become adaptive.
The invention has the following beneficial effects: the input gait images do not need to be ordered, and the recognition accuracy is improved in complex scenarios in which the target wears an overcoat, carries a backpack, and the like.
Drawings
Fig. 1 is a network architecture diagram of the method of the present invention.
Fig. 2 is a schematic diagram of cropping by the center-line principle.
Fig. 3 is a flow chart of the method of the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 to 3, the gait recognition method based on deep learning with an improved loss function regards the gait as a sequence of independent frames, extracts spatial and temporal image features at the same time, and is not affected by the order of the frames. The network first extracts frame-level features from the input images with a CNN; a Set Pooling (SP) module then aggregates the frame-level features into sequence-level features; at the same time, a multilayer global pipeline (MGP) performs multi-layer, multi-feature fusion to obtain sequence information at different levels; finally, HPM-based multi-scale feature mapping is used to extract local and global features simultaneously.
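What follows is a minimal PyTorch sketch of the frame-level CNN and the Set Pooling step described above. The layer sizes and the use of an element-wise maximum over the frame dimension are illustrative assumptions; the exact architecture is given only in fig. 1.

import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    """Extracts frame-level features from each 64 x 64 binary silhouette."""
    def __init__(self, out_channels=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 5, padding=2), nn.LeakyReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.LeakyReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, out_channels, 3, padding=1), nn.LeakyReLU(),
        )

    def forward(self, x):                        # x: (batch, frames, 1, 64, 64)
        b, t, c, h, w = x.shape
        f = self.net(x.reshape(b * t, c, h, w))  # frame-level features
        return f.reshape(b, t, *f.shape[1:])     # (batch, frames, C, H', W')

def set_pooling(frame_features):
    """Set Pooling: aggregate frame-level features into one sequence-level
    feature that does not depend on the order or number of frames."""
    return frame_features.max(dim=1).values      # element-wise max over frames

# usage: 4 sequences of 30 silhouettes each
sequence_feature = set_pooling(FrameCNN()(torch.rand(4, 30, 1, 64, 64)))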
The process of cropping an image to 64 × 64 by the center-line principle is illustrated in fig. 2.
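As a concrete illustration of this preprocessing, the sketch below implements one common reading of the center-line principle: the silhouette is cropped vertically to the body, resized to a height of 64, and a 64-pixel-wide window is cut around the mean x-coordinate of the foreground pixels (the center line). The padding behaviour and the use of OpenCV are assumptions, not requirements of the invention.

import numpy as np
import cv2

def centerline_crop(binary_img, size=64):
    """Crop a binary silhouette to size x size around its vertical center line."""
    img = binary_img.astype(np.uint8)
    ys, _ = np.nonzero(img)
    body = img[ys.min():ys.max() + 1, :]              # tight vertical crop
    new_w = int(round(body.shape[1] * size / body.shape[0]))
    body = cv2.resize(body, (new_w, size), interpolation=cv2.INTER_NEAREST)
    center = int(round(np.nonzero(body)[1].mean()))   # x position of the center line
    padded = np.pad(body, ((0, 0), (size, size)))     # guard against the image border
    left = center + size - size // 2                  # window start in padded coordinates
    return padded[:, left:left + size]                # size x size crop

# usage: crop = centerline_crop(silhouette > 0)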
Referring to fig. 3, the gait recognition method based on the improved loss function of deep learning comprises the following steps:
step 1, using an existing gait recognition data set, such as CASIA-B or OU-MVLP, or building one, and preprocessing the data set, wherein the process is as follows:
1.1) if an image acquisition device is used for acquiring gait images of pedestrians, extracting the human body silhouette from each acquired image by DeepLabv3+ and converting it into a binary image (an illustrative code sketch of this step is given after step 1.3 below);
1.2) cutting each image into 64 × 64 by using the center-line principle;
1.3) dividing the data set into a training set and a testing set;
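A hedged sketch of step 1.1 follows. It uses the DeepLabv3 model shipped with recent versions of torchvision as a stand-in for the DeepLabv3+ network named above (torchvision does not provide the "+" variant), and class index 15, the VOC "person" label of the pretrained weights, to binarize the segmentation; the actual segmentation model and weights used in the invention are not specified here.

import torch
import torchvision
from torchvision import transforms
from PIL import Image

# stand-in for DeepLabv3+: torchvision's pretrained DeepLabv3 with a ResNet-50 backbone
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def silhouette(image_path):
    """Return a binary (0/1) person mask for one RGB frame."""
    img = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        out = model(preprocess(img).unsqueeze(0))["out"][0]  # (classes, H, W)
    return (out.argmax(0) == 15).to(torch.uint8)             # 15 = VOC "person" class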
step 2, training stage, namely training the deep convolutional neural network on the training set, wherein the process is as follows:
2.1) constructing a deep convolutional neural network, wherein a CNN module extracts frame-level features from each image, an SP (Set Pooling) module extracts sequence-level features from the frame-level features, an MGP module extracts sequence information at different levels, and an HPM module extracts local and global features simultaneously;
2.2) designing a loss function, and defining the loss function as follows:
[The loss-function formulas are given in the original publication only as equation images and are not reproduced here.]
wherein an represents the original (anchor) sample, po represents a sample of the same class as an, ne represents a sample of a different class from an, d(x, y) represents the Euclidean distance between x and y in the embedding space, margin is a positive integer used to enlarge the distance between samples with different labels, N represents the number of samples in a batch, M represents the number of classes, P represents the number of people in a batch, K represents the number of pictures of each person in a batch, P(X) represents the true sample distribution, Q(X) represents the distribution predicted by the network, and L_BCE and L_BF are the improved loss functions;
2.3) taking the weights σ1 and σ2 of the loss function as parameters of the network (an illustrative code sketch of this adaptively weighted combined loss is given after step 2.6 below);
2.4) initializing neural network parameters;
2.5) feeding the training samples obtained in step 1 into the network in batches, with the corresponding true identity labels as the targets, and after calculating the loss, adjusting the network parameters and the weights of the loss function through the back-propagation algorithm;
2.6) repeating 2.5) until the training is finished;
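The sketch below illustrates, in PyTorch, the kind of combined loss referred to in steps 2.2) and 2.3): a batch-all triplet term over the embedding features, an identification cross-entropy term, and trainable weights σ1 and σ2 registered as network parameters so that they are adapted by back-propagation. The exact formulas of the invention are the equation images referenced above; the uncertainty-style weighting used here is an assumption for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CombinedGaitLoss(nn.Module):
    """Batch-all triplet loss plus cross-entropy, with trainable weights sigma1, sigma2."""
    def __init__(self, margin=2.0):
        super().__init__()
        self.margin = margin
        self.sigma1 = nn.Parameter(torch.tensor(0.5))   # weight of the triplet term
        self.sigma2 = nn.Parameter(torch.tensor(0.5))   # weight of the cross-entropy term

    def batch_all_triplet(self, feats, labels):
        d = torch.cdist(feats, feats)                   # pairwise Euclidean distances
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
        # loss[a, p, n] = margin + d(a, p) - d(a, n) for every valid triplet in the batch
        triplet = self.margin + d.unsqueeze(2) - d.unsqueeze(1)
        valid = (same & ~eye).unsqueeze(2) & (~same).unsqueeze(1)
        losses = F.relu(triplet[valid])
        return losses.mean() if losses.numel() else losses.sum()

    def forward(self, feats, logits, labels):
        l_tri = self.batch_all_triplet(feats, labels)
        l_ce = F.cross_entropy(logits, labels)
        # adaptive weighting with sigma1 and sigma2 learned together with the network
        return (l_tri / (2 * self.sigma1 ** 2) + l_ce / (2 * self.sigma2 ** 2)
                + torch.log(self.sigma1 * self.sigma2))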
step 3, in the testing stage, the testing data is a testing set or collected data, and the process is as follows:
3.1) enrollment: inputting a gait image sequence set G, calculating a feature vector for each image sequence G_i in G through forward propagation of the network, obtaining a feature vector set F_g, and storing it in the gait database;
3.2) recognition: inputting a gait image sequence Q, whose identity label is to be found among the sequences of the image sequence set G; obtaining the feature vector F_q of Q through forward propagation of the network, calculating the Euclidean distance between F_q and each feature vector in the gait database F_g, and taking the identity label corresponding to the feature vector with the minimum distance as the label of Q. A code sketch of these two stages is given below.
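The following is a small sketch of the two stages of step 3, assuming the trained network exposes a hypothetical embed(sequence) helper that maps one gait sequence to a single feature vector; nearest-neighbour matching uses the Euclidean distance, as specified above.

import torch

def enroll(net, gallery_sequences, gallery_labels):
    """3.1) Enrollment: build the gait database F_g from the registered sequences G."""
    with torch.no_grad():
        feats = torch.stack([net.embed(seq) for seq in gallery_sequences])
    return feats, list(gallery_labels)

def recognize(net, probe_sequence, gait_db):
    """3.2) Recognition: return the label of the gallery feature closest to the probe Q."""
    feats, labels = gait_db
    with torch.no_grad():
        f_q = net.embed(probe_sequence)
    dists = torch.cdist(f_q.unsqueeze(0), feats).squeeze(0)  # Euclidean distances to F_g
    return labels[dists.argmin().item()]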
Further, in step 2, the training phase is configured as follows: the optimizer is Adam with a learning rate of 1e-4, the total number of iterations is 80K, and the batch size is (8, 8), meaning that each batch contains 8 people with 8 images per person; the margin of L_BA+ is set to 2, and the loss-function weights σ1 and σ2 are both initialized to 0.5.
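For completeness, a hedged sketch of this training configuration is given below. The network's forward outputs (embedding features plus class logits) and the sample_pk_batch sampler are hypothetical placeholders; the optimizer, learning rate, iteration count, batch composition and margin follow the values stated above.

import torch

def train(net, criterion, sample_pk_batch, iterations=80_000):
    """criterion is the combined loss sketched earlier; sample_pk_batch(p, k) is a
    hypothetical sampler returning (frames, labels) for p people with k images each."""
    optimizer = torch.optim.Adam(
        list(net.parameters()) + list(criterion.parameters()), lr=1e-4)
    for _ in range(iterations):
        frames, labels = sample_pk_batch(p=8, k=8)   # batch size (8, 8)
        feats, logits = net(frames)                  # embedding features and class logits
        loss = criterion(feats, logits, labels)      # sigma1, sigma2 update with the network
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()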
According to the scheme of this embodiment, the improvement of the loss function raises the accuracy of the network in the two complex scenarios of the CASIA-B data set, namely BG (carrying a bag) and CL (wearing a coat).

Claims (2)

1. A gait recognition method of an improved loss function based on deep learning is characterized by comprising the following steps:
step 1, using an existing gait recognition data set, such as CASIA-B or OU-MVLP, or building one, and preprocessing the data set, wherein the process is as follows:
1.1) if an image acquisition device is used for acquiring gait images of pedestrians, extracting the human body silhouette from each acquired image by DeepLabv3+ and converting it into a binary image;
1.2) cutting each image into 64 × 64 by using the center-line principle;
1.3) dividing the data set into a training set and a testing set;
step 2, training stage, namely training the deep convolutional neural network on the training set, wherein the process is as follows:
2.1) constructing a deep convolutional neural network, wherein a CNN module extracts frame-level features from each image, an SP (Set Pooling) module extracts sequence-level features from the frame-level features, an MGP module extracts sequence information at different levels, and an HPM module extracts local and global features simultaneously;
2.2) designing a loss function, and defining the loss function as follows:
[The loss-function formulas are given in the original publication only as equation images and are not reproduced here.]
wherein an denotes the original (anchor) sample, po denotes a sample of the same class as an, ne denotes a sample of a different class from an, d(x, y) denotes the Euclidean distance between x and y in the embedding space, margin is a positive integer used to enlarge the distance between samples with different labels, N denotes the number of samples in a batch, M denotes the number of classes, P denotes the number of people in a batch, K denotes the number of pictures of each person in a batch, P(X) denotes the true sample distribution, Q(X) denotes the distribution predicted by the network, and L_BCE and L_BF are the improved loss functions;
2.3) taking the weights σ1 and σ2 of the loss function as parameters of the network;
2.4) initializing neural network parameters;
2.5) feeding the training samples obtained in step 1 into the network in batches, with the corresponding true identity labels as the targets, and after calculating the loss, adjusting the network parameters and the weights of the loss function through the back-propagation algorithm;
2.6) repeating 2.5) until the training is finished;
step 3, in the testing stage, the testing data is a testing set or collected data, and the process is as follows:
3.1) enrollment: inputting a gait image sequence set G, calculating a feature vector for each image sequence G_i in G through forward propagation of the network, obtaining a feature vector set F_g, and storing it in the gait database;
3.2) recognition: inputting a gait image sequence Q, whose identity label is to be found among the sequences of the image sequence set G; obtaining the feature vector F_q of Q through forward propagation of the network, calculating the Euclidean distance between F_q and each feature vector in the gait database F_g, and taking the identity label corresponding to the feature vector with the minimum distance as the label of Q.
2. The gait recognition method of an improved loss function based on deep learning as claimed in claim 1, wherein in step 2 the training phase is configured as follows: the optimizer is Adam with a learning rate of 1e-4, the total number of iterations is 80K, and the batch size is (8, 8), meaning that each batch contains 8 people with 8 images per person; the margin of L_BA+ is set to 2, and the loss-function weights σ1 and σ2 are both initialized to 0.5.
CN202010696163.3A 2020-07-20 2020-07-20 Gait recognition method of improved loss function based on deep learning Active CN111985332B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010696163.3A CN111985332B (en) 2020-07-20 2020-07-20 Gait recognition method of improved loss function based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010696163.3A CN111985332B (en) 2020-07-20 2020-07-20 Gait recognition method of improved loss function based on deep learning

Publications (2)

Publication Number Publication Date
CN111985332A true CN111985332A (en) 2020-11-24
CN111985332B CN111985332B (en) 2024-05-10

Family

ID=73439277

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010696163.3A Active CN111985332B (en) 2020-07-20 2020-07-20 Gait recognition method of improved loss function based on deep learning

Country Status (1)

Country Link
CN (1) CN111985332B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801008A (en) * 2021-02-05 2021-05-14 电子科技大学中山学院 Pedestrian re-identification method and device, electronic equipment and readable storage medium
CN112818808A (en) * 2021-01-27 2021-05-18 南京大学 High-precision gait recognition method combining two vector embedding spaces
CN112906673A (en) * 2021-04-09 2021-06-04 河北工业大学 Lower limb movement intention prediction method based on attention mechanism
CN114140873A (en) * 2021-11-09 2022-03-04 武汉众智数字技术有限公司 Gait recognition method based on convolutional neural network multi-level features

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921019A (en) * 2018-05-27 2018-11-30 北京工业大学 A kind of gait recognition method based on GEI and TripletLoss-DenseNet
CN110503053A (en) * 2019-08-27 2019-11-26 电子科技大学 Human motion recognition method based on cyclic convolution neural network
CN111160294A (en) * 2019-12-31 2020-05-15 西安理工大学 Gait recognition method based on graph convolution network

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921019A (en) * 2018-05-27 2018-11-30 北京工业大学 A kind of gait recognition method based on GEI and TripletLoss-DenseNet
CN110503053A (en) * 2019-08-27 2019-11-26 电子科技大学 Human motion recognition method based on cyclic convolution neural network
CN111160294A (en) * 2019-12-31 2020-05-15 西安理工大学 Gait recognition method based on graph convolution network

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112818808A (en) * 2021-01-27 2021-05-18 南京大学 High-precision gait recognition method combining two vector embedding spaces
CN112818808B (en) * 2021-01-27 2024-01-19 南京大学 High-precision gait recognition method combining two vector embedding spaces
CN112801008A (en) * 2021-02-05 2021-05-14 电子科技大学中山学院 Pedestrian re-identification method and device, electronic equipment and readable storage medium
CN112801008B (en) * 2021-02-05 2024-05-31 电子科技大学中山学院 Pedestrian re-recognition method and device, electronic equipment and readable storage medium
CN112906673A (en) * 2021-04-09 2021-06-04 河北工业大学 Lower limb movement intention prediction method based on attention mechanism
CN114140873A (en) * 2021-11-09 2022-03-04 武汉众智数字技术有限公司 Gait recognition method based on convolutional neural network multi-level features

Also Published As

Publication number Publication date
CN111985332B (en) 2024-05-10

Similar Documents

Publication Publication Date Title
CN107194341B (en) Face recognition method and system based on fusion of Maxout multi-convolution neural network
CN111985332B (en) Gait recognition method of improved loss function based on deep learning
CN111259786B (en) Pedestrian re-identification method based on synchronous enhancement of appearance and motion information of video
CN108520216B (en) Gait image-based identity recognition method
CN109815826B (en) Method and device for generating face attribute model
CN109325952B (en) Fashionable garment image segmentation method based on deep learning
CN108921019B (en) Gait recognition method based on GEI and TripletLoss-DenseNet
CN112818931A (en) Multi-scale pedestrian re-identification method based on multi-granularity depth feature fusion
CN109522853B (en) Face datection and searching method towards monitor video
CN107145842A (en) With reference to LBP characteristic patterns and the face identification method of convolutional neural networks
CN108288051B (en) Pedestrian re-recognition model training method and device, electronic equipment and storage medium
CN111310668B (en) Gait recognition method based on skeleton information
CN110992351B (en) sMRI image classification method and device based on multi-input convolution neural network
CN105956560A (en) Vehicle model identification method based on pooling multi-scale depth convolution characteristics
CN109902565B (en) Multi-feature fusion human behavior recognition method
CN104077742B (en) Human face sketch synthetic method and system based on Gabor characteristic
CN111914643A (en) Human body action recognition method based on skeleton key point detection
CN111582154A (en) Pedestrian re-identification method based on multitask skeleton posture division component
CN111340758A (en) Novel efficient iris image quality evaluation method based on deep neural network
CN112669343A (en) Zhuang minority nationality clothing segmentation method based on deep learning
CN111914762A (en) Gait information-based identity recognition method and device
CN115100684A (en) Clothes-changing pedestrian re-identification method based on attitude and style normalization
CN112131950B (en) Gait recognition method based on Android mobile phone
CN104376312A (en) Face recognition method based on word bag compressed sensing feature extraction
CN106971176A (en) Tracking infrared human body target method based on rarefaction representation

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant