WO2016065534A1 - A gait recognition method based on deep learning - Google Patents

A gait recognition method based on deep learning Download PDF

Info

Publication number
WO2016065534A1
WO2016065534A1 (PCT application PCT/CN2014/089698; CN application CN2014089698W)
Authority
WO
WIPO (PCT)
Prior art keywords
gait
neural network
convolutional neural
energy map
video sequence
Prior art date
Application number
PCT/CN2014/089698
Other languages
English (en)
French (fr)
Inventor
谭铁牛
王亮
黄永祯
吴子丰
Original Assignee
中国科学院自动化研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院自动化研究所 (Institute of Automation, Chinese Academy of Sciences)
Priority to PCT/CN2014/089698 (WO2016065534A1)
Priority to US15/521,751 (US10223582B2)
Publication of WO2016065534A1

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G06V40/25: Recognition of walking or running movements, e.g. gait recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449: Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451: Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454: Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761: Proximity, similarity or dissimilarity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • the invention relates to computer vision and pattern recognition, in particular to a gait recognition method based on deep learning.
  • the more common method is to first obtain the human silhouette from all frames of the video and compute the Gait Energy Image (GEI), then compare the similarity between different gait energy maps, and finally perform matching with a nearest-neighbor classifier.
  • deep learning theory has achieved very good results in fields such as speech recognition and image object classification and detection; in particular, deep convolutional neural networks have strong autonomous learning ability and highly nonlinear mappings, which makes it possible to design complex, high-precision classification models.
  • the present invention proposes a gait recognition method based on deep learning, which describes a gait sequence with a gait energy map and trains a matching model with a deep convolutional neural network, matching gaits to identify a person's identity.
  • the method includes a training process and a recognition process as follows:
  • the training process S1 is: extracting gait energy maps from the training gait video sequences with labeled identities, and repeatedly selecting any two of them to train the matching model based on the convolutional neural network until the model converges;
  • the recognition process S2 is: extracting gait energy maps from the single-view gait video to be identified and from the registered gait video sequence respectively, using the trained convolutional-neural-network-based matching model from S1 to calculate the similarity between the gait energy map of the gait video to be identified and the gait energy maps of the registered gait video sequence, performing identity prediction according to the similarity, and outputting the recognition result.
  • the convolutional neural network based matching model comprises a feature extraction function module and a perceptron function module.
  • the steps of the training process S1 are as follows:
  • Step S11 extracting a gait energy map from a training gait video sequence including multiple perspectives
  • Step S12 extracting pairs of gait energy maps with the same identity as positive samples, and pairs of gait energy maps with different identities as negative samples;
  • Step S13 selecting a positive or negative sample, feeding it into the feature extraction function module of the matching model based on the convolutional neural network, and extracting the feature pair corresponding to the pair of gait energy maps contained in the sample;
  • Step S14 feeding the feature pair obtained in S13 into the perceptron function module of the matching model based on the convolutional neural network and outputting the matching result;
  • Step S15 calculating an error between the matching result and the real result, and optimizing the above-described matching model based on the convolutional neural network;
  • Step S16 The steps S13 to S15 are repeated until the above-described convolutional neural network-based matching model converges.
  • the steps of the recognition process S2 are as follows:
  • Step S21 extracting a sequence of gait energy maps of the registered gait video sequence
  • Step S22 input the gait energy map sequence of the registered gait video sequence into the feature extraction function module of the matching model based on the convolutional neural network, and respectively calculate the corresponding feature sequence;
  • Step S23 extracting a gait energy map of the single-view to-be-identified gait video
  • Step S24 inputting the gait energy map of the single-view gait video to be identified into the trained feature extraction function module of the matching model based on the convolutional neural network, and calculating the corresponding features;
  • Step S25 passing the feature obtained in S24 and the feature sequence obtained in S22 through the perceptron function module of the matching model based on the convolutional neural network to calculate the similarities respectively;
  • step S26 the result of the identification is calculated by the classifier according to the similarity obtained in S25.
  • the step of determining whether this is the first recognition process is added in S21: if it is the first recognition process, the gait energy maps of the registered gait video sequence are extracted and S22 to S26 are executed in order; if not, execution proceeds from S23 to S26;
  • a matching library is set in S22, and the gait energy map of the registered gait video sequence and the corresponding feature calculated in S22 are saved in the matching library.
  • the multiple viewing angles of the training gait video sequences are evenly divided into 11 viewing angles by observation angle from 0 to 180 degrees.
  • each registered gait video in the registered gait video sequence only needs to extract a gait energy map at a viewing angle.
  • in S12, gait energy maps should be extracted with equal probability from the gait energy maps of different viewing angles.
  • the ratio of the positive sample to the negative sample in S12 should be equal to the set value.
  • the number of positive and negative samples in S12 is equal.
  • the invention constructs a matching model based on a convolutional neural network, trains the model with training gait video sequences covering multiple perspectives, and optimizes the corresponding parameters, so that the trained convolutional-neural-network-based matching model has the ability to recognize gait across views; the matching model performs feature extraction and similarity calculation on the single-view gait video and the registered gait video sequence, and then identifies the person in the single-view gait video, with high accuracy in cross-view gait recognition.
  • the method can be widely applied to scenes equipped with video surveillance, such as: security monitoring of airports and supermarkets, personnel identification, criminal detection, and the like.
  • FIG. 1 is a schematic diagram of an algorithm framework of the present invention.
  • FIG. 2 is a schematic flow chart of the gait-based identity verification algorithm of the present invention.
  • Fig. 3 is a diagram showing a multi-view gait energy map of the present invention.
  • the test process is equivalent to the recognition process in the actual application, and the test gait video is equivalent to the single-view gait video to be identified in the actual application.
  • a convolutional-neural-network-based matching model is constructed using a dual-channel convolutional neural network with shared weights; the model includes a feature extraction function module and a perceptron function module. This embodiment includes a training process and a testing process, and with reference to Figures 1 and 2 the steps of the method in this embodiment are as follows:
  • Step S11 extracting gait energy map sequences GEI-1, ..., GEI-i, ..., GEI-N from the training gait video sequence involving multiple viewing angles.
  • the traditional foreground segmentation method based on the Gaussian mixture model is used to extract human silhouettes from the gait video sequence; the foreground region is located and cropped according to the center of gravity of the silhouette, normalized to the same scale by scaling, and the average silhouette of each sequence is then computed: this is the gait energy map.
  • annotated multi-view walking videos of 100 people are used as the training gait video sequences, covering multiple viewing angles as shown in FIG. 3: with the camera roughly level with the person, the observation angles are divided into 11 views at 0, 18, ..., 180 degrees, and the identity of the pedestrian in each sequence is labeled.
  • the human body silhouette is extracted from the above 1100 gait video sequences, and the gait energy map is calculated.
  • step S12 a positive sample and a negative sample are extracted.
  • pairs of gait energy maps with the same identity are extracted as positive samples, and pairs of gait energy maps with different identities as negative samples.
  • the gait energy maps should be extracted with equal probability from the gait energy maps of different perspectives.
  • the first point is that gait energy maps of different perspectives in the training gait video sequences must be extracted with equal probability, so that the matching model based on convolutional neural networks is trained on fairly sampled cross-view conditions.
  • the second point is to use positive and negative samples according to a set ratio.
  • since pairs of gait energy maps with the same identity are far fewer than pairs with different identities, extracting at the natural rate without constraining the ratio of positive to negative samples would yield too few positive samples, causing the matching model based on the convolutional neural network to overfit during training.
  • preferably, positive and negative samples occur with equal probability.
  • each pair of gait energy maps constituting the positive and negative samples in S12 is sent into the matching model based on the convolutional neural network, and their corresponding features are extracted by a forward-propagation pass.
  • the feature extraction function module based on the convolutional neural network-based matching model extracts the corresponding features of the gait energy maps GEI-a and GEI-b as feature a and feature b. Because it requires the same operation on the two gait energy maps of the sample, it appears as two channels sharing weights.
  • a typical network parameter configuration is: the first layer has 16 convolution kernels of size 7×7 with stride 1, followed by a 2×2 spatial aggregation (pooling) layer with stride 2; the second layer has 64 kernels of size 7×7 with stride 1, followed by a 2×2 spatial aggregation layer with stride 2; the third layer has 256 kernels of size 11×11 with stride 5;
  • Step S14 in this step, the perceptron function module of the matching model based on the convolutional neural network compares the features of the two gait energy maps extracted in S13, gives a similarity score, makes the identity judgment, and outputs the matching result. For example, when the similarity takes values between 0 and 1, it can be set that when the similarity is greater than 0.5 the gait video sequences corresponding to the pair of features are predicted to have the same identity; otherwise they are predicted to have different identities.
  • step S15 the error back propagation algorithm is used to train the matching model based on the convolutional neural network by using the error between the matching result and the real result.
  • step S16 the steps S13 to S15 are repeated until the above-described convolutional neural network-based matching model converges.
  • the above error back-propagation algorithm is mainly used for the training of multi-layer models.
  • its core is the repeated iteration of two phases, excitation propagation and weight updating, until the convergence condition is reached.
  • in the excitation propagation phase, feature a and feature b are first fed into the perceptron function module of the matching model based on the convolutional neural network to obtain the matching result, and the matching result is then compared with the real result, giving the error between the output layer and the supervision.
  • in the weight update phase, the known error is multiplied by the derivative of this layer's response with respect to the previous layer's response to obtain the gradient of the weight matrix between the two layers, and the weight matrix is then adjusted by a certain proportion in the direction opposite to the gradient; the gradient is then treated as the error of the previous layer in order to compute the weight matrix of the previous layer, and proceeding in this way updates the entire model.
  • the test process mainly uses the matching model based on the convolutional neural network trained in S1 to perform feature extraction and similarity calculation on the registered gait video sequence and the test gait video respectively, and thereby judge identity.
  • a registered gait video sequence with pre-registered identity information is required, i.e. gait sequences of multiple people (e.g. 1000 people) together with the corresponding identities.
  • although providing data from multiple views in the registered gait video sequence can enhance recognition, the model trained in S15 already has the ability to recognize gait across viewing angles, so each registered gait video in the registered gait video sequence only needs to include a gait video from one angle.
  • the test task here is: given the above registered gait video sequence, predict the corresponding identity for a single-view test gait video, as follows:
  • Step S21 referring to the method described in S11, and using the matching model based on the convolutional neural network trained in S1, extracting the gait energy map sequence of the registered gait video sequence;
  • Step S22 the gait energy map sequence of the registered gait video sequence is input into the feature extraction function module of the matching model based on the convolutional neural network, and feature sequences robust to cross-view changes are extracted; this reduces the computational complexity.
  • to keep the feature volume manageable, the example network structure given in step S13 enlarges the sampling interval of the third layer, so that for a 128×128 gait energy map input the feature length is 2304 (3×3×256);
  • Step S23 Referring to the method described in S11, using the trained convolutional neural network-based matching model in S1, extracting a gait energy map of the test gait video;
  • Step S24 for the test gait video, the feature extraction function module of the matching model based on the convolutional neural network is used to calculate features robust to cross-view changes;
  • Step S25 the feature obtained in S24 and the feature sequence obtained in S22 are passed through the perceptron function module of the matching model based on the convolutional neural network to calculate the similarities respectively;
  • the nearest-neighbor classifier can be used to determine the identity of the current subject, that is, to output the identity registered for the sequence in the matching library with the highest similarity.
  • the step of determining whether this is the first test process may be added in S21: if it is the first test process, the gait energy maps of the registered gait video sequence are extracted and S22 to S26 are executed in order; if it is not the first test process, execution proceeds from S23 to S26. A matching library is set up in S22, and the gait energy maps of the registered gait video sequence together with the corresponding features calculated in S22 are saved into the matching library. In this way, in every subsequent (non-first) test process the feature extraction step for the registered gait video sequence is omitted, and when S25 is reached the feature obtained in S24 can be compared directly with the features saved in the matching library, saving a great deal of time.
  • a matching model based on convolutional neural network is constructed.
  • the model is trained with training gait video sequences covering multiple perspectives and the corresponding parameters are optimized, so that the trained matching model based on the convolutional neural network has the ability to recognize gait across views; the trained model performs feature extraction and similarity calculation on the single-view test gait video and the registered gait video sequence, and then identifies the person in the test gait video, with high accuracy in cross-view gait recognition.
  • the method can be widely applied to scenes equipped with video surveillance, such as: security monitoring of airports and supermarkets, personnel identification, criminal detection, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a gait recognition method based on deep learning, comprising: exploiting the strong learning capability of deep convolutional neural networks, a dual-channel convolutional neural network with shared weights is used to identify a person from the gait shown in a video. The method is highly robust to gait changes across large viewing angles, effectively solving the problem that existing gait recognition techniques achieve low accuracy in cross-view gait recognition. The method can be widely applied in scenes equipped with video surveillance, such as security monitoring of airports and supermarkets, personnel identification, and criminal detection.

Description

A gait recognition method based on deep learning
Technical Field
The present invention relates to computer vision and pattern recognition, and in particular to a gait recognition method based on deep learning.
Background Art
Among gait recognition methods, a common approach is to first obtain a person's silhouette from all frames of a video and compute its Gait Energy Image (GEI), then compare the similarity between different gait energy maps, and finally perform matching with a nearest-neighbor classifier. However, previous methods struggle to reach practically usable accuracy when faced with severe cross-view problems.
Deep learning theory has achieved very good results in fields such as speech recognition and image object classification and detection. In particular, deep convolutional neural networks possess strong autonomous learning ability and highly nonlinear mappings, which makes it possible to design complex, high-precision classification models.
Summary of the Invention
To solve the problem that existing gait recognition techniques achieve low accuracy in cross-view gait recognition, the present invention proposes a gait recognition method based on deep learning, which describes a gait sequence with gait energy maps and trains a matching model with a deep convolutional neural network, thereby matching gaits to identify a person's identity. The method includes a training process and a recognition process, as follows:
The training process S1 is: extracting gait energy maps from training gait video sequences whose identities are already labeled, and repeatedly selecting any two of them to train a matching model based on a convolutional neural network until the model converges.
The recognition process S2 is: extracting gait energy maps from the single-view gait video to be identified and from the registered gait video sequences respectively, using the trained convolutional-neural-network-based matching model from S1 to calculate the similarity between the gait energy map of the single-view gait video to be identified and each gait energy map of the registered gait video sequences, predicting identity according to the similarity, and outputting the recognition result.
Preferably, the matching model based on the convolutional neural network comprises a feature extraction function module and a perceptron function module.
Preferably, the steps of the training process S1 are as follows:
Step S11: extracting gait energy maps from training gait video sequences containing multiple views;
Step S12: extracting pairs of gait energy maps with the same identity as positive samples, and pairs of gait energy maps with different identities as negative samples;
Step S13: selecting a positive or negative sample and feeding it into the feature extraction function module of the matching model based on the convolutional neural network, extracting the feature pair corresponding to the pair of gait energy maps contained in the sample;
Step S14: feeding the feature pair obtained in S13 into the perceptron function module of the matching model based on the convolutional neural network and outputting the matching result;
Step S15: calculating the error between the matching result and the real result, and optimizing the above matching model based on the convolutional neural network;
Step S16: repeating steps S13 to S15 until the above matching model based on the convolutional neural network converges.
Preferably, the steps of the recognition process S2 are as follows:
Step S21: extracting the gait energy map sequence of the registered gait video sequences;
Step S22: inputting the gait energy map sequence of the registered gait video sequences into the feature extraction function module of the matching model based on the convolutional neural network, and calculating the corresponding feature sequences;
Step S23: extracting the gait energy map of the single-view gait video to be identified;
Step S24: inputting the gait energy map of the single-view gait video to be identified into the trained feature extraction function module of the matching model based on the convolutional neural network, and calculating the corresponding features;
Step S25: passing the feature obtained in S24 and the feature sequence obtained in S22 through the perceptron function module of the matching model based on the convolutional neural network to calculate the similarities respectively;
Step S26: calculating the identification result with a classifier according to the similarities obtained in S25.
Preferably, a step of determining whether this is the first recognition process is added in S21: if it is the first recognition process, the gait energy maps of the registered gait video sequences are extracted and S22 to S26 are executed in order; if it is not the first recognition process, execution proceeds from S23 to S26.
A matching library is set up in S22, and the gait energy maps of the registered gait video sequences together with the corresponding features calculated in S22 are saved into the matching library.
Preferably, the views of the multi-view training gait video sequences are evenly divided into 11 views by observation angle from 0 to 180 degrees.
Preferably, each registered gait video in the registered gait video sequences only needs a gait energy map extracted from a single view.
Preferably, in S12 the gait energy maps should be extracted with equal probability from the gait energy maps of different views.
Preferably, in S12 the ratio of the number of positive samples to negative samples should equal a set value.
Preferably, in S12 the numbers of positive and negative samples are equal.
The present invention constructs a matching model based on a convolutional neural network, trains it with training gait video sequences containing multiple views, and optimizes the corresponding parameters, so that the trained matching model has the ability to recognize gait across views. During recognition, the matching model performs feature extraction and similarity calculation on the single-view gait video to be identified and on the registered gait video sequences, and then identifies the person in the single-view video, achieving high accuracy in cross-view gait recognition. The method can be widely applied in scenes equipped with video surveillance, such as security monitoring of airports and supermarkets, personnel identification, and criminal detection.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the algorithm framework of the present invention.
Fig. 2 is a schematic flowchart of the gait-based identity verification algorithm of the present invention.
Fig. 3 shows examples of multi-view gait energy maps of the present invention.
Detailed Description of the Embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
For a better description with a concrete example, this embodiment is described with an actual test case, in which the test process corresponds to the recognition process in practical applications, and the test gait video corresponds to the single-view gait video to be identified.
This embodiment builds a matching model based on a convolutional neural network using a dual-channel convolutional neural network with shared weights; the model comprises a feature extraction function module and a perceptron function module. The embodiment includes a training process and a test process, described below with reference to Figs. 1 and 2:
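The shared-weight, dual-channel arrangement described above can be sketched in a few lines: the same set of kernels is applied to both gait energy maps, so the "two channels" are really one set of weights used twice. This is only an illustrative sketch under simplified assumptions (a tiny single convolution layer; the names `shared_conv` and `two_channel_features` and the kernel sizes are placeholders, not the network configured in step S13 below):

```python
import numpy as np

def shared_conv(x, kernels):
    """Very small 'valid' convolution layer used by both channels."""
    kh, kw = kernels.shape[1:]
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((len(kernels), h, w))
    for c, k in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[c, i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def two_channel_features(gei_a, gei_b, kernels):
    """Dual-channel feature extraction with shared weights: the SAME
    kernels process both gait energy maps, so the two gaits are mapped
    into directly comparable feature vectors."""
    return shared_conv(gei_a, kernels).ravel(), shared_conv(gei_b, kernels).ravel()
```

Because the weights are shared, identical inputs necessarily yield identical features, which is the property that makes the downstream similarity comparison meaningful.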
Training process:
Step S11: from training gait video sequences involving multiple views, extract the gait energy map sequence GEI-1, ..., GEI-i, ..., GEI-N. First, the traditional foreground segmentation method based on a Gaussian mixture model is used to extract human silhouettes from the gait video sequence; the foreground region is located and cropped according to the center of gravity of the silhouette and normalized to the same scale by scaling; the average silhouette of each sequence is then computed, and this is the gait energy map.
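As a rough illustration of this averaging step, the sketch below computes a gait energy map from a list of binary silhouettes. The centroid-based horizontal alignment, the nearest-neighbour resize, and the `out_size` of 64 are illustrative choices, not values fixed by the description:

```python
import numpy as np

def gait_energy_image(silhouettes, out_size=64):
    """Average a sequence of aligned binary silhouettes into one GEI.

    `silhouettes` is a list of 2-D {0,1} arrays, one per frame.
    """
    aligned = []
    for s in silhouettes:
        ys, xs = np.nonzero(s)
        if len(xs) == 0:              # skip empty frames
            continue
        cx = int(xs.mean())           # horizontal centre of gravity
        half = out_size // 2
        # pad so the crop window never leaves the frame
        padded = np.pad(s, ((0, 0), (half, half)))
        crop = padded[:, cx:cx + out_size]   # window centred on cx
        # nearest-neighbour resize to out_size x out_size
        ri = np.arange(out_size) * crop.shape[0] // out_size
        ci = np.arange(out_size) * crop.shape[1] // out_size
        aligned.append(crop[np.ix_(ri, ci)].astype(float))
    return np.mean(aligned, axis=0)   # pixel-wise average = GEI
```

Each output pixel then holds the fraction of frames in which that pixel belonged to the silhouette, which is exactly the "average silhouette" the description refers to.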
For example, annotated multi-view walking videos of 100 people are used as the training gait video sequences, covering multiple views as shown in Fig. 3: with the camera roughly level with the person, the observation angles are divided into 11 views at 0, 18, ..., 180 degrees, and the identity of the pedestrian in every sequence is labeled. Human silhouettes are extracted from these 1100 gait video sequences and the gait energy maps are computed.
Step S12: extract positive and negative samples. Pairs of gait energy maps with the same identity are extracted as positive samples, and pairs with different identities as negative samples; the gait energy maps should be drawn with equal probability from the gait energy maps of different views. First, gait energy maps of different views in the training gait video sequences must be drawn with equal probability, so that the matching model based on the convolutional neural network is trained on fairly sampled cross-view conditions. Second, positive and negative samples are used according to a set ratio: since same-identity pairs are far fewer than different-identity pairs, drawing at the natural rate without constraining the ratio would yield too few positive samples and cause the matching model to overfit during training. Preferably, positive and negative samples occur with equal probability.
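The two sampling constraints above (uniform over views, positives and negatives balanced at a set ratio) can be sketched as follows; the dictionary layout keyed by `(person_id, view)` and the helper names are illustrative assumptions, not part of the patent:

```python
import random

def sample_pair(geis, positive, rng=random):
    """Draw one training pair from `geis`, a dict mapping
    (person_id, view) -> gait energy map.  Keys are drawn uniformly,
    which is one simple way to realise the equal-probability sampling
    over views described above."""
    keys = list(geis)
    id_a, view_a = rng.choice(keys)
    if positive:
        # same identity, any view (cross-view positives included)
        candidates = [k for k in keys if k[0] == id_a]
    else:
        candidates = [k for k in keys if k[0] != id_a]
    id_b, view_b = rng.choice(candidates)
    return geis[(id_a, view_a)], geis[(id_b, view_b)], int(positive)

def sample_batch(geis, n, pos_ratio=0.5, rng=random):
    """Balance positives and negatives at `pos_ratio` (0.5 = equal)."""
    return [sample_pair(geis, rng.random() < pos_ratio, rng)
            for _ in range(n)]
```

Setting `pos_ratio=0.5` realises the preferred case in which positive and negative samples occur with equal probability, rather than at the natural (heavily negative) rate.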
Step S13: each pair of gait energy maps forming the positive and negative samples from S12 is fed into the matching model based on the convolutional neural network, and their corresponding features are extracted with a forward-propagation pass. As shown in Fig. 1, the feature extraction function module of the matching model extracts from gait energy maps GEI-a and GEI-b the corresponding features a and b. Because the same operation must be applied to both gait energy maps of a sample, the module appears as two channels with shared weights. For example, a typical network parameter configuration is: the first layer has 16 convolution kernels of size 7×7 with stride 1, followed by a 2×2 spatial aggregation (pooling) layer with stride 2; the second layer has 64 kernels of size 7×7 with stride 1, followed by a 2×2 spatial aggregation layer with stride 2; the third layer has 256 kernels of size 11×11 with stride 5.
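The spatial sizes this configuration produces can be checked with standard valid-convolution arithmetic. The sketch below assumes unpadded convolutions and floor division (the description does not state the padding, so this is an assumption):

```python
def out_size(n, kernel, stride):
    """Spatial size after a valid (unpadded) convolution or pooling."""
    return (n - kernel) // stride + 1

def feature_shape(n=128, third_stride=5):
    n = out_size(n, 7, 1)              # conv1: 16 kernels, 7x7, stride 1
    n = out_size(n, 2, 2)              # pool1: 2x2, stride 2
    n = out_size(n, 7, 1)              # conv2: 64 kernels, 7x7, stride 1
    n = out_size(n, 2, 2)              # pool2: 2x2, stride 2
    n = out_size(n, 11, third_stride)  # conv3: 256 kernels, 11x11
    return n, n, 256
```

Under these assumptions, a 128×128 gait energy map gives a 4×4×256 output with the third-layer stride of 5; enlarging that stride to 8 gives 3×3×256 = 2304, which matches the feature length quoted in step S22 below for the enlarged sampling interval.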
Step S14: in this step, the perceptron function module of the matching model based on the convolutional neural network compares the features of the two gait energy maps extracted in S13, gives a similarity score, makes the identity judgment, and outputs the matching result. For example, when the similarity takes values between 0 and 1, one may set that when the similarity is greater than 0.5 the pair of gait video sequences corresponding to the features is predicted to have the same identity, and otherwise to have different identities.
Step S15: using the error between the matching result and the real result, the matching model based on the convolutional neural network is trained with the error back-propagation algorithm.
Step S16: steps S13 to S15 are repeated until the above matching model based on the convolutional neural network converges.
The above error back-propagation algorithm is mainly used for training multi-layer models; its core is the repeated iteration of two phases, excitation propagation and weight updating, until a convergence condition is met. In the excitation propagation phase, features a and b are first fed into the perceptron function module of the matching model based on the convolutional neural network to obtain the matching result, and the matching result is then compared with the real result, giving the error between the output layer and the supervision. In the weight update phase, the known error is first multiplied by the derivative of this layer's response with respect to the previous layer's response, giving the gradient of the weight matrix between the two layers, and the weight matrix is adjusted by a certain proportion in the direction opposite to the gradient. The gradient is then treated as the error of the previous layer so that the previous layer's weight matrix can be computed. Proceeding in this way updates the whole model.
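The "multiply the error by the derivative, then step against the gradient" rule can be shown on the smallest possible case: one sigmoid unit trained with squared error. The loss choice, learning rate, and single-sample setup are illustrative assumptions, not the patent's actual training configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(w, x, target, lr=0.5):
    y = sigmoid(x @ w)               # excitation (forward) propagation
    err = y - target                 # output-layer error
    grad = err * y * (1.0 - y) * x   # chain rule: error x derivative
    return w - lr * grad             # adjust against the gradient

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, 4)
x, target = np.array([1.0, -1.0, 0.5, 2.0]), 1.0
losses = []
for _ in range(50):
    losses.append(0.5 * (sigmoid(x @ w) - target) ** 2)
    w = train_step(w, x, target)
```

In the multi-layer case described above, the same `err * derivative` product is propagated backwards and reused as the "error" of the preceding layer, one weight matrix at a time.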
Test process: this process mainly uses the matching model based on the convolutional neural network trained in S1 to perform feature extraction and similarity calculation on the registered gait video sequences and on the test gait video respectively, and thereby judge identity. A registered gait video sequence with pre-registered identity information is required, i.e. gait sequences of multiple people (e.g. 1000 people) together with the corresponding identities. It should be noted that although providing data from multiple views in the registered gait video sequences can enhance recognition, the model trained in S15 already has the ability to recognize gait across views, so each registered gait video here only needs to contain gait video from a single angle. The test task is: given the above registered gait video sequences, predict the identity corresponding to a single-view test gait video, as follows:
Step S21: following the method described in S11, and using the matching model based on the convolutional neural network trained in S1, extract the gait energy map sequence of the registered gait video sequences;
Step S22: input the gait energy map sequence of the registered gait video sequences into the feature extraction function module of the matching model based on the convolutional neural network, and extract feature sequences that are robust to cross-view changes; this reduces the computational complexity. To keep the feature volume manageable, the example network structure given in step S13 enlarges the sampling interval of the third layer, so that for a 128×128 gait energy map input the feature length is 2304 (3×3×256);
Step S23: following the method described in S11, and using the matching model based on the convolutional neural network trained in S1, extract the gait energy map of the test gait video;
Step S24: for the test gait video, use the feature extraction function module of the matching model based on the convolutional neural network to calculate features robust to cross-view changes;
Step S25: pass the feature obtained in S24 and the feature sequence obtained in S22 through the perceptron function module of the matching model based on the convolutional neural network to calculate the similarities respectively.
Step S26: in the simplest case, a nearest-neighbor classifier can be used to decide the identity of the current subject, that is, to output the identity registered for the sequence in the matching library with the highest similarity.
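The nearest-neighbor decision reduces to an argmax over the gallery. In the sketch below, `similarity` stands in for the trained perceptron module, and the `gallery` dict layout is an illustrative assumption:

```python
def identify(probe_feature, gallery, similarity):
    """Nearest-neighbour decision over a registered gallery.

    `gallery` maps identity -> registered feature; `similarity` is any
    pairwise scoring function (here it stands in for the perceptron
    module).  Returns the identity with the highest score."""
    return max(gallery,
               key=lambda pid: similarity(probe_feature, gallery[pid]))
```

With 1000 registered people, this is simply 1000 similarity evaluations followed by a maximum, which is why caching the gallery features (next paragraph) pays off.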
To further improve matching speed, a step determining whether this is the first test process can be added in S21: if it is the first test process, the gait energy maps of the registered gait video sequences are extracted and S22 to S26 are executed in order; if it is not the first test process, execution proceeds from S23 to S26. A matching library is set up in S22, and the gait energy maps of the registered gait video sequences together with the corresponding features calculated in S22 are saved into the matching library. In this way, in every subsequent (non-first) test process the feature extraction step for the registered gait video sequences is omitted, and when S25 is reached the feature obtained in S24 can be compared directly against the features saved in the matching library, saving a great deal of time.
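This first-run check is a plain compute-once cache. The sketch below illustrates it; `extract` stands in for the feature extraction module, and the class and method names are illustrative, not from the patent:

```python
class MatchingLibrary:
    """Cache of gallery features, filled once on the first test run."""

    def __init__(self, extract):
        self.extract = extract        # feature extraction stand-in
        self._features = None

    def features(self, videos):
        """`videos` maps identity -> registered gait video."""
        if self._features is None:    # first run: build the cache
            self._features = {pid: self.extract(v)
                              for pid, v in videos.items()}
        return self._features         # later runs: reuse it
```

On every call after the first, the registered-sequence extraction is skipped entirely and S25 compares the probe feature directly against the cached entries.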
This embodiment constructs a matching model based on a convolutional neural network, trains it with training gait video sequences containing multiple views, and optimizes the corresponding parameters, so that the trained matching model has the ability to recognize gait across views. During the test process, the trained matching model performs feature extraction and similarity calculation on the single-view test gait video and on the registered gait video sequences, and then identifies the person in the test gait video, achieving high accuracy in cross-view gait recognition. The method can be widely applied in scenes equipped with video surveillance, such as security monitoring of airports and supermarkets, personnel identification, and criminal detection.
The above describes only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any transformation or substitution that a person skilled in the art could conceive within the technical scope disclosed by the present invention shall be covered by the present invention. Therefore, the scope of protection of the present invention shall be defined by the claims.

Claims (10)

  1. A gait recognition method based on deep learning, characterized in that the method comprises a training process and a recognition process, as follows:
    The training process S1 is: extracting gait energy maps from training gait video sequences whose identities are labeled, and repeatedly selecting any two of them to train a matching model based on a convolutional neural network until the model converges;
    The recognition process S2 is: extracting gait energy maps from the single-view gait video to be identified and from the registered gait video sequences respectively, using the trained convolutional-neural-network-based matching model from S1 to calculate the similarity between the gait energy map of the single-view gait video to be identified and each gait energy map of the registered gait video sequences, predicting identity according to the similarity, and outputting the recognition result.
  2. The method according to claim 1, characterized in that the matching model based on the convolutional neural network comprises a feature extraction function module and a perceptron function module.
  3. The method according to claim 2, characterized in that the steps of the training process S1 are as follows:
    Step S11: extracting gait energy maps from training gait video sequences containing multiple views;
    Step S12: extracting pairs of gait energy maps with the same identity as positive samples, and pairs of gait energy maps with different identities as negative samples;
    Step S13: selecting a positive or negative sample and feeding it into the feature extraction function module of the matching model based on the convolutional neural network, extracting the feature pair corresponding to the pair of gait energy maps contained in the sample;
    Step S14: feeding the feature pair obtained in S13 into the perceptron function module of the matching model based on the convolutional neural network and outputting the matching result;
    Step S15: calculating the error between the matching result and the real result, and optimizing the above matching model based on the convolutional neural network;
    Step S16: repeating steps S13 to S15 until the above matching model based on the convolutional neural network converges.
  4. The method according to claim 2 or 3, characterized in that the steps of the recognition process S2 are as follows:
    Step S21: extracting the gait energy map sequence of the registered gait video sequences;
    Step S22: inputting the gait energy map sequence of the registered gait video sequences into the feature extraction function module of the matching model based on the convolutional neural network, and calculating the corresponding feature sequences;
    Step S23: extracting the gait energy map of the single-view gait video to be identified;
    Step S24: inputting the gait energy map of the single-view gait video to be identified into the trained feature extraction function module of the matching model based on the convolutional neural network, and calculating the corresponding features;
    Step S25: passing the feature obtained in S24 and the feature sequence obtained in S22 through the perceptron function module of the matching model based on the convolutional neural network to calculate the similarities respectively;
    Step S26: calculating the identification result with a classifier according to the similarities obtained in S25.
  5. The method according to claim 4, characterized in that a step of determining whether this is the first recognition process is added in S21: if it is the first recognition process, the gait energy maps of the registered gait video sequences are extracted and S22 to S26 are executed in order; if it is not the first recognition process, execution proceeds from S23 to S26;
    a matching library is set up in S22, and the gait energy maps of the registered gait video sequences together with the corresponding features calculated in S22 are saved into the matching library.
  6. The method according to claim 5, characterized in that the views of the multi-view training gait video sequences are evenly divided into 11 views by observation angle from 0 to 180 degrees.
  7. The method according to claim 6, characterized in that each registered gait video in the registered gait video sequences only needs a gait energy map extracted from a single view.
  8. The method according to claim 7, characterized in that in S12 the gait energy maps should be extracted with equal probability from the gait energy maps of different views.
  9. The method according to claim 8, characterized in that in S12 the ratio of the number of positive samples to negative samples should equal a set value.
  10. The method according to claim 9, characterized in that the set value is 1.
PCT/CN2014/089698 2014-10-28 2014-10-28 A gait recognition method based on deep learning WO2016065534A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2014/089698 WO2016065534A1 (zh) 2014-10-28 2014-10-28 A gait recognition method based on deep learning
US15/521,751 US10223582B2 (en) 2014-10-28 2014-10-28 Gait recognition method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/089698 WO2016065534A1 (zh) 2014-10-28 2014-10-28 A gait recognition method based on deep learning

Publications (1)

Publication Number Publication Date
WO2016065534A1 true WO2016065534A1 (zh) 2016-05-06

Family

ID=55856354

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/089698 WO2016065534A1 (zh) 2014-10-28 2014-10-28 A gait recognition method based on deep learning

Country Status (2)

Country Link
US (1) US10223582B2 (zh)
WO (1) WO2016065534A1 (zh)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944374A (zh) * 2017-11-20 2018-04-20 北京奇虎科技有限公司 视频数据中特定对象检测方法及装置、计算设备
CN108470138A (zh) * 2018-01-24 2018-08-31 博云视觉(北京)科技有限公司 用于目标检测的方法和装置
CN108921114A (zh) * 2018-07-10 2018-11-30 福州大学 基于多非线性局部保持投影的两视角步态识别方法
CN109902605A (zh) * 2019-02-20 2019-06-18 哈尔滨工程大学 一种基于单能量图自适应分割的步态识别方法
CN109902558A (zh) * 2019-01-15 2019-06-18 安徽理工大学 一种基于cnn-lstm的人体健康深度学习预测方法
CN110097029A (zh) * 2019-05-14 2019-08-06 西安电子科技大学 基于Highway网络多视角步态识别的身份认证方法
CN110321801A (zh) * 2019-06-10 2019-10-11 浙江大学 一种基于自编码网络的换衣行人重识别方法及***
CN110378170A (zh) * 2018-04-12 2019-10-25 腾讯科技(深圳)有限公司 视频处理方法及相关装置,图像处理方法及相关装置
CN111144165A (zh) * 2018-11-02 2020-05-12 银河水滴科技(北京)有限公司 一种步态信息识别方法、***及存储介质
CN111259700A (zh) * 2018-12-03 2020-06-09 北京京东尚科信息技术有限公司 用于生成步态识别模型的方法和装置
CN111340090A (zh) * 2020-02-21 2020-06-26 浙江每日互动网络科技股份有限公司 图像特征比对方法及装置、设备、计算机可读存储介质
CN107742141B (zh) * 2017-11-08 2020-07-28 重庆西南集成电路设计有限责任公司 基于rfid技术的智能身份信息采集方法及***
CN111476077A (zh) * 2020-01-07 2020-07-31 重庆邮电大学 一种基于深度学习的多视角步态识别方法
CN111914762A (zh) * 2020-08-04 2020-11-10 浙江大华技术股份有限公司 基于步态信息的身份识别方法及装置
CN112001254A (zh) * 2020-07-23 2020-11-27 浙江大华技术股份有限公司 一种行人识别的方法及相关装置
CN112131970A (zh) * 2020-09-07 2020-12-25 浙江师范大学 一种基于多通道时空网络和联合优化损失的身份识别方法
CN112434622A (zh) * 2020-11-27 2021-03-02 浙江大华技术股份有限公司 基于卷积神经网络的行人分割与步态识别一体化方法
CN112580445A (zh) * 2020-12-03 2021-03-30 电子科技大学 基于生成对抗网络的人体步态图像视角转化方法
CN113139499A (zh) * 2021-05-10 2021-07-20 中国科学院深圳先进技术研究院 一种基于轻量注意力卷积神经网络的步态识别方法和***
CN113591552A (zh) * 2021-06-18 2021-11-02 新绎健康科技有限公司 一种基于步态加速度数据进行身份识别的方法及***
CN113963437A (zh) * 2021-10-15 2022-01-21 武汉众智数字技术有限公司 一种基于深度学习的步态识别序列获取方法和***
CN116524602A (zh) * 2023-07-03 2023-08-01 华东交通大学 基于步态特征的换衣行人重识别方法及***

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102459677B1 (ko) * 2015-11-05 2022-10-28 삼성전자주식회사 알고리즘 학습 방법 및 장치
US20170277955A1 (en) * 2016-03-23 2017-09-28 Le Holdings (Beijing) Co., Ltd. Video identification method and system
GB201704373D0 (en) * 2017-03-20 2017-05-03 Rolls-Royce Ltd Surface defect detection
CN106897714B (zh) * 2017-03-23 2020-01-14 北京大学深圳研究生院 一种基于卷积神经网络的视频动作检测方法
US10929829B1 (en) * 2017-05-04 2021-02-23 Amazon Technologies, Inc. User identification and account access using gait analysis
EP3759632A4 (en) 2018-03-01 2021-04-21 Infotoo International Limited METHODS AND APPARATUS FOR DETERMINING THE AUTHENTICITY OF AN INFORMATION SUPPORT DEVICE
CN108596211B (zh) * 2018-03-29 2020-08-28 中山大学 一种基于集中学习与深度网络学习的遮挡行人再识别方法
US10769260B2 (en) * 2018-04-10 2020-09-08 Assured Information Security, Inc. Behavioral biometric feature extraction and verification
US11449746B2 (en) 2018-04-10 2022-09-20 Assured Information Security, Inc. Behavioral biometric feature extraction and verification
US10769259B2 (en) 2018-04-10 2020-09-08 Assured Information Security, Inc. Behavioral biometric feature extraction and verification
CN108537206B (zh) * 2018-04-23 2021-08-10 Shandong Inspur Scientific Research Institute Co., Ltd. Face verification method based on convolutional neural networks
CN108629324B (zh) * 2018-05-11 2022-03-08 Wuhan Tuorui Chuanqi Technology Co., Ltd. Gait simulation system and method based on deep learning
CN108763897A (zh) * 2018-05-22 2018-11-06 Ping An Technology (Shenzhen) Co., Ltd. Identity validity verification method, terminal device, and medium
CN108921051B (zh) * 2018-06-15 2022-05-20 Tsinghua University Pedestrian attribute recognition network and technique based on a recurrent neural network attention model
CN108958482B (zh) * 2018-06-28 2021-09-28 Fuzhou University Similarity action recognition apparatus and method based on convolutional neural networks
CN109002784B (zh) * 2018-06-29 2021-04-13 Guoxin Youyi Data Co., Ltd. Street view recognition method and system
CN109344688A (zh) * 2018-08-07 2019-02-15 Jiangsu University Automatic person recognition method for surveillance video based on convolutional neural networks
CN109558834B (zh) * 2018-11-28 2022-06-21 Fuzhou University Multi-view gait recognition method based on similarity learning and kernel methods
CN109902646A (zh) * 2019-03-08 2019-06-18 Central South University Gait recognition method based on long short-term memory networks
CN110084166B (zh) * 2019-04-19 2020-04-10 Shandong University Deep-learning-based intelligent smoke and fire recognition and monitoring method for substations
CN111079516B (zh) * 2019-10-31 2022-12-20 Zhejiang Gongshang University Pedestrian gait segmentation method based on deep neural networks
CN111291631B (zh) * 2020-01-17 2023-11-07 Beijing SenseTime Technology Development Co., Ltd. Video analysis method and related model training method, device, and apparatus
CN111368787A (zh) * 2020-03-17 2020-07-03 Zhejiang University Video processing method and apparatus, device, and computer-readable storage medium
CN111639533A (zh) * 2020-04-28 2020-09-08 Shenzhen OneConnect Smart Technology Co., Ltd. Posture detection method, apparatus, device, and storage medium based on gait features
CN111753678B (zh) * 2020-06-10 2023-02-07 Northwestern Polytechnical University Ultrasound-based multi-device collaborative gait sensing and identity recognition method
CN111814618B (zh) * 2020-06-28 2023-09-01 Zhejiang Dahua Technology Co., Ltd. Person re-identification method, gait recognition network training method, and related apparatus
CN112101176B (zh) * 2020-09-09 2024-04-05 Yuanshen Technology (Hangzhou) Co., Ltd. User identity recognition method and system incorporating user gait information
CN112257559A (zh) * 2020-10-20 2021-01-22 Jiangnan University Identity recognition method based on an individual's biological gait information
CN112214783B (zh) * 2020-11-18 2023-08-25 Northwest University Gait recognition platform and recognition method based on a trusted execution environment
CN112613891B (zh) * 2020-12-24 2023-10-03 Alipay (Hangzhou) Information Technology Co., Ltd. Shop registration information verification method, apparatus, and device
CN112686196A (zh) * 2021-01-07 2021-04-20 Meiri Hudong Co., Ltd. Image selection method, electronic device, and computer-readable storage medium
CN112560002B (zh) * 2021-02-24 2021-05-18 Beijing University of Posts and Telecommunications Identity authentication method, apparatus, device, and storage medium based on gait behavior
CN113177464B (zh) * 2021-04-27 2023-12-01 Zhejiang Gongshang University End-to-end multimodal gait recognition method based on deep learning
CN113469095B (zh) * 2021-07-13 2023-05-16 Zhejiang Dahua Technology Co., Ltd. Gait-based secondary person verification method and apparatus
CN113673537B (zh) * 2021-07-14 2023-08-18 Nanjing University of Posts and Telecommunications Person silhouette feature extraction method based on gait sequence video
CN113556484B (zh) * 2021-07-16 2024-02-06 Beijing Dajia Internet Information Technology Co., Ltd. Video processing method, apparatus, electronic device, and computer-readable storage medium
CN113963411B (zh) * 2021-10-26 2022-08-16 Dianke Zhidong (Shenzhen) Technology Co., Ltd. Identity recognition model training method and apparatus, police scooter, and storage medium
CN114140873A (zh) * 2021-11-09 2022-03-04 Wuhan Zhongzhi Digital Technology Co., Ltd. Gait recognition method based on multi-level convolutional neural network features
TWI796072B (zh) * 2021-12-30 2023-03-11 Trade-Van Information Services Co. Identity recognition system and method, and computer-readable medium therefor
CN114120076B (zh) * 2022-01-24 2022-04-29 Wuhan University Cross-view video gait recognition method based on gait motion estimation
JP2024065990A (ja) 2022-10-31 2024-05-15 Fujitsu Limited Matching support program, matching support method, and information processing apparatus
JP2024065888A (ja) 2022-10-31 2024-05-15 Fujitsu Limited Matching support program, matching support method, and information processing apparatus

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102222215A (zh) * 2011-05-24 2011-10-19 Beijing University of Technology Gait recognition method based on two-level wavelet packet decomposition and complete principal component analysis
CN103593651A (zh) * 2013-10-28 2014-02-19 Xijing University Underground coal mine personnel identification method based on gait and two-dimensional discriminant analysis
CN103839081A (zh) * 2014-02-25 2014-06-04 Institute of Automation, Chinese Academy of Sciences Cross-view gait recognition method based on topological expression

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9775726B2 (en) * 2002-04-12 2017-10-03 James Jay Martin Electronically controlled prosthetic system
WO2008157622A1 (en) * 2007-06-18 2008-12-24 University Of Pittsburgh-Of The Commonwealth System Of Higher Education Method, apparatus and system for food intake and physical activity assessment
KR102222318B1 (ko) * 2014-03-18 2021-03-03 Samsung Electronics Co., Ltd. User recognition method and apparatus
US10335091B2 (en) * 2014-03-19 2019-07-02 Tactonic Technologies, Llc Method and apparatus to infer object and agent properties, activity capacities, behaviors, and intents from contact and pressure images
US20160005050A1 (en) * 2014-07-03 2016-01-07 Ari Teman Method and system for authenticating user identity and detecting fraudulent content associated with online activities
CN105574510A (zh) * 2015-12-18 2016-05-11 Beijing University of Posts and Telecommunications Gait recognition method and apparatus
US9984284B2 (en) * 2016-09-19 2018-05-29 King Fahd University Of Petroleum And Minerals Apparatus and method for gait recognition

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107742141B (zh) * 2017-11-08 2020-07-28 Chongqing Southwest Integrated Circuit Design Co., Ltd. RFID-based intelligent identity information collection method and system
CN107944374A (zh) * 2017-11-20 2018-04-20 Beijing Qihoo Technology Co., Ltd. Method and apparatus for detecting specific objects in video data, and computing device
CN108470138A (zh) * 2018-01-24 2018-08-31 Boyun Vision (Beijing) Technology Co., Ltd. Method and apparatus for object detection
CN110378170B (zh) * 2018-04-12 2022-11-08 Tencent Technology (Shenzhen) Co., Ltd. Video processing method and related apparatus, and image processing method and related apparatus
CN110378170A (zh) * 2018-04-12 2019-10-25 Tencent Technology (Shenzhen) Co., Ltd. Video processing method and related apparatus, and image processing method and related apparatus
CN110443232A (zh) * 2018-04-12 2019-11-12 Tencent Technology (Shenzhen) Co., Ltd. Video processing method and related apparatus, and image processing method and related apparatus
CN108921114A (zh) * 2018-07-10 2018-11-30 Fuzhou University Two-view gait recognition method based on multiple nonlinear locality-preserving projections
CN108921114B (zh) * 2018-07-10 2021-09-28 Fuzhou University Two-view gait recognition method based on multiple nonlinear locality-preserving projections
CN111144165B (zh) * 2018-11-02 2024-04-12 Watrix Technology (Ningbo) Co., Ltd. Gait information recognition method, system, and storage medium
CN111144165A (zh) * 2018-11-02 2020-05-12 Watrix Technology (Beijing) Co., Ltd. Gait information recognition method, system, and storage medium
CN111259700B (zh) * 2018-12-03 2024-04-09 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and apparatus for generating a gait recognition model
CN111259700A (zh) * 2018-12-03 2020-06-09 Beijing Jingdong Shangke Information Technology Co., Ltd. Method and apparatus for generating a gait recognition model
CN109902558A (zh) * 2019-01-15 2019-06-18 Anhui University of Science and Technology CNN-LSTM-based deep learning method for predicting human health
CN109902558B (zh) * 2019-01-15 2022-12-20 Anhui University of Science and Technology CNN-LSTM-based deep learning method for predicting human health
CN109902605A (zh) * 2019-02-20 2019-06-18 Harbin Engineering University Gait recognition method based on adaptive segmentation of a single energy map
CN109902605B (zh) * 2019-02-20 2023-04-07 Harbin Engineering University Gait recognition method based on adaptive segmentation of a single energy map
CN110097029A (zh) * 2019-05-14 2019-08-06 Xidian University Identity authentication method based on multi-view gait recognition with Highway networks
CN110097029B (zh) * 2019-05-14 2022-12-06 Xidian University Identity authentication method based on multi-view gait recognition with Highway networks
CN110321801A (zh) * 2019-06-10 2019-10-11 Zhejiang University Clothes-changing person re-identification method and system based on autoencoder networks
CN110321801B (zh) * 2019-06-10 2021-08-03 Zhejiang University Clothes-changing person re-identification method and system based on autoencoder networks
CN111476077A (zh) * 2020-01-07 2020-07-31 Chongqing University of Posts and Telecommunications Multi-view gait recognition method based on deep learning
CN111340090A (zh) * 2020-02-21 2020-06-26 Zhejiang Meiri Hudong Network Technology Co., Ltd. Image feature comparison method and apparatus, device, and computer-readable storage medium
CN112001254B (zh) * 2020-07-23 2021-07-13 Zhejiang Dahua Technology Co., Ltd. Pedestrian recognition method and related apparatus
CN112001254A (zh) * 2020-07-23 2020-11-27 Zhejiang Dahua Technology Co., Ltd. Pedestrian recognition method and related apparatus
CN111914762A (zh) * 2020-08-04 2020-11-10 Zhejiang Dahua Technology Co., Ltd. Identity recognition method and apparatus based on gait information
CN112131970A (zh) * 2020-09-07 2020-12-25 Zhejiang Normal University Identity recognition method based on multi-channel spatio-temporal networks and a jointly optimized loss
CN112434622A (zh) * 2020-11-27 2021-03-02 Zhejiang Dahua Technology Co., Ltd. Integrated pedestrian segmentation and gait recognition method based on convolutional neural networks
CN112580445A (zh) * 2020-12-03 2021-03-30 University of Electronic Science and Technology of China Human gait image view transformation method based on generative adversarial networks
CN113139499A (zh) * 2021-05-10 2021-07-20 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Gait recognition method and system based on a lightweight attention convolutional neural network
CN113591552A (zh) * 2021-06-18 2021-11-02 Xinyi Health Technology Co., Ltd. Method and system for identity recognition based on gait acceleration data
CN113963437A (zh) * 2021-10-15 2022-01-21 Wuhan Zhongzhi Digital Technology Co., Ltd. Deep-learning-based gait recognition sequence acquisition method and system
CN116524602A (zh) * 2023-07-03 2023-08-01 East China Jiaotong University Clothes-changing person re-identification method and system based on gait features
CN116524602B (zh) * 2023-07-03 2023-09-19 East China Jiaotong University Clothes-changing person re-identification method and system based on gait features

Also Published As

Publication number Publication date
US10223582B2 (en) 2019-03-05
US20170243058A1 (en) 2017-08-24

Similar Documents

Publication Publication Date Title
WO2016065534A1 (zh) Gait recognition method based on deep learning
CN109740413B (zh) Person re-identification method and apparatus, computer device, and computer storage medium
WO2021098261A1 (zh) Object detection method and apparatus
CN107194341B (zh) Face recognition method and system based on Maxout multi-convolutional neural network fusion
CN107145842B (zh) Face recognition method combining LBP feature maps with convolutional neural networks
CN103984915B (zh) Person re-identification method for surveillance video
US10262190B2 (en) Method, system, and computer program product for recognizing face
CN104299012B (zh) Gait recognition method based on deep learning
Yar et al. Optimized dual fire attention network and medium-scale fire classification benchmark
CN108960184A (zh) Person re-identification method based on heterogeneous part deep neural networks
Qu et al. Moving vehicle detection with convolutional networks in UAV videos
CN106415594A (zh) Method and system for face verification
CN109214263A (zh) Face recognition method based on feature reuse
Wang et al. Face live detection method based on physiological motion analysis
CN111414875B (zh) 3D point cloud head pose estimation system based on deep regression forests
CN111985332B (zh) Gait recognition method with an improved loss function based on deep learning
CN109697236A (zh) Multimedia data matching information processing method
Zheng et al. Fatigue driving detection based on Haar feature and extreme learning machine
Kong et al. 3D face recognition algorithm based on deep Laplacian pyramid under the normalization of epidemic control
Das et al. Human face detection in color images using HSV color histogram and WLD
CN110490053B (zh) Face attribute recognition method based on trinocular camera depth estimation
CN109711232A (zh) Deep learning person re-identification method based on multiple objective functions
Patil et al. IpSegNet: deep convolutional neural network based segmentation framework for iris and pupil
Parasnis et al. RoadScan: A Novel and Robust Transfer Learning Framework for Autonomous Pothole Detection in Roads
Karthigayani et al. A novel approach for face recognition and age estimation using local binary pattern, discriminative approach using two layered back propagation network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14904979

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15521751

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14904979

Country of ref document: EP

Kind code of ref document: A1