WO2017071083A1 - Fingerprint identification method and apparatus - Google Patents


Info

Publication number
WO2017071083A1
Authority
WO
WIPO (PCT)
Prior art keywords
fingerprint
feature
coding
training
classifier
Prior art date
Application number
PCT/CN2015/099511
Other languages
English (en)
French (fr)
Inventor
张涛
汪平仄
张胜凯
Original Assignee
小米科技有限责任公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 小米科技有限责任公司 filed Critical 小米科技有限责任公司
Priority to MX2016005225A priority Critical patent/MX361142B/es
Priority to RU2016129191A priority patent/RU2642369C2/ru
Priority to KR1020167005169A priority patent/KR101992522B1/ko
Priority to JP2017547061A priority patent/JP2018500707A/ja
Publication of WO2017071083A1 publication Critical patent/WO2017071083A1/zh

Classifications

    • G06V 40/1365 Matching; Classification
    • G06F 18/24 Classification techniques
    • G06F 18/24137 Distances to cluster centroïds
    • G06V 10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 30/19173 Classification techniques
    • G06V 40/1347 Preprocessing; Feature extraction
    • G06F 18/00 Pattern recognition
    • G06V 30/194 References adjustable by an adaptive method, e.g. learning

Definitions

  • the present disclosure relates to the field of image recognition technologies, and in particular, to a fingerprint identification method and apparatus.
  • Fingerprint identification technology emerged around the 1980s and has since matured considerably; it has been applied in civil fields since the 1990s.
  • Fingerprint recognition in the related art generally requires that the user's fingerprint not be too dry and that the fingerprint image be sufficiently clear, so as to ensure the extraction of the global feature points and local feature points of the fingerprint.
  • When the quality of the fingerprint image is poor, the global feature points and local feature points of the fingerprint may not be recognized, resulting in inaccurate fingerprint recognition, which constrains the user experience of fingerprint identification products to a certain extent.
  • the embodiments of the present disclosure provide a fingerprint identification method and device, which are used to improve the accuracy of fingerprint recognition on low-quality fingerprint images.
  • a fingerprint identification method including:
  • feature extraction is performed, by an automatic codec network, on a first fingerprint image collected by a fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature has the same dimension as the second fingerprint feature;
  • the automatic codec network includes at least one coding layer, and the method may further include:
  • the coding feature parameters of each of the at least one coding layer are trained by the unlabeled fingerprint samples, and the coding feature representation parameters corresponding to the coding layers of each layer are obtained;
  • when the reconstruction error reaches a minimum value, the training of the automatic codec network is stopped, and the automatic codec network after the first training is obtained.
  • a classifier is connected to the last coding layer of the automatic codec network after the first training, and the method may further include:
  • the training of the classifier is stopped when the reconstruction error between the classifier's output and the labeled fingerprint sample is minimal.
  • a classifier is connected to the last coding layer of the automatic codec network after the first training, and the method may further include:
  • the method may further include:
  • the determining whether the first fingerprint image and the second fingerprint image are the same fingerprint according to the cosine distance of the third fingerprint feature and the fourth fingerprint feature may include:
  • if the cosine distance is greater than the preset threshold, determining that the first fingerprint image and the second fingerprint image are the same fingerprint; if the cosine distance is less than or equal to the preset threshold, determining that the first fingerprint image and the second fingerprint image are different fingerprints.
  • a fingerprint identification apparatus including:
  • the first extraction module is configured to perform feature extraction, by an automatic codec network, on the first fingerprint image collected by the fingerprint sensor and the second fingerprint image stored in the database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimension;
  • the dimension reduction processing module is configured to perform a dimensionality reduction process on the first fingerprint feature and the second fingerprint feature extracted by the first extraction module, to obtain a third fingerprint feature and a fourth fingerprint feature, respectively,
  • the dimensions of the third fingerprint feature and the fourth fingerprint feature are the same, and are smaller than the dimensions of the first fingerprint feature and the second fingerprint feature;
  • the identification module is configured to determine whether the first fingerprint image and the second fingerprint image are the same fingerprint according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature obtained by the dimension reduction processing module.
  • the automatic codec network includes at least one coding layer
  • the apparatus may further include:
  • the first training module is configured to train the coding feature parameters of each of the at least one coding layer by using the unlabeled fingerprint samples, to obtain the coding feature representation parameters corresponding to the coding layers of each layer;
  • the first reconstruction module is configured to perform data reconstruction on the coding feature representation parameter corresponding to each coding layer trained by the first training module, through the decoding layer corresponding to that coding layer, to obtain the fingerprint reconstruction data of the unlabeled fingerprint samples;
  • a first determining module configured to determine the reconstruction error between the fingerprint reconstruction data obtained by the first reconstruction module and the unlabeled fingerprint sample;
  • the adjusting module is configured to adjust the coding feature representation parameter of each coding layer according to the reconstruction error determined by the first determining module;
  • the first control module is configured to stop training the automatic codec network when the reconstruction error determined by the first determining module reaches a minimum value, to obtain an automatic codec network after the first training.
  • a classifier is connected to the last coding layer of the automatic codec network after the first training, and the device may further include:
  • a first processing module configured to input a labeled fingerprint sample to the automatic codec network after the first training to obtain a first output result;
  • a second training module configured to input the first output result obtained by the first processing module to the classifier, and to train the classifier with the labeled fingerprint sample;
  • the second control module is configured to control the second training module to stop training the classifier when the reconstruction error between the classifier's output and the labeled fingerprint sample is minimal.
  • a classifier is connected to the last coding layer of the automatic codec network after the first training, and the device may further include:
  • a second processing module configured to input the labeled fingerprint sample to the automatic codec network after the first training to obtain a second output result;
  • a third training module configured to input the second output result obtained by the second processing module to the classifier, to train the classifier with the labeled fingerprint sample, and to fine-tune the coding feature representation parameters of each coding layer of the automatic codec network after the first training;
  • a third control module configured to control the third training module to stop training the classifier and to stop fine-tuning the coding feature representation parameters of each coding layer when the reconstruction error between the classifier's output and the labeled fingerprint sample is minimal.
  • the apparatus may further include:
  • a second extraction module configured to extract, by the trained automatic codec network, a coding feature representation parameter of a first set dimension of the unlabeled fingerprint sample
  • a fourth training module configured to perform linear discriminant analysis (LDA) training on the coding feature representation parameters of the first set dimension extracted by the second extraction module, to obtain a projection matrix of a second set dimension of the LDA.
  • the identification module may include:
  • a comparison submodule configured to compare the cosine distance between the third fingerprint feature and the fourth fingerprint feature with a preset threshold;
  • a first determining submodule configured to determine that the first fingerprint image and the second fingerprint image are the same fingerprint if a comparison result of the comparison submodule indicates that the cosine distance is greater than the preset threshold
  • a second determining submodule configured to determine that the first fingerprint image and the second fingerprint image are different fingerprints if a comparison result of the comparison submodule indicates that the cosine distance is less than or equal to the preset threshold.
  • a fingerprint identification apparatus including:
  • a memory for storing processor executable instructions
  • a processor configured to:
  • perform feature extraction, by the automatic codec network, on the first fingerprint image collected by the fingerprint sensor and the second fingerprint image stored in the database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature has the same dimension as the second fingerprint feature;
  • the technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: since the AED network has been trained on a large number of fingerprint images, the first fingerprint feature of the first fingerprint image and the second fingerprint feature of the second fingerprint image extracted by the AED network may include fingerprint features that are advantageous for fingerprint recognition, avoiding the related art's reliance on the global feature points and local feature points of the fingerprint to realize fingerprint recognition. Because the AED network recognizes features that are advantageous for fingerprint recognition, fingerprint recognition can still be realized even when the quality of the first fingerprint image is low and the global and local feature points of the fingerprint cannot be extracted, thereby greatly improving the accuracy of fingerprint recognition on low-quality fingerprint images; in addition, performing dimensionality reduction on the first fingerprint feature and the second fingerprint feature can greatly reduce the computational complexity of the fingerprint recognition process.
  • FIG. 1A is a flowchart of a fingerprint identification method, according to an exemplary embodiment.
  • FIG. 1B is a schematic structural diagram of an AED network according to an exemplary embodiment.
  • FIG. 2A is a flow chart of a fingerprint identification method, according to an exemplary embodiment.
  • FIG. 2B is a schematic structural diagram of an AED network according to an exemplary embodiment.
  • FIG. 2C is a schematic diagram showing how to train an AED network, according to an exemplary embodiment.
  • FIG. 3A is a flow diagram showing how parameters of an AED network are fine tuned by a tagged fingerprint sample, according to an exemplary embodiment.
  • FIG. 3B is a flowchart showing how the parameters of a classifier connected to an AED network are fine-tuned by labeled fingerprint samples, according to yet another exemplary embodiment.
  • FIG. 3C is a schematic structural diagram of an AED network and a classifier according to still another exemplary embodiment.
  • FIG. 4 is a flowchart of a fingerprint identification method, according to an exemplary embodiment.
  • FIG. 5 is a block diagram of a fingerprint identification apparatus, according to an exemplary embodiment.
  • FIG. 6 is a block diagram of another fingerprint recognition apparatus, according to an exemplary embodiment.
  • FIG. 7 is a block diagram of a fingerprint recognition device, according to an exemplary embodiment.
  • FIG. 1A is a flowchart of a fingerprint identification method according to an exemplary embodiment
  • FIG. 1B is a schematic structural diagram of an automatic codec network according to an exemplary embodiment
  • the fingerprint identification method may be applied to a fingerprint identification device equipped with a fingerprint sensor (e.g., a smartphone or tablet with fingerprint authentication, or a fingerprint time-attendance machine).
  • the fingerprint identification method includes the following steps S101-S103:
  • in step S101, feature extraction is performed, by the automatic codec network, on the first fingerprint image collected by the fingerprint sensor and the second fingerprint image stored in the database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature has the same dimension as the second fingerprint feature.
  • fingerprint images of users within a certain range that have already been acquired may be stored in the database, for example, the fingerprint images of all employees of company A. When user B needs fingerprint authentication, the first fingerprint image of user B can be collected by the fingerprint sensor.
  • an Auto Encode-Decode (AED) network may include an encoding layer and a decoding layer. The first fingerprint image is input to the encoding layer, whose output is the coding feature of the first fingerprint image; that coding feature is then input to the decoding layer corresponding to the encoding layer, and the output of the decoding layer is the first fingerprint feature of the first fingerprint image.
  • the second fingerprint image in the database can be processed in the same manner as the first fingerprint image to obtain the second fingerprint feature of the second fingerprint image.
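The extraction step described above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the layer sizes (a 1024-dimensional flattened image mapped to a 500-dimensional feature) and the random weights are assumptions standing in for a trained coding layer.

```python
import numpy as np

# Illustrative sketch only: one coding layer of an Auto Encode-Decode (AED)
# network mapping a flattened fingerprint image to a fixed-dimension feature.
# The weights below are random placeholders standing in for parameters
# learned by the unsupervised training described later in this disclosure.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(image_vec, W, b):
    # Coding layer: produces the fingerprint feature for one image.
    return sigmoid(W @ image_vec + b)

input_dim, feature_dim = 1024, 500       # e.g. a 32x32 image -> a 500-dim feature
W = rng.normal(0.0, 0.01, (feature_dim, input_dim))
b = np.zeros(feature_dim)

first_image = rng.random(input_dim)      # fingerprint collected by the sensor
second_image = rng.random(input_dim)     # fingerprint stored in the database
first_feature = encode(first_image, W, b)
second_feature = encode(second_image, W, b)
```

Because both images pass through the same coding layer, the two features automatically have the same dimension, as the method requires.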
  • in step S102, the first fingerprint feature and the second fingerprint feature are subjected to dimensionality reduction processing to obtain a third fingerprint feature and a fourth fingerprint feature, respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimension, which is smaller than the dimension of the first fingerprint feature and the second fingerprint feature.
  • the dimensionality of the first fingerprint feature and the second fingerprint feature may be reduced by an already trained Linear Discriminant Analysis (LDA) projection.
  • before this, the coding feature representation parameters of a first set dimension of the unlabeled fingerprint samples are extracted by the trained automatic codec network, and LDA training is performed on the coding feature representation parameters of the first set dimension to obtain a projection matrix of a second set dimension of the LDA.
  • for example, an unlabeled fingerprint sample is output from the AED network as a coding feature representation parameter with a first set dimension of 500; the trained LDA can reduce it to a coding feature representation parameter with a second set dimension of 200, thereby reducing the computational complexity of calculating the cosine distance.
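The 500-to-200-dimension reduction can be sketched as a single matrix multiplication. A trained LDA would supply the projection matrix; in this sketch a random matrix stands in for it, so only the shapes are meaningful.

```python
import numpy as np

# Sketch of the dimension-reduction step. A trained LDA would supply the
# projection matrix; here a random 500x200 matrix is a stand-in for it.
rng = np.random.default_rng(1)
lda_projection = rng.normal(size=(500, 200))   # stand-in for the trained LDA matrix

def reduce_dim(feature):
    # Project a 500-dim fingerprint feature down to 200 dimensions.
    return feature @ lda_projection

third_feature = reduce_dim(rng.random(500))    # reduced first fingerprint feature
fourth_feature = reduce_dim(rng.random(500))   # reduced second fingerprint feature
```

The subsequent cosine-distance computation then operates on 200-dimensional vectors instead of 500-dimensional ones, which is the source of the complexity reduction.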
  • in step S103, it is determined whether the first fingerprint image and the second fingerprint image are the same fingerprint according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature.
  • the cosine distance between the third fingerprint feature and the fourth fingerprint feature may be compared with a preset threshold: if the cosine distance is greater than the preset threshold, it is determined that the first fingerprint image and the second fingerprint image are the same fingerprint; if the cosine distance is less than or equal to the preset threshold, it is determined that the first fingerprint image and the second fingerprint image are different fingerprints.
  • for example, the first fingerprint image of user A is collected by the fingerprint sensor 11; the first fingerprint image and the second fingerprint image already stored in the database 12 are input to the automatic codec network 13, which outputs the first fingerprint feature of the first fingerprint image and the second fingerprint feature of the second fingerprint image.
  • the first fingerprint feature and the second fingerprint feature are both 500-dimensional fingerprint features.
  • the 500-dimensional first fingerprint feature and second fingerprint feature are dimension-reduced by the projection matrix of the LDA module 14: the LDA module 14 reduces them from 500 dimensions to 200 dimensions, that is, it outputs the third fingerprint feature obtained by reducing the dimension of the first fingerprint feature and the fourth fingerprint feature obtained by reducing the dimension of the second fingerprint feature. In this example, the third fingerprint feature and the fourth fingerprint feature are both 200-dimensional.
  • the distance calculation module 15 calculates the cosine distance between the 200-dimensional third fingerprint feature and fourth fingerprint feature, and the result output module 16 compares the cosine distance with a threshold. If the cosine distance is greater than the threshold, the result output module 16 outputs the result that the first fingerprint image and the second fingerprint image belong to the same fingerprint; if the cosine distance is less than or equal to the threshold, the result output module 16 outputs the result that the first fingerprint image and the second fingerprint image belong to different fingerprints.
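The work of the distance calculation module and the result output module can be sketched in a few lines. Note that the text uses "cosine distance" for the cosine of the angle between the two vectors, so a larger value means a closer match; the 0.8 threshold below is an illustrative assumption, not a value from the disclosure.

```python
import numpy as np

# Sketch of modules 15 and 16: compute the cosine of the angle between the
# two reduced features (called the cosine distance in this disclosure) and
# compare it with a preset threshold. The 0.8 threshold is an assumption.
def cosine_distance(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_fingerprint(third_feature, fourth_feature, threshold=0.8):
    # Greater than the threshold: same fingerprint; otherwise: different.
    return cosine_distance(third_feature, fourth_feature) > threshold

matching = np.ones(200)
assert same_fingerprint(matching, matching)        # identical features: same fingerprint
assert not same_fingerprint(matching, -matching)   # opposite features: different fingerprints
```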
  • in this embodiment, the first fingerprint feature of the first fingerprint image and the second fingerprint feature of the second fingerprint image extracted by the AED network may include features favorable for fingerprint recognition, avoiding the related art's reliance on the global feature points and local feature points of the fingerprint to realize fingerprint recognition. Because the AED network identifies features favorable for fingerprint recognition, fingerprint recognition can still be realized when the quality of the first fingerprint image is low and the global and local feature points of the fingerprint cannot be extracted, which greatly improves the accuracy of fingerprint recognition on low-quality fingerprint images; performing dimensionality reduction on the first fingerprint feature and the second fingerprint feature can greatly reduce the computational complexity of the fingerprint identification process.
  • the automatic codec network includes at least one coding layer
  • the fingerprint identification method may further include:
  • the coding feature parameters of each coding layer in the at least one coding layer are trained by the unlabeled fingerprint samples, and the coding feature representation parameters corresponding to each layer of the coding layer are obtained;
  • when the reconstruction error reaches the minimum value, the training of the automatic codec network is stopped, and the automatic codec network after the first training is obtained.
  • the last coding layer of the automatic codec network after the first training is connected with a classifier, and the method may further include:
  • the training of the classifier is stopped when the reconstruction error between the classifier's output and the labeled fingerprint sample is minimal.
  • the last coding layer of the automatic codec network after the first training is connected with a classifier, and the method may further include:
  • the method may further include:
  • linear discriminant analysis (LDA) training is performed on the coding feature representation parameters of the first set dimension, and a projection matrix of the second set dimension of the LDA is obtained.
  • determining whether the first fingerprint image and the second fingerprint image are the same fingerprint according to the cosine distance of the third fingerprint feature and the fourth fingerprint feature may include:
  • if the cosine distance is less than or equal to the preset threshold, it is determined that the first fingerprint image and the second fingerprint image are different fingerprints.
  • the above method provided by the embodiments of the present disclosure avoids the related art's reliance on the global feature points and local feature points of the fingerprint to realize fingerprint recognition, ensuring that fingerprint recognition can still be realized when the quality of the first fingerprint image is low and the global and local feature points cannot be extracted. This greatly improves the accuracy of fingerprint recognition on low-quality fingerprint images and can greatly reduce the computational complexity of the fingerprint recognition process.
  • FIG. 2A is a flowchart of a fingerprint identification method according to an exemplary embodiment
  • FIG. 2B is a schematic structural diagram of an AED network according to an exemplary embodiment
  • FIG. 2C is a schematic diagram showing how to train an AED network according to an exemplary embodiment.
  • the fingerprint identification method includes the following steps:
  • in step S201, the coding feature parameters of each coding layer in the at least one coding layer are trained with unlabeled fingerprint samples to obtain the coding feature representation parameters corresponding to each coding layer.
  • in step S202, data reconstruction is performed on the coding feature representation parameter corresponding to each coding layer through the decoding layer corresponding to that coding layer, obtaining the fingerprint reconstruction data of the unlabeled fingerprint samples.
  • in step S203, the reconstruction error between the fingerprint reconstruction data and the unlabeled fingerprint sample is determined.
  • in step S204, the coding feature representation parameters of each coding layer are adjusted according to the reconstruction error.
  • in step S205, the training of the automatic codec network is stopped when the reconstruction error reaches a minimum value.
  • the AED network includes at least one coding layer.
  • the AED network 20 as shown in FIG. 2B includes three coding layers (the coding layer 21, the coding layer 22, and the coding layer 23, respectively).
  • taking the training of the encoding layer 21 as an example: each unlabeled fingerprint sample is input to the encoding layer 21, which outputs a coding feature representation parameter of the unlabeled fingerprint sample, that is, a representation of the input sample. To verify whether the coding feature representation parameter is consistent with the unlabeled fingerprint sample, the coding feature representation parameter can be input to the decoding layer 24, and the reconstruction error between the output of the decoding layer 24 and the unlabeled fingerprint sample can be calculated by the reconstruction error calculation module 25. If the reconstruction error has not reached its minimum value, the coding feature representation parameter of the encoding layer 21 is adjusted according to the reconstruction error until the reconstruction error reaches a minimum, at which point the coding feature representation parameter of the encoding layer 21 can be regarded as able to represent the unlabeled fingerprint sample.
  • similarly, the coding layer 22 and the coding layer 23 can be verified through their respective corresponding decoding layers to determine whether their coding feature representation parameters are consistent with the unlabeled fingerprint sample, until the coding layer 22 and the coding layer 23 can also represent the unlabeled fingerprint sample; this disclosure does not describe that process in detail.
  • after training, the AED network can encode a fingerprint image and use the resulting coding features to represent the fingerprint image.
  • the trained AED network can identify the image features in the fingerprint image that are advantageous for fingerprint recognition, avoiding fingerprint recognition errors caused by the failure to extract global and local feature points from a low-quality fingerprint image.
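The training loop for a single coding layer (steps S201 to S205) can be sketched with a toy tied-weight autoencoder: a sigmoid encoder, a linear decoder reusing the same weights, and gradient descent on the mean-squared reconstruction error. The data, the layer sizes (16 inputs, 8 coding units), and the learning rate are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of unsupervised training for one coding layer (e.g. layer 21):
# a tied-weight autoencoder trained by gradient descent on the mean-squared
# reconstruction error between the decoder output and the input samples.
rng = np.random.default_rng(2)
samples = rng.random((64, 16))            # unlabeled fingerprint samples (flattened)
W = rng.normal(0.0, 0.1, (8, 16))         # coding layer: 16 inputs -> 8 features
lr = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

errors = []
for _ in range(200):
    h = sigmoid(samples @ W.T)            # coding feature representation
    recon = h @ W                         # decoding layer reconstructs the input
    errors.append(np.mean((recon - samples) ** 2))   # reconstruction error
    d = 2.0 * (recon - samples) / samples.size
    # Gradient flows through both the decoder term (h.T @ d) and the encoder path.
    grad = h.T @ d + ((d @ W.T) * h * (1.0 - h)).T @ samples
    W -= lr * grad                        # adjust the coding feature parameters
```

In the stacked setting described above, the output `h` of this trained layer would become the input for training the next coding layer (layer 22, then layer 23) in the same greedy fashion.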
  • FIG. 3A is a flow diagram showing how to fine-tune the parameters of an AED network by labeled fingerprint samples, according to an exemplary embodiment.
  • FIG. 3B is a flow diagram showing how to fine-tune the parameters of a classifier connected to an AED network by labeled fingerprint samples, according to yet another exemplary embodiment.
  • FIG. 3C is a schematic structural diagram of an AED network and a classifier according to still another exemplary embodiment.
  • the fingerprint identification method includes the following steps:
  • in step S301, the labeled fingerprint sample is input to the automatic codec network after the first training to obtain a first output result.
  • in step S302, the first output result is input to the classifier, and the classifier is trained with the labeled fingerprint sample.
  • in step S303, the training of the classifier is stopped when the reconstruction error between the classifier's output and the labeled fingerprint sample is minimal.
  • through the training described above, the coding feature representation parameters of the multiple coding layers of the AED network 20 (the coding layer 21, the coding layer 22, and the coding layer 23 shown in FIG. 3C) can be obtained, and each coding layer yields a different representation of the unlabeled fingerprint samples.
  • Those skilled in the art can understand that the disclosure does not limit the number of layers of the AED network.
  • a classifier 31 can be added at the topmost coding layer of the AED network (e.g., coding layer 23).
  • the classifier 31 may be, for example, a logistic regression classifier, an SVM, or the like.
  • the classifier 31 is trained by the supervised training method of a standard multi-layer neural network (for example, gradient descent) using the first output result of the labeled fingerprint sample; when the reconstruction error between the classifier's output, calculated by the reconstruction error calculation module 32, and the labeled fingerprint sample is minimal, the training of the classifier 31 is stopped, thereby enabling the AED network 20 to implement the classification function.
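The supervised step can be sketched as gradient-descent training of a logistic regression classifier (one of the classifier types named above) on the output of the frozen topmost coding layer. The encoded features and binary labels below are synthetic stand-ins for labeled fingerprint samples, so only the training mechanics are meaningful.

```python
import numpy as np

# Sketch of training classifier 31 by gradient descent on top of the frozen
# coding layer's output. Logistic regression stands in for the classifier;
# the features and labels are synthetic placeholders for labeled samples.
rng = np.random.default_rng(3)
encoded = rng.random((100, 8))            # "first output result" of the AED network
labels = (encoded.sum(axis=1) > 4.0).astype(float)   # toy binary labels

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, b, lr = np.zeros(8), 0.0, 0.5
for _ in range(500):
    p = sigmoid(encoded @ w + b)          # classifier output
    w -= lr * encoded.T @ (p - labels) / len(labels)
    b -= lr * float(np.mean(p - labels))

preds = sigmoid(encoded @ w + b) > 0.5
accuracy = float(np.mean(preds == labels.astype(bool)))
```

Note that only `w` and `b` change here; the coding-layer parameters stay fixed, which is what distinguishes this variant from the fine-tuning variant described next.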
  • the fingerprint identification method includes the following steps:
  • in step S311, the labeled fingerprint sample is input to the automatic codec network after the first training to obtain a first output result.
  • in step S312, the first output result is input to the classifier; the classifier is trained with the labeled fingerprint sample, and the coding feature representation parameters of each coding layer of the automatic codec network after the first training are fine-tuned.
  • in step S313, the training of the classifier and the fine-tuning of the coding feature representation parameters of each coding layer are stopped when the reconstruction error between the classifier's output and the labeled fingerprint sample is minimal.
  • Similar to the description of FIG. 3A, a standard supervised training method for multi-layer neural networks (e.g., gradient descent) uses the first output result of the labeled fingerprint samples to train the classifier 31 and to fine-tune the coding-feature representation parameters of coding layer 21, coding layer 22, and coding layer 23. When the reconstruction error between the classifier output computed by the reconstruction error calculation module 32 and the labeled fingerprint samples is at its minimum, training of the classifier 31 is stopped.
  • On the basis of classification, the AED network 20 can also be fine-tuned; when the labeled fingerprint samples are sufficiently numerous, the AED network can achieve end-to-end learning, thereby improving the accuracy of the AED network and the classifier in fingerprint recognition.
  • FIG. 4 is a flowchart of a fingerprint identification method according to an exemplary embodiment. This embodiment uses the above method provided by the embodiments of the present disclosure to illustrate, by way of example, how fingerprint recognition is performed using a cosine distance. As shown in FIG. 4, the method includes the following steps:
  • In step S401, feature extraction is performed, through the auto-encode-decode network, on the first fingerprint image collected by the fingerprint sensor and the second fingerprint image stored in the database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimensionality.
  • In step S402, dimensionality reduction is applied to the first fingerprint feature and the second fingerprint feature to obtain a third fingerprint feature and a fourth fingerprint feature, respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimensionality, which is smaller than that of the first fingerprint feature and the second fingerprint feature.
  • In step S403, the cosine distance between the third fingerprint feature and the fourth fingerprint feature is compared with a preset threshold. If the cosine distance is greater than the preset threshold, step S404 is performed; if the cosine distance is less than or equal to the preset threshold, step S405 is performed.
  • step S404 if the cosine distance is greater than the preset threshold, it is determined that the first fingerprint image and the second fingerprint image are the same fingerprint.
  • step S405 if the cosine distance is less than or equal to the preset threshold, it is determined that the first fingerprint image and the second fingerprint image are different fingerprints.
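The comparison in steps S403-S405 can be sketched as a small helper. Note that the patent's convention treats a larger cosine distance as indicating the same fingerprint, so the quantity compared here is the cosine similarity of the two reduced feature vectors; the function name and threshold value are illustrative assumptions.

```python
import numpy as np

def same_fingerprint(feat3, feat4, threshold):
    """Compare the cosine distance of the third and fourth fingerprint
    features with a preset threshold (steps S403-S405)."""
    cos = float(np.dot(feat3, feat4) /
                (np.linalg.norm(feat3) * np.linalg.norm(feat4)))
    # Greater than the threshold -> same fingerprint (S404);
    # less than or equal -> different fingerprints (S405).
    return cos > threshold
```

For example, identical vectors give a cosine value of 1.0 and are accepted at any threshold below 1, while orthogonal vectors give 0.0 and are rejected.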
  • In step S403, a suitable preset threshold may be obtained by training on a large number of fingerprint samples in a sample database; the preset threshold may correspond to a recognition error rate acceptable to the user. For example, if the sample database contains 100,000 intra-class pairs and 1,000,000 inter-class pairs, then in order to maintain a recognition error rate of one in a thousand, the cosine distance of each pair can be computed to obtain a value between 0 and 1. There are thus 100,000 intra-class cosine-distance values and 1,000,000 inter-class values, i.e., 1,100,000 values in total, and a suitable preset threshold is determined from these 1,100,000 values combined with the recognition error rate.
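One way to pick such a threshold can be sketched as follows. This is an assumption-laden illustration, not the patent's procedure: the pair counts follow the example in the text but are scaled down, the score distributions are synthetic Gaussians clipped to [0, 1], and the threshold is simply the empirical quantile of the inter-class scores that keeps the false-accept rate at or below the target error rate.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic cosine values: same-finger (intra-class) pairs score high,
# different-finger (inter-class) pairs score low.  Counts are scaled
# down from the text's 100,000 / 1,000,000 example.
intra = np.clip(rng.normal(0.8, 0.1, 1000), 0.0, 1.0)
inter = np.clip(rng.normal(0.3, 0.1, 10000), 0.0, 1.0)

target_far = 1e-3   # acceptable recognition error rate (one in a thousand)

# Smallest threshold at which the fraction of inter-class pairs scoring
# above it does not exceed the target error rate.
threshold = float(np.quantile(inter, 1.0 - target_far))
far = float(np.mean(inter > threshold))   # achieved false-accept rate
```

With real data, `intra` and `inter` would be the 1,100,000 pairwise cosine values described above rather than synthetic draws.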
  • On the basis of the beneficial effects of the above embodiments, this embodiment identifies the fingerprint by the cosine distance between the third fingerprint feature and the fourth fingerprint feature; since the preset threshold can be obtained by training on a large number of fingerprint samples combined with a user-acceptable recognition error rate, the user experience of fingerprint identification products is improved to some extent.
  • FIG. 5 is a block diagram of a fingerprint identification apparatus according to an exemplary embodiment. As shown in FIG. 5, the fingerprint identification apparatus includes:
  • the first extraction module 51 is configured to perform feature extraction on the first fingerprint image collected by the fingerprint sensor and the second fingerprint image stored in the database through the automatic code decoding network, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimension;
  • the dimension reduction processing module 52 is configured to perform dimensionality reduction on the first fingerprint feature and the second fingerprint feature extracted by the first extraction module 51, to obtain a third fingerprint feature and a fourth fingerprint feature, respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimensionality, which is smaller than that of the first fingerprint feature and the second fingerprint feature;
  • the identification module 53 is configured to determine, according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature after dimensionality reduction by the dimension reduction processing module 52, whether the first fingerprint image and the second fingerprint image are the same fingerprint.
  • FIG. 6 is a block diagram of another fingerprint identification apparatus according to an exemplary embodiment. As shown in FIG. 6, on the basis of the embodiment shown in FIG. 5, in one embodiment, the auto-encode-decode network includes at least one coding layer, and the apparatus may further include:
  • the first training module 54 is configured to train the coding feature parameters of each coding layer of the at least one coding layer with unlabeled fingerprint samples, to obtain the coding-feature representation parameters corresponding to each coding layer;
  • the first reconstruction module 55 is configured to perform data reconstruction, through the decoding layer corresponding to each coding layer, on the coding-feature representation parameters obtained by the first training module 54, to obtain fingerprint reconstruction data of the unlabeled fingerprint samples;
  • the first determining module 56 is configured to determine a reconstruction error between the fingerprint reconstruction data obtained by the first reconstruction module 55 and the unlabeled fingerprint samples;
  • the adjusting module 57 is configured to adjust the coding-feature representation parameters of each coding layer according to the reconstruction error determined by the first determining module 56;
  • the first control module 58 is configured to stop training the auto-encode-decode network when the reconstruction error determined by the first determining module 56 reaches a minimum, to obtain a first-trained auto-encode-decode network.
  • In one embodiment, a classifier may also be connected to the last coding layer of the first-trained auto-encode-decode network, and the apparatus may further include:
  • the first processing module 59, configured to input labeled fingerprint samples into the first-trained auto-encode-decode network to obtain a first output result;
  • the second training module 60, configured to input the first output result obtained by the first processing module 59 into the classifier, and to train the classifier with the labeled fingerprint samples;
  • the second control module 61, configured to control the second training module 60 to stop training the classifier when the reconstruction error between the classifier output and the labeled fingerprint samples is at its minimum.
  • In one embodiment, a classifier may also be connected to the last coding layer of the first-trained auto-encode-decode network, and the apparatus may further include:
  • the second processing module 62, configured to input labeled fingerprint samples into the first-trained auto-encode-decode network to obtain a second output result;
  • the third training module 63, configured to input the second output result obtained by the second processing module 62 into the classifier, to train the classifier with the labeled fingerprint samples, and to fine-tune the coding-feature representation parameters of each coding layer of the first-trained auto-encode-decode network;
  • the third control module 64, configured to control the third training module 63 to stop training the classifier and fine-tuning the coding-feature representation parameters of each coding layer when the reconstruction error between the classifier output and the labeled fingerprint samples is at its minimum.
  • the apparatus may further include:
  • the second extraction module 65 is configured to extract, by the trained automatic code decoding network, the coding feature representation parameter of the first set dimension of the unlabeled fingerprint sample;
  • the fourth training module 66 is configured to perform linear discriminant analysis LDA training on the coded feature representation parameters of the first set dimension extracted by the second extraction module 65 to obtain a projection matrix of the second set dimension of the LDA.
  • the identification module 53 can include:
  • the comparison sub-module 531 is configured to compare the cosine distance of the third fingerprint feature and the fourth fingerprint feature with a preset threshold
  • the first determining sub-module 532 is configured to determine that the first fingerprint image and the second fingerprint image are the same fingerprint if the comparison result of the comparison sub-module 531 indicates that the cosine distance is greater than a preset threshold;
  • the second determining submodule 533 is configured to determine that the first fingerprint image and the second fingerprint image are different fingerprints if the comparison result of the comparison submodule 531 indicates that the cosine distance is less than or equal to the preset threshold.
  • FIG. 7 is a block diagram of a fingerprint recognition device, according to an exemplary embodiment.
  • device 700 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
  • apparatus 700 can include one or more of the following components: processing component 702, memory 704, power component 706, multimedia component 708, audio component 710, input/output (I/O) interface 712, sensor component 714, And a communication component 716.
  • Processing component 702 typically controls the overall operation of device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
  • Processing component 702 can include one or more processors 720 to execute instructions to perform all or part of the steps described above.
  • processing component 702 can include one or more modules to facilitate interaction between processing component 702 and other components.
  • processing component 702 can include a multimedia module to facilitate interaction between multimedia component 708 and processing component 702.
  • Memory 704 is configured to store various types of data to support operation at device 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phone book data, messages, pictures, videos, and the like. Memory 704 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read only memory (EEPROM), erasable Programmable Read Only Memory (EPROM), Programmable Read Only Memory (PROM), Read Only Memory (ROM), Magnetic Memory, Flash Memory, Disk or Optical Disk.
  • Power component 706 provides power to various components of device 700.
  • Power component 706 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 700.
  • the multimedia component 708 includes a screen that provides an output interface between the device 700 and the user.
  • the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
  • the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
  • the multimedia component 708 includes a front camera and/or a rear camera. When the device 700 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
  • the audio component 710 is configured to output and/or input an audio signal.
  • audio component 710 includes a microphone (MIC) that is configured to receive an external audio signal when device 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
  • the received audio signal may be further stored in memory 704 or transmitted via communication component 716.
  • audio component 710 also includes a speaker for outputting an audio signal.
  • the I/O interface 712 provides an interface between the processing component 702 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
  • Sensor assembly 714 includes one or more sensors for providing device 700 with various aspects of status assessment.
  • sensor component 714 can detect an open/closed state of device 700 and the relative positioning of components (such as the display and keypad of device 700); sensor component 714 can also detect a change in position of device 700 or a component of device 700, the presence or absence of user contact with device 700, the orientation or acceleration/deceleration of device 700, and temperature changes of device 700.
  • Sensor assembly 714 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
  • Sensor component 714 can also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
  • the sensor component 714 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 716 is configured to facilitate wired or wireless communication between device 700 and other devices.
  • the device 700 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
  • communication component 716 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
  • the communication component 716 also includes a near field communication (NFC) module to facilitate short range communication.
  • the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
  • apparatus 700 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
  • In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 704 including instructions, executable by the processor 720 of the apparatus 700 to perform the above methods.
  • the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.


Abstract

A fingerprint identification method and apparatus. The method includes: performing feature extraction, through an auto-encode-decode network, on a first fingerprint image collected by a fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image (S101), wherein the first fingerprint feature and the second fingerprint feature have the same dimensionality; performing dimensionality reduction on the first fingerprint feature and the second fingerprint feature to obtain a third fingerprint feature and a fourth fingerprint feature, respectively (S102), wherein the third fingerprint feature and the fourth fingerprint feature have the same dimensionality; and determining, according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature, whether the first fingerprint image and the second fingerprint image are the same fingerprint (S103). The method and apparatus can improve the accuracy of fingerprint recognition for low-quality fingerprint images.

Description

Fingerprint identification method and apparatus
This application is based on and claims priority to Chinese Patent Application No. 201510712896.0, filed on October 28, 2015, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of image recognition technology, and in particular to a fingerprint identification method and apparatus.
Background
Since research on fingerprint recognition began around 1980, fingerprint recognition has, since the 1990s, become very mature and widely applied in both civilian and military fields. However, fingerprint recognition in the related art usually requires that the user's fingerprint not be too dry and that the fingerprint image be sufficiently clear, so as to ensure extraction of the fingerprint's global feature points and local feature points. When the fingerprint image quality is poor, the global and local feature points cannot be recognized and the final fingerprint recognition becomes inaccurate, which to some extent constrains the user experience of fingerprint identification products.
Summary
To overcome the problems in the related art, embodiments of the present disclosure provide a fingerprint identification method and apparatus to improve the accuracy of fingerprint recognition for low-quality fingerprint images.
According to a first aspect of the embodiments of the present disclosure, a fingerprint identification method is provided, including:
performing feature extraction, through an auto-encode-decode network, on a first fingerprint image collected by a fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimensionality;
performing dimensionality reduction on the first fingerprint feature and the second fingerprint feature to obtain a third fingerprint feature and a fourth fingerprint feature, respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimensionality, which is smaller than that of the first fingerprint feature and the second fingerprint feature;
determining, according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature, whether the first fingerprint image and the second fingerprint image are the same fingerprint.
In one embodiment, the auto-encode-decode network includes at least one coding layer, and the method may further include:
training the coding feature parameters of each coding layer of the at least one coding layer with unlabeled fingerprint samples, to obtain coding-feature representation parameters corresponding to each coding layer;
performing data reconstruction on the coding-feature representation parameters corresponding to each coding layer through the decoding layer corresponding to that coding layer, to obtain fingerprint reconstruction data of the unlabeled fingerprint samples;
determining a reconstruction error between the fingerprint reconstruction data and the unlabeled fingerprint samples;
adjusting the coding-feature representation parameters of each coding layer according to the reconstruction error;
stopping training of the auto-encode-decode network when the reconstruction error reaches a minimum, to obtain a first-trained auto-encode-decode network.
In one embodiment, a classifier is connected to the last coding layer of the first-trained auto-encode-decode network, and the method may further include:
inputting labeled fingerprint samples into the first-trained auto-encode-decode network to obtain a second output result;
inputting the second output result into the classifier, and training the classifier with the labeled fingerprint samples;
stopping training of the classifier when the reconstruction error between the classifier output and the labeled fingerprint samples is at its minimum.
In one embodiment, a classifier is connected to the last coding layer of the first-trained auto-encode-decode network, and the method may further include:
inputting labeled fingerprint samples into the first-trained auto-encode-decode network to obtain a first output result;
inputting the first output result into the classifier, training the classifier with the labeled fingerprint samples, and fine-tuning the coding-feature representation parameters of each coding layer of the first-trained auto-encode-decode network;
stopping the training of the classifier and the fine-tuning of the coding-feature representation parameters of each coding layer when the reconstruction error between the classifier output and the labeled fingerprint samples is at its minimum.
In one embodiment, the method may further include:
extracting, through the trained auto-encode-decode network, coding-feature representation parameters of a first set dimensionality of the unlabeled fingerprint samples;
performing linear discriminant analysis (LDA) training on the coding-feature representation parameters of the first set dimensionality, to obtain a projection matrix of a second set dimensionality of the LDA.
In one embodiment, determining, according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature, whether the first fingerprint image and the second fingerprint image are the same fingerprint may include:
comparing the cosine distance between the third fingerprint feature and the fourth fingerprint feature with a preset threshold;
determining that the first fingerprint image and the second fingerprint image are the same fingerprint if the cosine distance is greater than the preset threshold;
determining that the first fingerprint image and the second fingerprint image are different fingerprints if the cosine distance is less than or equal to the preset threshold.
According to a second aspect of the embodiments of the present disclosure, a fingerprint identification apparatus is provided, including:
a first extraction module configured to perform feature extraction, through an auto-encode-decode network, on a first fingerprint image collected by a fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimensionality;
a dimension reduction processing module configured to perform dimensionality reduction on the first fingerprint feature and the second fingerprint feature extracted by the first extraction module, to obtain a third fingerprint feature and a fourth fingerprint feature, respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimensionality, which is smaller than that of the first fingerprint feature and the second fingerprint feature;
an identification module configured to determine, according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature after dimensionality reduction by the dimension reduction processing module, whether the first fingerprint image and the second fingerprint image are the same fingerprint.
In one embodiment, the auto-encode-decode network includes at least one coding layer, and the apparatus may further include:
a first training module configured to train the coding feature parameters of each coding layer of the at least one coding layer with unlabeled fingerprint samples, to obtain coding-feature representation parameters corresponding to each coding layer;
a first reconstruction module configured to perform data reconstruction, through the decoding layer corresponding to each coding layer, on the coding-feature representation parameters obtained by the first training module, to obtain fingerprint reconstruction data of the unlabeled fingerprint samples;
a first determining module configured to determine a reconstruction error between the fingerprint reconstruction data obtained by the first reconstruction module and the unlabeled fingerprint samples;
an adjusting module configured to adjust the coding-feature representation parameters of each coding layer according to the reconstruction error determined by the first determining module;
a first control module configured to stop training the auto-encode-decode network when the reconstruction error determined by the first determining module reaches a minimum, to obtain a first-trained auto-encode-decode network.
In one embodiment, a classifier is connected to the last coding layer of the first-trained auto-encode-decode network, and the apparatus may further include:
a first processing module configured to input labeled fingerprint samples into the first-trained auto-encode-decode network to obtain a first output result;
a second training module configured to input the first output result obtained by the processing module into the classifier, and to train the classifier with the labeled fingerprint samples;
a second control module configured to control the second training module to stop training the classifier when the reconstruction error between the classifier output and the labeled fingerprint samples is at its minimum.
In one embodiment, a classifier is connected to the last coding layer of the first-trained auto-encode-decode network, and the apparatus may further include:
a second processing module configured to input labeled fingerprint samples into the first-trained auto-encode-decode network to obtain a second output result;
a third training module configured to input the second output result obtained by the second processing module into the classifier, to train the classifier with the labeled fingerprint samples, and to fine-tune the coding-feature representation parameters of each coding layer of the first-trained auto-encode-decode network;
a third control module configured to control the third training module to stop training the classifier and fine-tuning the coding-feature representation parameters of each coding layer when the reconstruction error between the classifier output and the labeled fingerprint samples is at its minimum.
In one embodiment, the apparatus may further include:
a second extraction module configured to extract, through the trained auto-encode-decode network, coding-feature representation parameters of a first set dimensionality of the unlabeled fingerprint samples;
a fourth training module configured to perform linear discriminant analysis (LDA) training on the coding-feature representation parameters of the first set dimensionality extracted by the second extraction module, to obtain a projection matrix of a second set dimensionality of the LDA.
In one embodiment, the identification module may include:
a comparison submodule configured to compare the cosine distance between the third fingerprint feature and the fourth fingerprint feature with a preset threshold;
a first determining submodule configured to determine that the first fingerprint image and the second fingerprint image are the same fingerprint if the comparison result of the comparison submodule indicates that the cosine distance is greater than the preset threshold;
a second determining submodule configured to determine that the first fingerprint image and the second fingerprint image are different fingerprints if the comparison result of the comparison submodule indicates that the cosine distance is less than or equal to the preset threshold.
According to a third aspect of the embodiments of the present disclosure, a fingerprint identification apparatus is provided, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
perform feature extraction, through an auto-encode-decode network, on a first fingerprint image collected by a fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimensionality;
perform dimensionality reduction on the first fingerprint feature and the second fingerprint feature to obtain a third fingerprint feature and a fourth fingerprint feature, respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimensionality, which is smaller than that of the first fingerprint feature and the second fingerprint feature;
determine, according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature, whether the first fingerprint image and the second fingerprint image are the same fingerprint.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects: since the AED network has been trained on a large number of fingerprint images, the first fingerprint feature of the first fingerprint image and the second fingerprint feature of the second fingerprint image extracted by the AED network can contain fingerprint features favorable to fingerprint recognition, avoiding the related-art requirement that fingerprint recognition rely on the fingerprint's global feature points and local feature points. When the fingerprint image quality is poor, the AED network, by recognizing features favorable to fingerprint recognition, ensures that fingerprint recognition can still be achieved even when the global and local feature points cannot be extracted from a low-quality first fingerprint image, greatly improving the accuracy of fingerprint recognition for low-quality fingerprint images; and by reducing the dimensionality of the first fingerprint feature and the second fingerprint feature, the computational complexity of the fingerprint recognition process can be greatly reduced.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain the principles of the present invention.
FIG. 1A is a flowchart of a fingerprint identification method according to an exemplary embodiment.
FIG. 1B is a schematic structural diagram of an AED network according to an exemplary embodiment.
FIG. 2A is a flowchart of a fingerprint identification method according to an exemplary embodiment.
FIG. 2B is a schematic structural diagram of an AED network according to an exemplary embodiment.
FIG. 2C is a schematic diagram of how an AED network is trained according to an exemplary embodiment.
FIG. 3A is a flowchart of how the parameters of an AED network are fine-tuned with labeled fingerprint samples according to an exemplary embodiment.
FIG. 3B is a flowchart of how the parameters of a classifier connected to an AED network are fine-tuned with labeled fingerprint samples according to yet another exemplary embodiment.
FIG. 3C is a schematic structural diagram of an AED network and a classifier according to yet another exemplary embodiment.
FIG. 4 is a flowchart of a fingerprint identification method according to an exemplary embodiment.
FIG. 5 is a block diagram of a fingerprint identification apparatus according to an exemplary embodiment.
FIG. 6 is a block diagram of another fingerprint identification apparatus according to an exemplary embodiment.
FIG. 7 is a block diagram applicable to a fingerprint identification apparatus according to an exemplary embodiment.
Detailed Description
Exemplary embodiments will be described in detail here, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention. Rather, they are merely examples of apparatuses and methods consistent with some aspects of the present invention as detailed in the appended claims.
FIG. 1A is a flowchart of a fingerprint identification method according to an exemplary embodiment, and FIG. 1B is a schematic structural diagram of an auto-encode-decode network according to an exemplary embodiment. The fingerprint identification method can be applied to fingerprint identification devices equipped with a fingerprint sensor (for example, smartphones and tablets with fingerprint authentication, or fingerprint time clocks). As shown in FIG. 1A, the fingerprint identification method includes the following steps S101-S103:
In step S101, feature extraction is performed, through the auto-encode-decode network, on a first fingerprint image collected by the fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimensionality.
In one embodiment, the database may store fingerprint images already collected from users within a certain scope, for example, the fingerprint images of all employees of company A; when user B needs fingerprint authentication, the first fingerprint image of user B is simply collected by the fingerprint sensor. In one embodiment, the auto-encode-decode (AED) network may include a coding layer and a decoding layer: the first fingerprint image is input to the coding layer, whose output is the coding feature of the first fingerprint image; this coding feature is then input to the decoding layer corresponding to that coding layer, whose output is the first fingerprint feature of the first fingerprint image. Correspondingly, the second fingerprint image in the database can be processed in the same way as the first fingerprint image to obtain the second fingerprint feature of the second fingerprint image.
In step S102, dimensionality reduction is applied to the first fingerprint feature and the second fingerprint feature to obtain a third fingerprint feature and a fourth fingerprint feature, respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimensionality, which is smaller than that of the first fingerprint feature and the second fingerprint feature.
In one embodiment, the first and second fingerprint features may be reduced in dimensionality by a trained linear discriminant analysis (LDA). In one embodiment, fingerprint features of a first set dimensionality of the unlabeled fingerprint samples are extracted through the trained auto-encode-decode network, and LDA training is performed on these features of the first set dimensionality to obtain a projection matrix of a second set dimensionality of the LDA. For example, the AED network outputs coding-feature representation parameters of a first set dimensionality of 500 for the unlabeled fingerprint samples; after LDA training, the trained LDA can reduce these to coding-feature representation parameters of a second set dimensionality of 200, which reduces the computational complexity of computing the cosine distance.
In step S103, it is determined, according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature, whether the first fingerprint image and the second fingerprint image are the same fingerprint.
In one embodiment, the cosine distance between the third fingerprint feature and the fourth fingerprint feature may be compared with a preset threshold: if the cosine distance is greater than the preset threshold, the first and second fingerprint images are determined to be the same fingerprint; if the cosine distance is less than or equal to the preset threshold, they are determined to be different fingerprints.
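The LDA dimensionality-reduction step described above can be sketched with a minimal, numpy-only LDA that learns a projection matrix from labeled feature vectors. This is an illustrative assumption-filled toy, not the patent's implementation: the patent's example reduces 500-dimensional features to 200 dimensions, whereas the toy below uses 20 input dimensions, 5 classes, and a 4-dimensional projection to keep it small.

```python
import numpy as np

rng = np.random.default_rng(2)
n_classes, per_class, dim, out_dim = 5, 40, 20, 4
# Toy stand-in for AED-extracted features, one cluster per identity.
X = np.concatenate([rng.normal(c, 1.0, (per_class, dim)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), per_class)

mean_all = X.mean(axis=0)
Sw = np.zeros((dim, dim))    # within-class scatter
Sb = np.zeros((dim, dim))    # between-class scatter
for c in range(n_classes):
    Xc = X[y == c]
    mc = Xc.mean(axis=0)
    Sw += (Xc - mc).T @ (Xc - mc)
    d = (mc - mean_all)[:, None]
    Sb += len(Xc) * (d @ d.T)

# Projection matrix: leading eigenvectors of pinv(Sw) @ Sb
# (at most n_classes - 1 discriminative directions exist).
eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
order = np.argsort(eigvals.real)[::-1]
W = eigvecs.real[:, order[:out_dim]]   # the LDA projection matrix

X_reduced = X @ W                      # e.g. 20-dim features -> 4-dim features
```

In the pipeline of this embodiment, `X` would be the first-set-dimensionality coding features of the fingerprint samples, and `W` plays the role of the second-set-dimensionality projection matrix applied to the first and second fingerprint features.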
As an exemplary scenario, as shown in FIG. 1B, when user B needs fingerprint authentication, the fingerprint sensor 11 collects the first fingerprint image of user B, and the first fingerprint image and a second fingerprint image already stored in the database 12 are input into the trained auto-encode-decode network 13, which outputs the first fingerprint feature of the first fingerprint image and the second fingerprint feature of the second fingerprint image; for example, both are 500-dimensional fingerprint features. The projection matrix of the LDA module 14 reduces the dimensionality of the 500-dimensional first and second fingerprint features; for example, the LDA module 14 reduces them from 500 to 200 dimensions, i.e., it outputs the third fingerprint feature (the reduced first fingerprint feature) and the fourth fingerprint feature (the reduced second fingerprint feature), each 200-dimensional. The distance calculation module 15 computes the cosine distance between these two 200-dimensional features, and the result output module 16 compares the cosine distance with a threshold, i.e., partitions the cosine distances by the threshold. If the cosine distance is greater than the threshold, the result output module 16 outputs that the first and second fingerprint images belong to the same fingerprint; if the cosine distance is less than or equal to the threshold, it outputs that they belong to different fingerprints.
In this embodiment, since the AED network has been trained on a large number of fingerprint images, the first fingerprint feature of the first fingerprint image and the second fingerprint feature of the second fingerprint image extracted by the AED network can contain fingerprint features favorable to fingerprint recognition, avoiding the related-art requirement that fingerprint recognition rely on the fingerprint's global and local feature points. When the fingerprint image quality is poor, the AED network, by recognizing features favorable to fingerprint recognition, ensures that fingerprint recognition can still be achieved even when the global and local feature points cannot be extracted from a low-quality first fingerprint image, greatly improving the accuracy of fingerprint recognition for low-quality fingerprint images; and by reducing the dimensionality of the first and second fingerprint features, the computational complexity of the fingerprint recognition process can be greatly reduced.
In one embodiment, the auto-encode-decode network includes at least one coding layer, and the fingerprint identification method may further include:
training the coding feature parameters of each coding layer of the at least one coding layer with unlabeled fingerprint samples, to obtain coding-feature representation parameters corresponding to each coding layer;
performing data reconstruction on the coding-feature representation parameters corresponding to each coding layer through the decoding layer corresponding to that coding layer, to obtain fingerprint reconstruction data of the unlabeled fingerprint samples;
determining a reconstruction error between the fingerprint reconstruction data and the unlabeled fingerprint samples;
adjusting the coding-feature representation parameters of each coding layer according to the reconstruction error;
stopping training of the auto-encode-decode network when the reconstruction error reaches a minimum, to obtain a first-trained auto-encode-decode network.
In one embodiment, a classifier is connected to the last coding layer of the first-trained auto-encode-decode network, and the method may further include:
inputting labeled fingerprint samples into the first-trained auto-encode-decode network to obtain a first output result;
inputting the first output result into the classifier, and training the classifier with the labeled fingerprint samples;
stopping training of the classifier when the reconstruction error between the classifier output and the labeled fingerprint samples is at its minimum.
In one embodiment, a classifier is connected to the last coding layer of the first-trained auto-encode-decode network, and the method may further include:
inputting labeled fingerprint samples into the first-trained auto-encode-decode network to obtain a second output result;
inputting the second output result into the classifier, training the classifier with the labeled fingerprint samples, and fine-tuning the coding-feature representation parameters of each coding layer of the first-trained auto-encode-decode network;
stopping the training of the classifier and the fine-tuning of the coding-feature representation parameters of each coding layer when the reconstruction error between the classifier output and the labeled fingerprint samples is at its minimum.
In one embodiment, the method may further include:
extracting, through the trained auto-encode-decode network, coding-feature representation parameters of a first set dimensionality of the unlabeled fingerprint samples;
performing linear discriminant analysis (LDA) training on the coding-feature representation parameters of the first set dimensionality, to obtain a projection matrix of a second set dimensionality of the LDA.
In one embodiment, determining, according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature, whether the first fingerprint image and the second fingerprint image are the same fingerprint may include:
comparing the cosine distance between the third fingerprint feature and the fourth fingerprint feature with a preset threshold;
determining that the first fingerprint image and the second fingerprint image are the same fingerprint if the cosine distance is greater than the preset threshold;
determining that the first fingerprint image and the second fingerprint image are different fingerprints if the cosine distance is less than or equal to the preset threshold.
For details on how fingerprint recognition is implemented, refer to the subsequent embodiments.
Thus, the above method provided by the embodiments of the present disclosure can avoid the related-art requirement that fingerprint recognition rely on the fingerprint's global and local feature points, ensure that fingerprint recognition can still be achieved when the quality of the first fingerprint image is low and the global and local feature points cannot be extracted, greatly improve the accuracy of fingerprint recognition for low-quality fingerprint images, and greatly reduce the computational complexity of the fingerprint recognition process.
The technical solutions provided by the embodiments of the present disclosure are described below with specific embodiments.
FIG. 2A is a flowchart of a fingerprint identification method according to an exemplary embodiment, FIG. 2B is a schematic structural diagram of an AED network according to an exemplary embodiment, and FIG. 2C is a schematic diagram of how the AED network is trained according to an exemplary embodiment. This embodiment uses the above method provided by the embodiments of the present disclosure to illustrate, by way of example, how the AED network and the LDA are trained with unlabeled fingerprint samples. As shown in FIG. 2A, the fingerprint identification method includes the following steps:
In step S201, the coding feature parameters of each coding layer of the at least one coding layer are trained with unlabeled fingerprint samples, to obtain the coding-feature representation parameters corresponding to each coding layer.
In step S202, data reconstruction is performed on the coding-feature representation parameters corresponding to each coding layer through the decoding layer corresponding to that coding layer, to obtain fingerprint reconstruction data of the unlabeled fingerprint samples.
In step S203, a reconstruction error between the fingerprint reconstruction data and the unlabeled fingerprint samples is determined.
In step S204, the coding-feature representation parameters of each coding layer are adjusted according to the reconstruction error.
In step S205, training of the auto-encode-decode network is stopped when the reconstruction error reaches a minimum.
In one embodiment, the AED network includes at least one coding layer. For example, the AED network 20 shown in FIG. 2B includes three coding layers (coding layer 21, coding layer 22, and coding layer 23). As shown in FIG. 2C, taking the training of coding layer 21 as an example: for a large number of unlabeled fingerprint samples (for example, 600,000 unlabeled fingerprint samples), each unlabeled fingerprint sample can be input into coding layer 21 to obtain a coding-feature representation parameter of that sample from coding layer 21, which is a representation of the input unlabeled fingerprint sample. To verify whether this coding-feature representation parameter is consistent with the unlabeled fingerprint sample, it can be input into decoding layer 24, and the reconstruction error between the output of decoding layer 24 and the unlabeled fingerprint sample is computed by the reconstruction error calculation module 25. If the reconstruction error has not yet reached its minimum, the coding-feature representation parameters of coding layer 21 can be adjusted according to the reconstruction error until the error reaches its minimum, at which point the coding-feature representation parameters can be regarded as able to represent the unlabeled fingerprint sample at coding layer 21.
In a training manner similar to that of coding layer 21 above, coding layer 22 and coding layer 23 can be verified through their respective corresponding decoding layers as to whether their coding-feature representation parameters are consistent with the unlabeled fingerprint sample, until coding layer 22 and coding layer 23 can represent the unlabeled fingerprint sample; this is not detailed further in the present disclosure.
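The training of a single coding layer described above (encode, reconstruct through the paired decoding layer, adjust until the reconstruction error stops shrinking, steps S201-S205) can be sketched as follows. This is a minimal sketch under stated assumptions: a linear tied-weight autoencoder on random toy data, with layer sizes, learning rate, and stopping tolerance all chosen for illustration rather than taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(3)
samples = rng.normal(size=(128, 30))       # toy unlabeled fingerprint samples
W = rng.normal(scale=0.1, size=(30, 10))   # one coding layer: 30 -> 10
lr = 0.01

prev_err = np.inf
for step in range(2000):
    code = samples @ W                     # coding-feature representation
    recon = code @ W.T                     # paired decoding layer reconstructs the input
    diff = recon - samples
    err = float(np.mean(diff ** 2))        # reconstruction error (step S203)
    if prev_err - err < 1e-10:             # error at its minimum -> stop training (S205)
        break
    prev_err = err
    # Gradient of the mean-squared reconstruction error w.r.t. the
    # tied weights, used to adjust the layer's parameters (step S204).
    grad = (samples.T @ diff @ W + diff.T @ samples @ W) / len(samples)
    W -= lr * grad
```

In a stacked AED network, the codes produced by this trained layer would then serve as the inputs for training the next coding layer in the same greedy fashion.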
In this embodiment, by training the AED network, the AED network can encode fingerprint images and represent them through coding-feature representation parameters. When the number of unlabeled fingerprint samples reaches a certain size, the trained AED network can recognize image features in fingerprint images that are favorable to fingerprint recognition, avoiding fingerprint recognition errors caused by failure to extract global and local feature points from low-quality fingerprint images.
FIG. 3A is a flowchart of how the parameters of the AED network are fine-tuned with labeled fingerprint samples according to an exemplary embodiment, FIG. 3B is a flowchart of how the parameters of the classifier connected to the AED network are fine-tuned with labeled fingerprint samples according to yet another exemplary embodiment, and FIG. 3C is a schematic structural diagram of the AED network and the classifier according to yet another exemplary embodiment.
As shown in FIG. 3A, the fingerprint identification method includes the following steps:
In step S301, labeled fingerprint samples are input into the first-trained auto-encode-decode network to obtain a first output result.
In step S302, the first output result is input into the classifier, and the classifier is trained with the labeled fingerprint samples.
In step S303, training of the classifier is stopped when the reconstruction error between the classifier output and the labeled fingerprint samples is at its minimum.
As shown in FIG. 3C, after the AED has been trained as in the embodiment of FIG. 2A above, the coding-feature representation parameters of the plurality of coding layers of the AED network 20 (coding layer 21, coding layer 22, and coding layer 23 shown in FIG. 3C) can be obtained, and each coding layer yields a different representation of the unlabeled fingerprint samples. Those skilled in the art will understand that the present disclosure does not limit the number of layers of the AED network.
To enable the AED network to perform classification, a classifier 31 can be added on top of the topmost coding layer of the AED network (for example, coding layer 23). The classifier 31 may be, for example, a logistic-regression or SVM classifier. The classifier 31 is trained with the first output result of the labeled fingerprint samples using a standard supervised training method for multi-layer neural networks (for example, gradient descent); when the reconstruction error between the classifier output computed by the reconstruction error calculation module 32 and the labeled fingerprint samples is at its minimum, training of the classifier 31 is stopped, thereby enabling the AED network 20 to perform classification.
As shown in FIG. 3B, the fingerprint identification method includes the following steps:
In step S311, labeled fingerprint samples are input into the first-trained auto-encode-decode network to obtain a first output result.
In step S312, the first output result is input into the classifier, the classifier is trained with the labeled fingerprint samples, and the coding-feature representation parameters of each coding layer of the first-trained auto-encode-decode network are fine-tuned.
In step S313, the training of the classifier and the fine-tuning of the coding-feature representation parameters of each coding layer are stopped when the reconstruction error between the classifier output and the labeled fingerprint samples is at its minimum.
Similar to the description of FIG. 3A above, a standard supervised training method for multi-layer neural networks (for example, gradient descent) uses the first output result of the labeled fingerprint samples to train the classifier 31 and to fine-tune the coding-feature representation parameters of coding layer 21, coding layer 22, and coding layer 23. When the reconstruction error between the classifier output computed by the reconstruction error calculation module 32 and the labeled fingerprint samples is at its minimum, training of the classifier 31 is stopped. On the basis of classification, the AED network 20 can also be fine-tuned; when the labeled fingerprint samples are sufficiently numerous, the AED network can achieve end-to-end learning, thereby improving the accuracy of the AED network and the classifier in fingerprint recognition.
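The fine-tuning stage described above can be sketched as joint gradient updates of the classifier and the coding layer. This is a heavily simplified illustration, not the patent's method: a single linear coding layer stands in for the stacked layers 21-23, the classifier is logistic regression, and all shapes, the learning rate, and the step count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(64, 30))                  # toy labeled fingerprint samples
y = rng.integers(0, 2, size=64).astype(float)  # toy labels

W_enc = rng.normal(scale=0.1, size=(30, 10))   # pre-trained coding layer (assumed)
w_cls = np.zeros(10)                           # classifier on the topmost code
lr = 0.05

for step in range(500):
    code = X @ W_enc
    p = 1.0 / (1.0 + np.exp(-(code @ w_cls)))  # classifier output
    dlogit = (p - y) / len(y)
    grad_w = code.T @ dlogit                   # gradient for the classifier
    grad_W = X.T @ np.outer(dlogit, w_cls)     # gradient for the coding layer
    w_cls -= lr * grad_w                       # train the classifier (S312)
    W_enc -= lr * grad_W                       # fine-tune the coding layer (S312)

loss = float(-np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)))
```

The key point the sketch shows is that, unlike classifier-only training, the label gradient here also flows into `W_enc`, which is what makes the learning end-to-end.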
FIG. 4 is a flowchart of a fingerprint identification method according to an exemplary embodiment. This embodiment uses the above method provided by the embodiments of the present disclosure to illustrate, by way of example, how fingerprint recognition is performed using a cosine distance. As shown in FIG. 4, the method includes the following steps:
In step S401, feature extraction is performed, through the auto-encode-decode network, on the first fingerprint image collected by the fingerprint sensor and the second fingerprint image stored in the database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimensionality.
In step S402, dimensionality reduction is applied to the first fingerprint feature and the second fingerprint feature to obtain a third fingerprint feature and a fourth fingerprint feature, respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimensionality, which is smaller than that of the first fingerprint feature and the second fingerprint feature.
For descriptions of steps S401 and S402, refer to the description of the embodiment shown in FIG. 1A above; they are not detailed again here.
In step S403, the cosine distance between the third fingerprint feature and the fourth fingerprint feature is compared with a preset threshold; if the cosine distance is greater than the preset threshold, step S404 is performed, and if the cosine distance is less than or equal to the preset threshold, step S405 is performed.
In step S404, if the cosine distance is greater than the preset threshold, the first fingerprint image and the second fingerprint image are determined to be the same fingerprint.
In step S405, if the cosine distance is less than or equal to the preset threshold, the first fingerprint image and the second fingerprint image are determined to be different fingerprints.
In step S403, a suitable preset threshold can be obtained by training on a large number of fingerprint samples in a sample database; the preset threshold may correspond to a recognition error rate acceptable to the user. For example, if the sample database contains 100,000 intra-class pairs and 1,000,000 inter-class pairs, then in order to maintain a recognition error rate of one in a thousand, the cosine distance of each pair can be computed to obtain a value between 0 and 1: there are 100,000 intra-class cosine-distance values and 1,000,000 inter-class values, i.e., 1,100,000 values in total, and a suitable preset threshold is determined from these 1,100,000 values combined with the recognition error rate.
On the basis of the beneficial technical effects of the above embodiments, this embodiment identifies fingerprints by the cosine distance between the third fingerprint feature and the fourth fingerprint feature; since the preset threshold can be obtained by training on a large number of fingerprint samples combined with a user-acceptable recognition error rate, the user experience of fingerprint identification products is improved to some extent.
FIG. 5 is a block diagram of a fingerprint identification apparatus according to an exemplary embodiment. As shown in FIG. 5, the fingerprint identification apparatus includes:
a first extraction module 51 configured to perform feature extraction, through an auto-encode-decode network, on a first fingerprint image collected by a fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimensionality;
a dimension reduction processing module 52 configured to perform dimensionality reduction on the first fingerprint feature and the second fingerprint feature extracted by the first extraction module 51, to obtain a third fingerprint feature and a fourth fingerprint feature, respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimensionality, which is smaller than that of the first fingerprint feature and the second fingerprint feature;
an identification module 53 configured to determine, according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature after dimensionality reduction by the dimension reduction processing module 52, whether the first fingerprint image and the second fingerprint image are the same fingerprint.
FIG. 6 is a block diagram of another fingerprint identification apparatus according to an exemplary embodiment. As shown in FIG. 6, on the basis of the embodiment shown in FIG. 5, in one embodiment, the auto-encode-decode network includes at least one coding layer, and the apparatus may further include:
a first training module 54 configured to train the coding feature parameters of each coding layer of the at least one coding layer with unlabeled fingerprint samples, to obtain coding-feature representation parameters corresponding to each coding layer;
a first reconstruction module 55 configured to perform data reconstruction, through the decoding layer corresponding to each coding layer, on the coding-feature representation parameters obtained by the first training module 54, to obtain fingerprint reconstruction data of the unlabeled fingerprint samples;
a first determining module 56 configured to determine a reconstruction error between the fingerprint reconstruction data obtained by the first reconstruction module 55 and the unlabeled fingerprint samples;
an adjusting module 57 configured to adjust the coding-feature representation parameters of each coding layer according to the reconstruction error determined by the first determining module 56;
a first control module 58 configured to stop training the auto-encode-decode network when the reconstruction error determined by the first determining module 56 reaches a minimum, to obtain a first-trained auto-encode-decode network.
In one embodiment, a classifier may also be connected to the last coding layer of the first-trained auto-encode-decode network, and the apparatus may further include:
a first processing module 59 configured to input labeled fingerprint samples into the first-trained auto-encode-decode network to obtain a first output result;
a second training module 60 configured to input the first output result obtained by the first processing module 59 into the classifier, and to train the classifier with the labeled fingerprint samples;
a second control module 61 configured to control the second training module 60 to stop training the classifier when the reconstruction error between the classifier output and the labeled fingerprint samples is at its minimum.
In one embodiment, a classifier may also be connected to the last coding layer of the first-trained auto-encode-decode network, and the apparatus may further include:
a second processing module 62 configured to input labeled fingerprint samples into the first-trained auto-encode-decode network to obtain a second output result;
a third training module 63 configured to input the second output result obtained by the second processing module 62 into the classifier, to train the classifier with the labeled fingerprint samples, and to fine-tune the coding-feature representation parameters of each coding layer of the first-trained auto-encode-decode network;
a third control module 64 configured to control the third training module 63 to stop training the classifier and fine-tuning the coding-feature representation parameters of each coding layer when the reconstruction error between the classifier output and the labeled fingerprint samples is at its minimum.
In one embodiment, the apparatus may further include:
a second extraction module 65 configured to extract, through the trained auto-encode-decode network, coding-feature representation parameters of a first set dimensionality of the unlabeled fingerprint samples;
a fourth training module 66 configured to perform linear discriminant analysis (LDA) training on the coding-feature representation parameters of the first set dimensionality extracted by the second extraction module 65, to obtain a projection matrix of a second set dimensionality of the LDA.
In one embodiment, the identification module 53 may include:
a comparison submodule 531 configured to compare the cosine distance between the third fingerprint feature and the fourth fingerprint feature with a preset threshold;
a first determining submodule 532 configured to determine that the first fingerprint image and the second fingerprint image are the same fingerprint if the comparison result of the comparison submodule 531 indicates that the cosine distance is greater than the preset threshold;
a second determining submodule 533 configured to determine that the first fingerprint image and the second fingerprint image are different fingerprints if the comparison result of the comparison submodule 531 indicates that the cosine distance is less than or equal to the preset threshold.
Regarding the apparatuses in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method, and is not elaborated here.
FIG. 7 is a block diagram applicable to a fingerprint identification apparatus according to an exemplary embodiment. For example, the apparatus 700 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
Referring to FIG. 7, the apparatus 700 can include one or more of the following components: a processing component 702, a memory 704, a power component 706, a multimedia component 708, an audio component 710, an input/output (I/O) interface 712, a sensor component 714, and a communication component 716.
处理组件702通常控制装置700的整体操作,诸如与显示,电话呼叫,数据通信,相机操作和记录操作相关联的操作。处理元件702可以包括一个或多个处理器720来执行指令,以完成上述的方法的全部或部分步骤。此外,处理组件702可以包括一个或多个模块,便于处理组件702和其他组件之间的交互。例如,处理部件702可以包括多媒体模块,以方便多媒体组件708和处理组件702之间的交互。
存储器704被配置为存储各种类型的数据以支持在设备700的操作。这些数据的示例包括用于在装置700上操作的任何应用程序或方法的指令,联系人数据,电话簿数据,消息,图片,视频等。存储器704可以由任何类型的易失性或非易失性存储设备或者它们的组合实现,如静态随机存取存储器(SRAM),电可擦除可编程只读存储器(EEPROM),可擦除可编程只读存储器(EPROM),可编程只读存储器(PROM),只读存储器(ROM),磁存储器,快闪存储器,磁盘或光盘。
电力组件706为装置700的各种组件提供电力。电力组件706可以包括电源管理***,一个或多个电源,及其他与为装置700生成、管理和分配电力相关联的组件。
多媒体组件708包括在所述装置700和用户之间的提供一个输出接口的屏幕。在一些实施例中,屏幕可以包括液晶显示器(LCD)和触摸面板(TP)。如果屏幕包括触摸面板,屏幕可以被实现为触摸屏,以接收来自用户的输入信号。触摸面板包括一个或多个触摸传感器以感测触摸、滑动和触摸面板上的手势。所述触摸传感器可以不仅感测触摸或滑动动作的边界,而且还检测与所述触摸或滑动操作相关的持续时间和压力。在一些实施例中,多媒体组件708包括一个前置摄像头和/或后置摄像头。当设备700处于操作模式,如拍摄模式或视频模式时,前置摄像头和/或后置摄像头可以接收外部的多媒体数据。每个前置摄像头和后置摄像头可以是一个固定的光学透镜***或具有焦距和光学变焦能力。
音频组件710被配置为输出和/或输入音频信号。例如,音频组件710包括一个麦克风(MIC),当装置700处于操作模式,如呼叫模式、记录模式和语音识别模式时,麦克风被配置为接收外部音频信号。所接收的音频信号可以被进一步存储在存储器704或经由通信组件716发送。在一些实施例中,音频组件710还包括一个扬声器,用于输出音频信号。
The I/O interface 712 provides an interface between the processing component 702 and peripheral interface modules, such as a keyboard, a click wheel, or buttons. The buttons may include, but are not limited to, a home button, a volume button, a starting button, and a locking button.
The sensor component 714 includes one or more sensors to provide status assessments of various aspects of the device 700. For instance, the sensor component 714 may detect the open/closed status of the device 700 and the relative positioning of components, e.g. the display and the keypad of the device 700; the sensor component 714 may also detect a change in position of the device 700 or of a component of the device 700, the presence or absence of user contact with the device 700, the orientation or acceleration/deceleration of the device 700, and a change in temperature of the device 700. The sensor component 714 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 714 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 714 may also include an accelerometer sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 716 is configured to facilitate wired or wireless communication between the device 700 and other devices. The device 700 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 716 receives a broadcast signal or broadcast-associated information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 716 further includes a near field communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In exemplary embodiments, the device 700 may be implemented with one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components, for performing the above-described methods.
In exemplary embodiments, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 704 including instructions, executable by the processor 720 of the device 700 for performing the above-described methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disc, an optical data storage device, and the like.
Other embodiments of the present disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the present disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be appreciated that the present disclosure is not limited to the exact constructions that have been described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

  1. A fingerprint recognition method, characterized in that the method comprises:
    performing feature extraction, through an auto-encoding and decoding network, on a first fingerprint image collected by a fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimension;
    performing dimension reduction on the first fingerprint feature and the second fingerprint feature to obtain a third fingerprint feature and a fourth fingerprint feature respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimension, which is smaller than the dimension of the first fingerprint feature and the second fingerprint feature;
    determining, according to a cosine distance between the third fingerprint feature and the fourth fingerprint feature, whether the first fingerprint image and the second fingerprint image are the same fingerprint.
  2. The method according to claim 1, characterized in that the auto-encoding and decoding network comprises at least one coding layer, and the method further comprises:
    training coding feature parameters of each coding layer of the at least one coding layer with unlabeled fingerprint samples, to obtain coding feature representation parameters corresponding to each coding layer;
    performing data reconstruction on the coding feature representation parameters corresponding to each coding layer through the decoding layer corresponding to that coding layer, to obtain fingerprint reconstruction data of the unlabeled fingerprint samples;
    determining a reconstruction error between the fingerprint reconstruction data and the unlabeled fingerprint samples;
    adjusting the coding feature representation parameters of each coding layer according to the reconstruction error;
    when the reconstruction error reaches its minimum, stopping the training of the auto-encoding and decoding network, to obtain the auto-encoding and decoding network after a first training.
  3. The method according to claim 2, characterized in that a classifier is connected to the last coding layer of the auto-encoding and decoding network after the first training, and the method further comprises:
    inputting labeled fingerprint samples into the auto-encoding and decoding network after the first training, to obtain a first output result;
    inputting the first output result into the classifier, and training the classifier with the labeled fingerprint samples;
    when the reconstruction error between the result output by the classifier and the labeled fingerprint samples reaches its minimum, stopping the training of the classifier.
  4. The method according to claim 2, characterized in that a classifier is connected to the last coding layer of the auto-encoding and decoding network after the first training, and the method further comprises:
    inputting labeled fingerprint samples into the auto-encoding and decoding network after the first training, to obtain a second output result;
    inputting the second output result into the classifier, training the classifier with the labeled fingerprint samples, and fine-tuning the coding feature representation parameters of each coding layer of the auto-encoding and decoding network after the first training;
    when the reconstruction error between the result output by the classifier and the labeled fingerprint samples reaches its minimum, stopping the training of the classifier and the fine-tuning of the coding feature representation parameters of each coding layer.
  5. The method according to claim 2, characterized in that the method further comprises:
    extracting, through the trained auto-encoding and decoding network, coding feature representation parameters of a first set dimension from the unlabeled fingerprint samples;
    performing linear discriminant analysis (LDA) training on the coding feature representation parameters of the first set dimension, to obtain an LDA projection matrix of a second set dimension.
  6. The method according to claim 1, characterized in that determining, according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature, whether the first fingerprint image and the second fingerprint image are the same fingerprint comprises:
    comparing the cosine distance between the third fingerprint feature and the fourth fingerprint feature with a preset threshold;
    if the cosine distance is greater than the preset threshold, determining that the first fingerprint image and the second fingerprint image are the same fingerprint;
    if the cosine distance is smaller than or equal to the preset threshold, determining that the first fingerprint image and the second fingerprint image are different fingerprints.
  7. A fingerprint recognition apparatus, characterized in that the apparatus comprises:
    a first extraction module configured to perform feature extraction, through an auto-encoding and decoding network, on a first fingerprint image collected by a fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimension;
    a dimension reduction module configured to perform dimension reduction on the first fingerprint feature and the second fingerprint feature extracted by the first extraction module, to obtain a third fingerprint feature and a fourth fingerprint feature respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimension, which is smaller than the dimension of the first fingerprint feature and the second fingerprint feature;
    a recognition module configured to determine, according to a cosine distance between the third fingerprint feature and the fourth fingerprint feature obtained by the dimension reduction module, whether the first fingerprint image and the second fingerprint image are the same fingerprint.
  8. The apparatus according to claim 7, characterized in that the auto-encoding and decoding network comprises at least one coding layer, and the apparatus further comprises:
    a first training module configured to train coding feature parameters of each coding layer of the at least one coding layer with unlabeled fingerprint samples, to obtain coding feature representation parameters corresponding to each coding layer;
    a first reconstruction module configured to perform data reconstruction on the coding feature representation parameters corresponding to each coding layer obtained by the first training module, through the decoding layer corresponding to that coding layer, to obtain fingerprint reconstruction data of the unlabeled fingerprint samples;
    a first determination module configured to determine a reconstruction error between the fingerprint reconstruction data obtained by the first reconstruction module and the unlabeled fingerprint samples;
    an adjustment module configured to adjust the coding feature representation parameters of each coding layer according to the reconstruction error determined by the first determination module;
    a first control module configured to, when the reconstruction error determined by the first determination module reaches its minimum, stop the training of the auto-encoding and decoding network, to obtain the auto-encoding and decoding network after a first training.
  9. The apparatus according to claim 8, characterized in that a classifier is connected to the last coding layer of the auto-encoding and decoding network after the first training, and the apparatus further comprises:
    a first processing module configured to input labeled fingerprint samples into the auto-encoding and decoding network after the first training, to obtain a first output result;
    a second training module configured to input the first output result obtained by the first processing module into the classifier, and to train the classifier with the labeled fingerprint samples;
    a second control module configured to control the second training module to stop training the classifier when the reconstruction error between the result output by the classifier and the labeled fingerprint samples reaches its minimum.
  10. The apparatus according to claim 8, characterized in that a classifier is connected to the last coding layer of the auto-encoding and decoding network after the first training, and the apparatus further comprises:
    a second processing module configured to input labeled fingerprint samples into the auto-encoding and decoding network after the first training, to obtain a second output result;
    a third training module configured to input the second output result obtained by the second processing module into the classifier, to train the classifier with the labeled fingerprint samples, and to fine-tune the coding feature representation parameters of each coding layer of the auto-encoding and decoding network after the first training;
    a third control module configured to control the third training module to stop training the classifier and fine-tuning the coding feature representation parameters of each coding layer when the reconstruction error between the result output by the classifier and the labeled fingerprint samples reaches its minimum.
  11. The apparatus according to claim 8, characterized in that the apparatus further comprises:
    a second extraction module configured to extract, through the trained auto-encoding and decoding network, coding feature representation parameters of a first set dimension from the unlabeled fingerprint samples;
    a fourth training module configured to perform linear discriminant analysis (LDA) training on the coding feature representation parameters of the first set dimension extracted by the second extraction module, to obtain an LDA projection matrix of a second set dimension.
  12. The apparatus according to claim 7, characterized in that the recognition module comprises:
    a comparison sub-module configured to compare the cosine distance between the third fingerprint feature and the fourth fingerprint feature with a preset threshold;
    a first determination sub-module configured to determine that the first fingerprint image and the second fingerprint image are the same fingerprint if the comparison result of the comparison sub-module indicates that the cosine distance is greater than the preset threshold;
    a second determination sub-module configured to determine that the first fingerprint image and the second fingerprint image are different fingerprints if the comparison result of the comparison sub-module indicates that the cosine distance is smaller than or equal to the preset threshold.
  13. A fingerprint recognition apparatus, characterized in that the apparatus comprises:
    a processor; and
    a memory for storing instructions executable by the processor;
    wherein the processor is configured to:
    perform feature extraction, through an auto-encoding and decoding network, on a first fingerprint image collected by a fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimension;
    perform dimension reduction on the first fingerprint feature and the second fingerprint feature to obtain a third fingerprint feature and a fourth fingerprint feature respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimension, which is smaller than the dimension of the first fingerprint feature and the second fingerprint feature;
    determine, according to a cosine distance between the third fingerprint feature and the fourth fingerprint feature, whether the first fingerprint image and the second fingerprint image are the same fingerprint.
PCT/CN2015/099511 2015-10-28 2015-12-29 Fingerprint recognition method and device WO2017071083A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
MX2016005225A MX361142B (es) 2015-10-28 2015-12-29 Método y aparato de reconocimiento de huellas dactilares.
RU2016129191A RU2642369C2 (ru) 2015-10-28 2015-12-29 Аппарат и способ распознавания отпечатка пальца
KR1020167005169A KR101992522B1 (ko) 2015-10-28 2015-12-29 지문 인식 방법, 장치, 프로그램 및 기록매체
JP2017547061A JP2018500707A (ja) 2015-10-28 2015-12-29 指紋認証方法及びその装置、プログラム及び記録媒体

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510712896.0A CN105335713A (zh) 2015-10-28 2015-10-28 指纹识别方法及装置
CN201510712896.0 2015-10-28

Publications (1)

Publication Number Publication Date
WO2017071083A1 true WO2017071083A1 (zh) 2017-05-04

Family

ID=55286229

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/099511 WO2017071083A1 (zh) 2015-10-28 2015-12-29 指纹识别方法及装置

Country Status (8)

Country Link
US (1) US9904840B2 (zh)
EP (1) EP3163508A1 (zh)
JP (1) JP2018500707A (zh)
KR (1) KR101992522B1 (zh)
CN (1) CN105335713A (zh)
MX (1) MX361142B (zh)
RU (1) RU2642369C2 (zh)
WO (1) WO2017071083A1 (zh)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10713466B2 (en) 2014-03-07 2020-07-14 Egis Technology Inc. Fingerprint recognition method and electronic device using the same
CN105825098B (zh) * 2016-03-16 2018-03-27 广东欧珀移动通信有限公司 一种电子终端的屏幕解锁方法、图像采集方法及装置
US10452951B2 (en) * 2016-08-26 2019-10-22 Goodrich Corporation Active visual attention models for computer vision tasks
CN107885312A (zh) * 2016-09-29 2018-04-06 青岛海尔智能技术研发有限公司 一种手势控制方法及装置
IT201600105253A1 (it) * 2016-10-19 2018-04-19 Torino Politecnico Dispositivo e metodi per l'autenticazione di unn apparato d'utente
CN106599807A (zh) * 2016-12-01 2017-04-26 中科唯实科技(北京)有限公司 一种基于自编码的行人检索方法
GB2563599A (en) * 2017-06-19 2018-12-26 Zwipe As Incremental enrolment algorithm
JP6978665B2 (ja) * 2017-07-25 2021-12-08 富士通株式会社 生体画像処理装置、生体画像処理方法及び生体画像処理プログラム
CN107478869A (zh) * 2017-08-08 2017-12-15 惠州Tcl移动通信有限公司 一种移动终端指纹模组的测试夹具和测试方法
TWI676911B (zh) * 2017-10-12 2019-11-11 神盾股份有限公司 指紋識別方法以及使用指紋識別方法的電子裝置
CN108563767B (zh) 2018-04-19 2020-11-27 深圳市商汤科技有限公司 图像检索方法及装置
US11924349B2 (en) * 2022-06-09 2024-03-05 The Government of the United States of America, as represented by the Secretary of Homeland Security Third party biometric homomorphic encryption matching for privacy protection
US11843699B1 (en) 2022-06-09 2023-12-12 The Government of the United States of America, as represented by the Secretary of Homeland Security Biometric identification using homomorphic primary matching with failover non-encrypted exception handling
CN116311389B (zh) * 2022-08-18 2023-12-12 荣耀终端有限公司 指纹识别的方法和装置

Citations (6)

Publication number Priority date Publication date Assignee Title
US20070160262A1 (en) * 2006-01-11 2007-07-12 Samsung Electronics Co., Ltd. Score fusion method and apparatus
CN102646190A (zh) * 2012-03-19 2012-08-22 腾讯科技(深圳)有限公司 一种基于生物特征的认证方法、装置及***
CN102750528A (zh) * 2012-06-27 2012-10-24 西安理工大学 一种基于手掌特征提取身份识别方法
CN103324944A (zh) * 2013-06-26 2013-09-25 电子科技大学 一种基于svm和稀疏表示的假指纹检测方法
CN103839041A (zh) * 2012-11-27 2014-06-04 腾讯科技(深圳)有限公司 客户端特征的识别方法和装置
CN105354560A (zh) * 2015-11-25 2016-02-24 小米科技有限责任公司 指纹识别方法及装置

Family Cites Families (30)

Publication number Priority date Publication date Assignee Title
US6069914A (en) * 1996-09-19 2000-05-30 Nec Research Institute, Inc. Watermarking of image data using MPEG/JPEG coefficients
TW312770B (en) * 1996-10-15 1997-08-11 Japen Ibm Kk The hiding and taking out method of data
JP2815045B2 (ja) * 1996-12-16 1998-10-27 日本電気株式会社 画像特徴抽出装置,画像特徴解析装置,および画像照合システム
US6373970B1 (en) * 1998-12-29 2002-04-16 General Electric Company Image registration using fourier phase matching
AU2001217335A1 (en) * 2000-12-07 2002-07-01 Kyowa Hakko Kogyo Co. Ltd. Method and system for high precision classification of large quantity of information described with mutivariable
US7058815B2 (en) * 2001-01-22 2006-06-06 Cisco Technology, Inc. Method and system for digitally signing MPEG streams
US7142699B2 (en) * 2001-12-14 2006-11-28 Siemens Corporate Research, Inc. Fingerprint matching using ridge feature maps
US7092584B2 (en) * 2002-01-04 2006-08-15 Time Warner Entertainment Company Lp Registration of separations
US7254275B2 (en) * 2002-12-17 2007-08-07 Symbol Technologies, Inc. Method and system for image compression using image symmetry
US7787667B2 (en) * 2003-10-01 2010-08-31 Authentec, Inc. Spot-based finger biometric processing method and associated sensor
US20060104484A1 (en) * 2004-11-16 2006-05-18 Bolle Rudolf M Fingerprint biometric machine representations based on triangles
JP2006330873A (ja) * 2005-05-24 2006-12-07 Japan Science & Technology Agency 指紋照合装置、方法およびプログラム
KR100797897B1 (ko) * 2006-11-27 2008-01-24 연세대학교 산학협력단 생체정보 최적화 변환 함수를 이용한 인증시스템
JP5207870B2 (ja) * 2008-08-05 2013-06-12 日立コンピュータ機器株式会社 次元削減方法、パターン認識用辞書生成装置、及びパターン認識装置
US8520979B2 (en) * 2008-08-19 2013-08-27 Digimarc Corporation Methods and systems for content processing
US8934545B2 (en) * 2009-02-13 2015-01-13 Yahoo! Inc. Extraction of video fingerprints and identification of multimedia using video fingerprinting
US8863096B1 (en) * 2011-01-06 2014-10-14 École Polytechnique Fédérale De Lausanne (Epfl) Parallel symbolic execution on cluster of commodity hardware
US8831350B2 (en) * 2011-08-29 2014-09-09 Dst Technologies, Inc. Generation of document fingerprints for identification of electronic document types
RU2486590C1 (ru) * 2012-04-06 2013-06-27 Федеральное государственное бюджетное образовательное учреждение высшего профессионального образования "Юго-Западный государственный университет" (ЮЗ ГУ) Способ и устройство инвариантной идентификации отпечатков пальцев по ключевым точкам
US9305559B2 (en) * 2012-10-15 2016-04-05 Digimarc Corporation Audio watermark encoding with reversing polarity and pairwise embedding
US9401153B2 (en) * 2012-10-15 2016-07-26 Digimarc Corporation Multi-mode audio recognition and auxiliary data encoding and decoding
US9232242B2 (en) * 2012-12-11 2016-01-05 Cbs Interactive, Inc. Techniques to broadcast a network television program
KR20150039908A (ko) * 2013-10-04 2015-04-14 노두섭 영상 인식 장치 및 방법
KR101791518B1 (ko) * 2014-01-23 2017-10-30 삼성전자주식회사 사용자 인증 방법 및 장치
CA2882968C (en) * 2015-02-23 2023-04-25 Sulfur Heron Cognitive Systems Inc. Facilitating generation of autonomous control information
KR102396514B1 (ko) * 2015-04-29 2022-05-11 삼성전자주식회사 지문 정보 처리 방법 및 이를 지원하는 전자 장치
CN104899579A (zh) * 2015-06-29 2015-09-09 小米科技有限责任公司 人脸识别方法和装置
US10339178B2 (en) * 2015-06-30 2019-07-02 Samsung Electronics Co., Ltd. Fingerprint recognition method and apparatus
FR3041423B1 (fr) * 2015-09-22 2019-10-04 Idemia Identity And Security Procede d'extraction de caracteristiques morphologiques d'un echantillon de materiel biologique
KR20170055811A (ko) * 2015-11-12 2017-05-22 삼성전자주식회사 디스플레이를 구비한 전자 장치 및 전자 장치에서 디스플레이의 동작을 제어하기 위한 방법


Also Published As

Publication number Publication date
KR101992522B1 (ko) 2019-06-24
RU2642369C2 (ru) 2018-01-24
MX361142B (es) 2018-11-27
US9904840B2 (en) 2018-02-27
JP2018500707A (ja) 2018-01-11
US20170124379A1 (en) 2017-05-04
EP3163508A1 (en) 2017-05-03
CN105335713A (zh) 2016-02-17
KR20180063774A (ko) 2018-06-12
MX2016005225A (es) 2017-08-31

Similar Documents

Publication Publication Date Title
WO2017071083A1 (zh) 指纹识别方法及装置
TWI717146B (zh) 圖像處理方法及裝置、電子設備和儲存介質
WO2021017561A1 (zh) 人脸识别方法及装置、电子设备和存储介质
CN109446994B (zh) 手势关键点检测方法、装置、电子设备及存储介质
JP7221258B2 (ja) 声紋抽出モデル訓練方法及び声紋認識方法、その装置並びに媒体
US10115019B2 (en) Video categorization method and apparatus, and storage medium
WO2017128767A1 (zh) 指纹模板录入方法及装置
CN109934275B (zh) 图像处理方法及装置、电子设备和存储介质
CN106228556B (zh) 图像质量分析方法和装置
CN110532956B (zh) 图像处理方法及装置、电子设备和存储介质
US10824891B2 (en) Recognizing biological feature
US10122916B2 (en) Object monitoring method and device
CN105335684B (zh) 人脸检测方法及装置
CN105354560A (zh) 指纹识别方法及装置
CN110191085B (zh) 基于多分类的入侵检测方法、装置及存储介质
CN111259967B (zh) 图像分类及神经网络训练方法、装置、设备及存储介质
CN110717399A (zh) 人脸识别方法和电子终端设备
KR20210110562A (ko) 정보 인식 방법, 장치, 시스템, 전자 디바이스, 기록 매체 및 컴퓨터 프로그램
CN105392056B (zh) 电视情景模式的确定方法及装置
CN114446318A (zh) 音频数据分离方法、装置、电子设备及存储介质
CN107423757B (zh) 聚类处理方法及装置
CN111047049B (zh) 基于机器学习模型处理多媒体数据的方法、装置及介质
US10085050B2 (en) Method and apparatus for adjusting video quality based on network environment
CN111860552A (zh) 基于核自编码器的模型训练方法、装置及存储介质
WO2023060574A1 (zh) 仪表识别方法、装置、电子设备和存储介质

Legal Events

Date Code Title Description
ENP Entry into the national phase — Ref document number: 20167005169; Country of ref document: KR; Kind code of ref document: A
ENP Entry into the national phase — Ref document number: 2017547061; Country of ref document: JP; Kind code of ref document: A
WWE Wipo information: entry into national phase — Ref document number: MX/A/2016/005225; Country of ref document: MX
ENP Entry into the national phase — Ref document number: 2016129191; Country of ref document: RU; Kind code of ref document: A
121 Ep: the epo has been informed by wipo that ep was designated in this application — Ref document number: 15907144; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase — Ref country code: DE
122 Ep: pct application non-entry in european phase — Ref document number: 15907144; Country of ref document: EP; Kind code of ref document: A1