WO2017071083A1 - Fingerprint identification method and apparatus - Google Patents
- Publication number: WO2017071083A1 (PCT/CN2015/099511)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- fingerprint
- feature
- coding
- training
- classifier
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1365—Matching; Classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroïds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/191—Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G06V30/19173—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/12—Fingerprints or palmprints
- G06V40/1347—Preprocessing; Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/19—Recognition using electronic means
- G06V30/192—Recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references
- G06V30/194—References adjustable by an adaptive method, e.g. learning
Definitions
- the present disclosure relates to the field of image recognition technologies, and in particular, to a fingerprint identification method and apparatus.
- since automated fingerprint identification emerged around 1980, the technology has matured considerably and, since the 1990s, has seen widespread civil application.
- fingerprint recognition in the related art generally requires that the user's finger not be too dry and that the fingerprint image be sufficiently clear, so that the global feature points and local feature points of the fingerprint can be extracted.
- when the quality of the fingerprint image is poor, the global feature points and local feature points of the fingerprint may not be recognized, making the final fingerprint recognition inaccurate and thereby constraining, to a certain extent, the user experience of fingerprint identification products.
- the embodiments of the present disclosure provide a fingerprint identification method and device, which are used to improve the accuracy of fingerprint recognition on low-quality fingerprint images.
- a fingerprint identification method including:
- feature extraction is performed, by an automatic encoding-decoding network, on the first fingerprint image collected by the fingerprint sensor and the second fingerprint image stored in the database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature has the same dimension as the second fingerprint feature;
- the automatic codec network includes at least one coding layer, and the method may further include:
- the coding feature parameters of each coding layer in the at least one coding layer are trained by unlabeled fingerprint samples, to obtain the coding feature representation parameters corresponding to each coding layer;
- when the reconstruction error reaches a minimum value, the training of the automatic codec network is stopped, and the first-trained automatic codec network is obtained.
- a classifier is connected to a last coding layer of the first-trained automatic codec network, and the method may further include:
- the training of the classifier is stopped when the reconstruction error between the classifier's output and the labeled fingerprint sample reaches a minimum.
- a classifier is connected to a last coding layer of the first-trained automatic codec network, and the method may further include:
- the method may further include:
- the determining whether the first fingerprint image and the second fingerprint image are the same fingerprint according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature may include:
- if the cosine distance is greater than the preset threshold, determining that the first fingerprint image and the second fingerprint image are the same fingerprint; if the cosine distance is less than or equal to the preset threshold, determining that the first fingerprint image and the second fingerprint image are different fingerprints.
- a fingerprint identification apparatus including:
- the first extraction module is configured to perform feature extraction, by an automatic encoding-decoding network, on the first fingerprint image collected by the fingerprint sensor and the second fingerprint image stored in the database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimension;
- the dimension reduction processing module is configured to perform dimensionality reduction on the first fingerprint feature and the second fingerprint feature extracted by the first extraction module, to obtain a third fingerprint feature and a fourth fingerprint feature, respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimension, which is smaller than the dimension of the first fingerprint feature and the second fingerprint feature;
- the identification module is configured to determine whether the first fingerprint image and the second fingerprint image are the same fingerprint according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature obtained by the dimension reduction processing module.
- the automatic codec network includes at least one coding layer
- the apparatus may further include:
- the first training module is configured to train the coding feature parameters of each coding layer in the at least one coding layer by using the unlabeled fingerprint samples, to obtain the coding feature representation parameters corresponding to each coding layer;
- the first reconstruction module is configured to perform data reconstruction on the coding feature representation parameter corresponding to each coding layer obtained by the first training module, using the decoding layer corresponding to that coding layer, to obtain fingerprint reconstruction data of the unlabeled fingerprint sample;
- a first determining module configured to determine the reconstruction error between the fingerprint reconstruction data obtained by the first reconstruction module and the unlabeled fingerprint sample;
- the adjusting module is configured to adjust the coding feature representation parameter of each coding layer according to the reconstruction error determined by the first determining module;
- the first control module is configured to stop training the automatic codec network when the reconstruction error determined by the first determining module reaches a minimum value, to obtain the first-trained automatic codec network.
- a classifier is connected to a last coding layer of the first-trained automatic codec network, and the device may further include:
- a first processing module configured to input a labeled fingerprint sample to the first-trained automatic codec network to obtain a first output result;
- a second training module configured to input the first output result obtained by the first processing module to the classifier, and to train the classifier by the labeled fingerprint sample;
- the second control module is configured to control the second training module to stop training the classifier when the reconstruction error between the classifier's output and the labeled fingerprint sample reaches a minimum.
- a classifier is connected to a last coding layer of the first-trained automatic codec network, and the device may further include:
- a second processing module configured to input the labeled fingerprint sample to the first-trained automatic codec network to obtain a second output result;
- a third training module configured to input the second output result obtained by the second processing module to the classifier, to train the classifier by the labeled fingerprint sample, and to fine-tune the coding feature representation parameters of each coding layer of the first-trained automatic codec network;
- a third control module configured to control the third training module to stop training the classifier and fine-tuning the coding feature representation parameters of each coding layer when the reconstruction error between the classifier's output and the labeled fingerprint sample reaches a minimum.
- the apparatus may further include:
- a second extraction module configured to extract, by the trained automatic codec network, the coding feature representation parameters of a first set dimension from the unlabeled fingerprint samples;
- a fourth training module configured to perform linear discriminant analysis (LDA) training on the coding feature representation parameters of the first set dimension extracted by the second extraction module, to obtain a projection matrix of a second set dimension for the LDA.
- the identification module may include:
- a comparison submodule configured to compare the cosine distance between the third fingerprint feature and the fourth fingerprint feature with a preset threshold;
- a first determining submodule configured to determine that the first fingerprint image and the second fingerprint image are the same fingerprint if a comparison result of the comparison submodule indicates that the cosine distance is greater than the preset threshold
- a second determining submodule configured to determine that the first fingerprint image and the second fingerprint image are different fingerprints if a comparison result of the comparison submodule indicates that the cosine distance is less than or equal to the preset threshold.
- a fingerprint identification apparatus including:
- a memory for storing processor-executable instructions;
- wherein the processor is configured to:
- perform feature extraction, by an automatic encoding-decoding network, on the first fingerprint image collected by the fingerprint sensor and the second fingerprint image stored in the database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature has the same dimension as the second fingerprint feature.
- the technical solutions provided by the embodiments of the present disclosure may include the following beneficial effects: because the AED network has been trained on a large number of fingerprint images, the first fingerprint feature of the first fingerprint image and the second fingerprint feature of the second fingerprint image extracted by the AED network may include fingerprint features that are advantageous for recognition, avoiding the requirement of the related art that the global feature points and local feature points of the fingerprint be extracted to realize recognition.
- because the AED network recognizes by features that are advantageous for fingerprint recognition, fingerprint recognition can still be realized when the quality of the first fingerprint image is low and the global feature points and local feature points cannot be extracted, which greatly improves the accuracy of recognition on low-quality fingerprint images; performing dimensionality reduction on the first fingerprint feature and the second fingerprint feature can greatly reduce the computational complexity of the fingerprint recognition process.
- FIG. 1A is a flowchart of a fingerprint identification method, according to an exemplary embodiment.
- FIG. 1B is a schematic structural diagram of an AED network according to an exemplary embodiment.
- FIG. 2A is a flow chart of a fingerprint identification method, according to an exemplary embodiment.
- FIG. 2B is a schematic structural diagram of an AED network according to an exemplary embodiment.
- FIG. 2C is a schematic diagram showing how to train an AED network, according to an exemplary embodiment.
- FIG. 3A is a flow diagram showing how to fine-tune the parameters of an AED network by labeled fingerprint samples, according to an exemplary embodiment.
- FIG. 3B is a flow chart showing how to fine-tune, by labeled fingerprint samples, the parameters of a classifier connected to an AED network, according to yet another exemplary embodiment.
- FIG. 3C is a schematic structural diagram of an AED network and a classifier according to still another exemplary embodiment.
- FIG. 4 is a flowchart of a fingerprint identification method, according to an exemplary embodiment.
- FIG. 5 is a block diagram of a fingerprint identification apparatus, according to an exemplary embodiment.
- FIG. 6 is a block diagram of another fingerprint recognition apparatus, according to an exemplary embodiment.
- FIG. 7 is a block diagram of a fingerprint recognition device, according to an exemplary embodiment.
- FIG. 1A is a flowchart of a fingerprint identification method according to an exemplary embodiment
- FIG. 1B is a schematic structural diagram of an automatic codec network according to an exemplary embodiment
- the fingerprint identification method may be applied to a fingerprint identification device equipped with a fingerprint sensor (e.g., smartphones and tablets with fingerprint authentication, or fingerprint time clocks).
- the fingerprint identification method includes the following steps S101-S103:
- step S101: feature extraction is performed, by the automatic encoding-decoding network, on the first fingerprint image collected by the fingerprint sensor and the second fingerprint image stored in the database, to obtain the first fingerprint feature corresponding to the first fingerprint image and the second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature has the same dimension as the second fingerprint feature.
- fingerprint images of users within a certain scope that have already been acquired may be stored in the database, for example, the fingerprint images of all employees of company A; when user B needs fingerprint authentication, the first fingerprint image of user B is collected through the fingerprint sensor.
- an Auto Encode Decode (AED) network may include an encoding layer and a decoding layer; the first fingerprint image is input to the encoding layer, whose output is the encoded feature of the first fingerprint image; this feature is then input to the decoding layer corresponding to the encoding layer, and the output of the decoding layer is taken as the first fingerprint feature of the first fingerprint image.
- the second fingerprint image in the database can be processed in the same manner as the first fingerprint image to obtain the second fingerprint feature of the second fingerprint image.
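The extraction in step S101 can be sketched as follows. This is a minimal illustration, not the disclosure's actual network: the layer sizes (1024 → 800 → 600 → 500), the sigmoid activation, and the random weights are assumptions chosen for the example, and a real network would use parameters obtained by the training described later.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def extract_feature(image_vec, weights, biases):
    """Pass a flattened fingerprint image through the stacked coding layers;
    the activation of the last coding layer serves as the fingerprint feature."""
    h = image_vec
    for W, b in zip(weights, biases):
        h = sigmoid(h @ W + b)
    return h

# Illustrative dimensions: a 32x32 image encoded down to a 500-dim feature.
rng = np.random.default_rng(0)
dims = [1024, 800, 600, 500]
weights = [rng.normal(0.0, 0.01, (dims[i], dims[i + 1])) for i in range(3)]
biases = [np.zeros(dims[i + 1]) for i in range(3)]

first_feature = extract_feature(rng.random(1024), weights, biases)   # sensor image
second_feature = extract_feature(rng.random(1024), weights, biases)  # database image
```

Because both images pass through the same network, the two features automatically have the same dimension, as the method requires.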
- step S102: the first fingerprint feature and the second fingerprint feature are subjected to dimensionality reduction to obtain a third fingerprint feature and a fourth fingerprint feature, respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimension, which is smaller than the dimension of the first fingerprint feature and the second fingerprint feature.
- the first fingerprint feature and the second fingerprint feature may be reduced in dimension by an already-trained Linear Discriminant Analysis (LDA) model.
- the coding feature representation parameters of a first set dimension are extracted from the unlabeled fingerprint samples by the trained automatic codec network, and LDA training is performed on these parameters to obtain a projection matrix of a second set dimension.
- for example, the AED network may output a coding feature representation parameter of a first set dimension of 500 for an unlabeled fingerprint sample, and the trained LDA may reduce it to a coding feature representation parameter of a second set dimension of 200, thereby reducing the computational complexity of calculating the cosine distance.
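The projection step can be sketched as a single matrix multiplication. Here the projection matrix is random purely for illustration; in the method it would be the matrix produced by the LDA training described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the projection matrix produced by LDA training: it maps the
# first set dimension (500) onto the second set dimension (200).
lda_projection = rng.normal(0.0, 0.1, (500, 200))

def reduce_dimension(feature, projection):
    """Project a fingerprint feature onto the lower-dimensional LDA subspace."""
    return feature @ projection

third_feature = reduce_dimension(rng.random(500), lda_projection)   # from the 1st feature
fourth_feature = reduce_dimension(rng.random(500), lda_projection)  # from the 2nd feature
```

Since both features are projected by the same matrix, the reduced features share the second set dimension, and the subsequent cosine-distance computation runs over 200 rather than 500 components.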
- step S103: it is determined whether the first fingerprint image and the second fingerprint image are the same fingerprint according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature.
- the cosine distance between the third fingerprint feature and the fourth fingerprint feature may be compared with a preset threshold: if the cosine distance is greater than the preset threshold, the first fingerprint image and the second fingerprint image are determined to be the same fingerprint; if it is less than or equal to the preset threshold, they are determined to be different fingerprints.
- for example, the first fingerprint image of user A is collected by the fingerprint sensor 11, and the first fingerprint image, together with the second fingerprint image already stored in the database 12, is input to the automatic codec network 13, which outputs the first fingerprint feature of the first fingerprint image and the second fingerprint feature of the second fingerprint image.
- the first fingerprint feature and the second fingerprint feature are both 500-dimensional fingerprint features.
- the 500-dimensional first and second fingerprint features are reduced in dimension by the projection matrix of the LDA module 14: the LDA module 14 reduces them from 500 dimensions to 200 dimensions, outputting the third fingerprint feature (the dimension-reduced first fingerprint feature) and the fourth fingerprint feature (the dimension-reduced second fingerprint feature), both of 200 dimensions.
- the distance calculation module 15 calculates the cosine distance between the 200-dimensional third and fourth fingerprint features, and the result output module 16 compares the cosine distance with a threshold: if the cosine distance is greater than the threshold, the result output module 16 outputs a result indicating that the first fingerprint image and the second fingerprint image belong to the same fingerprint; if it is less than or equal to the threshold, the result output module 16 outputs a result indicating that they belong to different fingerprints.
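The comparison performed by modules 15 and 16 can be sketched as follows. The threshold value of 0.9 is an illustrative placeholder, not a value from the disclosure; the method derives the preset threshold from a sample database and an acceptable recognition error rate.

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine similarity between two feature vectors; the disclosure calls
    this the 'cosine distance', with larger values meaning more similar."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_fingerprint(feat_a, feat_b, threshold=0.9):
    # Greater than the threshold -> same fingerprint; otherwise -> different.
    return cosine_distance(feat_a, feat_b) > threshold

v = np.array([1.0, 2.0, 3.0])
assert same_fingerprint(v, 2.0 * v)                       # same direction: cosine = 1
assert not same_fingerprint(v, np.array([3.0, -1.0, 0.5]))
```

Note that this score is scale-invariant: a feature and any positive multiple of it compare as identical, which is why the comparison survives the LDA projection's arbitrary scaling.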
- because the AED network has been trained on a large number of fingerprint images, the first fingerprint feature of the first fingerprint image and the second fingerprint feature of the second fingerprint image extracted by the AED network may include features that are advantageous for fingerprint recognition, avoiding the requirement of the related art that the global feature points and local feature points of the fingerprint be extracted to realize recognition.
- because the AED network recognizes by features advantageous for fingerprint recognition, fingerprint recognition can still be realized when the quality of the first fingerprint image is low and the global feature points and local feature points cannot be extracted, which greatly improves the accuracy of recognition on low-quality fingerprint images; performing dimensionality reduction on the first and second fingerprint features greatly reduces the computational complexity of the fingerprint recognition process.
- the automatic codec network includes at least one coding layer
- the fingerprint identification method may further include:
- the coding feature parameters of each coding layer in the at least one coding layer are trained by the unlabeled fingerprint samples, to obtain the coding feature representation parameters corresponding to each coding layer;
- when the reconstruction error reaches a minimum value, the training of the automatic codec network is stopped, and the first-trained automatic codec network is obtained.
- a classifier is connected to the last coding layer of the first-trained automatic codec network, and the method may further include:
- the training of the classifier is stopped when the reconstruction error between the classifier's output and the labeled fingerprint sample reaches a minimum.
- a classifier is connected to the last coding layer of the first-trained automatic codec network, and the method may further include:
- the method may further include:
- linear discriminant analysis (LDA) training is performed on the coding feature representation parameters of the first set dimension, to obtain a projection matrix of a second set dimension for the LDA.
- determining whether the first fingerprint image and the second fingerprint image are the same fingerprint according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature may include:
- if the cosine distance is greater than the preset threshold, determining that the first fingerprint image and the second fingerprint image are the same fingerprint; if the cosine distance is less than or equal to the preset threshold, determining that they are different fingerprints.
- the above method provided by the embodiments of the present disclosure avoids the related art's reliance on the global feature points and local feature points of the fingerprint to realize recognition, ensures that fingerprint recognition can still be realized when the quality of the first fingerprint image is low and the global and local feature points cannot be extracted, greatly improves the accuracy of recognition on low-quality fingerprint images, and can greatly reduce the computational complexity of the fingerprint recognition process.
- FIG. 2A is a flowchart of a fingerprint identification method according to an exemplary embodiment
- FIG. 2B is a schematic structural diagram of an AED network according to an exemplary embodiment
- FIG. 2C is a schematic diagram showing how to train an AED network according to an exemplary embodiment.
- the fingerprint identification method includes the following steps:
- step S201: the coding feature parameters of each coding layer in the at least one coding layer are trained by the unlabeled fingerprint samples, to obtain the coding feature representation parameters corresponding to each coding layer.
- step S202: the coding feature representation parameter corresponding to each coding layer is reconstructed through the decoding layer corresponding to that coding layer, to obtain the fingerprint reconstruction data of the unlabeled fingerprint sample.
- step S203: the reconstruction error between the fingerprint reconstruction data and the unlabeled fingerprint sample is determined.
- step S204: the coding feature representation parameters of each coding layer are adjusted according to the reconstruction error.
- step S205: the training of the automatic codec network is stopped when the reconstruction error reaches a minimum value.
- the AED network includes at least one coding layer.
- the AED network 20 as shown in FIG. 2B includes three coding layers (the coding layer 21, the coding layer 22, and the coding layer 23, respectively).
- the training of the encoding layer 21 is taken as an example.
- each unlabeled fingerprint sample can be input to the encoding layer 21, which outputs a coding feature representation parameter for the sample; this parameter is a representation of the input unlabeled fingerprint sample.
- to verify whether the coding feature representation parameter is consistent with the unlabeled fingerprint sample, the parameter can be input to the decoding layer 24, and the reconstruction error between the output of the decoding layer 24 and the unlabeled fingerprint sample can be calculated by the reconstruction error calculation module 25. If the reconstruction error has not reached a minimum value, the coding feature representation parameter of the encoding layer 21 is adjusted according to the reconstruction error until the error reaches a minimum, at which point the encoding layer 21 can be regarded as able to represent the unlabeled fingerprint sample.
- similarly, for the coding layer 22 and the coding layer 23, the respective corresponding decoding layers can verify whether the coding feature representation parameters of each layer are consistent with the unlabeled fingerprint sample, until the coding layer 22 and the coding layer 23 can also represent the unlabeled fingerprint sample; the details are not repeated in this disclosure.
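The training of a single coding layer, as described above for encoding layer 21 and decoding layer 24, can be sketched with a linear autoencoder. The linear layers, learning rate, epoch count, and toy data are illustrative assumptions; the disclosure does not specify these details, and "reaches a minimum value" is approximated here by simply running gradient descent to (near) convergence.

```python
import numpy as np

rng = np.random.default_rng(2)

def train_coding_layer(samples, hidden_dim, lr=0.05, epochs=500):
    """Train one coding layer as a linear autoencoder on unlabeled samples:
    encode, reconstruct through the paired decoding layer, measure the
    reconstruction error, and adjust the coding parameters by gradient
    descent so the error shrinks toward a minimum."""
    n, d = samples.shape
    W_enc = rng.normal(0.0, 0.1, (d, hidden_dim))   # coding feature parameters
    W_dec = rng.normal(0.0, 0.1, (hidden_dim, d))   # paired decoding layer
    for _ in range(epochs):
        h = samples @ W_enc                 # coding feature representation
        err = h @ W_dec - samples           # reconstruction error
        grad_dec = h.T @ err / n
        grad_enc = samples.T @ (err @ W_dec.T) / n
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    final_err = float(np.mean((samples @ W_enc @ W_dec - samples) ** 2))
    return W_enc, final_err

unlabeled = rng.random((64, 16))             # toy unlabeled fingerprint samples
W1, final_err = train_coding_layer(unlabeled, hidden_dim=8)
initial_err = float(np.mean(unlabeled ** 2))  # error of an all-zero reconstruction
```

In the greedy layer-wise scheme the disclosure describes, the output of this trained layer would then serve as the input data for training the next coding layer (layer 22, then layer 23) in the same way.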
- in this way, the AED network can encode a fingerprint image and represent the fingerprint image by its fingerprint features.
- the trained AED network can identify the image features in a fingerprint image that are advantageous for fingerprint recognition, avoiding recognition errors caused by the failure of low-quality fingerprint images to yield global feature points and local feature points.
- FIG. 3A is a flow diagram showing how to fine-tune the parameters of an AED network by labeled fingerprint samples, according to an exemplary embodiment.
- FIG. 3B is a flow chart showing how to fine-tune, by labeled fingerprint samples, the parameters of a classifier connected to an AED network, according to yet another exemplary embodiment.
- FIG. 3C is a schematic structural diagram of an AED network and a classifier according to still another exemplary embodiment.
- the fingerprint identification method includes the following steps:
- step S301: the labeled fingerprint sample is input to the first-trained automatic codec network to obtain a first output result.
- step S302: the first output result is input to the classifier, and the classifier is trained by the labeled fingerprint sample.
- step S303: the training of the classifier is stopped when the reconstruction error between the classifier's output and the labeled fingerprint sample reaches a minimum.
- through the training described above, the coding feature representation parameters of the plurality of coding layers of the AED network 20 (the coding layer 21, the coding layer 22, and the coding layer 23 shown in FIG. 3C) can be obtained, each coding layer yielding a different representation of the unlabeled fingerprint samples.
- Those skilled in the art can understand that the disclosure does not limit the number of layers of the AED network.
- a classifier 31 can be added at the topmost coding layer of the AED network (e.g., coding layer 23).
- the classifier 31 may be, for example, a classifier such as logistic regression or a support vector machine (SVM).
- the classifier 31 is trained by a standard supervised training method for multi-layer neural networks (for example, gradient descent) using the first output results of the labeled fingerprint samples; when the reconstruction error between the classifier's output, calculated by the reconstruction error calculation module 32, and the labeled fingerprint sample reaches a minimum, the training of the classifier 31 is stopped, thereby enabling the AED network 20 to implement the classification function.
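The supervised training of the classifier on the top coding layer's output can be sketched with logistic regression trained by gradient descent. The cluster data standing in for the "first output results", the learning rate, and the epoch count are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_classifier(features, labels, lr=0.5, epochs=400):
    """Train a logistic-regression classifier on the first output results of
    labeled samples, by gradient descent (the standard supervised method for
    multi-layer neural networks mentioned in the disclosure)."""
    n, d = features.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = sigmoid(features @ w + b)
        grad = p - labels                  # gradient of the cross-entropy loss
        w -= lr * features.T @ grad / n
        b -= lr * float(np.mean(grad))
    return w, b

# Toy "first output results": two separable clusters standing in for the top
# coding layer's representations of two fingerprint classes.
feats = np.vstack([rng.normal(0.0, 0.3, (30, 4)) + 1.0,
                   rng.normal(0.0, 0.3, (30, 4)) - 1.0])
labels = np.array([1.0] * 30 + [0.0] * 30)
w, b = train_classifier(feats, labels)
accuracy = float(np.mean((sigmoid(feats @ w + b) > 0.5) == (labels == 1.0)))
```

In the fine-tuning variant of FIG. 3B, the same gradient signal would additionally be backpropagated into the coding layers' feature representation parameters rather than stopping at the classifier.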
- the fingerprint identification method includes the following steps:
- step S311: the labeled fingerprint sample is input to the first-trained automatic codec network to obtain a first output result.
- step S312: the first output result is input to the classifier; the classifier is trained by the labeled fingerprint sample, and the coding feature representation parameters of each coding layer of the first-trained automatic codec network are fine-tuned.
- step S313: the training of the classifier and the fine-tuning of the coding feature representation parameters of each coding layer are stopped when the reconstruction error between the classifier's output and the labeled fingerprint sample reaches a minimum.
- a supervised training method (e.g., gradient descent) uses the first output results of the labeled fingerprint samples to train the classifier 31 and to fine-tune the coding feature representation parameters of the coding layer 21, the coding layer 22, and the coding layer 23.
- when the reconstruction error between the classifier's output, calculated by the reconstruction error calculation module 32, and the labeled fingerprint sample reaches a minimum, the training of the classifier 31 is stopped.
- in this way, the AED network 20 can be fine-tuned on the basis of the classification task, and when the labeled fingerprint samples are sufficient, the AED network can achieve end-to-end learning, thereby improving the accuracy of the AED network and the classifier in fingerprint recognition.
- FIG. 4 is a flowchart of a fingerprint identification method according to an exemplary embodiment. This embodiment uses the method provided by the embodiments of the present disclosure to illustrate, by way of example, how to perform fingerprint recognition by means of a cosine distance. As shown in FIG. 4, the method includes the following steps:
- step S401: feature extraction is performed, through the automatic codec network, on the first fingerprint image collected by the fingerprint sensor and the second fingerprint image stored in the database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature has the same dimension as the second fingerprint feature.
- step S402: the first fingerprint feature and the second fingerprint feature are subjected to dimensionality reduction to obtain a third fingerprint feature and a fourth fingerprint feature, respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimension, which is smaller than the dimension of the first fingerprint feature and the second fingerprint feature.
- step S403: the cosine distance between the third fingerprint feature and the fourth fingerprint feature is compared with a preset threshold. If the cosine distance is greater than the preset threshold, step S404 is performed; if the cosine distance is less than or equal to the preset threshold, step S405 is performed.
- step S404: if the cosine distance is greater than the preset threshold, it is determined that the first fingerprint image and the second fingerprint image are the same fingerprint.
- step S405: if the cosine distance is less than or equal to the preset threshold, it is determined that the first fingerprint image and the second fingerprint image are different fingerprints.
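The comparison in steps S403 to S405 can be sketched as follows. Note the document's convention: its "cosine distance" grows with similarity (it is the cosine of the angle between the two feature vectors), so a larger value means a closer match. The 0.9 default threshold is a placeholder assumption; a real threshold comes from calibration on a sample database.

```python
import numpy as np

def cosine_score(a, b):
    """Cosine of the angle between two feature vectors (the document's
    'cosine distance': larger means more similar)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_fingerprint(third_feature, fourth_feature, threshold=0.9):
    """Steps S403-S405: a score above the preset threshold means the two
    fingerprint images are judged to be the same fingerprint."""
    return cosine_score(third_feature, fourth_feature) > threshold
```

For example, two identical feature vectors score 1.0 and match, while orthogonal vectors score 0.0 and do not.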
- a suitable preset threshold may be obtained by training on a large number of fingerprint samples in a sample database, and the preset threshold may correspond to a recognition error rate acceptable to the user. For example, suppose the sample database contains 100,000 intra-class sample pairs and 1 million inter-class sample pairs, and a recognition error rate of one thousandth is to be maintained. The cosine distance of each pair can be calculated to obtain a value between 0 and 1, yielding 100,000 intra-class cosine distance values and 1 million inter-class cosine distance values, i.e. 1.1 million values in total; these 1.1 million cosine distance values are then combined with the recognition error rate to determine a suitable preset threshold.
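One plausible reading of this calibration procedure, sketched with numpy: choose the threshold as the (1 − error-rate) quantile of the inter-class (impostor) cosine values, so that roughly the acceptable fraction of impostor pairs scores above it. The uniform toy scores and the quantile rule are assumptions for illustration, not the patent's stated algorithm.

```python
import numpy as np

def calibrate_threshold(inter_class_scores, error_rate=1e-3):
    # Threshold at the (1 - error_rate) quantile of the inter-class
    # (impostor) cosine values: roughly `error_rate` of impostor pairs
    # would then exceed it and be falsely accepted.
    return float(np.quantile(np.asarray(inter_class_scores), 1.0 - error_rate))

# Toy stand-ins for the one million inter-class cosine values.
rng = np.random.default_rng(2)
inter = rng.uniform(0.0, 0.5, size=100_000)
thr = calibrate_threshold(inter, error_rate=1e-3)
```

With impostor scores spread uniformly over [0, 0.5], the calibrated threshold lands near the top of that range, admitting about one impostor pair in a thousand.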
- the fingerprint is thus identified by the cosine distance between the third fingerprint feature and the fourth fingerprint feature; since the preset threshold can be obtained from a large number of fingerprint samples and combined with a recognition error rate acceptable to the user, the user experience of fingerprint identification products is improved to some extent.
- FIG. 5 is a block diagram of a fingerprint identification apparatus according to an exemplary embodiment. As shown in FIG. 5, the fingerprint identification apparatus includes:
- the first extraction module 51 is configured to perform feature extraction on the first fingerprint image collected by the fingerprint sensor and the second fingerprint image stored in the database through the automatic code decoding network, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimension;
- the dimension reduction processing module 52 is configured to perform dimensionality reduction on the first fingerprint feature and the second fingerprint feature extracted by the first extraction module 51, to obtain a third fingerprint feature and a fourth fingerprint feature respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimension, which is smaller than the dimension of the first fingerprint feature and the second fingerprint feature;
- the identification module 53 is configured to determine, according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature obtained by the dimension reduction processing module 52, whether the first fingerprint image and the second fingerprint image are the same fingerprint.
- FIG. 6 is a block diagram of another fingerprint recognition apparatus, according to an exemplary embodiment.
- the automatic codec network includes at least one coding layer, and the apparatus may further include:
- the first training module 54 is configured to train the coding feature parameters of each coding layer in the at least one coding layer by using the unlabeled fingerprint samples, to obtain the coding feature representation parameters corresponding to each layer of the coding layer;
- the first reconstruction module 55 is configured to perform data reconstruction on the coding feature representation parameters corresponding to each coding layer trained by the first training module 54, through the decoding layer corresponding to that coding layer, to obtain fingerprint reconstruction data of the unlabeled fingerprint samples;
- the first determining module 56 is configured to determine a reconstruction error of the fingerprint reconstruction data determined by the first reconstruction module 55 and the unlabeled fingerprint sample;
- the adjusting module 57 is configured to adjust the coding feature representation parameter of each coding layer according to the reconstruction error determined by the first determination module 56;
- the first control module 58 is configured to stop training the automatic codec network when the reconstruction error determined by the first determining module 56 reaches a minimum value, to obtain the automatic codec network after the first training.
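The unsupervised stage handled by modules 54 through 58 (encode, reconstruct through the matching decoding layer, measure the reconstruction error, adjust, and stop at the minimum) can be sketched for a single coding/decoding layer pair. The data, layer sizes, and learning rate are assumptions; a real AED network would stack several such layers and train them greedily.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 12))            # stand-in for unlabeled fingerprint samples

W = rng.normal(scale=0.1, size=(12, 6))   # coding layer, 12 -> 6
V = rng.normal(scale=0.1, size=(6, 12))   # matching decoding layer, 6 -> 12

lr = 0.01
errs = []
for _ in range(300):
    H = np.tanh(X @ W)                    # coding feature representation
    X_hat = H @ V                         # fingerprint reconstruction data
    errs.append(float(np.mean((X_hat - X) ** 2)))   # reconstruction error
    if len(errs) > 1 and errs[-1] >= errs[-2]:
        break                             # error stopped decreasing: stop training
    dXh = 2 * (X_hat - X) / len(X)
    V -= lr * (H.T @ dXh)                 # adjust decoding layer
    W -= lr * (X.T @ ((dXh @ V.T) * (1 - H ** 2)))  # adjust coding layer
```

The stopping rule here (break once the error no longer decreases) is one simple way to realize "stop when the reconstruction error reaches a minimum value".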
- the last coding layer of the automatic codec network after the first training may also be connected to the classifier, and the device may further include:
- the first processing module 59 is configured to input the tagged fingerprint samples to the automatic codec network after the first training, to obtain a first output result;
- the second training module 60 is configured to input the first output result obtained by the first processing module 59 to the classifier, and train the classifier through the tagged fingerprint sample;
- the second control module 61 is configured to control the second training module 60 to stop training the classifier when the reconstruction error between the result output by the classifier and the tagged fingerprint samples is at its minimum.
- the last coding layer of the automatic codec network after the first training may also be connected to the classifier, and the device may further include:
- the second processing module 62 is configured to input the tagged fingerprint sample to the automatic codec network after the first training to obtain a second output result;
- the third training module 63 is configured to input the second output result obtained by the second processing module 62 to the classifier, train the classifier with the tagged fingerprint samples, and fine-tune the coding feature representation parameters of each coding layer of the automatic codec network after the first training;
- the third control module 64 is configured to control the third training module 63 to stop the training of the classifier and the fine-tuning of the coding feature representation parameters of each coding layer when the reconstruction error between the result output by the classifier and the tagged fingerprint samples is at its minimum.
- the apparatus may further include:
- the second extraction module 65 is configured to extract, by the trained automatic code decoding network, the coding feature representation parameter of the first set dimension of the unlabeled fingerprint sample;
- the fourth training module 66 is configured to perform linear discriminant analysis LDA training on the coded feature representation parameters of the first set dimension extracted by the second extraction module 65 to obtain a projection matrix of the second set dimension of the LDA.
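A minimal sketch of the LDA step performed by the fourth training module 66: build within-class and between-class scatter matrices from labeled feature vectors and take the leading eigenvectors of Sw⁻¹Sb as the projection matrix of the second set dimension. The toy two-class data is an assumption for illustration; the patent applies this to coding feature representation parameters of the first set dimension.

```python
import numpy as np

def lda_projection(features, labels, out_dim):
    X = np.asarray(features, float)
    y = np.asarray(labels)
    overall_mean = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))   # within-class scatter
    Sb = np.zeros((d, d))   # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Leading eigenvectors of Sw^-1 Sb form the projection matrix.
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    return evecs.real[:, order[:out_dim]]

# Toy two-class "fingerprint features" (an assumption for illustration).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(size=(50, 4)),
               rng.normal(size=(50, 4)) + np.array([5.0, 0.0, 0.0, 0.0])])
y = np.array([0] * 50 + [1] * 50)
P = lda_projection(X, y, out_dim=1)   # "second set dimension" of 1
Z = X @ P                             # reduced-dimension features
```

Projecting both fingerprint features through the same matrix P yields the lower-dimensional third and fourth features compared in the cosine-distance step.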
- the identification module 53 can include:
- the comparison sub-module 531 is configured to compare the cosine distance of the third fingerprint feature and the fourth fingerprint feature with a preset threshold
- the first determining sub-module 532 is configured to determine that the first fingerprint image and the second fingerprint image are the same fingerprint if the comparison result of the comparison sub-module 531 indicates that the cosine distance is greater than a preset threshold;
- the second determining submodule 533 is configured to determine that the first fingerprint image and the second fingerprint image are different fingerprints if the comparison result of the comparison submodule 531 indicates that the cosine distance is less than or equal to the preset threshold.
- FIG. 7 is a block diagram of a fingerprint recognition device, according to an exemplary embodiment.
- device 700 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and the like.
- apparatus 700 can include one or more of the following components: processing component 702, memory 704, power component 706, multimedia component 708, audio component 710, input/output (I/O) interface 712, sensor component 714, and communication component 716.
- Processing component 702 typically controls the overall operation of device 700, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- Processing component 702 can include one or more processors 720 to execute instructions to perform all or part of the steps described above.
- processing component 702 can include one or more modules to facilitate interaction between processing component 702 and other components.
- processing component 702 can include a multimedia module to facilitate interaction between multimedia component 708 and processing component 702.
- Memory 704 is configured to store various types of data to support operation at device 700. Examples of such data include instructions for any application or method operating on device 700, contact data, phone book data, messages, pictures, videos, and the like. Memory 704 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- Power component 706 provides power to various components of device 700.
- Power component 706 can include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 700.
- the multimedia component 708 includes a screen that provides an output interface between the device 700 and the user.
- the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensor may sense not only the boundary of the touch or sliding action, but also the duration and pressure associated with the touch or slide operation.
- the multimedia component 708 includes a front camera and/or a rear camera. When the device 700 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 710 is configured to output and/or input an audio signal.
- audio component 710 includes a microphone (MIC) that is configured to receive an external audio signal when device 700 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode.
- the received audio signal may be further stored in memory 704 or transmitted via communication component 716.
- audio component 710 also includes a speaker for outputting an audio signal.
- the I/O interface 712 provides an interface between the processing component 702 and the peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
- Sensor assembly 714 includes one or more sensors for providing device 700 with various aspects of status assessment.
- sensor component 714 can detect an open/closed state of device 700 and the relative positioning of components, such as the display and keypad of device 700; sensor component 714 can also detect a change in position of device 700 or of a component of device 700, the presence or absence of user contact with device 700, the orientation or acceleration/deceleration of device 700, and a change in the temperature of device 700.
- Sensor assembly 714 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact.
- Sensor component 714 can also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 714 can also include an acceleration sensor, a gyro sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- Communication component 716 is configured to facilitate wired or wireless communication between device 700 and other devices.
- the device 700 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof.
- communication component 716 receives broadcast signals or broadcast associated information from an external broadcast management system via a broadcast channel.
- the communication component 716 also includes a near field communication (NFC) module to facilitate short range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
- apparatus 700 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components, for performing the above methods.
- a non-transitory computer readable storage medium comprising instructions, such as the memory 704 comprising instructions, which are executable by the processor 720 of the apparatus 700 to perform the above method.
- the non-transitory computer readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, or an optical data storage device.
Abstract
Description
Claims (13)
- A fingerprint identification method, characterized in that the method comprises: performing feature extraction, through an automatic codec network, on a first fingerprint image collected by a fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimension; performing dimensionality reduction on the first fingerprint feature and the second fingerprint feature to obtain a third fingerprint feature and a fourth fingerprint feature respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimension, which is smaller than the dimension of the first fingerprint feature and the second fingerprint feature; and determining, according to a cosine distance between the third fingerprint feature and the fourth fingerprint feature, whether the first fingerprint image and the second fingerprint image are the same fingerprint.
- The method according to claim 1, characterized in that the automatic codec network comprises at least one coding layer, and the method further comprises: training the coding feature parameters of each coding layer of the at least one coding layer with unlabeled fingerprint samples, to obtain coding feature representation parameters corresponding to each coding layer; performing data reconstruction on the coding feature representation parameters corresponding to each coding layer through the decoding layer corresponding to that coding layer, to obtain fingerprint reconstruction data of the unlabeled fingerprint samples; determining a reconstruction error between the fingerprint reconstruction data and the unlabeled fingerprint samples; adjusting the coding feature representation parameters of each coding layer according to the reconstruction error; and stopping training the automatic codec network when the reconstruction error reaches a minimum value, to obtain the automatic codec network after the first training.
- The method according to claim 2, characterized in that a classifier is connected to the last coding layer of the automatic codec network after the first training, and the method further comprises: inputting tagged fingerprint samples to the automatic codec network after the first training, to obtain a first output result; inputting the first output result to the classifier and training the classifier with the tagged fingerprint samples; and stopping the training of the classifier when the reconstruction error between the result output by the classifier and the tagged fingerprint samples is at its minimum.
- The method according to claim 2, characterized in that a classifier is connected to the last coding layer of the automatic codec network after the first training, and the method further comprises: inputting tagged fingerprint samples to the automatic codec network after the first training, to obtain a second output result; inputting the second output result to the classifier, training the classifier with the tagged fingerprint samples, and fine-tuning the coding feature representation parameters of each coding layer of the automatic codec network after the first training; and stopping the training of the classifier and the fine-tuning of the coding feature representation parameters of each coding layer when the reconstruction error between the result output by the classifier and the tagged fingerprint samples is at its minimum.
- The method according to claim 2, characterized in that the method further comprises: extracting, through the trained automatic codec network, coding feature representation parameters of a first set dimension of the unlabeled fingerprint samples; and performing linear discriminant analysis (LDA) training on the coding feature representation parameters of the first set dimension, to obtain a projection matrix of a second set dimension of the LDA.
- The method according to claim 1, characterized in that determining, according to the cosine distance between the third fingerprint feature and the fourth fingerprint feature, whether the first fingerprint image and the second fingerprint image are the same fingerprint comprises: comparing the cosine distance between the third fingerprint feature and the fourth fingerprint feature with a preset threshold; if the cosine distance is greater than the preset threshold, determining that the first fingerprint image and the second fingerprint image are the same fingerprint; and if the cosine distance is less than or equal to the preset threshold, determining that the first fingerprint image and the second fingerprint image are different fingerprints.
- A fingerprint identification apparatus, characterized in that the apparatus comprises: a first extraction module configured to perform feature extraction, through an automatic codec network, on a first fingerprint image collected by a fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimension; a dimension reduction processing module configured to perform dimensionality reduction on the first fingerprint feature and the second fingerprint feature extracted by the first extraction module, to obtain a third fingerprint feature and a fourth fingerprint feature respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimension, which is smaller than the dimension of the first fingerprint feature and the second fingerprint feature; and an identification module configured to determine, according to a cosine distance between the third fingerprint feature and the fourth fingerprint feature obtained by the dimension reduction processing module, whether the first fingerprint image and the second fingerprint image are the same fingerprint.
- The apparatus according to claim 7, characterized in that the automatic codec network comprises at least one coding layer, and the apparatus further comprises: a first training module configured to train the coding feature parameters of each coding layer of the at least one coding layer with unlabeled fingerprint samples, to obtain coding feature representation parameters corresponding to each coding layer; a first reconstruction module configured to perform data reconstruction on the coding feature representation parameters corresponding to each coding layer trained by the first training module, through the decoding layer corresponding to that coding layer, to obtain fingerprint reconstruction data of the unlabeled fingerprint samples; a first determining module configured to determine a reconstruction error between the fingerprint reconstruction data obtained by the first reconstruction module and the unlabeled fingerprint samples; an adjusting module configured to adjust the coding feature representation parameters of each coding layer according to the reconstruction error determined by the first determining module; and a first control module configured to stop training the automatic codec network when the reconstruction error determined by the first determining module reaches a minimum value, to obtain the automatic codec network after the first training.
- The apparatus according to claim 8, characterized in that a classifier is connected to the last coding layer of the automatic codec network after the first training, and the apparatus further comprises: a first processing module configured to input tagged fingerprint samples to the automatic codec network after the first training, to obtain a first output result; a second training module configured to input the first output result obtained by the first processing module to the classifier and to train the classifier with the tagged fingerprint samples; and a second control module configured to control the second training module to stop training the classifier when the reconstruction error between the result output by the classifier and the tagged fingerprint samples is at its minimum.
- The apparatus according to claim 8, characterized in that a classifier is connected to the last coding layer of the automatic codec network after the first training, and the apparatus further comprises: a second processing module configured to input tagged fingerprint samples to the automatic codec network after the first training, to obtain a second output result; a third training module configured to input the second output result obtained by the second processing module to the classifier, train the classifier with the tagged fingerprint samples, and fine-tune the coding feature representation parameters of each coding layer of the automatic codec network after the first training; and a third control module configured to control the third training module to stop the training of the classifier and the fine-tuning of the coding feature representation parameters of each coding layer when the reconstruction error between the result output by the classifier and the tagged fingerprint samples is at its minimum.
- The apparatus according to claim 8, characterized in that the apparatus further comprises: a second extraction module configured to extract, through the trained automatic codec network, coding feature representation parameters of a first set dimension of the unlabeled fingerprint samples; and a fourth training module configured to perform linear discriminant analysis (LDA) training on the coding feature representation parameters of the first set dimension extracted by the second extraction module, to obtain a projection matrix of a second set dimension of the LDA.
- The apparatus according to claim 7, characterized in that the identification module comprises: a comparison sub-module configured to compare the cosine distance between the third fingerprint feature and the fourth fingerprint feature with a preset threshold; a first determining sub-module configured to determine that the first fingerprint image and the second fingerprint image are the same fingerprint if the comparison result of the comparison sub-module indicates that the cosine distance is greater than the preset threshold; and a second determining sub-module configured to determine that the first fingerprint image and the second fingerprint image are different fingerprints if the comparison result of the comparison sub-module indicates that the cosine distance is less than or equal to the preset threshold.
- A fingerprint identification apparatus, characterized in that the apparatus comprises: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to: perform feature extraction, through an automatic codec network, on a first fingerprint image collected by a fingerprint sensor and a second fingerprint image stored in a database, to obtain a first fingerprint feature corresponding to the first fingerprint image and a second fingerprint feature corresponding to the second fingerprint image, wherein the first fingerprint feature and the second fingerprint feature have the same dimension; perform dimensionality reduction on the first fingerprint feature and the second fingerprint feature to obtain a third fingerprint feature and a fourth fingerprint feature respectively, wherein the third fingerprint feature and the fourth fingerprint feature have the same dimension, which is smaller than the dimension of the first fingerprint feature and the second fingerprint feature; and determine, according to a cosine distance between the third fingerprint feature and the fourth fingerprint feature, whether the first fingerprint image and the second fingerprint image are the same fingerprint.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
MX2016005225A MX361142B (es) | 2015-10-28 | 2015-12-29 | Método y aparato de reconocimiento de huellas dactilares. |
RU2016129191A RU2642369C2 (ru) | 2015-10-28 | 2015-12-29 | Аппарат и способ распознавания отпечатка пальца |
KR1020167005169A KR101992522B1 (ko) | 2015-10-28 | 2015-12-29 | 지문 인식 방법, 장치, 프로그램 및 기록매체 |
JP2017547061A JP2018500707A (ja) | 2015-10-28 | 2015-12-29 | 指紋認証方法及びその装置、プログラム及び記録媒体 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510712896.0A CN105335713A (zh) | 2015-10-28 | 2015-10-28 | 指纹识别方法及装置 |
CN201510712896.0 | 2015-10-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017071083A1 true WO2017071083A1 (zh) | 2017-05-04 |
Family
ID=55286229
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/099511 WO2017071083A1 (zh) | 2015-10-28 | 2015-12-29 | 指纹识别方法及装置 |
Country Status (8)
Country | Link |
---|---|
US (1) | US9904840B2 (zh) |
EP (1) | EP3163508A1 (zh) |
JP (1) | JP2018500707A (zh) |
KR (1) | KR101992522B1 (zh) |
CN (1) | CN105335713A (zh) |
MX (1) | MX361142B (zh) |
RU (1) | RU2642369C2 (zh) |
WO (1) | WO2017071083A1 (zh) |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10713466B2 (en) | 2014-03-07 | 2020-07-14 | Egis Technology Inc. | Fingerprint recognition method and electronic device using the same |
CN105825098B (zh) * | 2016-03-16 | 2018-03-27 | 广东欧珀移动通信有限公司 | 一种电子终端的屏幕解锁方法、图像采集方法及装置 |
US10452951B2 (en) * | 2016-08-26 | 2019-10-22 | Goodrich Corporation | Active visual attention models for computer vision tasks |
CN107885312A (zh) * | 2016-09-29 | 2018-04-06 | 青岛海尔智能技术研发有限公司 | 一种手势控制方法及装置 |
IT201600105253A1 (it) * | 2016-10-19 | 2018-04-19 | Torino Politecnico | Dispositivo e metodi per l'autenticazione di unn apparato d'utente |
CN106599807A (zh) * | 2016-12-01 | 2017-04-26 | 中科唯实科技(北京)有限公司 | 一种基于自编码的行人检索方法 |
GB2563599A (en) * | 2017-06-19 | 2018-12-26 | Zwipe As | Incremental enrolment algorithm |
JP6978665B2 (ja) * | 2017-07-25 | 2021-12-08 | 富士通株式会社 | 生体画像処理装置、生体画像処理方法及び生体画像処理プログラム |
CN107478869A (zh) * | 2017-08-08 | 2017-12-15 | 惠州Tcl移动通信有限公司 | 一种移动终端指纹模组的测试夹具和测试方法 |
TWI676911B (zh) * | 2017-10-12 | 2019-11-11 | 神盾股份有限公司 | 指紋識別方法以及使用指紋識別方法的電子裝置 |
CN108563767B (zh) | 2018-04-19 | 2020-11-27 | 深圳市商汤科技有限公司 | 图像检索方法及装置 |
US11924349B2 (en) * | 2022-06-09 | 2024-03-05 | The Government of the United States of America, as represented by the Secretary of Homeland Security | Third party biometric homomorphic encryption matching for privacy protection |
US11843699B1 (en) | 2022-06-09 | 2023-12-12 | The Government of the United States of America, as represented by the Secretary of Homeland Security | Biometric identification using homomorphic primary matching with failover non-encrypted exception handling |
CN116311389B (zh) * | 2022-08-18 | 2023-12-12 | 荣耀终端有限公司 | 指纹识别的方法和装置 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070160262A1 (en) * | 2006-01-11 | 2007-07-12 | Samsung Electronics Co., Ltd. | Score fusion method and apparatus |
CN102646190A (zh) * | 2012-03-19 | 2012-08-22 | 腾讯科技(深圳)有限公司 | 一种基于生物特征的认证方法、装置及*** |
CN102750528A (zh) * | 2012-06-27 | 2012-10-24 | 西安理工大学 | 一种基于手掌特征提取身份识别方法 |
CN103324944A (zh) * | 2013-06-26 | 2013-09-25 | 电子科技大学 | 一种基于svm和稀疏表示的假指纹检测方法 |
CN103839041A (zh) * | 2012-11-27 | 2014-06-04 | 腾讯科技(深圳)有限公司 | 客户端特征的识别方法和装置 |
CN105354560A (zh) * | 2015-11-25 | 2016-02-24 | 小米科技有限责任公司 | 指纹识别方法及装置 |
Family Cites Families (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6069914A (en) * | 1996-09-19 | 2000-05-30 | Nec Research Institute, Inc. | Watermarking of image data using MPEG/JPEG coefficients |
TW312770B (en) * | 1996-10-15 | 1997-08-11 | Japen Ibm Kk | The hiding and taking out method of data |
JP2815045B2 (ja) * | 1996-12-16 | 1998-10-27 | 日本電気株式会社 | 画像特徴抽出装置,画像特徴解析装置,および画像照合システム |
US6373970B1 (en) * | 1998-12-29 | 2002-04-16 | General Electric Company | Image registration using fourier phase matching |
AU2001217335A1 (en) * | 2000-12-07 | 2002-07-01 | Kyowa Hakko Kogyo Co. Ltd. | Method and system for high precision classification of large quantity of information described with mutivariable |
US7058815B2 (en) * | 2001-01-22 | 2006-06-06 | Cisco Technology, Inc. | Method and system for digitally signing MPEG streams |
US7142699B2 (en) * | 2001-12-14 | 2006-11-28 | Siemens Corporate Research, Inc. | Fingerprint matching using ridge feature maps |
US7092584B2 (en) * | 2002-01-04 | 2006-08-15 | Time Warner Entertainment Company Lp | Registration of separations |
US7254275B2 (en) * | 2002-12-17 | 2007-08-07 | Symbol Technologies, Inc. | Method and system for image compression using image symmetry |
US7787667B2 (en) * | 2003-10-01 | 2010-08-31 | Authentec, Inc. | Spot-based finger biometric processing method and associated sensor |
US20060104484A1 (en) * | 2004-11-16 | 2006-05-18 | Bolle Rudolf M | Fingerprint biometric machine representations based on triangles |
JP2006330873A (ja) * | 2005-05-24 | 2006-12-07 | Japan Science & Technology Agency | 指紋照合装置、方法およびプログラム |
KR100797897B1 (ko) * | 2006-11-27 | 2008-01-24 | 연세대학교 산학협력단 | 생체정보 최적화 변환 함수를 이용한 인증시스템 |
JP5207870B2 (ja) * | 2008-08-05 | 2013-06-12 | 日立コンピュータ機器株式会社 | 次元削減方法、パターン認識用辞書生成装置、及びパターン認識装置 |
US8520979B2 (en) * | 2008-08-19 | 2013-08-27 | Digimarc Corporation | Methods and systems for content processing |
US8934545B2 (en) * | 2009-02-13 | 2015-01-13 | Yahoo! Inc. | Extraction of video fingerprints and identification of multimedia using video fingerprinting |
US8863096B1 (en) * | 2011-01-06 | 2014-10-14 | École Polytechnique Fédérale De Lausanne (Epfl) | Parallel symbolic execution on cluster of commodity hardware |
US8831350B2 (en) * | 2011-08-29 | 2014-09-09 | Dst Technologies, Inc. | Generation of document fingerprints for identification of electronic document types |
RU2486590C1 (ru) * | 2012-04-06 | 2013-06-27 | Федеральное государственное бюджетное образовательное учреждение высшего профессионального образования "Юго-Западный государственный университет" (ЮЗ ГУ) | Способ и устройство инвариантной идентификации отпечатков пальцев по ключевым точкам |
US9305559B2 (en) * | 2012-10-15 | 2016-04-05 | Digimarc Corporation | Audio watermark encoding with reversing polarity and pairwise embedding |
US9401153B2 (en) * | 2012-10-15 | 2016-07-26 | Digimarc Corporation | Multi-mode audio recognition and auxiliary data encoding and decoding |
US9232242B2 (en) * | 2012-12-11 | 2016-01-05 | Cbs Interactive, Inc. | Techniques to broadcast a network television program |
KR20150039908A (ko) * | 2013-10-04 | 2015-04-14 | 노두섭 | 영상 인식 장치 및 방법 |
KR101791518B1 (ko) * | 2014-01-23 | 2017-10-30 | 삼성전자주식회사 | 사용자 인증 방법 및 장치 |
CA2882968C (en) * | 2015-02-23 | 2023-04-25 | Sulfur Heron Cognitive Systems Inc. | Facilitating generation of autonomous control information |
KR102396514B1 (ko) * | 2015-04-29 | 2022-05-11 | 삼성전자주식회사 | 지문 정보 처리 방법 및 이를 지원하는 전자 장치 |
CN104899579A (zh) * | 2015-06-29 | 2015-09-09 | 小米科技有限责任公司 | 人脸识别方法和装置 |
US10339178B2 (en) * | 2015-06-30 | 2019-07-02 | Samsung Electronics Co., Ltd. | Fingerprint recognition method and apparatus |
FR3041423B1 (fr) * | 2015-09-22 | 2019-10-04 | Idemia Identity And Security | Procede d'extraction de caracteristiques morphologiques d'un echantillon de materiel biologique |
KR20170055811A (ko) * | 2015-11-12 | 2017-05-22 | 삼성전자주식회사 | 디스플레이를 구비한 전자 장치 및 전자 장치에서 디스플레이의 동작을 제어하기 위한 방법 |
-
2015
- 2015-10-28 CN CN201510712896.0A patent/CN105335713A/zh active Pending
- 2015-12-29 RU RU2016129191A patent/RU2642369C2/ru active
- 2015-12-29 JP JP2017547061A patent/JP2018500707A/ja active Pending
- 2015-12-29 KR KR1020167005169A patent/KR101992522B1/ko active IP Right Grant
- 2015-12-29 MX MX2016005225A patent/MX361142B/es active IP Right Grant
- 2015-12-29 WO PCT/CN2015/099511 patent/WO2017071083A1/zh active Application Filing
-
2016
- 2016-07-28 US US15/222,107 patent/US9904840B2/en active Active
- 2016-10-27 EP EP16196099.2A patent/EP3163508A1/en not_active Withdrawn
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070160262A1 (en) * | 2006-01-11 | 2007-07-12 | Samsung Electronics Co., Ltd. | Score fusion method and apparatus |
CN102646190A (zh) * | 2012-03-19 | 2012-08-22 | 腾讯科技(深圳)有限公司 | 一种基于生物特征的认证方法、装置及*** |
CN102750528A (zh) * | 2012-06-27 | 2012-10-24 | 西安理工大学 | 一种基于手掌特征提取身份识别方法 |
CN103839041A (zh) * | 2012-11-27 | 2014-06-04 | 腾讯科技(深圳)有限公司 | 客户端特征的识别方法和装置 |
CN103324944A (zh) * | 2013-06-26 | 2013-09-25 | 电子科技大学 | 一种基于svm和稀疏表示的假指纹检测方法 |
CN105354560A (zh) * | 2015-11-25 | 2016-02-24 | 小米科技有限责任公司 | 指纹识别方法及装置 |
Also Published As
Publication number | Publication date |
---|---|
KR101992522B1 (ko) | 2019-06-24 |
RU2642369C2 (ru) | 2018-01-24 |
MX361142B (es) | 2018-11-27 |
US9904840B2 (en) | 2018-02-27 |
JP2018500707A (ja) | 2018-01-11 |
US20170124379A1 (en) | 2017-05-04 |
EP3163508A1 (en) | 2017-05-03 |
CN105335713A (zh) | 2016-02-17 |
KR20180063774A (ko) | 2018-06-12 |
MX2016005225A (es) | 2017-08-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017071083A1 (zh) | 指纹识别方法及装置 | |
TWI717146B (zh) | 圖像處理方法及裝置、電子設備和儲存介質 | |
WO2021017561A1 (zh) | 人脸识别方法及装置、电子设备和存储介质 | |
CN109446994B (zh) | 手势关键点检测方法、装置、电子设备及存储介质 | |
JP7221258B2 (ja) | 声紋抽出モデル訓練方法及び声紋認識方法、その装置並びに媒体 | |
US10115019B2 (en) | Video categorization method and apparatus, and storage medium | |
WO2017128767A1 (zh) | 指纹模板录入方法及装置 | |
CN109934275B (zh) | 图像处理方法及装置、电子设备和存储介质 | |
CN106228556B (zh) | 图像质量分析方法和装置 | |
CN110532956B (zh) | 图像处理方法及装置、电子设备和存储介质 | |
US10824891B2 (en) | Recognizing biological feature | |
US10122916B2 (en) | Object monitoring method and device | |
CN105335684B (zh) | 人脸检测方法及装置 | |
CN105354560A (zh) | 指纹识别方法及装置 | |
CN110191085B (zh) | 基于多分类的入侵检测方法、装置及存储介质 | |
CN111259967B (zh) | 图像分类及神经网络训练方法、装置、设备及存储介质 | |
CN110717399A (zh) | 人脸识别方法和电子终端设备 | |
KR20210110562A (ko) | 정보 인식 방법, 장치, 시스템, 전자 디바이스, 기록 매체 및 컴퓨터 프로그램 | |
CN105392056B (zh) | 电视情景模式的确定方法及装置 | |
CN114446318A (zh) | 音频数据分离方法、装置、电子设备及存储介质 | |
CN107423757B (zh) | 聚类处理方法及装置 | |
CN111047049B (zh) | 基于机器学习模型处理多媒体数据的方法、装置及介质 | |
US10085050B2 (en) | Method and apparatus for adjusting video quality based on network environment | |
CN111860552A (zh) | 基于核自编码器的模型训练方法、装置及存储介质 | |
WO2023060574A1 (zh) | 仪表识别方法、装置、电子设备和存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 20167005169 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2017547061 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2016/005225 Country of ref document: MX |
|
ENP | Entry into the national phase |
Ref document number: 2016129191 Country of ref document: RU Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15907144 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 15907144 Country of ref document: EP Kind code of ref document: A1 |