CN118227823A - Fingerprint retrieval method and device based on fingerprint fixed-length characterization and electronic equipment - Google Patents


Info

Publication number
CN118227823A
Authority
CN
China
Prior art keywords
fingerprint
minutiae
fixed-length
target
Prior art date
Legal status
Pending
Application number
CN202410395729.7A
Other languages
Chinese (zh)
Inventor
吴嵩
封举富
王政
贾泽西
黄传崴
费鸿炎
罗鹏程
Current Assignee
Peking University
Original Assignee
Peking University
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Publication of CN118227823A publication Critical patent/CN118227823A/en


Abstract

The application discloses a fingerprint retrieval method and device based on fixed-length characterization of fingerprints, and electronic equipment. The method comprises the following steps: extracting a minutiae-aware fingerprint fixed-length representation and minutiae-center texture features of a target fingerprint from an image; determining a minutiae-topology-aware fingerprint fixed-length representation of the target fingerprint according to the minutiae-center texture features and the position information of the fingerprint minutiae in the target fingerprint, wherein the minutiae-topology-aware fingerprint fixed-length representation comprises minutiae texture feature information and minutiae topology feature information of the target fingerprint; concatenating the minutiae-aware and minutiae-topology-aware fixed-length representations to obtain a target fingerprint fixed-length representation; and retrieving a set of fingerprints corresponding to the target fingerprint fixed-length representation. The application solves the technical problem in the related art of low retrieval accuracy caused by insufficient use of minutiae information when constructing a fixed-length feature representation of a fingerprint for fingerprint retrieval.

Description

Fingerprint retrieval method and device based on fingerprint fixed-length characterization and electronic equipment
Technical Field
The application relates to the field of image processing, in particular to a fingerprint retrieval method and device based on fixed-length characterization of fingerprints and electronic equipment.
Background
In the related art, when a fixed-length feature representation of a fingerprint is used for fingerprint retrieval, retrieval accuracy is low because the generated fixed-length feature representation does not fully consider minutiae-related features in the fingerprint; in scenarios where the fingerprint image is rotated or translated, retrieval accuracy drops further.
In view of the above problems, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the application provides a fingerprint retrieval method and device based on fixed-length characterization of fingerprints and electronic equipment, which at least solve the technical problem that the retrieval accuracy is low due to insufficient utilization of minutiae information in the fingerprints when the fixed-length characteristic representation of the fingerprints is constructed for fingerprint retrieval in the related technology.
According to an aspect of the embodiment of the application, there is provided a fingerprint retrieval method based on fixed-length characterization of fingerprints, including: extracting a minutiae perception fingerprint fixed-length representation of a target fingerprint and a minutiae center texture feature of the target fingerprint from an image containing the target fingerprint, wherein the minutiae perception fingerprint fixed-length representation comprises global feature information of the target fingerprint and minutiae texture feature information in the target fingerprint, and the minutiae perception fingerprint fixed-length representation is a fixed-length representation with translational invariance and rotational invariance; determining a minutiae topological perception fingerprint fixed-length representation of the target fingerprint according to the minutiae center texture features and the position information of the fingerprint minutiae in the target fingerprint, wherein the minutiae topological perception fingerprint fixed-length representation comprises minutiae texture feature information and minutiae topological feature information of the target fingerprint, and the minutiae topological perception fingerprint fixed-length representation is a fixed-length representation with translational invariance and rotational invariance; splicing the minutiae perception fingerprint fixed-length representation and the minutiae topology perception fingerprint fixed-length representation to obtain a target fingerprint fixed-length representation; and searching a fingerprint set corresponding to the target fingerprint fixed-length representation, wherein the fingerprint set comprises a preset number of fingerprints to be matched, and the similarity between the fingerprint fixed-length representation of the fingerprints to be matched and the target fingerprint fixed-length representation meets the preset requirement.
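The concatenation ("splicing") and retrieval steps above can be sketched as follows. This is a minimal illustration that assumes cosine similarity as the ranking measure; the patent does not name a specific similarity, and the function names (`build_target_representation`, `retrieve`) are hypothetical.

```python
import numpy as np

def l2_normalize(v):
    # Normalize so that dot products equal cosine similarity.
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-12)

def build_target_representation(minutiae_aware, topology_aware):
    # Concatenate the two fixed-length representations into one vector.
    return np.concatenate([minutiae_aware, topology_aware], axis=-1)

def retrieve(query_repr, gallery_reprs, top_k=5):
    # Rank gallery fingerprints by cosine similarity to the query and
    # keep the top_k candidates for the downstream matching algorithm.
    q = l2_normalize(query_repr)
    g = l2_normalize(gallery_reprs)
    sims = g @ q
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]
```

Because both representations are fixed-length, the whole gallery can be scored with a single matrix-vector product, which is what makes this formulation suitable for large-scale retrieval.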
Optionally, extracting the minutiae-aware fingerprint fixed-length representation of the target fingerprint and the minutiae-center texture features of the target fingerprint from the image containing the target fingerprint comprises: extracting a plurality of feature maps from the image through a feature extraction module of the backbone neural network, wherein the feature extraction module comprises a plurality of sequentially connected feature extraction layers used for extracting the feature maps, and the plurality of feature maps comprise a first, second, third, fourth, and fifth feature map ordered from shallow to deep by feature depth; fusing the third, fourth, and fifth feature maps to obtain a fused feature map; and extracting the minutiae-aware fingerprint fixed-length representation of the target fingerprint and the minutiae-center texture features of the target fingerprint from the fused feature map.
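A minimal sketch of the fusion step, under the assumption that the third, fourth, and fifth feature maps halve in spatial resolution at each depth and that fusion is nearest-neighbour upsampling followed by channel concatenation. The patent does not specify the fusion operator, so this is illustrative only.

```python
import numpy as np

def upsample_nearest(fmap, factor):
    # fmap: (C, H, W); nearest-neighbour upsampling by an integer factor.
    return fmap.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_feature_maps(f3, f4, f5):
    # f3, f4, f5: (C, H, W), (C, H/2, W/2), (C, H/4, W/4) feature maps.
    # Bring the deeper maps to f3's resolution and concatenate channels.
    f4_up = upsample_nearest(f4, 2)
    f5_up = upsample_nearest(f5, 4)
    return np.concatenate([f3, f4_up, f5_up], axis=0)
```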
Optionally, extracting the minutiae-aware fingerprint fixed-length representation of the target fingerprint from the fused feature map includes: determining the foreground vectors in the fused feature map; and performing weighted aggregation on the foreground vectors in the fused feature map through a weighted average pooling module in the backbone neural network, thereby obtaining the minutiae-aware fingerprint fixed-length representation.
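The weighted average pooling of foreground vectors can be sketched as below. In the patented model the weights would be learned; here uniform weights are used as a stand-in, and the function name is illustrative.

```python
import numpy as np

def weighted_average_pool(feature_map, foreground_mask, weights=None):
    # feature_map: (C, H, W); foreground_mask: (H, W) boolean.
    # Aggregate the foreground feature vectors into one fixed-length vector.
    C, H, W = feature_map.shape
    vectors = feature_map.reshape(C, -1)            # (C, H*W) columns
    mask = foreground_mask.reshape(-1).astype(float)
    if weights is None:
        weights = np.ones(H * W)                    # uniform stand-in weights
    w = weights.reshape(-1) * mask                  # zero out background
    w = w / (w.sum() + 1e-12)                       # normalize to sum to 1
    return vectors @ w                              # (C,) pooled vector
```

The same routine, applied per minutiae image region rather than to the whole map, yields the minutiae-center texture features described in the following paragraph.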
Optionally, extracting the minutiae-center texture features of the target fingerprint from the fused feature map includes: determining a plurality of fingerprint minutiae image regions in the fused feature map, wherein a fingerprint minutiae image region is an image region containing a fingerprint minutia; and performing weighted aggregation on the foreground vectors in each of the plurality of fingerprint minutiae image regions through a weighted average pooling module in the backbone neural network, so as to obtain the minutiae-center texture feature of each minutia of the target fingerprint.
Optionally, determining the plurality of fingerprint minutiae image regions in the fused feature map comprises: determining a plurality of fingerprint minutiae in the fused feature map; for each of the plurality of fingerprint minutiae, rotating the fused feature map according to the direction of the minutia until the included angle between the direction of the minutia and a preset reference line is zero degrees; and after the rotation is completed, extracting from the fused feature map an image block containing the minutia, the image block being the fingerprint minutiae image region corresponding to that minutia.
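The rotate-then-crop step can be sketched for a single-channel map with inverse nearest-neighbour sampling; rotating the sampling grid by the minutia's direction before cropping is what gives the extracted region its rotation invariance. This is a sketch under those assumptions, not the patented implementation.

```python
import numpy as np

def rotate_and_crop(fmap, center, angle_rad, patch):
    # fmap: (H, W) map; center: (row, col) of the minutia;
    # angle_rad: minutia direction; patch: side length of the square crop.
    # For each output pixel, rotate its offset back into source
    # coordinates and sample with nearest-neighbour interpolation.
    H, W = fmap.shape
    half = patch // 2
    out = np.zeros((patch, patch), dtype=fmap.dtype)
    cos, sin = np.cos(angle_rad), np.sin(angle_rad)
    for r in range(patch):
        for c in range(patch):
            dy, dx = r - half, c - half
            sy = center[0] + dy * cos - dx * sin
            sx = center[1] + dy * sin + dx * cos
            yi, xi = int(round(sy)), int(round(sx))
            if 0 <= yi < H and 0 <= xi < W:
                out[r, c] = fmap[yi, xi]
    return out
```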
Optionally, determining the minutiae-topology-aware fixed-length representation of the target fingerprint based on the minutiae-center texture features and the location information of the fingerprint minutiae comprises: determining the set of all minutiae in the image of the target fingerprint, and the position information and direction information of each fingerprint minutia in that set; taking each fingerprint minutia in turn as a central minutia and determining its neighboring minutiae, wherein a neighboring minutia is a minutia in the set whose distance from the central minutia is smaller than a preset distance; determining the topology feature of each fingerprint minutia according to the position information and direction information of the minutia and its neighboring minutiae, wherein the topology feature comprises the relative positions and angular relations between the minutia and its neighboring minutiae; and determining the minutiae-topology-aware fingerprint fixed-length representation of the target fingerprint according to the topology features and minutiae-center texture features of each fingerprint minutia.
Optionally, determining the topology feature of each fingerprint minutiae point according to the location information and the direction information of each fingerprint minutiae point and neighboring minutiae points comprises: rotating the target fingerprint image according to the direction information of each fingerprint minutiae until the included angle between the direction of the target fingerprint minutiae and a preset reference line is zero degrees; after rotating the target fingerprint image, determining the position information and the direction information of each fingerprint minutiae point and corresponding adjacent minutiae points after rotating, and determining the edge characterization of the edge between each fingerprint minutiae point and each adjacent minutiae point according to the rotated position information and direction information; and aggregating the edge representation corresponding to each fingerprint minutiae and the adjacent minutiae information by adopting a minutiae topology encoder to obtain the topology characteristics of each fingerprint minutiae.
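The edge characterization between a central minutia and a neighbor can be sketched as follows: expressing the neighbor's offset in the central minutia's own frame (equivalent to the rotation-to-reference-line step above) makes the feature invariant to global translation and rotation. The exact feature vector used by the minutiae topology encoder is not specified in this form; the three components here (relative x, relative y, direction difference) are an illustrative choice.

```python
import numpy as np

def edge_features(center, neighbor):
    # center/neighbor: (x, y, theta) for two minutiae.
    # Rotate the offset into the central minutia's frame and take the
    # wrapped direction difference; the result is translation- and
    # rotation-invariant.
    cx, cy, ct = center
    nx, ny, nt = neighbor
    dx, dy = nx - cx, ny - cy
    cos, sin = np.cos(-ct), np.sin(-ct)
    rx = dx * cos - dy * sin
    ry = dx * sin + dy * cos
    dtheta = (nt - ct + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
    return np.array([rx, ry, dtheta])
```

Applying any rigid motion (rotation plus translation) to both minutiae leaves the output unchanged, which is exactly the invariance the representation claims.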
Optionally, determining the minutiae-topology-aware fingerprint fixed-length representation of the target fingerprint from the topology features and minutiae-center texture features of each fingerprint minutia includes: establishing a minutiae graph according to the topology features and minutiae-center texture features of each fingerprint minutia, wherein the minutiae graph comprises nodes and edges connecting the nodes, the nodes correspond one-to-one to the fingerprint minutiae, and each node carries the topology feature information and minutiae-center texture feature information of its corresponding minutia; and processing the minutiae graph through a minutiae topology-aware aggregation model to obtain the minutiae-topology-aware fingerprint fixed-length representation, wherein the aggregation model comprises a plurality of custom layers, a multi-layer perceptron, and a pooling layer connected in sequence, and each custom layer comprises a convolution layer, a linear layer with an activation layer, and a batch normalization layer.
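The graph-to-vector step can be sketched as one round of neighbor averaging followed by a linear layer and global pooling. This is a heavily simplified stand-in: a randomly initialized linear map replaces the learned custom layers and MLP, and batch normalization is omitted; only the overall shape of the computation (per-node aggregation, then pooling to a fixed-length vector regardless of minutiae count) matches the description.

```python
import numpy as np

def aggregate_minutiae_graph(node_feats, edges, out_dim, seed=0):
    # node_feats: (N, D) per-minutia features (topology + centre texture);
    # edges: list of directed (i, j) pairs. Average each node with its
    # neighbours, apply a linear layer + ReLU, then max-pool over nodes.
    rng = np.random.default_rng(seed)
    N, D = node_feats.shape
    agg = node_feats.copy()
    deg = np.ones(N)
    for i, j in edges:
        agg[i] += node_feats[j]
        deg[i] += 1
    agg = agg / deg[:, None]                      # mean over self + neighbours
    W = rng.standard_normal((D, out_dim)) / np.sqrt(D)  # stand-in weights
    h = np.maximum(agg @ W, 0.0)                  # linear layer + activation
    return h.max(axis=0)                          # graph-level fixed-length vector
```

The graph-level pooling is what makes the output fixed-length: fingerprints with different numbers of minutiae still map to a vector of size `out_dim`.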
According to another aspect of the embodiments of the present application, there is also provided a fingerprint retrieval model training method, wherein the fingerprint retrieval model is used to execute the fingerprint retrieval method based on fixed-length characterization of fingerprints, including: performing first-stage training on the fingerprint retrieval model, wherein during the first-stage training only the backbone neural network in the fingerprint retrieval model is trained, and the backbone neural network together with a weighted average pooling module in the fingerprint retrieval model is used for extracting the minutiae-aware fingerprint fixed-length representation of a target fingerprint and the minutiae texture features of the target fingerprint from a target fingerprint image containing the target fingerprint;
And after the first-stage training is finished, performing second-stage training on the fingerprint retrieval model, wherein in the training process of the second-stage training, only a minutiae topological encoder and a minutiae topological perception aggregation model in the fingerprint retrieval model are trained, and the minutiae topological encoder and the minutiae topological perception aggregation model are used for determining minutiae topological perception fingerprint fixed-length characterization of the target fingerprint according to minutiae texture characteristic information and the target fingerprint image.
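The two-stage schedule above amounts to freezing complementary parts of the model in each stage. A minimal framework-agnostic sketch, where the module names (`backbone`, `weighted_avg_pool`, `topology_encoder`, `topology_aggregator`) are illustrative labels for the components named in the text, and a `trainable` flag stands in for per-parameter gradient switches:

```python
def set_stage(model, stage):
    # model: dict mapping submodule name -> dict holding a 'trainable' flag.
    # Stage 1 trains only the backbone (and its pooling module);
    # stage 2 trains only the topology encoder and aggregation model.
    stage_modules = {
        1: {"backbone", "weighted_avg_pool"},
        2: {"topology_encoder", "topology_aggregator"},
    }
    for name in model:
        model[name]["trainable"] = name in stage_modules[stage]
    return model
```

In a real framework the same effect is obtained by toggling gradient computation on each submodule's parameters before building the stage's optimizer.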
Optionally, before the first stage training of the fingerprint retrieval model, the fingerprint retrieval model training method further includes: acquiring original training data, wherein the original training data comprises paired fingerprint data sets; and determining a minutiae matching relationship in the paired fingerprints by adopting an extended group training method, and determining a label according to the minutiae matching relationship, wherein the label is used for determining a positive sample and a negative sample in the first-stage training.
Optionally, the first-stage training of the fingerprint retrieval model includes: performing data preprocessing on initial training images in the initial training data to obtain training images, wherein the preprocessing includes rotation and translation; and performing the first-stage training on the fingerprint retrieval model with the training images as training data.
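The rotation-and-translation preprocessing can be sketched as below. Right-angle rotations (`np.rot90`) and wrap-around shifts (`np.roll`) are simplified stand-ins for the arbitrary-angle rotation and padded translation a real pipeline would use; the function name and parameters are illustrative.

```python
import numpy as np

def augment(image, rng, max_shift=16):
    # Random 90-degree-multiple rotation plus a random integer
    # translation; np.roll keeps the array size fixed.
    k = int(rng.integers(0, 4))
    out = np.rot90(image, k)
    dy = int(rng.integers(-max_shift, max_shift + 1))
    dx = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(out, (dy, dx), axis=(0, 1))
```

Training on such transformed copies is what pushes the learned fixed-length representation toward the translation and rotation invariance claimed above.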
According to another aspect of the embodiment of the present application, there is also provided a fingerprint retrieval device based on fixed-length characterization of a fingerprint, including: the first processing module is used for extracting a minutiae perception fingerprint fixed-length representation of the target fingerprint and a minutiae center texture feature of the target fingerprint from an image containing the target fingerprint, wherein the minutiae perception fingerprint fixed-length representation comprises global feature information of the target fingerprint and minutiae texture feature information in the target fingerprint, and the minutiae perception fingerprint fixed-length representation is a fixed-length representation with translational invariance and rotational invariance; the second processing module is used for determining a minutiae topological perception fingerprint fixed-length representation of the target fingerprint according to the minutiae center texture characteristics and the position information of the fingerprint minutiae in the target fingerprint, wherein the minutiae topological perception fingerprint fixed-length representation comprises minutiae texture characteristic information and minutiae topological characteristic information of the target fingerprint, and the minutiae topological perception fingerprint fixed-length representation is a fixed-length representation with translational invariance and rotational invariance; the third processing module is used for splicing the minutiae perception fingerprint fixed-length representation and the minutiae topology perception fingerprint fixed-length representation to obtain a target fingerprint fixed-length representation; the fourth processing module is used for retrieving a fingerprint set corresponding to the target fingerprint fixed-length representation, wherein the fingerprint set comprises a preset number of fingerprints to be matched, and the similarity between the 
fingerprint fixed-length representation of the fingerprints to be matched and the target fingerprint fixed-length representation meets preset requirements.
According to another aspect of the embodiment of the present application, there is further provided a nonvolatile storage medium, in which a program is stored, where when the program runs, a device in which the nonvolatile storage medium is controlled to execute a fingerprint retrieval method based on fixed-length fingerprint characterization, or a fingerprint retrieval model training method.
According to another aspect of the embodiment of the present application, there is also provided an electronic device, including: the system comprises a memory and a processor, wherein the processor is used for running a program stored in the memory, and the program runs to execute a fingerprint retrieval method based on fingerprint fixed length characterization or a fingerprint retrieval model training method.
According to another aspect of embodiments of the present application, there is also provided a computer program product comprising a computer program which, when executed by a processor, implements a fingerprint retrieval method based on fixed-length characterization of fingerprints, or a fingerprint retrieval model training method.
In the embodiment of the application, a minutiae perception fingerprint fixed-length representation and a minutiae center texture feature of a target fingerprint are extracted from an image containing the target fingerprint, wherein the minutiae perception fingerprint fixed-length representation comprises global feature information of the target fingerprint and minutiae texture feature information in the target fingerprint, and the minutiae perception fingerprint fixed-length representation is a fixed-length representation with translational invariance and rotational invariance; determining a minutiae topological perception fingerprint fixed-length representation of the target fingerprint according to the minutiae center texture features and the position information of the fingerprint minutiae in the target fingerprint, wherein the minutiae topological perception fingerprint fixed-length representation comprises minutiae texture feature information and minutiae topological feature information of the target fingerprint, and the minutiae topological perception fingerprint fixed-length representation is a fixed-length representation with translational invariance and rotational invariance; splicing the minutiae perception fingerprint fixed-length representation and the minutiae topology perception fingerprint fixed-length representation to obtain a target fingerprint fixed-length representation; the fingerprint set corresponding to the target fingerprint fixed-length representation is searched, wherein the fingerprint set comprises a preset number of fingerprints to be matched, the similarity between the fingerprint fixed-length representation of the fingerprints to be matched and the target fingerprint fixed-length representation meets the preset requirement, the target fingerprint fixed-length representation with translational invariance and rotational invariance is constructed by extracting the texture features and the topological features of minutiae in the 
fingerprints, the purpose of fully utilizing the feature information of minutiae in fingerprints is achieved, the technical effect of improving fingerprint retrieval accuracy is realized, and the technical problem in the related art of low retrieval accuracy caused by insufficient use of minutiae information when constructing a fixed-length feature representation of a fingerprint is solved. The fingerprint fixed-length characterization provided by the embodiments of the application maintains retrieval performance in the presence of fingerprint translation and rotation; that is, the fingerprint fixed-length characterization and the retrieval method based on it can effectively improve retrieval performance when the fingerprint is translated or rotated.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a schematic structural view of a computer terminal (mobile terminal) according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a fingerprint retrieval method based on fixed-length characterization of fingerprints according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a fingerprint image and minutiae points in the fingerprint image provided in accordance with an embodiment of the present application;
FIG. 4 is a schematic diagram of an overall framework of a fingerprint retrieval model provided in accordance with an embodiment of the present application;
FIG. 5 is a schematic diagram of a minutiae-aware fingerprint fixed-length representation and minutiae texture feature information extraction process according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a topology of a fingerprint minutiae point provided in accordance with an embodiment of the present application;
FIG. 7 is a schematic diagram of a minutiae topology-aware aggregation model provided according to an embodiment of the application;
FIG. 8 is a flowchart of a method for training a fingerprint retrieval model according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a training mode of a training method for a fingerprint retrieval model according to an embodiment of the present application;
FIG. 10 is a schematic diagram showing comparison of search performance indexes of various fingerprint search modes according to an embodiment of the present application;
FIG. 11 is a comparison of search performance indicators for different fingerprint minutiae image region sizes provided in accordance with an embodiment of the present application;
FIG. 12 is a comparison of search performance metrics with and without minutiae-aware constraints, provided in accordance with an embodiment of the present application;
FIG. 13 is a schematic diagram of a target fingerprint retrieval similarity distribution in a similar fingerprint pair and a dissimilar fingerprint pair according to an embodiment of the present application;
FIG. 14 is a schematic illustration of a minutiae-aware fingerprint fixed length representation of regions in a corresponding fingerprint image, provided in accordance with an embodiment of the present application;
FIG. 15 is a schematic diagram of search results obtained when considering the minutiae-aware fingerprint fixed-length representation alone or the minutiae-topology-aware fingerprint fixed-length representation alone, provided according to an embodiment of the present application;
FIG. 16 is a schematic view of a similarity distribution of target fingerprint fixed-length characterizations of matched and unmatched fingerprints provided in accordance with an embodiment of the present application;
FIG. 17 is a diagram of search performance indicators in a rotating scenario provided in accordance with an embodiment of the present application;
FIG. 18 is a schematic diagram of a similarity distribution of fixed-length characterizations of multiple target fingerprints obtained during rotation of a query fingerprint according to an embodiment of the present application;
Fig. 19 is a schematic view of similarity distribution of target fingerprint fixed-length characterization in a translation scene according to an embodiment of the present application;
FIG. 20 is a schematic diagram of search performance metrics after a query fingerprint is translated, according to an embodiment of the present application;
fig. 21 is a schematic structural diagram of a fingerprint retrieval device based on fingerprint fixed-length characterization according to an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without inventive effort shall fall within the scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
A fingerprint is an important piece of biometric information and, owing to its stability and uniqueness, is widely used in identification. An Automatic Fingerprint Identification System (AFIS) generally comprises four steps: fingerprint acquisition and preprocessing, fingerprint feature extraction, fingerprint retrieval, and fingerprint matching. Among these steps, fingerprint retrieval determines the efficiency and accuracy of the overall system. The goal of fingerprint retrieval is to quickly filter out non-matching fingerprints from a vast database and keep several candidate fingerprints for use by subsequent matching algorithms. Fingerprint retrieval is a challenging task, for three reasons: (1) the large number of fingerprint classes and the small differences between similar non-matching fingerprints require the retrieval algorithm to effectively capture subtle differences in a large database; (2) fingerprint acquisition typically introduces translational and rotational variations, so the retrieval algorithm must be robust to these variations; (3) while ensuring accuracy, retrieval efficiency must also be ensured.
In addition, the fingerprint retrieval efficiency can be effectively improved by determining the fingerprint representation of the fingerprint in the fingerprint retrieval work. And the adoption of a compact and discriminable fingerprint representation method can reduce the dependence on complex search algorithms.
In the related art, the fixed-length representation of the fingerprint is usually determined by manually designing and determining the features in the fingerprint, and the problem of the mode is that the determined fixed-length representation of the fingerprint is usually limited in expression capacity, poor in generalization capacity, complicated in design process and poor in performance in large-scale fingerprint identification.
In addition, a method for determining the variable length features of the fingerprints by adopting the neural network model is also provided in the related art, but the calculation cost for determining the variable length features of the fingerprints and searching according to the variable length features is higher, and the requirements of a large-scale fingerprint searching scene on the searching speed can not be met although higher precision can be ensured.
In order to solve the above problems, related solutions are provided in the embodiments of the present application, and are described in detail below.
According to an embodiment of the present application, a method embodiment of a fingerprint retrieval method is provided. It should be noted that the steps shown in the flowcharts of the figures may be performed in a computer system, such as one executing a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order other than that shown or described herein.
The method embodiments provided by the embodiments of the present application may be performed on a mobile terminal, a computer terminal, or a similar computing device. Fig. 1 shows a block diagram of the hardware architecture of a computer terminal (or mobile device) for implementing the fingerprint retrieval method. As shown in fig. 1, the computer terminal 10 (or mobile device 10) may include one or more processors 102 (shown as 102a, 102b, …, 102n; the processors 102 may include, but are not limited to, processing means such as a microcontroller (MCU) or a programmable logic device (FPGA)), a memory 104 for storing data, and a transmission means 106 for communication functions. In addition, it may further include: a display, an input/output interface (I/O interface), a Universal Serial Bus (USB) port (which may be included as one of the I/O interface ports), a network interface, a power supply, and/or a camera. It will be appreciated by those of ordinary skill in the art that the configuration shown in fig. 1 is merely illustrative and is not intended to limit the configuration of the electronic device described above. For example, the computer terminal 10 may also include more or fewer components than shown in FIG. 1, or have a different configuration than shown in FIG. 1.
It should be noted that the one or more processors 102 and/or other data processing circuits described above may be referred to generally herein as "data processing circuits". The data processing circuit may be embodied in whole or in part in software, hardware, firmware, or any combination thereof. Furthermore, the data processing circuit may be a single stand-alone processing module, or incorporated, in whole or in part, into any of the other elements in the computer terminal 10 (or mobile device). As referred to in the embodiments of the application, the data processing circuit acts as a kind of processor control (e.g., selection of the path of the variable resistor terminal connected to the interface).
The memory 104 may be used to store the software programs and modules of application software, such as the program instructions/data storage devices corresponding to the fingerprint retrieval method in the embodiments of the present application, and the processor 102 executes various functional applications and data processing by running the software programs and modules stored in the memory 104, i.e., implements the fingerprint retrieval method described above.
The transmission means 106 is arranged to receive or transmit data via a network. The specific examples of the network described above may include a wireless network provided by a communication provider of the computer terminal 10. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module for communicating with the internet wirelessly.
The display may be, for example, a touch screen type Liquid Crystal Display (LCD) that may enable a user to interact with a user interface of the computer terminal 10 (or mobile device).
In the above operating environment, the embodiment of the application provides a fingerprint retrieval method based on fingerprint fixed length characterization, as shown in fig. 2, the method comprises the following steps:
Step S202, extracting a minutiae perception fingerprint fixed-length representation of a target fingerprint and a minutiae center texture feature of the target fingerprint from an image containing the target fingerprint, wherein the minutiae perception fingerprint fixed-length representation comprises global feature information of the target fingerprint and minutiae texture feature information in the target fingerprint, and the minutiae perception fingerprint fixed-length representation is a fixed-length representation with translational invariance and rotational invariance;
fig. 3 is a schematic diagram of some fingerprint images and minutiae points in the fingerprint images provided in accordance with an embodiment of the present application.
Wherein the three images in the upper half of fig. 3 are fingerprint images without minutiae marked, and the three images in the lower half are fingerprint images with minutiae marked. As can be seen from fig. 3, a fingerprint image contains abundant minutiae information, including minutiae distribution information and texture information. Since the distribution and texture information of the minutiae differ between fingerprint images, efficient and accurate fingerprint retrieval can be achieved by determining a fixed-length characterization of the fingerprint from the distribution features and texture features of the minutiae in the fingerprint image.
It should be noted that the original fingerprint image may contain low-quality areas such as complex background noise and broken ridges, and these low-quality areas may seriously degrade retrieval performance. In addition, manually labeling the minutiae in original fingerprint images and the corresponding matching minutiae involves an excessive workload, and its accuracy cannot be guaranteed. Therefore, as shown in fig. 4, to improve retrieval efficiency, a FingerNet model based on a convolutional network can be introduced into the fingerprint retrieval model of the present application. Processing the original fingerprint image through the FingerNet model yields the enhancement, minutiae and segmentation of the fingerprint, and the FingerNet model also determines the position information and direction information of each fingerprint minutia, wherein the position information is the coordinates of the fingerprint minutia in a preset plane rectangular coordinate system, and the direction information is the angle between the direction of the fingerprint minutia and the abscissa axis of the plane rectangular coordinate system. For convenience of description, the plane rectangular coordinate system may take the upper-left vertex of the image as the origin, the horizontal rightward direction as the positive direction of the abscissa axis, and the vertical downward direction as the positive direction of the ordinate axis.
In addition, an extended clique method can be used to obtain the minutiae matching pairs, and the labels for the training stage are determined while the matching pairs are obtained, wherein the labels are used to determine the positive and negative samples in the training stage so as to obtain the training data.
In the technical solution provided in step S202, as shown in fig. 5, extracting a minutiae-aware fingerprint fixed-length characterization of a target fingerprint and the minutiae-centered texture features of the target fingerprint from an image including the target fingerprint includes: extracting a plurality of feature maps from the image through a feature extraction module of the backbone neural network, wherein the feature extraction module comprises a plurality of sequentially connected feature extraction layers used for extracting the feature maps, and the plurality of feature maps comprise a first feature map, a second feature map, a third feature map, a fourth feature map and a fifth feature map ordered from shallow to deep by feature depth; fusing the third feature map, the fourth feature map and the fifth feature map to obtain a fused feature map; and extracting the minutiae-aware fingerprint fixed-length characterization of the target fingerprint and the minutiae-centered texture features of the target fingerprint from the fused feature map.
Specifically, the minutiae-aware fingerprint fixed-length characterization (Minutiae-aware Representations, abbreviated as MaRs) provided in the embodiments of the present application describes not only the similarity of the global patterns of different fingerprints but also the similarity of the neighboring regions of their minutiae. This means that the similarity between the MaRs of matched fingerprints, which share the same global pattern and minutiae distribution, will be significantly higher than that of unmatched fingerprints. Unmatched fingerprints may have similar global patterns but differ in minutiae distribution, so the similarity between the MaRs of unmatched fingerprints will be low.
In some embodiments of the present application, when extracting MaRs and minutiae center texture features (Minutiae-centered Texture Embedding, simply McTE) from a target fingerprint image after FINGERNET processing and repairing, as shown in fig. 3, feature extraction modules in a backbone neural network may be first used to extract feature maps with different feature depths, where shallow feature maps may contain more texture space feature information, and deep feature maps may contain more semantic information.
In order to better utilize the semantic information in the feature maps, as shown in fig. 3, a plurality of feature extraction layers are sequentially connected in series in the backbone neural network, and each feature extraction layer comprises a 1×1 convolution layer, a batch normalization layer and a ReLU activation layer. A multi-scale feature fusion (Multi-Scale Feature Fusion, abbreviated as MSFF) module is also included to fuse the feature maps from conv3_x, conv4_x and conv5_x. Each feature map is processed with a convolution block and an upsampling layer whose parameters are not shared across the three branches, so that the channel number and spatial size of the feature maps are unified and fusion of the feature maps becomes possible.
The three upsampled feature maps are then concatenated and input into a 1×1 convolution layer to obtain the final fused feature map.
In this way, the texture and semantic information of each layer of the neural network can be effectively fused. In addition, it should be noted that the model provided by the embodiments of the present application does not use a spatial pyramid structure; that is, instead of extracting separate fingerprint fixed-length characterizations at different spatial scales, the multi-scale features are fused in the feature space to obtain a more powerful feature map.
As an alternative embodiment, extracting the minutiae-aware fingerprint fixed-length characterization of the target fingerprint from the fused feature map includes: determining the foreground vectors in the fused feature map; and performing weighted aggregation processing on the foreground vectors in the fused feature map through a weighted average pooling module, so as to obtain the minutiae-aware fingerprint fixed-length characterization. The foreground of a fingerprint image is the fingerprint area of the image, excluding the background area outside the fingerprint area.
Specifically, as can be seen from fig. 5, in order to obtain MaRs, a weighted global average pooling (Weighted Global Average Pooling, abbreviated as WGAP) module is further required to perform weighted aggregation on the foreground vectors in the fused feature map, so as to obtain MaRs. The formula by which the WGAP module aggregates the foreground vectors into MaRs is as follows:

g = (1/N_S) Σ_{i,j} S_ij f_ij

In the above formula, g represents MaRs and N_S represents the number of non-zero elements S_ij in the minutiae-weighted perception feature map S_A ∈ {0, 1, α}^{H×W}. A non-zero element S_ij in S_A indicates that the pixel with coordinates (i, j) in the fused feature map lies in the foreground region, and f_ij denotes the feature vector of the point with coordinates (i, j) in the fused feature map. The hyper-parameter α adjusts the weight of the image regions centered on minutiae: the larger the value of α, the more the regions around minutiae count in the calculation; in the embodiments of the present application, α may take any value greater than 1. The minutiae-weighted perception feature map is determined from the fused feature map; an element of the map is zero when the corresponding pixel in the fused feature map is not in the foreground region, and non-zero when it is.
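The WGAP aggregation above can be sketched numerically. The following is a minimal pure-Python illustration of the formula g = (1/N_S) Σ S_ij f_ij, where S_ij is 0 for background, 1 for foreground, and α for foreground near a minutia; all function and variable names are illustrative, not taken from the patent.

```python
def wgap(feature_map, weight_map):
    """Weighted global average pooling sketch.

    feature_map: H x W x C nested lists (f_ij vectors).
    weight_map:  H x W nested lists with values in {0, 1, alpha}.
    Returns the aggregated C-dimensional vector g.
    """
    h, w = len(weight_map), len(weight_map[0])
    c = len(feature_map[0][0])
    g = [0.0] * c
    n_s = 0  # number of non-zero elements S_ij
    for i in range(h):
        for j in range(w):
            s = weight_map[i][j]
            if s != 0:
                n_s += 1
                for k in range(c):
                    g[k] += s * feature_map[i][j][k]
    return [v / n_s for v in g]
```

With a 2×2 single-channel toy map and α = 2, background pixels contribute nothing while the minutia-adjacent pixel counts double in the numerator, matching the role of α described above.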
Optionally, extracting minutiae center texture features of the target fingerprint from the fused feature map includes: determining a plurality of fingerprint minutiae image areas in the fusion feature map, wherein the fingerprint minutiae image areas are image areas containing fingerprint minutiae; and respectively carrying out weighted aggregation processing on foreground vectors in each fingerprint minutiae image area in the plurality of fingerprint minutiae image areas through a weighted average pooling module in the backbone neural network, so as to obtain the minutiae center texture characteristics of each minutiae of the target fingerprint.
As shown in fig. 5, the extraction of McTE and MaRs uses the same backbone network. The difference is that, when McTE is extracted, an ROI-alignment operation is performed on the fused feature map according to the minutiae, so as to cut out the fingerprint minutiae feature patch corresponding to each minutia in the fused feature map. Then, for each fingerprint minutiae image area, the WGAP module performs weighted aggregation on the foreground vectors in the feature patch, so as to obtain the McTE corresponding to each fingerprint minutia. The McTE may be denoted by l, and g in the above weighted aggregation formula is simply replaced by l. The minutiae-weighted perception feature map in the formula is then the one corresponding to each minutiae image area in the fingerprint, and is used to determine whether the pixels in that minutiae image area belong to the foreground region and to determine the weight of the feature vector of each region in the final weighting.
Thus, by processing the fused feature map through the backbone neural network, for the target fingerprint x and the minutiae set M = {m_i = (x_i, y_i, θ_i)}_{i=1}^{N_m} of the target fingerprint x, the MaRs of the target fingerprint x and the McTE corresponding to each minutia, L = {l_i}_{i=1}^{N_m}, may be acquired through the same backbone neural network, where l_i denotes the McTE of the i-th fingerprint minutia, N_m is the total number of fingerprint minutiae, and x_i, y_i and θ_i are respectively the abscissa, the ordinate and the direction information of the i-th fingerprint minutia, the direction information being the angle between the direction of the minutia and the positive direction of the abscissa axis of the plane rectangular coordinate system.
In some embodiments of the present application, the model used to extract MaRs is constrained by a triplet loss function. McTE serves as the supervision object of the minutiae-aware constraint in the model training stage, and is also used to determine the minutiae topology-aware fingerprint fixed-length characterization (Minutiae Topology-aware Representations, abbreviated as MTaRs) in the subsequent process.
To ensure that the subsequently derived minutiae topology-aware fingerprint fixed-length characterization has rotational and translational invariance, determining each minutiae feature region in the fused feature map includes: determining the position information of the fingerprint minutiae in the fused feature map; and, when extracting the feature region of each of the plurality of fingerprint minutiae, rotating the fingerprint image according to the minutia direction before feeding it into the backbone network, and then cutting a minutiae feature patch at the minutia position on the generated feature map, wherein the rotation criterion is that the angle between the fingerprint minutia and a preset reference line becomes zero degrees. The feature patch is the fingerprint minutiae feature region corresponding to each fingerprint minutia.
It should be noted that the plane rectangular coordinate system remains unchanged throughout the image rotation. Therefore, when the McTE of a fingerprint minutia is extracted, the image is rotated in advance so that the angle between the selected fingerprint minutia and the abscissa axis of the plane rectangular coordinate system is zero degrees before its McTE is extracted. The coordinates of the minutiae on a fingerprint change under rotation and translation, but aligning by rotating according to the minutia direction removes the adverse effects of fingerprint translation and rotation, so that McTE possesses rotational and translational invariance.
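The alignment step described above can be illustrated with a small coordinate transform: express a point in the frame of a minutia m = (x, y, θ), rotated so that the minutia direction maps onto angle zero. This is a sketch in the standard mathematical convention (y-axis up); with the image convention described earlier (y-axis down) the sign of the sine terms flips. Names are illustrative.

```python
import math

def align_to_minutia(px, py, mx, my, mtheta_deg):
    """Coordinates of point (px, py) in the frame of minutia
    (mx, my, mtheta_deg), rotated so the minutia direction becomes
    the positive x-axis. Patches cut in this frame are unaffected by
    translating or rotating the whole fingerprint."""
    t = math.radians(mtheta_deg)
    dx, dy = px - mx, py - my
    # rotate the offset by -theta so the minutia direction lands on angle 0
    return (dx * math.cos(t) + dy * math.sin(t),
            -dx * math.sin(t) + dy * math.cos(t))
```

Rotating the whole scene (both the point and the minutia direction) by the same angle leaves the aligned coordinates unchanged, which is exactly the invariance the text relies on.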
In addition, for convenience of description, no separate symbols are introduced for the position and angle information of the rotated fingerprint minutiae; the coordinates of each fingerprint minutia are assumed to be aligned after rotation in the calculations below.
Step S204, determining a minutiae topological perception fingerprint fixed-length representation of the target fingerprint according to the minutiae center texture feature and the position information of the fingerprint minutiae in the target fingerprint, wherein the minutiae topological perception fingerprint fixed-length representation comprises minutiae texture feature information and minutiae topological feature information of the target fingerprint, and the minutiae topological perception fingerprint fixed-length representation is a fixed-length representation with translational invariance and rotational invariance;
In the technical solution provided in step S204, determining the minutiae topology-aware fingerprint fixed-length characterization of the target fingerprint according to the minutiae texture feature information and the target fingerprint image includes: determining the set of all minutiae in the image of the target fingerprint, together with the position information and direction information of each fingerprint minutia in the set; taking each fingerprint minutia in turn as a central minutia and determining the neighboring minutiae corresponding to that central minutia, wherein a neighboring minutia is a minutia in the set whose distance from the central minutia is smaller than a preset distance; determining the topological feature of each fingerprint minutia according to the position information and direction information of that minutia and of its corresponding neighboring minutiae, wherein the topological feature comprises the relative position and angular relationship between the fingerprint minutia and its neighboring minutiae; and determining the minutiae topology-aware fingerprint fixed-length characterization of the target fingerprint according to the topological features and the minutiae-centered texture features of each fingerprint minutia.
As shown in fig. 6, for any fingerprint minutia selected as the central minutia, the connections between that minutia and its neighboring fingerprint minutiae can represent the topology information of the minutia. The circle in fig. 6 represents the selection range of the neighboring minutiae of the central minutia at its center; the fingerprint minutiae inside the circle, other than the center itself, are the neighboring minutiae of that central minutia. As can be seen from fig. 6, different minutiae, when taken as central minutiae, have different neighboring minutiae, and the sets of edges between a central minutia and its neighboring minutiae likewise differ, i.e. different minutiae have different topological features, while matched minutiae pairs will share a greater number of similar neighboring minutiae. The topological structure features of minutiae can therefore be exploited to improve fingerprint retrieval efficiency.
As an alternative embodiment, in order to make the acquired topological feature of the fingerprint minutiae rotationally and translationally invariant, determining the topological feature of each fingerprint minutiae based on the position information and the direction information of each fingerprint minutiae and neighboring minutiae comprises the steps of: rotating the target fingerprint image according to the direction information of each fingerprint minutiae until the included angle between the direction of the target fingerprint minutiae and a preset reference line is zero degrees; after rotating the target fingerprint image, determining the position information and the direction information of each fingerprint minutiae point and corresponding adjacent minutiae points after rotating, and determining the edge characterization of the edge between each fingerprint minutiae point and each adjacent minutiae point according to the rotated position information and direction information; and aggregating the edge representation corresponding to each fingerprint minutiae and the adjacent minutiae information by using a minutiae topology encoder (Minutiae Topology Encoder, called MTE for short) to obtain the topology characteristics of each fingerprint minutiae.
Specifically, given a central minutia m_c (i.e., the minutia at the center of the circle in fig. 6; the central minutia may be any minutia in the fingerprint image), and letting M be the set of all minutiae in the fingerprint image, the set of neighboring minutiae of the central minutia is N(m_c) = {m_j ∈ M | 0 < D(m_c, m_j) < R}, where m_j denotes a neighboring minutia and D(·,·) is the l_2 distance between the central minutia and a candidate minutia. R is a hyper-parameter controlling the selection range of neighboring minutiae, i.e. the preset distance. Too large a value of R lets nonlinear deformation affect the edge features too much, while too small a value of R leaves too few neighbors, resulting in incomplete topology information for the minutia.
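The neighbor-set construction above can be sketched directly. The following illustrative helper selects, for a center minutia, all other minutiae within l_2 distance R; minutiae are assumed here to be (x, y, θ) tuples, which is an assumption of this sketch.

```python
import math

def neighbors(center, minutiae, radius):
    """Return the neighbor set N(center): all minutiae other than the
    center whose Euclidean (l2) distance to the center is below radius R."""
    cx, cy = center[0], center[1]
    return [m for m in minutiae
            if m is not center
            and math.hypot(m[0] - cx, m[1] - cy) < radius]
```

As the text notes, the radius R trades off robustness to nonlinear deformation (large R) against completeness of the topology (small R).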
The edge characterization vector e_ij between any fingerprint minutia m_i and a neighboring minutia m_j of that minutia can be determined using the following calculation formula:

e_ij = (Δm_x, Δm_y, Δm_θ), where Δm_x = x_j − x_i, Δm_y = y_j − y_i, Δm_θ = θ_j − θ_i

In the above calculation formula, Δm_x and Δm_y range from −R to R, and Δm_θ ranges from −359° to 359°. Δm_θ, however, does not vary linearly: as an angle difference, Δm_θ differs less between 0° and 359° than between 0° and 10°. In order to normalize the value ranges of the elements of e_ij and better reflect the angle difference, the calculation formula can be refined, scaling the offsets by R and passing the angle difference through its sine and cosine, to obtain the following characterization e_ij:

e_ij = (Δm_x / R, Δm_y / R, cos Δm_θ, sin Δm_θ)

Since the image has been rotated in advance before the fingerprint minutiae are processed by the MTE, the e_ij determined in this way has rotational and translational invariance.
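The refined edge characterization can be computed as below. This is a sketch under the reconstruction just described (offsets scaled by R, angle difference mapped through sine and cosine); the exact normalization used in the patent's own formula may differ.

```python
import math

def edge_embedding(mi, mj, radius):
    """Refined edge characterization e_ij for minutiae mi, mj = (x, y, theta_deg).

    Scaling by R bounds the offset terms to [-1, 1]; sin/cos of the angle
    difference makes 359 deg and 1 deg come out close, as required."""
    dx, dy = mj[0] - mi[0], mj[1] - mi[1]
    dtheta = math.radians(mj[2] - mi[2])
    return (dx / radius, dy / radius, math.cos(dtheta), math.sin(dtheta))
```

Note how a raw Δm_θ of 359° and one of 1° produce nearly identical embeddings, fixing the non-linearity the text points out.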
As can be seen from fig. 6, the number and ordering of neighboring minutiae differ from minutia to minutia. A minutiae extraction algorithm may also detect some false minutiae or miss part of the minutiae, which likewise affects the number and order of neighboring minutiae. To eliminate the influence of the number and order of neighboring minutiae, the MTE may be used to aggregate the neighboring-minutiae information of each minutia, where the MTE is a module based on a graph neural network. In this way, aggregating the neighboring-minutiae information of each minutia through the MTE represents the star topology of each fingerprint minutia as a fixed-length vector, which facilitates subsequent processing. The expression for the aggregation process of the MTE module is as follows:

t(m_i) = A({h(e_ij) | m_j ∈ N(m_i)})

In the above expression, the function A(·) is the aggregation operator summarizing all the features h(e_ij); the function h(·) is an update function mapping the original edge characterization e_ij into the representation space; and t(m_i) denotes the minutiae topology embedding (Minutiae-aware Topology Embedding, abbreviated as MATE) of each minutia. For the same edge, the edge characterization is the same. Since an average pooling layer is used as the aggregation operator in the expression above, the more similar edges two minutiae share in their topologies, the more similar the two structures, i.e. the MATEs of the two minutiae, can be considered. Moreover, as long as most of the detected minutiae are accurate, the presence of some false minutiae and missing minutiae still allows sufficient retrieval accuracy. The MATE obtained in the embodiments of the present application therefore has rotational and translational invariance and can, to a certain extent, tolerate variations in the number or ordering of the neighboring minutiae.
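A minimal sketch of this average-pooling aggregation, with a stand-in update function h (an assumption of this sketch, taking the place of the learned mapping), makes the key property explicit: the output does not depend on how many neighbors there are or in which order they arrive.

```python
def mte_aggregate(edge_embeddings, h=lambda e: [2.0 * x for x in e]):
    """Average-pool the mapped edge embeddings h(e_ij) into one
    fixed-length vector t(m_i). Permutation-invariant by construction."""
    n = len(edge_embeddings)
    dim = len(edge_embeddings[0])
    mapped = [h(e) for e in edge_embeddings]
    return [sum(m[k] for m in mapped) / n for k in range(dim)]
```

Reordering the neighbors leaves the result untouched, which is why the MATE tolerates differences in neighbor ordering, and averaging (rather than summing) keeps the scale stable as the neighbor count varies.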
In some embodiments of the application, determining minutiae topology-aware fingerprint fixed length representations of the target fingerprint from the topology features and minutiae texture feature information of each fingerprint minutiae includes: establishing a detail point diagram according to the topological features of each fingerprint minutiae and the detail point center texture features, wherein the detail point diagram comprises nodes and edges which are obtained by connecting the nodes, the nodes correspond to the fingerprint minutiae one by one, and the nodes are used for reflecting the information of the topological features of the corresponding fingerprint minutiae and the information of the detail point center texture features; and processing the detail point diagram through a detail topology perception aggregation model to obtain a detail point topology perception fingerprint fixed-length characterization, wherein the detail topology perception aggregation model comprises a plurality of custom layers, a multi-layer perceptron and a pooling layer which are sequentially connected, and each custom layer in the plurality of custom layers comprises a convolution layer, a linear layer with an activation layer and a batch normalization layer.
Specifically, discriminative feature information that can be used to judge the similarity between fingerprints exists not only in the star topology of each minutia but also in the minutiae set as a whole. To exploit this feature information, the embodiments of the present application provide a customized graph neural network to update the representation of each minutia and aggregate the representations into a fixed-length characterization. This customized graph neural network is named the minutiae topology-aware aggregation (Architecture of the Minutiae Topology-aware Aggregation, abbreviated as MTaA) module, and MTaA has rotation invariance. In addition, the embodiments of the present application combine the texture information of the minutiae to obtain better retrieval performance.
Note that McTE has rotation invariance because the image was rotated according to the minutia direction when the McTE of each fingerprint minutia was acquired.
As an alternative embodiment, to make full use of McTE and MATE, a minutiae graph G = (V, E), as shown on the left side of fig. 7, can be constructed from McTE and MATE, where V represents the set of all nodes in the graph and E represents the set of all edges in the graph. Each minutia in the target fingerprint image has a corresponding node in the minutiae graph. The formula determining the edges of the minutiae graph is as follows:

E = {(v_i, v_j) | D(m_i, m_j) ≤ R}
The meaning of the above formula is that when the distance between the minutiae corresponding to nodes v_i and v_j is not greater than R, an edge is determined to exist between the two nodes and they are connected in the minutiae graph; otherwise there is no edge between the two nodes, which is expressed in the graph by leaving the two nodes unconnected. The graph deliberately does not connect every pair of nodes, because doing so would lose the topology information.
The initial characterization of each node in the minutiae graph is calculated as follows:

d_i = l_i + t_i

In the above formula, l_i represents the McTE and t_i represents the MATE. That is, the initial characterization of each node carries both the texture information and the topology information of the minutia.
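The graph construction and node initialization just described can be sketched together. This illustrative helper assumes minutiae are (x, y) pairs and that McTE and MATE vectors share a dimension so that d_i = l_i + t_i is an element-wise sum; all names are assumptions of the sketch.

```python
import math

def build_minutiae_graph(minutiae, mcte, mate, radius):
    """Build G = (V, E): one node per minutia with initial feature
    d_i = l_i + t_i, and an edge (i, j) whenever the corresponding
    minutiae lie within distance R of each other."""
    nodes = [[a + b for a, b in zip(l, t)] for l, t in zip(mcte, mate)]
    edges = set()
    for i in range(len(minutiae)):
        for j in range(i + 1, len(minutiae)):
            if math.hypot(minutiae[i][0] - minutiae[j][0],
                          minutiae[i][1] - minutiae[j][1]) <= radius:
                edges.add((i, j))
    return nodes, edges
```

Distant minutiae end up in the same graph but without a direct edge, preserving the local topology structure the text insists on.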
After the minutiae map is obtained, MTaA may be used to process the minutiae map to obtain MTaRs, as shown in FIG. 7. As can be seen in fig. 7, MTaA is a graph-neural-network-based model that includes multiple custom layers. Each custom layer consists of one EdgeConv convolution layer, one linear layer with GeLU activation layer, and one batch normalization layer. The custom layer may implement three key functions including message passing, message aggregation, and node updating.
Wherein the message passing function is defined in the EdgeConv form as follows:

m_ij^(k) = φ_1^(k) (d_j^(k) − d_i^(k)) + φ_2^(k) d_i^(k)

The superscript (k) in the above formula indicates the layer number, where k equal to zero indicates the initial characterization in the initially input minutiae graph, and φ_1 and φ_2 are parameters updated by back-propagation.
The message aggregation function for node v_i is expressed as follows:

a_i^(k) = max_{(v_i, v_j) ∈ E} m_ij^(k)

That is, over the set of neighboring minutiae, the aggregation takes the maximum value of m_ij^(k) in each dimension, where E represents the edges incident to the node in the minutiae graph.
In MTaA, for each fingerprint minutia, the information of its neighboring minutiae is passed through the message passing function and aggregated through the message aggregation function. After aggregation, the aggregation result can be combined with the original characterization stored in the node corresponding to the minutia, so as to complete the node update. Specifically, the neighbor information and the original characterization of the node can be combined and updated through an MLP layer, with the following formula:

d_i^(k+1) = MLP([d_i^(k), a_i^(k)])
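One custom layer can be sketched end to end: EdgeConv-style messages from each neighbor, dimension-wise max aggregation, then an update combining the node's own feature with the aggregate. The parameters φ_1, φ_2 and the simple additive update below stand in for the learned weights and MLP, which is an assumption of this sketch.

```python
def custom_layer(nodes, edges, phi1=1.0, phi2=0.5):
    """One MTaA-style custom layer over a minutiae graph.

    nodes: list of feature vectors d_i; edges: set of (i, j) index pairs.
    Message: phi1*(d_j - d_i) + phi2*d_i.  Aggregation: per-dimension max.
    Update: node feature plus aggregate (stand-in for the MLP)."""
    n, dim = len(nodes), len(nodes[0])
    adj = {i: [] for i in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    out = []
    for i in range(n):
        msgs = [[phi1 * (nodes[j][k] - nodes[i][k]) + phi2 * nodes[i][k]
                 for k in range(dim)] for j in adj[i]]
        agg = [max(m[k] for m in msgs) if msgs else 0.0
               for k in range(dim)]
        out.append([nodes[i][k] + agg[k] for k in range(dim)])
    return out
```

Because both the max aggregation and the per-node update ignore neighbor ordering, stacking such layers and then average-pooling the node outputs yields a fixed-length vector regardless of how many minutiae the fingerprint has.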
After processing by multiple custom layers, an average pooling layer may be used to aggregate the characterizations of all nodes in the output of the last custom layer, resulting in MTaRs.
Step S206, splicing the minutiae perception fingerprint fixed-length representation and the minutiae topology perception fingerprint fixed-length representation to obtain a target fingerprint fixed-length representation;
In the technical solution provided in step S206, the MaRs and MTaRs obtained from the target fingerprint image may be spliced, so as to obtain a target fingerprint fixed-length characterization (Translation and rotation invariant Minutiae-aware Representations, abbreviated as Tri-MaRs) with translational and rotational invariance.
Step S208, a fingerprint set corresponding to the target fingerprint fixed-length representation is searched, wherein the fingerprint set comprises a preset number of fingerprints to be matched, and the similarity between the fingerprint fixed-length representation of the fingerprints to be matched and the target fingerprint fixed-length representation meets preset requirements.
In the technical solution provided in step S208, a preset number of fingerprints to be matched whose similarity with the target fingerprint fixed-length characterization is greater than a set similarity may be retrieved via the target fingerprint fixed-length characterization, and the target fingerprint, or the fingerprint with maximum similarity to the target fingerprint, is then determined from the retrieved fingerprints to be matched. Alternatively, the fingerprint set may be formed by using the target fingerprint fixed-length characterization to select, from a fingerprint library, the preset number of fingerprints whose Tri-MaRs are most similar to the target fingerprint fixed-length characterization, discarding the remaining fingerprints; fine matching is then performed on each fingerprint in the set, thereby determining the target fingerprint, or the fingerprint in the set with maximum similarity to the target fingerprint.
As an alternative embodiment, during the online retrieval phase each fingerprint has a corresponding Tri-MaRs. In order to normalize the range of the similarity between two Tri-MaRs, each Tri-MaRs may be normalized in advance. For example, the similarity of two fingerprints x_1 and x_2 can be measured effectively by the inner product of their Tri-MaRs g_1 and g_2; and since normalization has been performed, the inner product equals the cosine similarity, defined as:
sim(x1,x2)=<g1,g2>
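The online retrieval step above can be sketched as follows: l2-normalize each Tri-MaRs vector, score gallery fingerprints by inner product (equal to cosine similarity after normalization), and keep the top-k candidates for fine matching. The toy vectors and k are illustrative.

```python
import math

def normalize(v):
    """l2-normalize a vector so inner product equals cosine similarity."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def top_k(query, gallery, k):
    """Indices of the k gallery vectors most similar to the query."""
    q = normalize(query)
    scored = [(sum(a * b for a, b in zip(q, normalize(g))), idx)
              for idx, g in enumerate(gallery)]
    scored.sort(reverse=True)
    return [idx for _, idx in scored[:k]]
```

Normalizing once, offline, when the gallery Tri-MaRs are built keeps the online cost per comparison to a single dot product.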
Extracting a minutiae perception fingerprint fixed-length representation of a target fingerprint and minutiae center texture features of the target fingerprint from an image containing the target fingerprint, wherein the minutiae perception fingerprint fixed-length representation comprises global feature information and minutiae texture feature information of the target fingerprint and is a fixed-length representation with translational invariance and rotational invariance; determining a minutiae topological perception fingerprint fixed-length representation of the target fingerprint according to the minutiae center texture features and the position information of the fingerprint minutiae, wherein the minutiae topological perception fingerprint fixed-length representation comprises minutiae texture feature information and minutiae topological feature information of the target fingerprint and is likewise a fixed-length representation with translational invariance and rotational invariance; splicing the minutiae perception fingerprint fixed-length representation and the minutiae topological perception fingerprint fixed-length representation to obtain a target fingerprint fixed-length representation; and retrieving the fingerprint set corresponding to the target fingerprint fixed-length representation. By extracting the texture features and topological features of the minutiae in the fingerprint to construct a target fingerprint fixed-length representation with translational and rotational invariance, the method makes full use of the feature information of the minutiae in the fingerprint, thereby achieving the technical effect of improving fingerprint retrieval accuracy, and solving the technical problem in the related art that retrieval accuracy is low because minutiae information in the fingerprint is not fully utilized when a fixed-length feature representation of the fingerprint is constructed for retrieval. The fingerprint fixed-length representation provided by the embodiment of the application maintains retrieval performance in the face of factors such as fingerprint translation and rotation; that is, the fixed-length representation and the retrieval method based on it remain effective when the fingerprint is translated or rotated.
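The retrieval flow summarized above, splicing the two fixed-length representations and retrieving by similarity, can be sketched as follows; the dimensions, random gallery, and function names are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def tri_mars(mars, mtars):
    # Target fixed-length representation: concatenation ("splicing") of the two
    # parts, re-normalized so retrieval can use a single inner product.
    v = np.concatenate([mars, mtars])
    return v / np.linalg.norm(v)

def retrieve_top_k(query, gallery, k):
    # Rank gallery entries by inner-product similarity, highest first.
    scores = gallery @ query
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(0)
gallery = np.stack([tri_mars(rng.normal(size=128), rng.normal(size=128))
                    for _ in range(100)])
# A query close to gallery entry 42 (small perturbation, then re-normalized).
query = gallery[42] + 0.01 * rng.normal(size=256)
query /= np.linalg.norm(query)
print(retrieve_top_k(query, gallery, 5)[0])  # -> 42
```
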
The embodiment of the application provides a flow diagram of a fingerprint retrieval model training method shown in fig. 8, and the trained fingerprint retrieval model is used for executing the fingerprint retrieval method based on the fingerprint fixed length characterization in any one of fig. 2-7. As can be seen from fig. 8, the method comprises the steps of:
step S802, performing first-stage training on the fingerprint retrieval model, wherein in the first-stage training only the backbone neural network in the fingerprint retrieval model is trained, and the backbone neural network together with the weighted average pooling module in the fingerprint retrieval model is used for extracting the minutiae perception fingerprint fixed-length representation of the target fingerprint and the minutiae texture features of the target fingerprint from a target fingerprint image containing the target fingerprint;
step S804, after the first stage training is completed, performing a second stage training on the fingerprint retrieval model, wherein in the training process of the second stage training, only a minutiae topological encoder and a minutiae topological perception aggregation model in the fingerprint retrieval model are trained, and the minutiae topological encoder and the minutiae topological perception aggregation model are used for determining minutiae topological perception fingerprint fixed-length characterization of the target fingerprint according to minutiae texture feature information and the target fingerprint image.
In some embodiments of the present application, as shown in FIG. 4, the fingerprint retrieval model provided by the present application may be divided into two stages when generating Tri-MaRs of the target fingerprint image, wherein the first stage is used to obtain MaRs and McTE, and the second stage is used to obtain MTaRs and final Tri-MaRs. Therefore, when training the fingerprint retrieval model, the whole training process is also divided into two stages.
In the first training stage, triplet losses may be used to constrain MaRs and McTE, respectively. In this stage, McTE is used only as a regularization term to incorporate minutiae discrimination information into MaRs. Under the constraint on MaRs alone, MaRs tends to focus on a limited area, i.e., the global pattern, and cannot cope well with the minutiae patterns in a fingerprint. To address this problem, a WGAP module may be introduced in the backbone network to increase the weight of minutiae-adjacent areas. In addition, in the first training stage, the embodiment of the application also provides a minutiae-aware constraint (MaC) to constrain the neighboring-area representations of matched minutiae to be more similar than those of unmatched minutiae. This introduces more minutiae information into MaRs.
During first-stage training, McTE can be extracted quickly and accurately using the ROI-Align operation. It should be noted that the image need not be rotated when extracting McTE during first-stage training. Specifically, given the coordinates (m_x, m_y) of a minutia, the approximate location of the minutia on the fused feature map can be found according to the correspondence between the input image and the output feature map, and the minimal fingerprint minutiae image area corresponding to each minutia in the fused feature map is then determined through the ROI-Align operation.
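The coordinate correspondence described above can be sketched as follows; the stride value, the nearest-cell cropping (real ROI-Align uses bilinear interpolation), and all names are illustrative assumptions:

```python
import numpy as np

def minutia_to_feature_coords(mx, my, stride=8):
    # Map a minutia location in the input image to the fused feature map,
    # assuming the backbone downsamples by a fixed stride (8 is hypothetical).
    return mx / stride, my / stride

def crop_region(feature_map, fx, fy, size=2):
    # Nearest-cell stand-in for ROI-Align: take a size x size window
    # centred on the (possibly fractional) feature coordinate.
    x0 = int(round(fx)) - size // 2
    y0 = int(round(fy)) - size // 2
    return feature_map[y0:y0 + size, x0:x0 + size]

fmap = np.arange(64, dtype=float).reshape(8, 8)   # toy single-channel feature map
fx, fy = minutia_to_feature_coords(mx=33, my=17)  # -> (4.125, 2.125)
region = crop_region(fmap, fx, fy)                # 2 x 2 minutia region
print(region.shape)  # (2, 2)
```
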
Each minutia-centered fingerprint minutiae image region may then be aggregated to obtain McTE. The triplet loss may be used during training to constrain the representations of matched minutiae neighborhoods to be more similar than those of unmatched ones. Under such a minutiae-aware constraint, minutiae matching relationships are introduced into the output feature map, and the neighborhoods of minutiae in the output feature map can be used to distinguish different minutiae. Thus, the output feature map contains more fine-grained information. The feature map may then be aggregated using WGAP to yield MaRs.
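Weighted aggregation of a feature map in the spirit of WGAP can be sketched as follows; the weighting scheme shown (a hand-set foreground mask) is an illustrative assumption:

```python
import numpy as np

def wgap(feature_map, weights):
    # Weighted global average pooling: each spatial cell's C-dim vector is
    # weighted (e.g. by a foreground / minutia-neighbourhood weight) and averaged.
    # feature_map: (H, W, C), weights: (H, W), non-negative.
    w = weights / (weights.sum() + 1e-8)
    return np.einsum('hwc,hw->c', feature_map, w)

H, W, C = 4, 4, 3
fmap = np.ones((H, W, C))
weights = np.zeros((H, W))
weights[1:3, 1:3] = 1.0  # emphasise a central (e.g. minutia-adjacent) region
rep = wgap(fmap, weights)
print(rep)  # an all-ones feature map pools to [1. 1. 1.]
```
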
In some embodiments of the present application, the model may be trained in the manner shown in FIG. 9 during both training stages of the fingerprint retrieval model. As can be seen from FIG. 9, two encoders are provided during training, an online encoder f_q and a momentum encoder f_k, and the two encoders are identical in structure. For each representation, a historical feature queue may be maintained during training, e.g., a historical feature queue corresponding to MaRs. In addition, the more informative triplets are selected under an Online Hard Example Mining (OHEM) strategy for the triplet loss L_g. After training is completed, only the online encoder is retained to perform fingerprint retrieval. It will be appreciated that in the first training stage the structure of the online encoder and the momentum encoder is the backbone neural network in the fingerprint retrieval model, including the WGAP module; in the second training stage, it is the MTE and MTaA modules in the fingerprint retrieval model.
Specifically, when selecting triplets, an online hard example mining strategy with a historical feature queue may be used to sample informative triplets for the triplet loss. Batch size and the triplet selection policy are critical to the convergence of the network and the performance of the representation. Because GPU memory is limited, a large batch size cannot be maintained in a single training iteration. In the embodiment of the present application, the momentum network and the feature queue are therefore used, so that the number of inter-class samples available in each batch is increased at a small GPU memory cost.
It should be noted that the update policies for the parameters θ_q and θ_k are different. The parameters θ_q of the online network are updated by gradient back-propagation, as in a common neural network, while the parameters θ_k of the momentum network are updated by an exponential moving average (EMA). Two feature queues may be maintained during training to hold the historical MaRs and McTE generated by the momentum network, respectively.
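The EMA update of the momentum parameters θ_k can be sketched as follows; the momentum coefficients are illustrative assumptions:

```python
import numpy as np

def ema_update(theta_k, theta_q, momentum=0.999):
    # Momentum-encoder update: theta_k <- m * theta_k + (1 - m) * theta_q.
    # Only theta_q is updated by back-propagation; theta_k trails it smoothly.
    return momentum * theta_k + (1.0 - momentum) * theta_q

theta_q = np.array([1.0, 2.0])
theta_k = np.array([0.0, 0.0])
for _ in range(3):
    # Exaggerated momentum of 0.5 so the drift is visible in a few steps.
    theta_k = ema_update(theta_k, theta_q, momentum=0.5)
print(theta_k)  # moves toward theta_q: [0.875 1.75 ]
```
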
During training, when the number of historical features stored in a queue exceeds its capacity, the oldest feature in the queue is replaced with the latest one. It should further be noted that the batch size is small compared to the number of categories in the whole database or in the real world; the original OHEM selects the hardest example from a limited set of examples, which can significantly affect the quality and effectiveness of the final representation. In some embodiments of the application, given a batch of fingerprints, the sets G_q and G_k corresponding to MaRs and the sets L_q and L_k corresponding to McTE can be obtained from the online encoder and the momentum encoder, respectively. As shown in FIG. 9, the representation from the online encoder serves as the anchor of the triplet. For each anchor sample, a positive sample may be selected from the momentum-encoder batch, and the hardest negative sample may be selected from the feature queue, to generate the triplet.
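The feature queue with hardest-negative mining described above can be sketched as follows; the queue capacity, toy features, and class labels are illustrative assumptions:

```python
import numpy as np
from collections import deque

class FeatureQueue:
    # FIFO queue of historical momentum-encoder representations (with labels);
    # when full, the oldest entry is replaced by the newest.
    def __init__(self, capacity):
        self.feats = deque(maxlen=capacity)
        self.labels = deque(maxlen=capacity)

    def enqueue(self, feat, label):
        self.feats.append(feat)
        self.labels.append(label)

    def hardest_negative(self, anchor, anchor_label):
        # The negative most similar to the anchor among different-label entries.
        best, best_sim = None, -np.inf
        for f, lab in zip(self.feats, self.labels):
            if lab == anchor_label:
                continue
            s = float(np.dot(anchor, f))
            if s > best_sim:
                best, best_sim = f, s
        return best, best_sim

q = FeatureQueue(capacity=4)
q.enqueue(np.array([1.0, 0.0]), label=0)
q.enqueue(np.array([0.9, 0.1]), label=1)
q.enqueue(np.array([0.0, 1.0]), label=2)
anchor = np.array([1.0, 0.0])
neg, s = q.hardest_negative(anchor, anchor_label=0)
print(s)  # hardest negative is the label-1 entry, similarity 0.9
```
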
As an alternative embodiment, before the first stage of training is started, the tag may also be determined as follows: acquiring original training data, wherein the original training data comprises paired fingerprint data sets; and determining a minutiae matching relationship in the paired fingerprints by adopting an extended group training method, and determining a label according to the minutiae matching relationship, wherein the label is used for determining a positive sample and a negative sample in the first-stage training.
In some embodiments of the application, when training samples are selected, for MaRs the positive sample is the MaRs of a fingerprint having the same label as the anchor fingerprint and the negative sample is the MaRs of a fingerprint having a different label. For McTE, however, negative samples cannot be selected on fingerprint labels alone, as this could place in the negative set McTE derived from different minutiae of a matched fingerprint. Within the receptive field of the convolutional network, the McTE of different minutiae of a matched fingerprint pair have some similarity, and treating the McTE of such similar minutiae as negatives is detrimental to learning MaRs and the other representations.
As an alternative embodiment, the specific formula of the MaC constraint is as follows:

L_m = [⟨l_a, l_n^j⟩ − ⟨l_a, l_p⟩ + α]_+

wherein [·]_+ represents the operation max(·, 0), ⟨·,·⟩ is the cosine similarity of two representations, l_a, l_p and l_n denote the anchor, positive and negative McTE, α is the margin, and the superscript j indicates that the negative is the hardest negative found in the historical feature queue.
The specific formula for the MaRs triplet loss constraint is as follows:

L_g = [⟨g_a, g_n^j⟩ − ⟨g_a, g_p⟩ + α]_+

wherein g_a, g_p and g_n^j denote the anchor, positive and hardest-negative MaRs, respectively.
In order to make full use of the feature space, a global orthogonal regularization loss can be used in the training process; its specific formula is as follows:

L_gor = M1^2 + [M2 − 1/d]_+

wherein M1 and M2 are the first and second moments of the inner products between anchor and negative representations, and d is the dimensionality of the representations referred to in the present application. Driving the first moment toward 0 and the second moment toward 1/d means that the points are independently and uniformly distributed on the unit sphere.
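Assuming the standard "spread-out" form of global orthogonal regularization (first moment driven to 0, second moment to 1/d), the loss can be sketched as:

```python
import numpy as np

def gor_loss(anchors, negatives):
    # Global orthogonal regularisation: push the first moment of anchor-negative
    # inner products toward 0 and the second moment toward 1/d, the values that
    # hold for independent uniform unit vectors. Rows are unit-norm vectors.
    d = anchors.shape[1]
    ip = np.sum(anchors * negatives, axis=1)
    m1 = ip.mean()
    m2 = (ip ** 2).mean()
    return m1 ** 2 + max(0.0, m2 - 1.0 / d)

a = np.array([[1.0, 0.0], [0.0, 1.0]])
n = np.array([[0.0, 1.0], [1.0, 0.0]])
print(gor_loss(a, n))  # orthogonal pairs: m1 = 0, m2 = 0 -> loss 0.0
```
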
The expression of the total loss function in the first training stage is as follows:

loss = λ_g·L_g + λ_m·L_m + λ_gor·L_gor
Each λ is a loss weight that can be configured as required. For example, λ_g, λ_m and λ_gor may be set to 10, 1 and 1, respectively.
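The weighted total loss with the example weights (10, 1, 1) can be sketched as follows; the margin value and the toy vectors are illustrative assumptions:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.4):
    # Cosine-similarity triplet loss [s(a,n) - s(a,p) + margin]_+ ;
    # inputs are assumed L2-normalised, and the margin value is illustrative.
    s_ap = float(np.dot(anchor, positive))
    s_an = float(np.dot(anchor, negative))
    return max(0.0, s_an - s_ap + margin)

def total_loss(l_g, l_m, l_gor, lam_g=10.0, lam_m=1.0, lam_gor=1.0):
    # First-stage objective: loss = lambda_g*L_g + lambda_m*L_m + lambda_gor*L_gor.
    return lam_g * l_g + lam_m * l_m + lam_gor * l_gor

a = np.array([1.0, 0.0])
p = np.array([1.0, 0.0])
n = np.array([0.0, 1.0])
l_g = triplet_loss(a, p, n)  # 0.0: positive already closer than the margin
print(total_loss(l_g, l_m=0.2, l_gor=0.1))  # 10*0 + 0.2 + 0.1 -> 0.3 (up to rounding)
```
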
In order to give the first-stage model output MaRs rotational and translational invariance, the training data may first undergo data preprocessing that includes rotation and translation, such as rotating the pictures in the training data. It should be noted that the size of the input to the model is unchanged before and after rotation, so part of the edge region of the rotated image may be lost. Considering that the fingerprint is usually located in the middle region of the image, and that the influence of losing edge regions is offset by rotating the image multiple times at different angles, the loss of partial regions caused by rotation does not significantly affect the final retrieval performance.
In the second-stage training, to ensure that MTaRs is invariant to rotation and translation, the fused feature map needs to be rotated according to the direction of each fingerprint minutia when McTE is extracted; when the angle between the minutia direction and the reference axis of the planar rectangular coordinate system is zero, the fingerprint minutiae region corresponding to the minutia is acquired from the fused feature map to determine the corresponding McTE.
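Alignment by minutia direction, the step that makes McTE independent of global rotation and translation, can be sketched on point coordinates as follows (rotating coordinates rather than a feature map, as an illustrative simplification):

```python
import numpy as np

def align_to_minutia(points, center, theta):
    # Rotate points about the minutia centre by -theta so the minutia
    # direction maps onto the x-axis; the result no longer depends on how
    # the whole fingerprint was rotated or translated.
    c, s = np.cos(-theta), np.sin(-theta)
    R = np.array([[c, -s],
                  [s,  c]])
    return (points - center) @ R.T

centre = np.array([10.0, 10.0])
# One neighbour lying exactly along the minutia direction of 0.5 rad.
neighbour = np.array([[10.0 + np.cos(0.5), 10.0 + np.sin(0.5)]])
print(align_to_minutia(neighbour, centre, theta=0.5))  # approx [[1. 0.]]
```
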
This approach ensures that McTE is invariant to rotation and translation of the fingerprint, owing to the alignment based on minutia direction. The McTE and the minutiae neighborhood topology representations (MTE) may then be element-wise summed and input into the minutiae topology-aware aggregation (MTaA) module. The expression of the final loss for training MTaRs is similar to that in the first stage. The second training stage differs from the first in that MTaRs is used instead of MaRs in the selection of triplet pairs, and the MTaA-enhanced node representation is used instead of McTE. The selection principle and the labels of the triplet pairs are the same as in the first stage, and the hyperparameters λ_g, λ_m and λ_gor may all be set to 1. At this stage, the minutiae-aware constraint may also be used to mitigate over-smoothing in the GNN, so that the discrimination information of the center node is maintained in the deepest layer of the network even as neighbor information of adjacent minutiae is propagated.
The embodiment of the application provides a comparison experiment to show the superiority of the fingerprint retrieval method provided by the embodiment of the application in the field of large-scale fingerprint retrieval compared with other methods in related technologies.
Specifically, the fingerprint datasets used in the comparison experiments include, but are not limited to, a Rolled Fingerprint Dataset (RFD), NIST SD4, NIST SD14, and an Extended Rolled Fingerprint Dataset (ERFD). The fingerprints in RFD and ERFD have no intersection. The spatial size of each fingerprint in RFD is 640 × 640 pixels at a resolution of 500 dpi.
NIST SD4 and NIST SD14 are two fingerprint benchmark datasets for retrieval. NIST SD4 contains 2,000 rolled fingerprint pairs with a spatial size of 512 × 512 pixels. NIST SD14 contains 27,000 rolled fingerprint pairs with a spatial size of 832 × 768 pixels; in embodiments of the present application, only the last 2,700 pairs may be used for testing, following the conventional setting. ERFD has 99,704 rolled fingerprints of 640 × 640 pixels to expand the search database.
In order to accurately reflect the technical effect of the fingerprint retrieval method provided by the application in both small-scale and large-scale fingerprint retrieval scenarios, the embodiment of the application adopts top-k accuracy to evaluate fingerprint identification performance; this index is more precise than the error rate at a given penetration rate. In addition, because large-scale fingerprint retrieval scenarios place higher demands on retrieval time, the embodiment of the application also reports the computation time of the retrieval method provided by the application.
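Top-k accuracy as used above can be sketched as follows; the toy rankings are illustrative:

```python
def top_k_accuracy(rankings, true_ids, k):
    # Fraction of queries whose matching fingerprint appears among the
    # first k retrieved results.
    hits = sum(1 for ranks, t in zip(rankings, true_ids) if t in ranks[:k])
    return hits / len(true_ids)

rankings = [[3, 1, 2], [0, 2, 1], [2, 0, 1]]  # retrieved gallery ids per query
true_ids = [1, 0, 1]                          # true match for each query
print(top_k_accuracy(rankings, true_ids, k=1))  # one query hits at rank 1
print(top_k_accuracy(rankings, true_ids, k=2))  # two queries hit within rank 2
```
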
As an alternative implementation, to evaluate the retrieval performance of the Tri-MaRs provided by the present application in fingerprint retrieval scenarios of different scales, the retrieval method provided by the present example was first compared with prior methods on NIST SD4 and NIST SD14 without the ERFD expansion. These related technologies include a variety of fixed-length representations and a commercial minutiae-based matching algorithm, and their retrieval performance on these datasets can be determined from the respective publications. The top-k accuracy of the different methods is shown in FIG. 10, where the 1% penetration rate reported in the related art has been converted into top-20 accuracy on NIST SD4 and top-27 accuracy on NIST SD14.
As can be seen from FIG. 10, the fingerprint retrieval method and Tri-MaRs provided by the embodiment of the present application are significantly better than the related art in small-scale fingerprint retrieval application scenarios. Specifically, the top-1 accuracy is 99.70% and 99.93% on NIST SD4 and NIST SD14, respectively. Moreover, on NIST SD14 the matching fingerprint corresponding to each query fingerprint is ranked within the top 5 in the small-database application scenario.
In order to further confirm the performance of the fingerprint retrieval method provided by the application in a large-scale application scene, 1,000 to 100,000 fingerprints in ERFD can be combined into a retrieval database. It should be noted that the fingerprints in ERFD have the same characteristics as the fingerprints in the NIST SD4 and NIST SD14 reference test sets.
The final test results show that the top-1 accuracy on the NIST SD14 test dataset drops only to 99.81% in the large-scale fingerprint retrieval scenario, and the matching fingerprint is ranked within the top 100 for more than 99.96% of query fingerprints. It can be seen that the Tri-MaRs provided by the application remains significantly superior to other retrieval methods in large-scale fingerprint retrieval scenarios.
In terms of the two key indexes of retrieval efficiency and retrieval accuracy, the fingerprint retrieval method still outperforms other retrieval methods in the related art. Specifically, on hardware consisting of an Intel(R) Xeon(R) E5-2620 CPU and a GeForce GTX 1080Ti, the method provided in the present application takes 51.8 milliseconds from the start of processing the original image to the generation of the final Tri-MaRs. On the GPU, the average search time over a gallery of 1.1 million fingerprint images is about 3.2 milliseconds.
In order to describe the implementation details of the fingerprint retrieval method provided by the application more clearly, some embodiments of the application also provide ablation experiments showing the influence of different parameters on the final retrieval effect. Parameters affecting the retrieval effect include, but are not limited to, the size of the fingerprint minutiae image area corresponding to each minutia when extracting McTE, and the MaC used during training. The individual influence of MaRs and MTaRs, which together constitute Tri-MaRs, on retrieval performance is also studied in the ablation section.
Specifically, the size of the fingerprint minutiae image area corresponding to each minutiae point may affect the search result in several ways when extracting McTE. On the one hand, the size of the minutiae image region on the fused feature map centered on the minutiae determines how much local information is contained in McTE. On the other hand, the size of the region also affects the extent of the smallest central region considered in the representation of MaC and Weighted Global Average Pooling (WGAP). The effect of different region sizes on the final retrieval performance is shown in fig. 11.
It should be noted that the same region size setting is used in the first stage training of MaRs and the second stage training of MTaRs in the present application. This arrangement ensures that different McTE are extracted from the same backbone, which is crucial for the subsequent phase generation MTaRs and Tri-MaRs.
As can be seen from FIG. 11, a region size setting of 2×2 achieves superior retrieval performance. Specifically, reducing the region size reduces the fingerprint minutiae image features contained in MaRs and makes the minutiae-aware constraint focus only on a limited region adjacent to each minutia, while increasing the region size amplifies the nonlinear deformation around each fingerprint minutia, thereby weakening the constraint on the texture features around the minutia.
For MaC: in the stage of training MaRs, minutiae-related information can be integrated into MaRs through the customized MaC; that is, through first-stage training with MaC, the fingerprint retrieval model incorporates minutiae-related information when generating MaRs in actual application.
In the study of MaC, a more discriminative McTE was also obtained by MaC. Wherein McTE corresponding to each minutiae point is a representation vector obtained according to the feature information of each minutiae point, and can be used for training of the subsequent pair MTaRs.
The effect of MaC on the performance of MaRs and MTaRs is further discussed in the embodiments of the present application. Specifically, the impact of MaC can be determined by comparing models that retain the MaC constraint with models from which MaC has been removed, with all other settings kept unchanged during the experiment. The final test results are shown in FIG. 12. As can be seen from FIG. 12, MaC further improves the retrieval performance of the Tri-MaRs provided by the present application.
As shown in FIG. 13, embodiments of the present application also plot, on NIST SD4, the similarity distribution of each representation in Tri-MaRs between matched and unmatched fingerprint pairs. The upper half of FIG. 13 corresponds to the model without MaC and the lower half to the model with MaC; the peaks framed by dashed boxes correspond to unmatched fingerprint pairs, and the unframed peaks to matched pairs. As can be seen from FIG. 13, after MaC is added, McTE distinguishes matched and unmatched minutiae pairs better, with a smaller overlap between the unmatched and matched portions of the distribution. In addition, MaC also makes the MaRs and MTaRs of matched fingerprints more similar at the fingerprint level. For MaRs, the introduction of MaC helps the representation attend to minutiae while attending to the global pattern, and this minutiae-related information improves MaRs performance. For MTaRs, a more discriminative McTE is more advantageous for aggregating a more discriminative MTaRs from the node representations. Thus, after MaC is added to the model, the Tri-MaRs generated by fusing MaRs and MTaRs has better retrieval performance.
The Tri-MaRs provided in the embodiments of the present application consists of MaRs and MTaRs. MTaRs contains the texture feature information and topological feature information of the fingerprint minutiae, while for MaRs, it can be seen from FIG. 14 that MaRs focuses on the regions around singular points in a fingerprint image.
In particular, as can be seen from fig. 14 (a) and (b), when the singular points in the fingerprints are the same and the regions of interest of the features are the same, the similarity between the respective MaRs of the two fingerprints is greater; in contrast, when the singular points in the fingerprint are different and the regions of interest are different, as shown in fig. 14 (a) and (c), the similarity between MaRs is low. MaRs and MTaRs are complementary and focus on both level 1 and level 2 information.
In the embodiment of the application, the retrieval performance of MaRs alone or MTaRs alone was also tested in some scenarios; the final results are shown in FIG. 15, and the retrieval performance is poor when either MaRs or MTaRs is considered alone. As shown in FIG. 15, although MaRs considers minutiae information to some extent, it does not handle fingerprints with similar global patterns well; MTaRs, on the other hand, contains more discriminative minutiae information but models the global features of the fingerprint poorly. Therefore, MaRs and MTaRs are used as complementary features in the fingerprint retrieval method provided by the application to further improve retrieval performance. The test results using Tri-MaRs are shown in FIG. 16, where the similarity distributions of matched and unmatched fingerprints are each more compact and the gap between them is larger. In FIG. 16, the peaks framed by dashed boxes correspond to unmatched fingerprint pairs, and the unframed ones to matched pairs.
In some embodiments of the present application, the rotational and translational invariance of Tri-MaRs was further verified. Specifically, in order to verify the robustness of Tri-MaRs to rotation problems, two test experiments are provided in the embodiments of the present application. In the first test experiment, each query fingerprint in the original database in NIST SD4 and NIST SD14 was rotated by 0 °, 90 ° and 180 °, respectively, for fingerprint retrieval, and the final retrieval performance is shown in fig. 17. Compared with the situation when the fingerprint is not rotated, the Tri-MaRs provided by the application has the advantages that the retrieval performance is not obviously reduced when the fingerprint is subjected to rotation, and the retrieval precision is still higher.
In the second experiment, the query fingerprint is rotated from 0° to 359°, the Tri-MaRs of the query fingerprint is generated repeatedly during the rotation, and the similarity between the Tri-MaRs at each angle and the Tri-MaRs at 0° is calculated, giving the similarity distribution shown in FIG. 18. As can be seen from FIG. 18, the Tri-MaRs generated at 180° has the lowest similarity, but that similarity is still close to 1. This shows that, since the input data of the MTE is rotated in advance according to the minutia direction, the rotation invariance of MTaRs is ensured; and because large-scale rotation processing is applied to the training images in the first training stage, MaRs also has rotation invariance, so that the final Tri-MaRs has both rotation invariance and translation invariance. Tri-MaRs therefore maintains sufficient query performance both when the query fingerprint is rotated and when fingerprints in the gallery are rotated.
To verify the robustness of Tri-MaRs to translational factors, two test experiments are also provided in the examples of the present application. In the first test experiment, the same fingerprint can be translated in the horizontal direction and the vertical direction for a plurality of times, corresponding Tri-MaRs is generated for a plurality of times in the translation process, the similarity between the Tri-MaRs is calculated, and a similarity distribution schematic diagram shown in fig. 19 is obtained. As can be seen from FIG. 19, the Tri-MaRs provided by the embodiment of the present application has translational invariance, and can still ensure similarity under the condition of large-scale translation, that is, the influence of translation on the Tri-MaRs is small.
In the second experiment, the query fingerprint is translated in the horizontal and vertical directions while the corresponding matching fingerprints in NIST SD4 and NIST SD14 are kept unchanged, and the Tri-MaRs corresponding to the translated query fingerprint is then generated and retrieved. The final retrieval results are shown in FIG. 20. As can be seen from FIG. 20, the translational invariance of MaRs is ensured by the extensive rotation and translation processing of the training data during the first training stage; and MTaRs also has translational invariance, since the input of the MTE is itself translation-invariant and MTE and MTaA are employed. The Tri-MaRs provided by the embodiment of the application therefore has sufficient translational invariance and is robust to the effects of fingerprint translation.
The embodiment of the application provides a fingerprint retrieval device based on fingerprint fixed-length characterization, fig. 21 is a schematic structural diagram of the fingerprint retrieval device, and as can be seen from fig. 21, the device comprises: a first processing module 210, configured to extract a minutiae-aware fingerprint fixed-length representation of the target fingerprint and a minutiae-center texture feature of the target fingerprint from an image including the target fingerprint, where the minutiae-aware fingerprint fixed-length representation includes global feature information of the target fingerprint and minutiae-texture feature information in the target fingerprint, and the minutiae-aware fingerprint fixed-length representation is a fixed-length representation with translational and rotational invariance; a second processing module 212, configured to determine a minutiae topological perception fingerprint fixed-length representation of the target fingerprint according to the minutiae center texture feature and the position information of the fingerprint minutiae in the target fingerprint, where the minutiae topological perception fingerprint fixed-length representation includes minutiae texture feature information and minutiae topological feature information of the target fingerprint, and the minutiae topological perception fingerprint fixed-length representation is a fixed-length representation with translational and rotational invariance; a third processing module 214, configured to splice the minutiae perceptual fingerprint fixed-length representation and the minutiae topology perceptual fingerprint fixed-length representation to obtain a target fingerprint fixed-length representation; the fourth processing module 216 is configured to retrieve a fingerprint set corresponding to the target fingerprint fixed-length representation, where the fingerprint set includes a preset number of fingerprints to be matched, and a similarity between 
the fingerprint fixed-length representation of the fingerprints to be matched and the target fingerprint fixed-length representation meets a preset requirement.
In some embodiments of the present application, the first processing module 210 extracts minutiae-aware fingerprint fixed-length characterizations of a target fingerprint and minutiae-centered textural features of the target fingerprint from an image containing the target fingerprint comprises: extracting a plurality of feature images from the image through a feature extraction module of the backbone neural network, wherein the feature extraction module comprises a plurality of feature extraction layers which are sequentially connected, the feature extraction layers are used for extracting the feature images, and the plurality of feature images comprise a first feature image, a second feature image, a third feature image, a fourth feature image and a fifth feature image which are ordered from shallow to deep according to feature depths; fusing the third feature map, the fourth feature map and the fifth feature map to obtain a fused feature map; and extracting the minutiae perception fingerprint fixed-length characterization of the target fingerprint and the minutiae center texture feature of the target fingerprint from the fusion feature map.
In some embodiments of the present application, the first processing module 210 extracting the minutiae-aware fingerprint fixed-length representation of the target fingerprint from the fused feature map comprises: determining the foreground vectors in the fused feature map; and performing weighted aggregation processing on the foreground vectors in the fused feature map through the weighted average pooling module in the backbone neural network, thereby obtaining the minutiae-aware fingerprint fixed-length representation.
In some embodiments of the present application, the first processing module 210 extracting minutiae center texture features of the target fingerprint from the fused feature map includes: determining a plurality of fingerprint minutiae image areas in the fusion feature map, wherein the fingerprint minutiae image areas are image areas containing fingerprint minutiae; and respectively carrying out weighted aggregation processing on foreground vectors in each fingerprint minutiae image area in the plurality of fingerprint minutiae image areas through a weighted average pooling module in the backbone neural network, so as to obtain the minutiae center texture characteristics of each minutiae of the target fingerprint.
In some embodiments of the application, the first processing module 210 determining the plurality of fingerprint minutiae image areas in the fused feature map includes: determining a plurality of fingerprint minutiae points in the fused feature map; for each fingerprint minutiae point in the plurality of fingerprint minutiae points, rotating the fused feature map according to the direction of that fingerprint minutiae point until the included angle between the direction of that fingerprint minutiae point and a preset reference line is zero degrees; and after the rotation is completed, extracting an image block containing that fingerprint minutiae point from the fused feature map, wherein the image block is the fingerprint minutiae image area corresponding to that fingerprint minutiae point.
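The rotate-and-crop step above can be sketched by sampling a direction-normalised patch directly via inverse rotation, which is equivalent to rotating the whole map and then cropping. The patch size and nearest-neighbour sampling are assumptions:

```python
import numpy as np

def minutia_patch(fmap, x, y, theta, size=16):
    """Extract a direction-normalised patch around minutia (x, y, theta).

    The feature map is effectively rotated by -theta about the minutia
    (nearest-neighbour sampling) so that the minutia direction aligns
    with the horizontal reference line, then a size x size block
    centred on the minutia is cropped. Illustrative sketch only.
    """
    h, w = fmap.shape[:2]
    half = size // 2
    c, s = np.cos(theta), np.sin(theta)
    ys, xs = np.mgrid[-half:half, -half:half]
    # inverse-rotate patch coordinates back into the original map
    src_x = np.clip(np.round(x + c * xs - s * ys).astype(int), 0, w - 1)
    src_y = np.clip(np.round(y + s * xs + c * ys).astype(int), 0, h - 1)
    return fmap[src_y, src_x]
```

For theta = 0 the function reduces to a plain axis-aligned crop centred on the minutia, which makes the rotation behaviour easy to sanity-check.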
In some embodiments of the present application, the second processing module 212 determining the minutiae topology-aware fingerprint fixed-length characterization of the target fingerprint according to the minutiae center texture features and the position information of the fingerprint minutiae includes: determining an all-minutiae set in the image of the target fingerprint, and the position information and direction information of each fingerprint minutiae point in the all-minutiae set; taking each fingerprint minutiae point in turn as a central minutiae point, and determining the neighboring minutiae points corresponding to the central minutiae point, wherein a neighboring minutiae point is a minutiae point in the all-minutiae set whose distance from the central minutiae point is smaller than a preset distance; determining the topological feature of each fingerprint minutiae point according to the position information and direction information of that fingerprint minutiae point and its corresponding neighboring minutiae points, wherein the topological feature includes the relative position and angular relation between that fingerprint minutiae point and its neighboring minutiae points; and determining the minutiae topology-aware fingerprint fixed-length characterization of the target fingerprint according to the topological feature and the minutiae center texture feature of each fingerprint minutiae point.
In some embodiments of the present application, the second processing module 212 determining the topological feature of each fingerprint minutiae point according to the position information and direction information of that fingerprint minutiae point and its neighboring minutiae points includes: rotating the target fingerprint image according to the direction information of each fingerprint minutiae point until the included angle between the direction of that fingerprint minutiae point and a preset reference line is zero degrees; after rotating the target fingerprint image, determining the rotated position information and direction information of that fingerprint minutiae point and its corresponding neighboring minutiae points, and determining the edge representation of the edge between that fingerprint minutiae point and each neighboring minutiae point according to the rotated position information and direction information; and aggregating, by a minutiae topology encoder, the edge representations corresponding to that fingerprint minutiae point together with the neighboring minutiae information, so as to obtain the topological feature of that fingerprint minutiae point.
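The rotation-normalised edge representation described above can be sketched as follows. Expressing each neighbour in the central minutia's own coordinate frame makes the per-edge features translation- and rotation-invariant; the learned minutiae topology encoder is replaced here by the raw per-edge features, which is a simplification:

```python
import numpy as np

def edge_features(center, neighbors):
    """Edge representations between a central minutia and its neighbours.

    Each minutia is a tuple (x, y, theta). Neighbour positions are
    rotated into the central minutia's frame, and the direction
    difference is wrapped to (-pi, pi]. Illustrative sketch only; the
    patent aggregates these edges with a learned topology encoder.
    """
    cx, cy, ct = center
    c, s = np.cos(-ct), np.sin(-ct)
    feats = []
    for nx, ny, nt in neighbors:
        dx, dy = nx - cx, ny - cy
        rx = c * dx - s * dy                     # relative x in minutia frame
        ry = s * dx + c * dy                     # relative y in minutia frame
        dth = (nt - ct + np.pi) % (2 * np.pi) - np.pi  # relative direction
        feats.append((rx, ry, dth))
    return np.array(feats)
```

A quick invariance check: translating and rotating all minutiae by the same rigid motion leaves the edge features unchanged, which is exactly the property the fixed-length characterization needs.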
In some embodiments of the present application, the second processing module 212 determining the minutiae topology-aware fingerprint fixed-length characterization of the target fingerprint according to the topological features and minutiae center texture features of each fingerprint minutiae point includes: establishing a minutiae graph according to the topological feature and the minutiae center texture feature of each fingerprint minutiae point, wherein the minutiae graph includes nodes and edges obtained by connecting the nodes, the nodes correspond one-to-one to the fingerprint minutiae points, and each node reflects the information of the topological feature and the minutiae center texture feature of the corresponding fingerprint minutiae point; and processing the minutiae graph through a minutiae topology-aware aggregation model to obtain the minutiae topology-aware fingerprint fixed-length characterization, wherein the minutiae topology-aware aggregation model includes a plurality of sequentially connected custom layers, a multi-layer perceptron and a pooling layer, and each of the custom layers includes a convolution layer, a linear layer with an activation layer, and a batch normalization layer.
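The graph-to-vector aggregation can be sketched as neighbourhood averaging over the minutiae graph followed by global pooling. The averaging rounds stand in for the patent's trained custom conv/linear/batch-norm layers, so this is purely illustrative:

```python
import numpy as np

def topology_aware_pool(node_feats, adj, layers=2):
    """Aggregate a minutiae graph into a fixed-length representation.

    node_feats: N x C matrix, one row per minutia (topological feature
                concatenated with its centre texture feature).
    adj:        N x N 0/1 adjacency matrix of the minutiae graph.
    Each round averages each node with its neighbours (an untrained
    stand-in for the patent's custom layers); a final mean pooling
    over nodes yields the graph-level fixed-length vector.
    """
    a = adj + np.eye(len(adj))                # add self-loops
    a = a / a.sum(axis=1, keepdims=True)      # row-normalised averaging
    h = node_feats
    for _ in range(layers):
        h = a @ h                             # neighbourhood message passing
    return h.mean(axis=0)                     # graph-level pooling
```

Note that the output length depends only on the node feature dimension C, not on the number of minutiae N, which is what makes the representation fixed-length.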
Note that each module in the fingerprint retrieval device based on fingerprint fixed-length characterization may be a program module (for example, a set of program instructions for implementing a specific function) or a hardware module. For the latter, the modules may take, but are not limited to, the following forms: each module is implemented by its own processor, or the functions of several modules are realized by a single processor.
According to an embodiment of the present application, there is provided a nonvolatile storage medium in which a program is stored, wherein a device in which the nonvolatile storage medium is located is controlled to perform a fingerprint retrieval method as shown in fig. 3, or a fingerprint retrieval model training method as shown in fig. 8, when the program is run.
According to an embodiment of the present application, there is provided an electronic device including a memory and a processor, the processor being configured to execute a program stored in the memory, wherein the program, when run, performs a fingerprint retrieval method as shown in fig. 3 or a fingerprint retrieval model training method as shown in fig. 8.
According to an embodiment of the present application, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a fingerprint retrieval method as shown in fig. 3, or a fingerprint retrieval model training method as shown in fig. 8.
In the foregoing embodiments of the present application, each embodiment is described with its own emphasis; for any part not described in detail in one embodiment, reference may be made to the related descriptions of the other embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed technology may be implemented in other manners. The apparatus embodiments described above are merely exemplary; for example, the division into units may be a division by logical function, and another division may be used in an actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Furthermore, the couplings, direct couplings, or communication connections shown or discussed between components may be indirect couplings or communication connections through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, or the part thereof contributing to the related art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
The foregoing is merely a preferred embodiment of the present application. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also be regarded as falling within the scope of protection of the present application.

Claims (15)

1. A fingerprint retrieval method based on fingerprint fixed-length characterization, characterized by comprising the following steps:
Extracting a minutiae-aware fingerprint fixed-length representation of a target fingerprint and a minutiae center texture feature of the target fingerprint from an image containing the target fingerprint, wherein the minutiae-aware fingerprint fixed-length representation comprises global feature information of the target fingerprint and minutiae texture feature information in the target fingerprint, and the minutiae-aware fingerprint fixed-length representation is a fixed-length representation with translational and rotational invariance;
Determining a minutiae topological perception fingerprint fixed-length representation of the target fingerprint according to the minutiae central texture feature and the position information of the fingerprint minutiae in the target fingerprint, wherein the minutiae topological perception fingerprint fixed-length representation comprises the minutiae texture feature information and the minutiae topological feature information of the target fingerprint, and the minutiae topological perception fingerprint fixed-length representation is a fixed-length representation with translational invariance and rotational invariance;
Splicing the minutiae perception fingerprint fixed-length representation and the minutiae topology perception fingerprint fixed-length representation to obtain a target fingerprint fixed-length representation;
And searching a fingerprint set corresponding to the target fingerprint fixed-length representation, wherein the fingerprint set comprises a preset number of fingerprints to be matched, and the similarity between the fingerprint fixed-length representation of the fingerprints to be matched and the target fingerprint fixed-length representation meets preset requirements.
2. The fingerprint retrieval method based on a fixed length fingerprint characterization of claim 1, wherein extracting minutiae-aware fingerprint fixed length characterization of the target fingerprint and minutiae-centered texture features of the target fingerprint from an image containing the target fingerprint comprises:
extracting a plurality of feature maps from the image through a feature extraction module of a backbone neural network, wherein the feature extraction module comprises a plurality of sequentially connected feature extraction layers used for extracting the feature maps, and the plurality of feature maps comprise a first feature map, a second feature map, a third feature map, a fourth feature map and a fifth feature map ordered from shallow to deep by feature depth;
fusing the third feature map, the fourth feature map and the fifth feature map to obtain a fused feature map;
And extracting the minutiae-aware fingerprint fixed-length characterization of the target fingerprint and the minutiae center texture feature of the target fingerprint from the fusion feature map.
3. The fingerprint retrieval method based on fingerprint fixed length characterization of claim 2, wherein extracting minutiae-aware fingerprint fixed length characterization of the target fingerprint from the fused feature map comprises:
determining foreground vectors in the fused feature map;
and performing weighted aggregation processing on the foreground vectors in the fused feature map through a weighted average pooling module in the backbone neural network, thereby obtaining the minutiae-aware fingerprint fixed-length characterization.
4. The fingerprint retrieval method based on fixed length characterization of claim 2, wherein extracting minutiae center texture features of the target fingerprint from the fused feature map comprises:
determining a plurality of fingerprint minutiae image areas in the fusion feature map, wherein the fingerprint minutiae image areas are image areas containing fingerprint minutiae;
And respectively carrying out weighted aggregation processing on foreground vectors in each fingerprint minutiae image area in the plurality of fingerprint minutiae image areas through a weighted average pooling module in the backbone neural network, so as to obtain the minutiae center texture characteristics of each minutiae of the target fingerprint.
5. The fingerprint retrieval method based on fixed length characterization of claim 4, wherein determining a plurality of fingerprint minutiae image regions in the fused feature map comprises:
determining a plurality of fingerprint minutiae points in the fused feature map;
rotating the fusion feature map according to the direction of each fingerprint minutiae point for each fingerprint minutiae point in the plurality of fingerprint minutiae points until the included angle between each fingerprint minutiae point and a preset reference line is zero degrees;
And after the rotation is completed, extracting an image block containing each fingerprint minutiae from the fusion feature map, wherein the image block is a fingerprint minutiae image area corresponding to each fingerprint minutiae.
6. The fingerprint retrieval method based on fingerprint fixed length characterization of claim 1, wherein determining minutiae topology aware fixed length characterization of the target fingerprint from the minutiae center texture features and the location information of the fingerprint minutiae comprises:
determining all minutiae point sets in the image of the target fingerprint, and position information and direction information of each fingerprint minutiae point in the all minutiae point sets;
taking each fingerprint minutiae point in the all-minutiae point set in turn as a central minutiae point, and determining the neighboring minutiae points corresponding to the central minutiae point, wherein a neighboring minutiae point is a minutiae point in the all-minutiae point set whose distance from the central minutiae point is smaller than a preset distance;
Determining the topological characteristic of each fingerprint minutiae according to the position information and the direction information of each fingerprint minutiae and the corresponding adjacent minutiae, wherein the topological characteristic comprises the relative position and the angular relation between each fingerprint minutiae and the adjacent minutiae;
and determining minutiae topology perception fingerprint fixed-length characterization of the target fingerprint according to the topology characteristics and the minutiae center texture characteristics of each fingerprint minutiae.
7. The fingerprint retrieval method based on fixed length characterization of fingerprint according to claim 6, wherein determining the topological feature of each fingerprint minutiae based on the position information and the direction information of each fingerprint minutiae and the neighboring minutiae comprises:
rotating the target fingerprint image according to the direction information of each fingerprint minutiae until the included angle between the direction of the target fingerprint minutiae and a preset reference line is zero degrees;
after the target fingerprint image is rotated, determining the position information and the direction information of each fingerprint minutiae point and the corresponding adjacent minutiae point after rotation, and determining the edge characterization of the edge between each fingerprint minutiae point and each adjacent minutiae point according to the rotated position information and direction information;
And aggregating the edge representation corresponding to each fingerprint minutiae and adjacent minutiae information by using a minutiae topology encoder to obtain the topology characteristics of each fingerprint minutiae.
8. The fingerprint retrieval method based on fingerprint fixed length characterization of claim 6, wherein determining minutiae topologically perceived fingerprint fixed length characterization of the target fingerprint from the topological features and minutiae center textural features of each of the fingerprint minutiae comprises:
establishing a minutiae graph according to the topological feature and the minutiae center texture feature of each fingerprint minutiae point, wherein the minutiae graph comprises nodes and edges obtained by connecting the nodes, the nodes correspond one-to-one to the fingerprint minutiae points, and the nodes are used for reflecting the information of the topological feature and the minutiae center texture feature of the corresponding fingerprint minutiae points;
and processing the minutiae graph through a minutiae topology-aware aggregation model to obtain the minutiae topology-aware fingerprint fixed-length characterization, wherein the minutiae topology-aware aggregation model comprises a plurality of sequentially connected custom layers, a multi-layer perceptron and a pooling layer, and each custom layer in the plurality of custom layers comprises a convolution layer, a linear layer with an activation layer, and a batch normalization layer.
9. A fingerprint retrieval model training method, wherein the fingerprint retrieval model is used for executing the fingerprint retrieval method based on the fingerprint fixed length characterization according to any one of claims 1 to 8, and the method comprises the following steps:
performing first-stage training on the fingerprint retrieval model, wherein during the first-stage training only the backbone neural network in the fingerprint retrieval model is trained, and the backbone neural network and a weighted average pooling module in the fingerprint retrieval model are used for extracting the minutiae-aware fingerprint fixed-length characterization of the target fingerprint and the minutiae texture features of the target fingerprint from a target fingerprint image containing the target fingerprint;
and after the first-stage training is completed, performing second-stage training on the fingerprint retrieval model, wherein during the second-stage training only the minutiae topology encoder and the minutiae topology-aware aggregation model in the fingerprint retrieval model are trained, the minutiae topology encoder and the minutiae topology-aware aggregation model being used for determining the minutiae topology-aware fingerprint fixed-length characterization of the target fingerprint according to the minutiae texture feature information and the target fingerprint image.
10. The fingerprint retrieval model training method as recited in claim 9, wherein prior to the first stage training of the fingerprint retrieval model, the fingerprint retrieval model training method further comprises:
acquiring original training data, wherein the original training data comprises paired fingerprint data sets;
and determining minutiae matching relations in the paired fingerprints by adopting an extended group training method, and determining labels according to the minutiae matching relations, wherein the labels are used for determining positive samples and negative samples in the first-stage training.
11. The method of training a fingerprint retrieval model as recited in claim 9, wherein first stage training the fingerprint retrieval model comprises:
performing data preprocessing on an initial training image in initial training data to obtain a training image, wherein the data preprocessing includes rotation and translation;
and training the fingerprint retrieval model in the first stage by taking the training image as training data.
12. A fingerprint retrieval device based on fixed length characterization of a fingerprint, comprising:
The first processing module is used for extracting a minutiae perception fingerprint fixed-length representation of the target fingerprint and a minutiae center texture feature of the target fingerprint from an image containing the target fingerprint, wherein the minutiae perception fingerprint fixed-length representation comprises global feature information of the target fingerprint and minutiae texture feature information in the target fingerprint, and the minutiae perception fingerprint fixed-length representation is a fixed-length representation with translational invariance and rotational invariance;
The second processing module is used for determining a minutiae topological perception fingerprint fixed-length representation of the target fingerprint according to the minutiae center texture characteristics and the position information of the fingerprint minutiae in the target fingerprint, wherein the minutiae topological perception fingerprint fixed-length representation comprises the minutiae texture characteristic information and the minutiae topological characteristic information of the target fingerprint, and the minutiae topological perception fingerprint fixed-length representation is a fixed-length representation with translational invariance and rotational invariance;
The third processing module is used for splicing the minutiae perception fingerprint fixed-length representation and the minutiae topology perception fingerprint fixed-length representation to obtain a target fingerprint fixed-length representation;
and the fourth processing module is used for retrieving a fingerprint set corresponding to the target fingerprint fixed-length representation, wherein the fingerprint set comprises a preset number of fingerprints to be matched, and the similarity between the fingerprint fixed-length representation of the fingerprints to be matched and the target fingerprint fixed-length representation meets preset requirements.
13. A non-volatile storage medium, wherein a program is stored in the non-volatile storage medium, and wherein the program, when executed, controls a device in which the non-volatile storage medium is located to perform the fingerprint retrieval method based on the fixed length characterization of a fingerprint as claimed in any one of claims 1 to 8, or the fingerprint retrieval model training method as claimed in any one of claims 9 to 11.
14. An electronic device, comprising: a memory and a processor for executing a program stored in the memory, wherein the program is executed to perform the fingerprint retrieval method based on fingerprint fixed length characterization of any one of claims 1 to 8, or the fingerprint retrieval model training method of any one of claims 9 to 11.
15. A computer program product comprising a computer program which, when executed by a processor, implements the fingerprint retrieval method based on fingerprint fixed length characterization according to any one of claims 1 to 8, or the fingerprint retrieval model training method according to any one of claims 9 to 11.
CN202410395729.7A 2024-04-02 Fingerprint retrieval method and device based on fingerprint fixed-length characterization and electronic equipment Pending CN118227823A (en)

Publications (1)

Publication Number: CN118227823A, Publication Date: 2024-06-21

Similar Documents

Publication Publication Date Title
JP6966875B2 (en) Image search device and program
KR101531618B1 (en) Method and system for comparing images
Deng et al. Retinal fundus image registration via vascular structure graph matching
CN112801215B (en) Image processing model search, image processing method, image processing apparatus, and storage medium
US9626552B2 (en) Calculating facial image similarity
WO2018021942A2 (en) Facial recognition using an artificial neural network
WO2017199141A1 (en) Point cloud matching method
JP2022502751A (en) Face keypoint detection method, device, computer equipment and computer program
CN109711416B (en) Target identification method and device, computer equipment and storage medium
JP6997369B2 (en) Programs, ranging methods, and ranging devices
CN113762280A (en) Image category identification method, device and medium
CN110598715A (en) Image recognition method and device, computer equipment and readable storage medium
CN112907569A (en) Head image area segmentation method and device, electronic equipment and storage medium
Nousias et al. A saliency aware CNN-based 3D model simplification and compression framework for remote inspection of heritage sites
CN114298997B (en) Fake picture detection method, fake picture detection device and storage medium
Sghaier et al. Efficient machine-learning based 3d face identification system under large pose variation
CN113592015B (en) Method and device for positioning and training feature matching network
Gao et al. Occluded person re-identification based on feature fusion and sparse reconstruction
Rodríguez et al. Robust estimation of local affine maps and its applications to image matching
CN113704276A (en) Map updating method and device, electronic equipment and computer readable storage medium
CN111291611A (en) Pedestrian re-identification method and device based on Bayesian query expansion
CN111339342A (en) Three-dimensional model retrieval method based on angle ternary center loss
CN111126436A (en) Visual matching method and device
Molnár et al. ToFNest: Efficient normal estimation for time-of-flight depth cameras
CN118227823A (en) Fingerprint retrieval method and device based on fingerprint fixed-length characterization and electronic equipment

Legal Events

PB01: Publication