CN113469143A - Finger vein image identification method based on neural network learning - Google Patents

Finger vein image identification method based on neural network learning

Publication number: CN113469143A
Authority: CN (China)
Prior art keywords: finger vein, vein image, neural network, image, finger
Legal status: Pending
Application number: CN202110935812.5A
Other languages: Chinese (zh)
Inventors: 周颖玥, 王欣宇, 李佳阳, 雷露露, 赵家琦, 孙蕾
Current Assignee: Southwest University of Science and Technology
Original Assignee: Southwest University of Science and Technology
Application filed by Southwest University of Science and Technology on 2021-08-16 (priority date 2021-08-16)
Publication date: 2021-10-01

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods


Abstract

The invention discloses a finger vein image recognition method based on neural network learning, which comprises the following steps: S1: acquiring and preprocessing a finger vein image; S2: extracting finger vein image features according to S1; S3: performing feature matching and recognition according to S2; S4: performing experiments according to S3 and analyzing the results. The method solves the problem in the prior art that contrastive loss and triplet loss make insufficient use of the training samples, which leads to poor model performance, and, by integrating a judgment of finger vein image quality into the system, it also solves the problem of low recognition accuracy caused by unqualified image quality.

Description

Finger vein image identification method based on neural network learning
Technical Field
The invention relates to the field of finger vein image recognition, in particular to a finger vein image recognition method based on neural network learning.
Background
Compared with traditional identity-authentication credentials such as passwords, keys and identity cards, human biometric features have the advantages of always being carried, being available anytime and anywhere, and being difficult to forget or lose, so fields such as finance, security, daily attendance management and personal property custody increasingly favor biometric features for personal identification and verification. Compared with external features such as fingerprints and faces, the finger vein belongs to the internal structure of the human body and is not easily affected by the external environment; moreover, vein recognition is a form of living-body recognition and is difficult to forge, so its security level is higher. Meanwhile, because the distribution of finger vein patterns is random, it differs between individuals, and even twins have different vein-distribution characteristics, which lays the foundation for using finger veins for identity recognition.
The main factors influencing the accuracy of a finger vein recognition system are the quality of the collected vein images and whether the extraction of vein image features is effective; both determine whether the features in a finger vein image can be fully extracted and expressed. Traditional vein-image feature extraction mainly relies on manually designed algorithms, for example: extracting characteristic features of the vascular structure (line shape, curvature, minutiae, etc.) to represent the vein pattern; using single- or multi-dimensional principal component analysis to find a low-dimensional representation of the vein image or of the extracted features, thereby effectively reducing the dimensionality of the feature vector; expressing image features with global or local statistical information of the vein image, with the Local Binary Pattern (LBP) as a typical representative; in addition, the Scale-Invariant Feature Transform (SIFT) technique widely used in computer vision has also been applied to vein feature expression.
Thanks to the development of neural network technology, many computer vision problems in recent years have been solved by adaptively capturing the effective features of target images with neural network models, and scholars have applied such models, typically convolutional neural networks, to the finger vein recognition problem. For example, Tao et al. proposed the Im-AlexNet model on the basis of AlexNet for finger vein recognition, effectively reducing the model parameters and improving the recognition accuracy. However, because the output nodes of a classification network are fixed, such a recognition system can only identify a limited number of people. Tang and Xie applied metric learning to finger vein recognition: Tang used a pre-trained model as a teacher network and a lightweight network as a student network, combined the two into a Siamese structure, and constructed a Contrastive Loss to quantify the distance between sample pairs, so that the trained network performed well; Xie studied the factors influencing the network recognition rate and obtained an optimal hash model through Triplet Loss training, achieving good finger vein recognition results. Although metric learning avoids the fixed structure of a classification network, contrastive loss and triplet loss do not make sufficient use of the training samples, so the resulting models are not effective enough.
To overcome the shortcomings of these methods, the invention, based on a metric learning approach, proposes training a convolutional neural network with a Smooth Average Precision (Smooth-AP) loss function, which effectively improves the network's performance in recognizing finger vein images. On this basis, the following recognition system is constructed: during identity registration, the trained convolutional neural network converts the input finger vein image into a corresponding feature vector, which is stored, thereby completing the construction of the personal identity database; during identification, the same network extracts the features of the finger vein image to be identified, and these features are matched against the existing features in the personal identity database to obtain the identity of the person to be identified. In addition, to ensure that the quality of the finger vein image at the network input meets certain requirements, an image quality judgment module based on a neural network is added after image acquisition, further enhancing the stability and reliability of the whole finger vein recognition system.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a finger vein image recognition method based on neural network learning. It solves the problem that contrastive loss and triplet loss in the prior art make insufficient use of the training samples, which leads to poor model performance, and, by integrating a judgment of finger vein image quality into the system, it also solves the problem of low recognition accuracy caused by unqualified image quality.
The technical scheme of the invention is as follows: a finger vein image recognition method based on neural network learning comprises the following steps:
S1: acquiring and preprocessing a finger vein image;
S2: extracting finger vein image features according to S1;
S3: performing feature matching and recognition according to S2;
S4: performing experiments according to S3 and analyzing the results.
Preferably, S1 includes the following sub-steps:
S11: extracting the ROI of the finger vein image;
S12: judging the quality of the finger vein image with a MobileNet-V2 network according to the ROI of the finger vein image.
Preferably, S2 includes the following sub-steps:
S21: extracting finger vein image features based on a ResNet-50 network model;
S22: selecting a loss function;
S23: training the network according to the loss function;
S24: proceeding to S3.
Preferably, S3 includes the following sub-steps:
S31: computing the cosine similarity between the obtained feature vector and each feature vector in the registration database one by one;
S32: selecting the category of the feature vector with the maximum cosine similarity as the matching result.
Preferably, S4 includes the following sub-steps:
S41: selecting a suitable experimental environment, experimental data and evaluation indexes;
S42: testing the performance of the finger vein image quality judgment module and performing a comparison test on the loss functions;
S43: performing a comparison test to verify the effectiveness of combining the quality judgment algorithm with the feature extraction algorithm;
S44: integrating and testing the system.
The finger vein image identification method based on neural network learning has the following beneficial effects:
1. Based on a metric learning approach, the invention proposes training a convolutional neural network with a Smooth Average Precision (Smooth-AP) loss function, effectively improving the network's performance in recognizing finger vein images.
2. To ensure that the quality of the finger vein image at the network input meets certain requirements, the invention adds a neural-network-based image quality judgment module after image acquisition, further enhancing the stability and reliability of the whole finger vein recognition system.
Drawings
FIG. 1 is a flow chart of ROI extraction from a finger vein image;
FIG. 2 is a schematic diagram of a MobileNet-V2 network structure;
FIG. 3 is a diagram illustrating the extraction of finger vein features by the ResNet-50 network;
FIG. 4 is an overall architecture diagram of the finger vein recognition system;
FIG. 5 is a pictorial view of a finger vein recognition system;
FIG. 6(a) is a ROC curve evaluated on the SDUMLA test set;
FIG. 6(b) is a ROC curve evaluated on the HKPolyU test set;
FIG. 7 is an equal error rate comparison diagram of feature extraction with and without quality evaluation.
Detailed Description
The following description of the embodiments of the invention is provided to facilitate understanding of the invention by those skilled in the art, but it should be understood that the invention is not limited to the scope of these embodiments. For those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined by the appended claims, and everything produced using the inventive concept is protected.
Finger vein image acquisition and preprocessing
Finger vein image imaging principle
Arteries and veins in the finger run alongside each other; hemoglobin in arterial blood carries oxygen molecules, whereas hemoglobin in venous blood does not. Vein images are collected by exploiting the fact that deoxygenated hemoglobin in venous blood absorbs near-infrared light (wavelength range 690-980 nm) while the other tissues in the finger do not, so the dark stripes appearing in the collected images are the finger vein patterns.
Existing finger vein image acquisition modes are divided into transmission, reflection and side-lighting types. In transmission acquisition, the near-infrared light source and the image sensor are located on opposite sides of the finger, so the imaging device can be enclosed and interference from ambient light is avoided; moreover, the near-infrared light passes vertically through the finger from one side, with little reflection and scattering, so the transmission mode captures more finger vein information than the other two modes.
Finger vein image preprocessing
The finger vein image obtained by the acquisition device contains background information other than the finger, which is useless for finger vein recognition, so the finger region in the image, i.e. the region of interest (ROI), must first be extracted. Meanwhile, improper finger pressure or placement during acquisition may cause unclear vein lines or images whose overall gray level is too dark or too bright. To screen out finger vein images of poor quality and provide a reliable image source for subsequent recognition, an image quality judgment module is added after image acquisition.
Finger vein image ROI extraction
The finger and the background are clearly distinguished at the finger edge. A finger edge template is generated with the Sobel edge detection operator; the lowest point of the upper finger edge and the highest point of the lower finger edge are taken from the template to obtain two line segments parallel to the long side of the picture, and the region between these segments is cropped from the original picture to obtain the finger vein ROI. The specific flow is shown in FIG. 1.
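As an illustration only, the ROI-extraction flow of FIG. 1 could be sketched with OpenCV roughly as follows; the Sobel kernel size, the Otsu thresholding and the upper/lower-half split are assumptions of this sketch, not details taken from the patent.

```python
# Illustrative OpenCV sketch of the ROI-extraction flow in FIG. 1 (not the patented
# implementation): build a Sobel edge template, take the lowest point of the upper
# finger edge and the highest point of the lower edge, and crop the rows in between.
import cv2
import numpy as np

def extract_roi(img_gray: np.ndarray) -> np.ndarray:
    h, w = img_gray.shape
    # Vertical gradient highlights the roughly horizontal finger edges.
    sobel_y = np.abs(cv2.Sobel(img_gray, cv2.CV_64F, 0, 1, ksize=3))
    edge = (sobel_y / (sobel_y.max() + 1e-8) * 255).astype(np.uint8)
    _, template = cv2.threshold(edge, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    rows_per_col = [np.flatnonzero(template[:, x]) for x in range(w)]
    upper = [r[r < h // 2] for r in rows_per_col]    # upper-edge candidates
    lower = [r[r >= h // 2] for r in rows_per_col]   # lower-edge candidates
    top = max((r.max() for r in upper if r.size), default=0)      # lowest point of upper edge
    bottom = min((r.min() for r in lower if r.size), default=h)   # highest point of lower edge
    return img_gray[top:bottom, :]                   # region between the two horizontal lines
```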
Finger vein image quality determination
Among the existing finger vein image quality evaluation methods, those that extract and fuse manually designed image quality features and those that count vein branch points along the vein lines both have certain shortcomings: manually designing a quality-feature extraction algorithm requires experimental work such as feature combination and screening of effective features, the robustness of such algorithms is limited, and vein branch points in the vein lines are difficult to detect accurately in practice. Here, the quality-discriminating features of the target image are captured adaptively by training a convolutional neural network (CNN); the network chosen is MobileNet-V2. The reasons are as follows: the network uses separable convolutions to reduce the number of parameters to train, so its forward propagation is faster than networks of the same level, and its inverted residual modules accelerate model convergence, which reduces the time needed to judge image quality. By setting the classification layer to 2 output probabilities, the network distinguishes image quality as high or low; the network structure is shown in FIG. 2.
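A minimal sketch of such a two-class quality classifier, assuming the torchvision implementation of MobileNet-V2; only the 2-way output layer comes from the text above, and the remaining details (no pretraining, input size) are assumptions of the sketch.

```python
# A minimal sketch (assuming torchvision's MobileNet-V2) of the two-class quality classifier.
import torch
import torch.nn as nn
from torchvision import models

def build_quality_net() -> nn.Module:
    net = models.mobilenet_v2()                                        # separable convs + inverted residuals
    net.classifier[1] = nn.Linear(net.classifier[1].in_features, 2)    # high quality / low quality
    return net

# Usage: probabilities over {high quality, low quality} for a batch of ROI images.
quality_net = build_quality_net().eval()
with torch.no_grad():
    probs = torch.softmax(quality_net(torch.randn(4, 3, 224, 224)), dim=1)
```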
Finger vein image feature extraction
ResNet-50 model-based feature extraction network
To select an effective neural network model for the task of extracting finger vein image features, several typical convolutional neural networks, such as VGGNet, Inception and ResNet, were compared, and the ResNet-50 model was finally chosen. ResNet-50 belongs to the ResNet family: the input information is passed directly to the output through shortcut connections, which preserves the integrity of the information, so the network only needs to learn the residual between input and output, simplifying the learning objective and its difficulty.
To extract a feature vector capable of representing the vein image, the number of nodes in the last fully connected layer of ResNet-50 is set to the dimension N of the feature vector (for example, N = 512), so that the network outputs an N-dimensional feature vector; normalization is applied to this layer to avoid gradient vanishing and gradient explosion during training, and ResNet-50 is then trained with the Smooth-AP-based loss function. The specific network structure is shown in FIG. 3.
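A minimal sketch of the embedding network described above, assuming torchvision's ResNet-50 with its last fully connected layer replaced by an N-dimensional projection (N = 512) followed by L2 normalization; the input size and the lack of pretraining are assumptions.

```python
# A minimal sketch (assuming torchvision's ResNet-50) of the feature extraction network in FIG. 3.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class VeinEmbeddingNet(nn.Module):
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.backbone = models.resnet50()
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, embed_dim)  # 2048 -> N

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = self.backbone(x)
        return F.normalize(v, p=2, dim=1)      # unit-norm N-dimensional feature vector

# Usage: one 512-dimensional feature vector per input ROI image.
model = VeinEmbeddingNet().eval()
with torch.no_grad():
    features = model(torch.randn(8, 3, 224, 224))    # shape (8, 512)
```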
Selection of a loss function
The types of tasks a neural network can handle, and how well it handles them, depend not only on the network structure; the design of the loss function is also critical, and the processing accuracy of a neural network on the same task differs with different loss functions. To improve the feature extraction capability of the ResNet-50 network, and inspired by the literature, a loss function based on Smooth Average Precision (Smooth-AP) is constructed by means of the cosine similarity between vectors, calculated as follows:
Let any image in the finger vein image training set be $I_q$, $q \in \{1, 2, \dots, n\}$, where $n$ is the total number of samples in the training set, and let the feature extracted from this sample by the network be $v_q$. The feature set corresponding to the remaining finger vein images in the training set is $V_A = \{v_i,\ i = 1, 2, \dots, n,\ i \neq q\}$; the subset of $V_A$ consisting of vectors of the same class as $v_q$ is denoted $V_S$, the subset of vectors of different classes is denoted $V_D$, and $m$ is the number of samples in $V_S$, so that $V_A = V_S \cup V_D$. The cosine similarity between $v_q$ and $v_i$ is denoted $s_i$, and all $s_i$ form the similarity set $S_A$, namely:

$$S_A = \{\, s_i \mid v_i \in V_A \,\} \qquad (1)$$

If the similarity set obtained from $v_q$ and $V_S$ is denoted $S_S$, and the similarity set obtained from $v_q$ and $V_D$ is denoted $S_D$, then $S_A = S_S \cup S_D$. Obviously, the larger $s_i$ is, the more similar $v_q$ and $v_i$ are, and vice versa. Sorting all $v_i$ in descending order of $s_i$ gives $V'_A$; accordingly, $V_S$ and $V_D$ become $V'_S$ and $V'_D$. At the start of training, the feature extraction capability of the network is weak, so the similarity between the feature vector output by the network and some different-class samples may be greater than the similarity between features of same-class samples, i.e. there exist $s_j \in S_D$ and $s_i \in S_S$ with $s_j > s_i$. After training, the feature extraction capability of the network improves: the extracted feature becomes more similar to the features of same-class samples and more distinct from those of different-class samples, so the ordering of the feature vectors changes and the same-class samples move to the front of $V'_A$.
First, the Average Precision (AP) of $V'_A$ is defined as follows:

$$AP_q = \frac{1}{m} \sum_{i \in V'_S} \frac{R(i, V'_S)}{R(i, V'_A)} \qquad (2)$$

where $R(i, V'_S)$ and $R(i, V'_A)$ are the ranks of $v_i$ in $V'_S$ and $V'_A$ respectively, which can be calculated by:

$$R(i, V'_S) = 1 + \sum_{s_j \in S_S,\ j \neq i} \mathcal{I}\{s_j - s_i\} \qquad (3)$$

$$R(i, V'_A) = 1 + \sum_{s_j \in S_A,\ j \neq i} \mathcal{I}\{s_j - s_i\} \qquad (4)$$

where $\mathcal{I}\{\cdot\}$ is the unit step function, i.e.:

$$\mathcal{I}\{x\} = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases} \qquad (5)$$
Obviously, the stronger the feature extraction capability of the neural network on finger vein images, the higher the same-class samples of $v_q$ are ranked, and the larger the value of $AP_q$.
To construct a loss function from the AP formula while avoiding the blocking effect of the unit step function on back-propagation during network training, a smooth Sigmoid function is used to approximate the unit step function, i.e. $\mathcal{I}\{\cdot\}$ is replaced by:

$$\mathcal{G}(x; \tau) = \frac{1}{1 + e^{-x/\tau}} \qquad (6)$$

where $\tau$ is a temperature parameter that controls how closely the Sigmoid approximates the step function. Substituting formula (6), formula (3) and formula (4) into formula (2) gives the approximate expression of $AP_q$, denoted $\mathrm{SmoothAP}_q$:

$$\mathrm{SmoothAP}_q = \frac{1}{m} \sum_{i \in S_S} \frac{1 + \sum_{s_j \in S_S,\ j \neq i} \mathcal{G}(s_j - s_i; \tau)}{1 + \sum_{s_j \in S_A,\ j \neq i} \mathcal{G}(s_j - s_i; \tau)} \qquad (7)$$
Based on $\mathrm{SmoothAP}_q$, the loss function for training the neural network model is defined as:

$$\mathcal{L} = \frac{1}{n} \sum_{q=1}^{n} \bigl( 1 - \mathrm{SmoothAP}_q \bigr) \qquad (8)$$

When the network is trained until the loss function converges to 0, the value of $\mathrm{SmoothAP}_q$ approaches 1, meaning that the feature vector $v_q$ extracted by the ResNet-50 network is more consistent with the feature vectors of its own class; in other words, the feature extraction capability of the network is stronger.
Feature matching
The image to be identified is passed through ROI extraction and input into the trained ResNet-50 network to obtain its feature vector $v_q$; the cosine similarity $S_{cos}$ between $v_q$ and each feature vector $v_i$ in the registration database is then computed one by one:

$$S_{cos} = \frac{v_q \cdot v_i}{\|v_q\|\,\|v_i\|} \qquad (9)$$

The larger $S_{cos}$ is, the more similar the two vectors are; the class of the $v_i$ with the maximum $S_{cos}$ is taken as the matching result.
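A minimal matching sketch under the assumption that enrolled features are kept as vectors in a dictionary keyed by identity label; the function and variable names are illustrative only.

```python
# Illustrative nearest-neighbor matching by cosine similarity, formula (9).
import numpy as np

def match_identity(query: np.ndarray, gallery: dict) -> tuple:
    """Return the enrolled identity whose feature has maximum cosine similarity with the query."""
    q = query / np.linalg.norm(query)
    best_label, best_sim = None, -1.0
    for label, feat in gallery.items():
        sim = float(q @ (feat / np.linalg.norm(feat)))
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label, best_sim
```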
Overall framework of the finger vein recognition system
Combining the above steps, the finger vein recognition system shown in FIG. 4 is constructed. The four modules in the figure are: finger vein image acquisition, finger vein image preprocessing, finger vein image feature extraction and finger vein image feature matching. The modules cooperate as follows: images are acquired with an independently designed transmission-type vein image acquisition device and screened by the image quality judgment network, and only finger vein images of better quality are kept; the ROI is then extracted from each finger vein image. The feature extraction network is trained on a public finger vein data set, the features of the ROI image are extracted with this network, and the class label is finally obtained through feature matching.
The finger vein recognition system is used in a registration process and a recognition process. The registration process comprises finger vein image acquisition, preprocessing and feature extraction; the recognition process comprises all steps of the registration stage plus a finger vein image feature matching step. During identity registration, the trained ResNet-50 converts the input finger vein image into a corresponding feature vector, which is stored, completing the construction of the personal identity database. During identification, the same network extracts the features of the finger vein image to be identified, and these features are matched against the existing features in the personal identity database to obtain the identity of the person to be identified.
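Putting the pieces together, the registration and identification flow might be sketched as below, reusing the hypothetical helpers from the earlier sketches (extract_roi, quality_net, model, match_identity); the preprocessing steps, input size and the meaning of the quality output index are assumptions, not details from the patent.

```python
# A schematic sketch of the registration / identification flow of FIG. 4.
import cv2
import numpy as np
import torch

gallery = {}   # personal identity database: label -> 512-d feature vector

def to_feature(img_gray):
    """ROI extraction -> quality check -> normalized feature; None if the image is rejected."""
    roi = extract_roi(img_gray)
    x = cv2.resize(roi, (224, 224)).astype(np.float32) / 255.0
    x = torch.from_numpy(x).unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)   # 1 x 3 x 224 x 224
    with torch.no_grad():
        quality = torch.softmax(quality_net(x), dim=1)[0]
        if quality[1] > 0.5:             # assumed convention: index 1 = "low quality"
            return None
        return model(x)[0].numpy()       # 512-d unit-norm feature vector

def enroll(label, img_gray):
    feat = to_feature(img_gray)
    if feat is not None:
        gallery[label] = feat
    return feat is not None

def identify(img_gray):
    feat = to_feature(img_gray)
    return None if feat is None else match_identity(feat, gallery)
```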
The complete finger vein image recognition system thus constructed is shown in FIG. 5.
Experiment and result analysis
Experimental environment, experimental data and evaluation index
Testing the finger vein image recognition algorithm proposed herein involves training and debugging the quality judgment model MobileNet-V2 and the feature extraction model ResNet-50. The following computer configuration is used: an Intel i5-4200H CPU with a main frequency of 2.80 GHz, a 64-bit Windows system, an NVIDIA GeForce GTX 950M graphics card and 8 GB of RAM. Python 3.7 is adopted as the programming language and PyTorch 1.6 is selected as the deep learning platform.
The experimental data sets selected herein are the public finger vein data set of Shandong University (SDUMLA) and the finger vein data set of Hong Kong Polytechnic University (HKPolyU). SDUMLA contains vein images of 636 fingers from 106 volunteers: the index, middle and ring fingers of both hands were collected, with 6 images of 320 × 240 resolution per finger, for a total of 636 classes. HKPolyU contains finger vein images of 156 volunteers, of whom 51 were collected only once and 105 were collected twice at intervals of 1 to 6 months; each session collected the index and middle fingers of the left hand, with 6 pictures of 513 × 256 resolution per finger. The experiments use the first collected image set of all 156 volunteers, for a total of 312 classes.
The model is verified with the most commonly used classification and recognition evaluation indexes, including Accuracy, the Receiver Operating Characteristic (ROC) curve and the Equal Error Rate (EER). Accuracy is defined as:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (10)$$

where TP is the number of same-class finger vein images that are matched successfully, FN the number of same-class finger vein images that fail to match, FP the number of different-class finger vein images that are matched successfully, and TN the number of different-class finger vein images that fail to match.

Setting different classification thresholds yields corresponding False Positive Rates (FPR) and True Positive Rates (TPR); the ROC curve is fitted with FPR as the abscissa and TPR as the ordinate. The larger the area enclosed by the ROC curve, the better the model performance. FPR and TPR are defined as:

$$FPR = \frac{FP}{FP + TN} \qquad (11)$$

$$TPR = \frac{TP}{TP + FN} \qquad (12)$$

EER is the value at which the FPR equals the False Negative Rate (FNR); FNR can be calculated from the TPR through the relationship:

$$FNR = 1 - TPR \qquad (13)$$

Therefore, in a rectangular coordinate system, the abscissa of the point where the straight line through (0, 1) and (1, 0) intersects the ROC curve is the EER value; the smaller the EER, the better the model performance.
Experimental testing
Finger vein image quality judgment module performance test
In this test, finger vein image samples of high and low quality are first selected from the SDUMLA and HKPolyU data sets to construct labeled quality judgment data sets, denoted SDUMLA-QD and HKPolyU-QD. However, the numbers of high-quality and low-quality images are imbalanced, and the low-quality images are too few for network training. The number of low-quality images is therefore augmented with the Synthetic Minority Oversampling Technique (SMOTE) until it is similar to the number of high-quality images. The output node of the MobileNet-V2 model is then set to 2, cross entropy is adopted as the loss function, and the model is trained with the Adam optimizer. 70% of the images in SDUMLA-QD and HKPolyU-QD are used as the training set and 30% as the test set.
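A hedged sketch of the class-balancing step, assuming SMOTE from the imbalanced-learn package applied to flattened ROI images; the patent does not state in which space SMOTE is applied, so this is only one plausible reading.

```python
# Illustrative SMOTE balancing of the quality-judgment data set (one plausible reading).
import numpy as np
from imblearn.over_sampling import SMOTE

def balance_quality_set(images: np.ndarray, labels: np.ndarray):
    """images: (num, H, W) grayscale ROIs; labels: 0 = high quality, 1 = low quality."""
    n, h, w = images.shape
    flat = images.reshape(n, -1)
    flat_bal, labels_bal = SMOTE(random_state=0).fit_resample(flat, labels)
    return flat_bal.reshape(-1, h, w), labels_bal
```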
Whether trained on the SDUMLA-QD or the HKPolyU-QD data set, MobileNet-V2 reaches a recognition rate of more than 92% for high- and low-quality finger vein images. The comparison between image quality judgment methods is shown in Table 1. The data show that the MobileNet-V2-based finger vein image quality judgment algorithm has an advantage in judging vein image quality.
TABLE 1. Comparison of recognition accuracy of different image quality judgment algorithms on the SDUMLA-QD and HKPolyU-QD test sets (the table is reproduced as an image in the original publication and its contents are not recoverable here).
Loss function comparison test
To verify the effectiveness of the Smooth-AP-based loss function used in the feature extraction network ResNet-50 of this patent, its effect is compared with that of two other loss functions applied to network training: the Contrastive Loss (CL) and the Triplet Loss (TL). The SDUMLA and HKPolyU data sets are divided into training and test sets at a ratio of 7:3. The three loss functions are used to train on the SDUMLA and HKPolyU training sets respectively, and the ROC curves and EER values are then compared on the test sets. ResNet-50 is first trained on the SDUMLA data set, and the results evaluated on the SDUMLA and HKPolyU test sets are shown in FIG. 6. It can be seen that the ROC curve of the network trained with the Smooth-AP loss encloses the ROC curves obtained with the other two losses, and its EER value is the smallest of the three. This shows that the network trained with the Smooth-AP loss function extracts finger vein image features better. When evaluated on the HKPolyU data set, the EER values of all losses increase, but the model trained with the Smooth-AP loss increases the least, indicating that the network trained with the Smooth-AP loss function can perform cross-database recognition and transfers better than the networks trained with the other two losses.
Training ResNet-50 on the HKPolyU data set and evaluating it on the HKPolyU and SDUMLA test sets leads to the same conclusion.
Effect of the quality judgment algorithm on recognition performance
To verify the effectiveness of combining the quality judgment algorithm with the feature extraction algorithm, the EER values on SDUMLA and HKPolyU without and with quality screening are compared, as shown in FIG. 7. Q+S denotes a recognition system that uses both the quality judgment algorithm and the feature extraction algorithm, and S denotes a system that uses only the feature extraction algorithm; SDtrain and SDval denote model training and validation on the SDUMLA data set, and HKtrain and HKval denote model training and validation on the HKPolyU data set. It can be seen that after quality evaluation the EER values obtained on both the SDUMLA and HKPolyU data sets decrease, which shows that evaluating and screening image quality first, and only then extracting and matching features, helps improve the performance of the recognition system.
Comparison test of different finger vein image recognition algorithms
To test the performance of the finger vein image recognition algorithm proposed herein, four typical methods from the literature are selected for comparison experiments. The four recognition algorithms and the algorithm herein are used to recognize the images in the SDUMLA and HKPolyU data sets, with the results shown in Table 2. In terms of the performance indexes, the recognition algorithm proposed herein achieves the best performance on both the SDUMLA and HKPolyU libraries.
TABLE 2. Comparison of different identification methods (the table is reproduced as an image in the original publication and its contents are not recoverable here).
System integration and testing
The designed finger vein image recognition method is integrated into a complete system, and the system performance is tested on the self-built data set SWUST-FV. SWUST-FV was collected with team students as subjects: the index, middle and ring fingers of both hands of 20 volunteers, with 6 images of 640 × 480 resolution per finger, for a total of 120 classes. The specific test procedure is as follows: each finger in the data set is regarded as one class, each class of images is divided into three equal parts, two parts form the enrolled-person image library and the remaining part forms the image library to be identified. The three parts are rotated in turn to form the enrolled-person image library and the image library to be identified, and after feature extraction and feature matching the average recognition rate reaches 99.30%.

Claims (5)

1. A finger vein image recognition method based on neural network learning is characterized by comprising the following steps:
S1: acquiring and preprocessing a finger vein image;
S2: extracting finger vein image features according to S1;
S3: performing feature matching and recognition according to S2;
S4: performing experiments according to S3 and analyzing the results.
2. The method for recognizing finger vein images based on neural network learning according to claim 1, wherein said S1 comprises the following sub-steps:
S11: extracting the ROI of the finger vein image;
S12: judging the quality of the finger vein image with a MobileNet-V2 network according to the ROI of the finger vein image.
3. The method for recognizing finger vein images based on neural network learning according to claim 1, wherein said S2 comprises the following sub-steps:
S21: extracting finger vein image features based on a ResNet-50 network model;
S22: selecting a loss function;
S23: training the network according to the loss function;
S24: proceeding to S3.
4. The method for recognizing finger vein images based on neural network learning according to claim 1, wherein said S3 comprises the following sub-steps:
S31: computing the cosine similarity between the obtained feature vector and each feature vector in the registration database one by one;
S32: selecting the category of the feature vector with the maximum cosine similarity as the matching result.
5. The method for recognizing finger vein images based on neural network learning according to claim 1, wherein said S4 comprises the following sub-steps:
S41: selecting a suitable experimental environment, experimental data and evaluation indexes;
S42: testing the performance of the finger vein image quality judgment module and performing a comparison test on the loss functions;
S43: performing a comparison test to verify the effectiveness of combining the quality judgment algorithm with the feature extraction algorithm;
S44: integrating and testing the system.
CN202110935812.5A 2021-08-16 2021-08-16 Finger vein image identification method based on neural network learning Pending CN113469143A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110935812.5A CN113469143A (en) 2021-08-16 2021-08-16 Finger vein image identification method based on neural network learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110935812.5A CN113469143A (en) 2021-08-16 2021-08-16 Finger vein image identification method based on neural network learning

Publications (1)

Publication Number Publication Date
CN113469143A true CN113469143A (en) 2021-10-01

Family

ID=77866627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110935812.5A Pending CN113469143A (en) 2021-08-16 2021-08-16 Finger vein image identification method based on neural network learning

Country Status (1)

Country Link
CN (1) CN113469143A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107944458A (en) * 2017-12-08 2018-04-20 北京维大成科技有限公司 A kind of image-recognizing method and device based on convolutional neural networks
US20200134383A1 (en) * 2018-10-29 2020-04-30 Samsung Electronics Co., Ltd. Generative model training and image generation apparatus and method
CN109961089A (en) * 2019-02-26 2019-07-02 中山大学 Small sample and zero sample image classification method based on metric learning and meta learning
CN110390282A (en) * 2019-07-12 2019-10-29 西安格威西联科技有限公司 A kind of finger vein identification method and system based on the loss of cosine center
CN110674850A (en) * 2019-09-03 2020-01-10 武汉大学 Image description generation method based on attention mechanism

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
ANDREW BROWN et al.: "Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval", 《HTTPS://BROWSE.ARXIV.ORG/PDF/2007.12163.PDF》, 8 September 2020 (2020-09-08), pages 1 - 28 *
BORUI HOU et al.: "ArcVein-Arccosine Center Loss for Finger Vein Verification", 《IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT》, vol. 70, pages 1 - 11, XP011843909, DOI: 10.1109/TIM.2021.3062164 *
张娜 (Zhang Na): "Finger vein recognition method based on deep residual network and discrete hashing", 《Journal of Zhejiang Sci-Tech University》, vol. 43, no. 4, pages 549 - 556 *
杨柯楠 (Yang Kenan): "Research on finger vein recognition based on deep learning", 《China Master's Theses Full-text Database》, no. 5, pages 138 - 1176 *
王欣宇 (Wang Xinyu): "Research on the application of convolutional neural networks in finger vein recognition", 《China Master's Theses Full-text Database, Basic Sciences》, no. 1, pages 006 - 604 *
王欣宇 (Wang Xinyu) et al.: "Finger vein recognition based on quality evaluation and feature extraction network", 《Computer Simulation》, vol. 40, no. 7, 31 July 2023 (2023-07-31), pages 440 - 446 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114973343A (en) * 2022-03-03 2022-08-30 湖南中科助英智能科技研究院有限公司 Vein identification method and device based on Gabor-Siamese network
CN115063845A (en) * 2022-06-20 2022-09-16 华南理工大学 Finger vein identification method based on lightweight network and deep hash
CN115063845B (en) * 2022-06-20 2024-05-28 华南理工大学 Finger vein recognition method based on lightweight network and deep hash
CN116386091A (en) * 2022-11-18 2023-07-04 荣耀终端有限公司 Fingerprint identification method and device
CN116386091B (en) * 2022-11-18 2024-04-02 荣耀终端有限公司 Fingerprint identification method and device
CN115944293A (en) * 2023-03-15 2023-04-11 汶上县人民医院 Neural network-based hemoglobin level prediction system for kidney dialysis
CN115944293B (en) * 2023-03-15 2023-05-16 汶上县人民医院 Neural network-based hemoglobin level prediction system for kidney dialysis


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination