CN114743278A - Finger vein identification method based on a generative adversarial network and a convolutional neural network - Google Patents

Finger vein identification method based on a generative adversarial network and a convolutional neural network

Info

Publication number
CN114743278A
CN114743278A (application CN202210458700.XA)
Authority
CN
China
Prior art keywords
finger vein
network
image
vein image
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210458700.XA
Other languages
Chinese (zh)
Inventor
介婧
陈羽川
郑慧
张淼
武晓莉
李津蓉
张以涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lover Health Science and Technology Development Co Ltd
Original Assignee
Zhejiang Lover Health Science and Technology Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lover Health Science and Technology Development Co Ltd filed Critical Zhejiang Lover Health Science and Technology Development Co Ltd
Priority to CN202210458700.XA priority Critical patent/CN114743278A/en
Publication of CN114743278A publication Critical patent/CN114743278A/en
Pending legal-status Critical Current

Classifications

    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/253 Fusion techniques of extracted features
    • G06N3/045 Combinations of networks
    • G06N3/048 Activation functions
    • G06N3/084 Backpropagation, e.g. using gradient descent


Abstract

The invention relates to a finger vein identification method based on a generative adversarial network and a convolutional neural network, comprising the following steps: acquire finger vein images of the subject to be identified and perform edge detection on the acquired images; obtain a finger vein foreground region free of background information through image morphological processing; apply gray-scale transformation normalization; amplify the image data of the acquired finger vein data set; construct a deep convolutional generative adversarial network model, input the amplified data set, and generate a sufficient number of finger vein images; construct a BP-AdaBoost network to judge finger vein image quality by combining several parameters; divide the quality-qualified finger vein data set into a training set and a test set at a fixed ratio; construct an improved two-channel VGG network and train it to obtain a finger vein classification network; and output the classification result on the test set. The method accelerates network convergence and effectively improves the accuracy of finger vein identification.

Description

Finger vein identification method based on a generative adversarial network and a convolutional neural network
Technical Field
The invention belongs to the technical field of image processing and biometric feature recognition, and particularly relates to a finger vein identification method based on a generative adversarial network and a convolutional neural network.
Background
Finger veins are effective living biometric information. Unlike biometric features such as fingerprints, palm prints, irises and faces, they are not easily lost, forgotten, damaged or forged. A finger vein image is a distribution map of the vein vessels inside the finger under infrared light. Vein recognition offers liveness detection, internal features, contactless acquisition and a high security level, which makes it stand out in the identification field; it can be widely applied to smart locks, intelligent security, identity recognition, crime tracking and related fields.
The research objects of vein recognition technology include finger veins, hand dorsal veins and palm veins. Compared with palm vein acquisition, finger vein hardware is compact and inexpensive; compared with hand dorsal veins, finger veins are not affected by body hair. Finger veins are therefore more advantageous for low-cost identification.
Finger vein recognition mainly comprises the steps of image acquisition, image preprocessing, image amplification, feature extraction and feature matching. Near infrared light penetrating the skin is absorbed by hemoglobin, allowing a vein image to be captured; the index, middle and ring fingers of both hands are imaged, each finger is acquired several times, and acquisition is carried out in three sessions within one year. Image amplification proceeds in two steps: first, the original vein data are expanded by simple rotation, translation and cropping; second, a larger number of finger vein images are generated by training a conditional generative adversarial network. This avoids the low quality of generated images caused by too small a data set, further amplifies the finger vein data set on the basis of the first step, and lets the model meet the requirements more easily. The purpose of image preprocessing is to locate the vein region and enhance the vein image, then apply gray-scale transformation normalization to the located region. The acquired vein image is an 8-bit gray-scale image that should have 256 gray levels, but because of factors such as illumination during acquisition, its gray values are concentrated in one or a few gray-level bands; a gray-stretching method can then expand the image back to 256 gray levels. Gray stretching reduces the interference of illumination on the vein image. The quality of preprocessing strongly influences the subsequent stages, and model accuracy is often limited by image quality.
Common feature extraction methods for finger veins include repeated line tracking, the wide line detector, Gabor filters, maximum curvature points, mean curvature and local binary patterns. These methods are effective to a degree, but each may lose some features to a varying extent. The traditional finger vein recognition pipeline is composed of several modules, and the quality of each module affects the result of the whole training.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a finger vein identification method based on a generative adversarial network and a convolutional neural network. Unlike traditional vein recognition methods, it exploits the end-to-end advantage of deep learning models during training, reduces engineering complexity, and avoids problems such as information loss caused by multiple modules. An improved two-channel VGG model is used: model feature fusion increases feature complementarity and improves feature extraction capability. The two channels are VGG16 and SimpleVGG, which saves computing resources compared with a conventional two-channel VGG model; the model also integrates the advantages of transfer learning and a global average pooling layer, giving it stronger learning and generalization capability.
A finger vein identification method based on a generative adversarial network and a convolutional neural network comprises the following steps:
s1: collecting and establishing a finger vein image database of the subject to be identified, and preprocessing all finger vein images in the database;
s2: amplifying the finger vein region-of-interest image samples of the acquired finger vein database;
s3: constructing a deep convolutional generative adversarial network, training it with the amplified finger vein images, and generating target finger vein images with the generator;
s4: constructing a BP-AdaBoost network, and judging finger vein image quality by combining several parameters including contrast, information entropy, sharpness and equivalent number of looks;
s5: dividing finger vein data sets meeting quality requirements into a training set and a testing set;
s6: constructing an improved two-channel VGG network for feature extraction, and training the improved two-channel VGG network by using a training set to obtain a finger vein image classification network;
s7: after the model parameters are optimized in an iterative mode, the finger vein image in the test set is input for testing, and the classification result of the finger vein image is output.
Preferably, S1 includes the following sub-steps:
s11: acquiring a distribution diagram of the finger veins by adopting a near infrared sensor, and storing a sample plate;
s12: establishing a finger vein database through the acquired finger vein images;
s13: carrying out Sobel operator edge detection on the finger vein images in the database, and obtaining a finger vein foreground region without background information through image morphological processing including the closing operation and denoising;
s14: and (5) carrying out gray level transformation normalization on the finger vein image.
Preferably, S2 includes the following substeps:
s21: obtaining a preprocessed finger vein region-of-interest database;
s22: amplifying the finger vein image samples of the finger vein region-of-interest database by methods including translation, rotation, gamma transformation and affine transformation.
Preferably, S3 includes the following substeps:
s31: constructing a deep convolutional generative adversarial network;
s32: inputting the amplified finger vein image for training;
s33: the generator generates a finger vein image;
s34: the discriminator judges whether the finger vein image generated in the previous step is a real finger vein image;
s35: the generator and the discriminator are trained alternately in a loop, so that the data generated by the final generator gradually approaches the real data.
Preferably, S4 includes the following substeps:
s41: inputting a finger vein image;
s42: selecting a parameter index;
s43: obtaining an evaluation index output value;
s44: constructing a sample data set;
s45: constructing a BP-AdaBoost strong classifier;
s46: and outputting the classification result of the finger vein image.
Preferably, S5 includes the following sub-steps:
s51: according to the classification result, establishing a finger vein database meeting the requirements;
s52: dividing the database into a training set and a test set at a ratio of 8:2.
Preferably, S6 includes the following substeps:
s61: initializing weights from ImageNet to perform transfer learning on the finger vein image classification network;
s62: extracting the finger vein features based on the improved dual-channel VGG;
s63: selecting a loss function;
s64: selecting an optimizer;
s65: and carrying out network training according to the loss function and the optimizer.
Preferably, the improved two-channel VGG network is constructed as follows: based on the VGG convolutional neural network framework, a VGG16 and a SimpleVGG are built to form a two-channel network structure; the fully connected layers of the original networks are removed and replaced by a custom embedding layer. The preceding feature extraction layers comprise convolutional layers and max pooling layers. The VGG16 network contains 13 convolutional layers, each consisting of a convolution and a ReLU activation function, and 5 max pooling layers with a horizontal and vertical pooling stride of 2. The SimpleVGG network contains 6 convolutional layers, each consisting of a convolution, a ReLU activation function and batch normalization, and 3 max pooling layers with a horizontal and vertical pooling stride of 2. The features output by the two networks are fused and fed into the custom embedding layer, which comprises a global average pooling layer, a LeakyReLU activation function, a batch normalization layer, a fully connected layer and a dropout layer. The dimensionality of the classification layer equals the number of classes considered during training.
Preferably, S7 includes the following sub-steps:
s71: computing the Euclidean distance between the obtained feature vector and each feature vector in the database one by one;
s72: selecting the feature vector class with the minimum Euclidean distance as the matching result.
Preferably, the gray-scale transformation normalization formula is as follows:
N(i,j) = 255 × (I(i,j) − min) / (max − min)
in the formula, N (I, j) represents the gray scale value of the transformed image, I (I, j) represents the gray scale value of the original image, and min and max represent the minimum gray scale value and the maximum gray scale value of the original image, respectively.
The invention has the beneficial effects that:
1. the finger vein recognition effect is effectively improved, supporting applications such as smart lock control, security and identity verification based on finger vein recognition; applied to the criminal investigation field, living finger vein recognition and its internal-feature characteristics help the police identify criminal suspects, improving case-solving efficiency to a certain extent;
2. the method judges the quality of finger vein images with a BP-AdaBoost network after image amplification and generation, solving the problem that images generated by the deep convolutional generative adversarial network are of uneven quality and preventing image quality problems from degrading the recognition accuracy of subsequent models;
3. the invention applies the end-to-end advantage of the deep learning model in the training process, reduces the complexity of the engineering, and avoids the problems of information loss and the like caused by multiple modules;
4. the invention replaces the ReLU activation function with the SELU activation function in the generative network, providing richer image features and alleviating the problem of low network model accuracy caused by an insufficient data set.
5. The improved two-channel VGG model increases feature complementarity and improves feature extraction capability through model feature fusion. The two channels are VGG16 and SimpleVGG, which saves computing resources compared with a conventional two-channel VGG model; the model also integrates the advantages of transfer learning and a global average pooling layer, giving it stronger learning and generalization capability.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a collected raw finger vein image;
FIG. 3 is a flow chart of the present invention for extracting regions of interest from an original finger vein image by edge detection;
FIG. 4 is a flowchart of the BP-AdaBoost network algorithm of the present invention;
FIG. 5 is a schematic diagram of an improved dual channel VGG configuration of the present invention;
FIG. 6 is a graph of the improved two-channel VGG network training results of the present invention.
Detailed Description
The invention is further illustrated with reference to the following specific examples, without limiting the scope of the invention as claimed.
Referring to fig. 1, a finger vein identification method based on generation of a countermeasure network and a convolutional neural network includes the following steps:
s1: acquiring and establishing a finger vein image database of the subject to be identified, and preprocessing all finger vein images in the database;
s1 includes the following substeps:
s11: acquiring a distribution map of the finger veins by adopting a near infrared sensor, and storing a sample plate;
s12: establishing a finger vein image database through the acquired finger vein images;
s13: carrying out sobel operator edge detection on a finger vein image in a finger vein image database, and obtaining a finger vein foreground region without background information through image morphological processing including closed operation and denoising;
s14: and (5) carrying out gray level transformation normalization on the finger vein image.
All images in the database are preprocessed, including ROI (region of interest) extraction and image gray-scale transformation normalization. The acquired finger vein image is an 8-bit gray-scale image and should have 256 gray levels, but because of factors such as illumination during acquisition, the gray values are concentrated in one or a few gray-level bands; a gray-stretching method can then expand the image back to 256 gray levels. Gray stretching reduces the interference of illumination on the vein image, and extracting the finger vein region of interest by edge detection makes it easier for the convolutional neural network to learn features. The acquired original finger vein image is shown in fig. 2, and the flow of extracting the region of interest by edge detection is shown in fig. 3; after preprocessing, the background of the original image is removed and only the finger vein region of interest is retained. The finger region is extracted by combining Sobel operator edge detection with the closing and denoising operations of mathematical morphology. The Sobel operator, also called the weighted average difference method, is a common edge detection method; it is a first-order discrete difference operator used to approximate the gradient of the image brightness function. Because the acquired finger vein image is vertically oriented, only the vertical edge detection of the Sobel operator is needed to obtain the finger contour; for finger vein contour extraction, its vertical detection performs noticeably better than other operators such as Canny.
The closing operation dilates the image and then erodes it; this helps close small holes or small dark spots on the foreground object and smooths the contour.
The formula for the gray scale transform normalization is as follows:
N(i,j) = 255 × (I(i,j) − min) / (max − min)
in the formula, N (I, j) represents the gradation value of the image after conversion, I (I, j) represents the gradation value of the original image, and min and max represent the minimum gradation value and the maximum gradation value of the original image, respectively.
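As an illustrative sketch (not the patent's implementation), the gray-stretch normalization above can be written in NumPy; the factor 255 follows from the stated goal of expanding the image to 256 gray levels:

```python
import numpy as np

def gray_stretch(img: np.ndarray) -> np.ndarray:
    """Stretch an 8-bit gray image to the full 0-255 range:
    N(i, j) = 255 * (I(i, j) - min) / (max - min)."""
    lo, hi = int(img.min()), int(img.max())
    if hi == lo:                 # flat image: nothing to stretch
        return np.zeros_like(img)
    stretched = 255.0 * (img.astype(np.float64) - lo) / (hi - lo)
    return stretched.astype(np.uint8)
```

After stretching, an image whose gray values were concentrated in a narrow band spans the full 0-255 range, reducing the influence of illumination differences.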
The sobel convolution factor is as follows:
+1 +2 +1
0 0 0
-1 -2 -1
the closed operation formula is as follows:
A • S = (A ⊕ S) ⊖ S
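A minimal pure-NumPy sketch of the closing operation A • S = (A ⊕ S) ⊖ S on a binary foreground mask; the 3×3 structuring element and the border handling are assumptions for illustration, since the patent does not specify them:

```python
import numpy as np

def _shifts(padded: np.ndarray, shape):
    """Yield the nine 3x3-neighborhood views of a 1-pixel-padded mask."""
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            yield padded[1 + dy: 1 + dy + shape[0],
                         1 + dx: 1 + dx + shape[1]]

def dilate(mask: np.ndarray) -> np.ndarray:
    """Binary dilation (A ⊕ S) with a 3x3 square structuring element."""
    padded = np.pad(mask, 1, mode="constant")
    out = np.zeros_like(mask)
    for view in _shifts(padded, mask.shape):
        out |= view
    return out

def erode(mask: np.ndarray) -> np.ndarray:
    """Binary erosion (A ⊖ S); edge padding avoids shrinking the border."""
    padded = np.pad(mask, 1, mode="edge")
    out = np.ones_like(mask)
    for view in _shifts(padded, mask.shape):
        out &= view
    return out

def closing(mask: np.ndarray) -> np.ndarray:
    """Closing: dilation followed by erosion; fills small holes."""
    return erode(dilate(mask))
```

Applying `closing` to a foreground mask fills isolated holes smaller than the structuring element, which is the effect the closing operation is used for here.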
s2: amplifying the finger vein region-of-interest image samples of the acquired finger vein database by translation, rotation and other transformations;
s2 includes the following substeps:
s21: obtaining a preprocessed finger vein region-of-interest database;
s22: amplifying the finger vein image samples of the finger vein region-of-interest database by methods such as translation, rotation, gamma transformation and affine transformation.
S3: constructing a deep convolutional generative adversarial network, training it with the amplified finger vein images, and generating target finger vein images with the generator;
s3 includes the following substeps:
s31: constructing a deep convolutional generative adversarial network;
s32: inputting the amplified finger vein image for training;
s33: the generator generates a finger vein image;
s34: the discriminator judges whether the finger vein image generated in the step is a real finger vein image;
s35: the generator and the discriminator are trained alternately in a loop, so that the data generated by the final generator gradually approaches the real data.
The deep convolutional generative adversarial network is trained to generate the required finger vein images. The activation function in the network is changed from ReLU to SELU, which retains computation for inputs below zero and therefore provides richer features.
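The ReLU-to-SELU substitution above can be illustrated with a short NumPy sketch using the standard SELU constants (λ ≈ 1.0507, α ≈ 1.6733); unlike ReLU, SELU maps negative inputs to a non-zero value instead of discarding them:

```python
import numpy as np

SELU_LAMBDA = 1.0507009873554805
SELU_ALPHA = 1.6732632423543772

def relu(x: np.ndarray) -> np.ndarray:
    """ReLU zeroes out all negative inputs."""
    return np.maximum(x, 0.0)

def selu(x: np.ndarray) -> np.ndarray:
    """SELU maps negative inputs to lambda * alpha * (exp(x) - 1),
    so information on the negative axis is retained rather than lost."""
    return SELU_LAMBDA * np.where(x > 0, x, SELU_ALPHA * np.expm1(x))
```

This retained negative-axis response is what the description refers to when it says the computation for inputs below zero is preserved.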
S4: constructing a BP-AdaBoost network, and judging finger vein image quality by combining several parameters including contrast, information entropy, sharpness and equivalent number of looks;
s4 includes the following substeps:
s41: inputting a finger vein image;
s42: selecting a parameter index;
s43: obtaining an evaluation index output value;
s44: constructing a sample data set;
s45: constructing a BP-AdaBoost strong classifier;
s46: and outputting the classification result of the finger vein image.
S5: dividing finger vein data sets meeting quality requirements into a training set and a testing set;
s5 includes the following substeps:
s51: according to the classification result, establishing a finger vein database meeting the requirements;
s52: dividing the database into a training set and a test set at a ratio of 8:2.
For the finger vein images after preprocessing and data enhancement, evaluation index output values are obtained by combining the parameters of contrast, information entropy, sharpness and equivalent number of looks; a sample data set is constructed, a BP-AdaBoost strong classifier is built, and the classification result of the vein images is output. Qualified finger vein samples meeting the standard are kept to establish an image database for recognition, and the images in this database form the experimental data set, comprising a training set and a test set.
The finger vein image processing flow based on the BP-AdaBoost network is shown in FIG. 4. The BP-AdaBoost network adopts a plurality of BP neural networks as weak classifiers, each BP neural network completes the prediction of an output sample after repeated training, and then a strong classifier is generated by an AdaBoost algorithm to obtain a final classification result.
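The AdaBoost combination step described above can be sketched as follows. This is an illustrative stand-in: the weak classifiers are given as precomputed prediction arrays rather than trained BP networks, and binary labels in {-1, +1} are assumed:

```python
import numpy as np

def adaboost_weights(predictions, labels):
    """Compute AdaBoost classifier weights (alpha) for weak classifiers.

    predictions: list of arrays of {-1, +1} predictions, one per weak classifier.
    labels: array of true {-1, +1} labels.
    Returns (alphas, final sample weights).
    """
    n = len(labels)
    w = np.full(n, 1.0 / n)                    # uniform initial sample weights
    alphas = []
    for pred in predictions:
        err = np.clip(np.sum(w * (pred != labels)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # lower error -> higher weight
        alphas.append(alpha)
        w *= np.exp(-alpha * labels * pred)    # up-weight misclassified samples
        w /= w.sum()
    return np.array(alphas), w

def strong_classify(predictions, alphas):
    """Strong classifier: sign of the weighted vote of the weak classifiers."""
    score = sum(a * p for a, p in zip(alphas, predictions))
    return np.sign(score)
```

The weak classifier with the smaller weighted error receives the larger alpha, which is how the AdaBoost algorithm turns repeatedly trained weak BP predictors into one strong classifier.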
S6: constructing an improved two-channel VGG network for feature extraction, and training the improved two-channel VGG network by using a training set to obtain a finger vein image classification network;
s6 includes the following substeps:
s61: initializing weights from ImageNet to perform transfer learning on the finger vein image classification network;
s62: extracting the finger vein features based on the improved dual-channel VGG;
s63: selecting a loss function;
s64: selecting an optimizer;
s65: carrying out network training according to the loss function and the optimizer.
An improved two-channel VGG finger vein image feature extraction learning model is established; the improved two-channel VGG structure is shown in FIG. 5. The specific method is as follows: based on the VGG convolutional neural network framework, a VGG16 and a SimpleVGG are built to form a two-channel network structure; the fully connected layers of the original networks are removed and replaced by a custom embedding layer. The preceding feature extraction layers comprise convolutional layers and max pooling layers. The VGG16 network contains 13 convolutional layers (conv), each consisting of a convolution and a ReLU activation function, and 5 max pooling layers with a horizontal and vertical pooling stride of 2. The SimpleVGG network contains 6 convolutional layers, each consisting of a convolution, a ReLU activation function and batch normalization, and 3 max pooling layers with a horizontal and vertical pooling stride of 2. The features output by the two networks are fused and fed into the custom embedding layer, which comprises a global average pooling layer (GlobalAveragePooling), a LeakyReLU activation function, a batch normalization layer, a fully connected layer and a dropout layer. The dimensionality of the classification layer equals the number of classes considered during training. In the custom embedding layer, the LeakyReLU activation function replaces the original ReLU; it retains some negative-axis values, so negative-axis information is not completely lost, overcoming the drawback that the gradient of ReLU is 0 in the negative region.
The global average pooling layer can accept images of any size, associates categories more directly with the feature maps of the last convolutional layer, reduces the number of parameters, prevents overfitting at this layer, and aggregates global spatial information.
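The fusion and global average pooling described above can be sketched shape-wise in NumPy. The channel counts (512 for the VGG16 branch, 256 for the SimpleVGG branch) and the 7×7 spatial size are assumptions for illustration only:

```python
import numpy as np

def global_average_pool(fmap: np.ndarray) -> np.ndarray:
    """Collapse an (H, W, C) feature map to a length-C vector by spatial mean.
    Works for any H and W, which is why GAP accepts inputs of any size."""
    return fmap.mean(axis=(0, 1))

def fuse(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Channel-wise concatenation of the two branch feature vectors."""
    return np.concatenate([feat_a, feat_b])

# Hypothetical branch outputs: VGG16 7x7x512, SimpleVGG 7x7x256
vgg16_out = np.random.rand(7, 7, 512)
simple_out = np.random.rand(7, 7, 256)
fused = fuse(global_average_pool(vgg16_out), global_average_pool(simple_out))
```

Pooling before fusion also shows the parameter saving: the embedding layer sees a 768-dimensional vector instead of two full feature maps.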
The Adam adaptive moment estimation method is selected as the model's optimizer to optimize parameters quickly; thanks to its automatic learning-rate adjustment, the network converges rapidly. The loss value is calculated with cross entropy, and softmax is used as the output of the final classification layer.
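A minimal sketch of the softmax output and the cross-entropy loss used at the classification layer (single-sample case, NumPy only):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw class scores into a probability distribution."""
    z = logits - logits.max()      # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(logits: np.ndarray, true_class: int) -> float:
    """Cross-entropy loss: negative log-probability of the true class."""
    return float(-np.log(softmax(logits)[true_class]))
```

The loss is small when the softmax probability of the true class is high, which is exactly the signal backpropagation uses to train the classification network.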
The improved two-channel VGG model increases feature complementarity and improves feature extraction capability through model feature fusion. The two channels are VGG16 and SimpleVGG, which saves computing resources compared with a conventional two-channel VGG model; the model also integrates the advantages of transfer learning and a global average pooling layer, giving it stronger learning and generalization capability.
S7: after iteratively optimizing the model parameters, inputting the finger vein images of the test set for testing, and outputting the finger vein image classification results;
s7 includes the following substeps:
s71: computing the Euclidean distance between the obtained feature vector and each feature vector in the database one by one;
s72: selecting the feature vector class with the minimum Euclidean distance as the matching result.
The established finger vein image feature extraction learning model is tested with the test set of the database; classification uses the softmax function according to the Euclidean distance, the feature vector class with the minimum Euclidean distance is selected as the matching result, and the recognition result is output.
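The matching step can be sketched as a nearest-neighbor search over enrolled feature vectors. The database layout (one enrolled vector per class, stored in a dict) and the toy 2-D vectors are assumptions for illustration:

```python
import numpy as np

def match(query: np.ndarray, database: dict) -> str:
    """Return the class label whose enrolled feature vector has the
    minimum Euclidean distance to the query feature vector."""
    return min(database,
               key=lambda label: float(np.linalg.norm(query - database[label])))
```

In the actual pipeline, `query` would be the fused embedding produced by the two-channel VGG network for a test image, and the database would hold one embedding per enrolled finger.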
The invention comprises the following steps: collect finger vein images of the subject to be identified with an infrared collector and perform edge detection on the collected images; obtain a finger vein foreground region without background information through image morphological processing such as the closing operation and denoising; apply gray-scale transformation normalization; randomly amplify the image data of the acquired finger vein data set with operations such as rotation and translation; construct a deep convolutional generative adversarial network model, input the enhanced data set, and generate a sufficient number of finger vein images; construct a BP-AdaBoost network and judge finger vein image quality by combining parameters such as contrast, information entropy, sharpness and equivalent number of looks; divide the quality-qualified finger vein data set into a training set and a test set at a fixed ratio; construct an improved two-channel VGG network and train it to obtain a finger vein classification network; and output the classification result on the test set. The accuracy and loss of model training are shown in fig. 6; the test accuracy on the test set reaches 99.16%, and the experimental results show that the method accelerates network convergence, saves computing cost, and effectively improves recognition accuracy.
The above description covers only preferred embodiments of the present invention and is not intended to limit it; any modifications, equivalent substitutions, improvements, and the like made within the spirit and principle of the present invention shall fall within its scope of protection.

Claims (10)

1. A finger vein identification method based on a generative adversarial network and a convolutional neural network, characterized by comprising the following steps:
S1: acquiring finger vein images of the subjects to be identified, building a finger vein image database, and preprocessing all images in the database;
S2: augmenting the finger vein region-of-interest image samples of the acquired database;
S3: constructing a deep convolutional generative adversarial network, training it on the augmented finger vein images, and generating target finger vein images with its generator;
S4: constructing a BP-AdaBoost network that judges finger vein image quality from several parameters, including contrast, information entropy, sharpness, and equivalent number of looks;
S5: dividing the quality-passing finger vein data into a training set and a test set;
S6: constructing an improved dual-channel VGG network for feature extraction and training it on the training set to obtain a finger vein image classification network;
S7: after iteratively optimizing the model parameters, feeding the test-set finger vein images into the network and outputting their classification results.
2. The finger vein identification method based on a generative adversarial network and a convolutional neural network according to claim 1, characterized in that S1 comprises the following sub-steps:
S11: acquiring finger vein distribution maps with a near-infrared sensor and saving the samples;
S12: building a finger vein image database from the acquired images;
S13: performing Sobel edge detection on each finger vein image in the database and obtaining a finger vein foreground region free of background information through morphological processing, including closing and denoising;
S14: applying gray-level normalization to the finger vein images.
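The Sobel detection and morphological closing of S13 can be sketched in plain numpy. The synthetic bright-band "finger" and the 3x3 structuring element are illustrative assumptions; a real pipeline would also include the denoising step:

```python
import numpy as np

def conv2(img, k):
    """Small-kernel 'valid' correlation in plain numpy."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * img[i:i + H - kh + 1, j:j + W - kw + 1]
    return out

def sobel_magnitude(img):
    """Gradient magnitude with the Sobel kernels of S13."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx, gy = conv2(img, kx), conv2(img, kx.T)
    return np.hypot(gx, gy)

def closing(mask):
    """Binary closing (3x3 dilation, then erosion), as in the morphological step."""
    def shifts(m, pad_val):
        p = np.pad(m, 1, constant_values=pad_val)
        return [p[i:i + m.shape[0], j:j + m.shape[1]]
                for i in range(3) for j in range(3)]
    dilated = np.max(shifts(mask, 0), axis=0)
    return np.min(shifts(dilated, 1), axis=0)

# Synthetic "finger": a bright band on a dark background, with a one-pixel hole
# in the thresholded mask that closing should fill.
img = np.zeros((20, 20)); img[:, 5:15] = 200.0
edges = sobel_magnitude(img)
mask = (img > 100).astype(np.uint8); mask[10, 10] = 0
closed = closing(mask)
```

The gradient magnitude peaks along the two vertical band edges, and closing removes the isolated hole while leaving the band boundary intact.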
3. The finger vein identification method based on a generative adversarial network and a convolutional neural network according to claim 1, characterized in that S2 comprises the following sub-steps:
S21: obtaining the preprocessed finger vein region-of-interest database;
S22: augmenting the finger vein image samples of the region-of-interest database by methods including translation, rotation, gamma transformation, and affine transformation.
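Three of the augmentations named in S22, sketched in numpy on a synthetic 8-bit image (the image content and parameter values are illustrative; affine transformation is omitted for brevity):

```python
import numpy as np

def translate(img, dy, dx):
    """Integer-pixel shift; areas moved in from outside are zero-filled."""
    out = np.zeros_like(img)
    H, W = img.shape
    out[max(dy, 0):H + min(dy, 0), max(dx, 0):W + min(dx, 0)] = \
        img[max(-dy, 0):H + min(-dy, 0), max(-dx, 0):W + min(-dx, 0)]
    return out

def rotate(img, deg):
    """Nearest-neighbour rotation about the image centre."""
    H, W = img.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    t = np.deg2rad(deg)
    yy, xx = np.mgrid[0:H, 0:W]
    ys = np.rint(cy + (yy - cy) * np.cos(t) + (xx - cx) * np.sin(t)).astype(int)
    xs = np.rint(cx - (yy - cy) * np.sin(t) + (xx - cx) * np.cos(t)).astype(int)
    ok = (ys >= 0) & (ys < H) & (xs >= 0) & (xs < W)
    out = np.zeros_like(img)
    out[ok] = img[ys[ok], xs[ok]]
    return out

def gamma_transform(img, g):
    """Gamma correction on an 8-bit image; g > 1 darkens mid-tones."""
    return (255.0 * (img / 255.0) ** g).astype(np.uint8)

img = np.zeros((16, 16), np.uint8); img[4:12, 4:12] = 200
shifted = translate(img, 2, 3)
rotated = rotate(img, 0.0)          # 0 degrees is the identity
dark = gamma_transform(img, 2.0)
```

Applying such transforms with randomly drawn parameters multiplies the number of training samples without new acquisitions.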
4. The finger vein identification method based on a generative adversarial network and a convolutional neural network according to claim 1, characterized in that S3 comprises the following sub-steps:
S31: constructing the deep convolutional generative adversarial network;
S32: feeding it the augmented finger vein images for training;
S33: generating finger vein images with the generator;
S34: judging with the discriminator whether each generated image is a real finger vein image;
S35: training the generator and the discriminator alternately so that the data produced by the final generator gradually approaches the real data.
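The alternating scheme of S31 to S35 can be shown with a deliberately tiny one-dimensional stand-in rather than the patent's convolutional networks: a linear generator and a logistic discriminator with hand-derived gradients. Everything here (data distribution, learning rate, step count) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(42)
sig = lambda s: 1.0 / (1.0 + np.exp(-s))

a, b = 1.0, 0.0          # generator G(z) = a*z + b
w, c = 0.1, 0.0          # discriminator D(x) = sigmoid(w*x + c)
lr, real_mu = 0.05, 3.0  # "real" data is drawn around real_mu

for step in range(600):
    z = rng.standard_normal(64)
    xr = real_mu + 0.5 * rng.standard_normal(64)   # real samples
    xf = a * z + b                                  # generated samples

    # Discriminator update (S34): learn to tell real from fake.
    dr, df = sig(w * xr + c), sig(w * xf + c)
    gw = np.mean((dr - 1) * xr) + np.mean(df * xf)
    gc = np.mean(dr - 1) + np.mean(df)
    w, c = w - lr * gw, c - lr * gc

    # Generator update (S33/S35): move samples toward what D calls real.
    df = sig(w * (a * z + b) + c)
    ga = np.mean((df - 1) * w * z)
    gb = np.mean((df - 1) * w)
    a, b = a - lr * ga, b - lr * gb

fake_mean = float(np.mean(a * rng.standard_normal(2000) + b))
```

After alternating updates the generated distribution drifts toward the real one, which is exactly the convergence behaviour S35 describes, here in one dimension.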
5. The finger vein identification method based on a generative adversarial network and a convolutional neural network according to claim 1, characterized in that S4 comprises the following sub-steps:
S41: inputting a finger vein image;
S42: selecting the parameter indices;
S43: computing the output value of each evaluation index;
S44: constructing a sample data set;
S45: constructing a BP-AdaBoost strong classifier;
S46: outputting the quality classification of the finger vein image.
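The four evaluation indices of S42/S43 can be computed directly in numpy; common formulations are assumed here (RMS contrast, Shannon entropy, Laplacian-variance sharpness, and mean-squared-over-variance for the equivalent number of looks), since the patent does not give the formulas. The learned BP-AdaBoost classifier of S45 would consume these features; it is not reproduced here:

```python
import numpy as np

def quality_features(img):
    """Contrast, information entropy, sharpness, and equivalent number of looks
    (ENL) for an 8-bit grayscale image -- the indices named in S4."""
    f = img.astype(np.float64)
    p = np.bincount(img.ravel(), minlength=256) / img.size
    entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    contrast = float(f.std())                        # RMS contrast
    lap = (-4 * f[1:-1, 1:-1] + f[:-2, 1:-1] + f[2:, 1:-1]
           + f[1:-1, :-2] + f[1:-1, 2:])             # Laplacian response
    sharpness = float(lap.var())
    mu, var = f.mean(), f.var()
    enl = mu * mu / var if var > 0 else float("inf")
    return contrast, entropy, sharpness, enl

rng = np.random.default_rng(0)
textured = rng.integers(0, 256, (32, 32), dtype=np.uint8)
flat = np.full((32, 32), 128, np.uint8)
q_tex, q_flat = quality_features(textured), quality_features(flat)
```

A featureless image scores zero on contrast, entropy, and sharpness, while a textured one scores high on all three, which is what lets a classifier separate usable from unusable captures.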
6. The finger vein identification method based on a generative adversarial network and a convolutional neural network according to claim 1, characterized in that S5 comprises the following sub-steps:
S51: building a finger vein database that meets the quality requirements according to the classification results;
S52: dividing it into a training set and a test set at a ratio of 8:2.
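A shuffled index split realizing the 8:2 ratio of S52; the sample count is a placeholder, and in practice one would usually split per subject so every class appears in both sets:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                   # hypothetical count of quality-passing images
idx = rng.permutation(n)                  # shuffle before splitting
cut = int(0.8 * n)                        # 8:2 ratio
train_idx, test_idx = idx[:cut], idx[cut:]
```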
7. The finger vein identification method based on a generative adversarial network and a convolutional neural network according to claim 1, characterized in that S6 comprises the following sub-steps:
S61: initializing the finger vein image classification network with ImageNet-pretrained weights for transfer learning;
S62: extracting finger vein features with the improved dual-channel VGG;
S63: selecting a loss function;
S64: selecting an optimizer;
S65: training the network with the chosen loss function and optimizer.
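The loss-plus-optimizer loop of S63 to S65, reduced to its smallest working form: softmax cross-entropy on a linear classification head, trained with plain full-batch gradient descent. The feature dimensions, class count, and optimizer choice are illustrative assumptions, not the patent's configuration:

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=1, keepdims=True))   # shift for numerical stability
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 16))       # 8 extracted feature vectors, 16-D
y = rng.integers(0, 3, 8)              # labels for 3 hypothetical subjects
W = np.zeros((16, 3))                  # linear classification head

lr = 0.5
for _ in range(200):
    p = softmax(X @ W)
    p[np.arange(8), y] -= 1.0          # gradient of cross-entropy w.r.t. the logits
    W -= lr * (X.T @ p) / 8            # gradient-descent update

probs = softmax(X @ W)
loss = float(-np.mean(np.log(probs[np.arange(8), y])))
```

With random 16-D features and only 8 samples the data is almost surely separable, so the loop drives the training loss well below its initial value of ln(3).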
8. The finger vein identification method based on a generative adversarial network and a convolutional neural network according to claim 7, characterized in that the improved dual-channel VGG network is constructed as follows: on the framework of the VGG convolutional neural network, a VGG16 and a SimpleVGG are built side by side to form a dual-channel structure, the fully connected layers of the original networks are removed, and a custom embedding layer is used in their place. The front-end feature-extraction layers consist of convolutional and max-pooling layers: the VGG16 branch contains 13 convolutional layers, each comprising a convolution and a ReLU activation, and 5 max-pooling layers with horizontal and vertical stride 2; the SimpleVGG branch contains 6 convolutional layers, each comprising a convolution, a ReLU activation, and batch normalization, and 3 max-pooling layers with horizontal and vertical stride 2. The features output by the two branches are fused and fed into the custom embedding layer, which comprises a global average pooling layer, a LeakyReLU activation, a batch normalization layer, a fully connected layer, and a dropout layer. The dimension of the classification layer equals the number of classes considered during training.
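The shape bookkeeping implied by claim 8 can be traced in a few lines, assuming 224x224 inputs, 'same'-padded convolutions (the claim does not state the padding), and global-average-pooled concatenation as the unspecified fusion; the channel counts are also assumptions:

```python
import numpy as np

def after_pools(size, n_pools):
    """Spatial size after n stride-2 max-pools ('same'-padded convs keep size)."""
    for _ in range(n_pools):
        size //= 2
    return size

vgg16_hw = after_pools(224, 5)      # VGG16 branch: 13 convs, 5 pools -> 7x7 map
simple_hw = after_pools(224, 3)     # SimpleVGG branch: 6 convs, 3 pools -> 28x28 map

# Global average pooling in the embedding layer collapses each branch to a
# per-channel vector, so the two maps can be fused (here: concatenated)
# even though their spatial sizes differ.
f_vgg = np.zeros((512, vgg16_hw, vgg16_hw))        # assumed channel count
f_simple = np.zeros((256, simple_hw, simple_hw))   # assumed channel count
fused = np.concatenate([f_vgg.mean(axis=(1, 2)), f_simple.mean(axis=(1, 2))])
```

Pooling-count arithmetic like this explains why the two branches, despite different depths, can feed one shared embedding layer.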
9. The finger vein identification method based on a generative adversarial network and a convolutional neural network according to claim 1, characterized in that S7 comprises the following sub-steps:
S71: computing the Euclidean distance between the obtained feature vector and each feature vector in the database one by one;
S72: selecting the feature-vector class with the smallest Euclidean distance as the matching result.
10. The finger vein identification method based on a generative adversarial network and a convolutional neural network according to claim 2, characterized in that the gray-level normalization formula is:

N(i, j) = 255 × (I(i, j) − min) / (max − min)

where N(i, j) denotes the gray value of the transformed image, I(i, j) denotes the gray value of the original image, and min and max denote the minimum and maximum gray values of the original image, respectively.
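Min-max gray-level normalization as described in claim 10, in numpy. The original formula is only reproduced as an image in the publication, so the [0, 255] target range is an assumption consistent with the surrounding symbol definitions:

```python
import numpy as np

def gray_normalize(img):
    """Stretch gray levels so min maps to 0 and max to 255 (assumed range)."""
    f = img.astype(np.float64)
    lo, hi = f.min(), f.max()
    return (f - lo) / (hi - lo) * 255.0

img = np.array([[10, 60], [110, 210]], dtype=np.uint8)
out = gray_normalize(img)
```

Normalizing every image to a common range removes per-capture illumination differences before feature extraction.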
CN202210458700.XA 2022-04-24 2022-04-24 Finger vein identification method based on generation of confrontation network and convolutional neural network Pending CN114743278A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210458700.XA CN114743278A (en) 2022-04-24 2022-04-24 Finger vein identification method based on generation of confrontation network and convolutional neural network

Publications (1)

Publication Number Publication Date
CN114743278A true CN114743278A (en) 2022-07-12

Family

ID=82283681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210458700.XA Pending CN114743278A (en) 2022-04-24 2022-04-24 Finger vein identification method based on generation of confrontation network and convolutional neural network

Country Status (1)

Country Link
CN (1) CN114743278A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058727A (en) * 2023-07-18 2023-11-14 广州脉泽科技有限公司 Image enhancement-based hand vein image recognition method and device
CN117058727B (en) * 2023-07-18 2024-04-02 广州脉泽科技有限公司 Image enhancement-based hand vein image recognition method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination