CN107403161A - Biometric recognition method and device - Google Patents

Biometric recognition method and device

Info

Publication number
CN107403161A
CN107403161A (application CN201710637494.8A; granted as CN107403161B)
Authority
CN
China
Prior art keywords
network
image
palm vein
network node
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710637494.8A
Other languages
Chinese (zh)
Other versions
CN107403161B (en)
Inventor
张晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Techology Co Ltd
Original Assignee
Goertek Techology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Techology Co Ltd filed Critical Goertek Techology Co Ltd
Priority to CN201710637494.8A priority Critical patent/CN107403161B/en
Publication of CN107403161A publication Critical patent/CN107403161A/en
Priority to PCT/CN2017/113585 priority patent/WO2019024350A1/en
Application granted granted Critical
Publication of CN107403161B publication Critical patent/CN107403161B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1347 Preprocessing; Feature extraction
    • G06V40/1365 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a biometric recognition method and device. The method includes: acquiring a palmprint image and a palm vein image of a user to be identified; fusing the palmprint image and the palm vein image to obtain a fused image; taking the pixels corresponding to the palmprint and the palm veins in the fused image as network nodes; building, based on multiple constraint conditions, a complex network from the network nodes that satisfy each constraint condition, so as to obtain multiple complex networks; and composing a feature to be identified from the network features of the multiple complex networks. The invention improves the validity and accuracy of identification.

Description

Biometric recognition method and device
Technical field
The invention belongs to the field of biometric recognition and specifically relates to a biometric recognition method and device.
Background technology
Every individual has physiological or behavioral characteristics that can be uniquely measured or automatically identified and verified, i.e., biometric features. Biometric technology performs identification and identity authentication through the biometric features unique to each individual; it usually refers to computer techniques that authenticate a user's identity using intrinsic human characteristics such as fingerprints, faces, or voice.
In the prior art, fingerprint recognition can be performed by collecting each individual's fingerprint and converting it into fingerprint features; face recognition can be performed by collecting facial images and converting them into facial features; and voice recognition can be performed by collecting the sounds each individual makes and converting them into voice features.
However, biometric features such as fingerprints, faces, and voice are easily tampered with: fingerprints can be forged, faces can be occluded, and voice can be altered with a voice changer. Effective identification therefore cannot be guaranteed.
Summary of the invention
In view of this, the invention provides a biometric recognition method and device that combine palmprint recognition with palm vein recognition, solving the problem that effective identification cannot be guaranteed in the prior art and improving the validity and accuracy of identification.
To solve the above technical problem, a first aspect of the invention provides a biometric recognition method, including:
acquiring a palmprint image and a palm vein image of a user to be identified;
fusing the palmprint image and the palm vein image to obtain a fused image;
taking the pixels corresponding to the palmprint and the palm veins in the fused image as network nodes;
building, based on multiple constraint conditions, a complex network from the network nodes that satisfy each constraint condition, so as to obtain multiple complex networks; and
composing a feature to be identified from the network features of the multiple complex networks.
Preferably, each constraint condition requires that the node distance between any two network nodes be less than a constraint distance, and different constraint conditions use different constraint distances.
The method further includes:
calculating the node distance between any two network nodes.
Building a complex network from the network nodes that satisfy each constraint condition, so as to obtain multiple complex networks, then includes:
building, for each of the different constraint distances, a complex network from the network nodes whose node distances are less than that constraint distance, so as to obtain multiple complex networks.
Preferably, calculating the node distance between any two network nodes includes:
calculating the coordinate distance between the two network nodes according to the pixel coordinates corresponding to each of them; and
normalizing the coordinate distance to obtain the node distance.
Preferably, fusing the palmprint image and the palm vein image to obtain a fused image includes:
binarizing the palmprint image and the palm vein image, converting the pixels corresponding to the palmprint and the palm veins to a first value and the pixels corresponding to non-palmprint and non-vein regions to a second value; and
fusing the binarized palmprint image and palm vein image to obtain the fused image.
Preferably, taking the pixels corresponding to the palmprint and the palm veins in the fused image as network nodes includes:
taking the pixels in the fused image whose value is the first value as network nodes.
Preferably, composing a feature to be identified from the network features of the multiple complex networks includes:
determining the node degree of each network node in each complex network;
calculating the network feature of each complex network according to the node degrees;
combining the network features of the complex networks; and
taking the combined network features as the feature to be identified.
Preferably, fusing the palmprint image and the palm vein image to obtain a fused image includes:
denoising the palmprint image and the palm vein image of the user to be identified; and
fusing the denoised palmprint image and palm vein image to obtain the fused image.
A second aspect of the invention provides a biometric recognition device, including:
an image acquisition module for acquiring a palmprint image and a palm vein image of a user to be identified;
an image fusion module for fusing the palmprint image and the palm vein image to obtain a fused image;
a node determination module for taking the pixels corresponding to the palmprint and the palm veins in the fused image as network nodes;
a network construction module for building, based on multiple constraint conditions, a complex network from the network nodes that satisfy each constraint condition, so as to obtain multiple complex networks; and
a feature construction module for composing a feature to be identified from the network features of the multiple complex networks.
Preferably, each constraint condition requires that the node distance between any two network nodes be less than a constraint distance, and different constraint conditions use different constraint distances.
The device further includes:
a distance calculation module for calculating the node distance between any two network nodes.
The network construction module includes:
a network construction unit for building, for each of the different constraint distances, a complex network from the network nodes whose node distances are less than that constraint distance, so as to obtain multiple complex networks.
Preferably, the distance calculation module includes:
a distance calculation unit for calculating the coordinate distance between any two network nodes according to their corresponding pixel coordinates; and
a distance normalization unit for normalizing the coordinate distance to obtain the node distance.
Preferably, the image fusion module includes:
an image conversion unit for binarizing the palmprint image and the palm vein image, converting the pixels corresponding to the palmprint and the palm veins to a first value and the pixels corresponding to non-palmprint and non-vein regions to a second value; and
a first fusion unit for fusing the binarized palmprint image and palm vein image to obtain the fused image.
Preferably, the node determination module includes:
a node determination unit for taking the pixels in the fused image whose value is the first value as network nodes.
Preferably, the feature construction module includes:
a first determination unit for determining the node degree of each network node in each complex network;
a feature calculation unit for calculating the network feature of each complex network according to the node degrees;
a feature combination unit for combining the network features of the complex networks; and
a second determination unit for taking the combined network features as the feature to be identified.
Preferably, the image fusion module includes:
an image denoising unit for denoising the palmprint image and the palm vein image of the user to be identified; and
a second fusion unit for fusing the denoised palmprint image and palm vein image to obtain the fused image.
Compared with the prior art, the invention achieves the following technical effects:
In the invention, a palmprint image and a palm vein image of a user to be identified are acquired and fused into a fused image that combines the features of both, enhancing feature distinctiveness. The pixels corresponding to the palmprint and the palm veins in the fused image serve as network nodes; based on multiple constraint conditions, the nodes satisfying each constraint condition are built into a complex network, yielding multiple complex networks, and the feature to be identified is composed of their network features. Because the complex networks are formed from the highly distinctive palmprint and palm vein pixels, the feature to be identified combines both modalities and better characterizes the identification features of the user to be identified, improving the validity and accuracy of identification.
Brief description of the drawings
The accompanying drawings described here provide a further understanding of the invention and form a part of it; the schematic embodiments and their descriptions explain the invention and do not unduly limit it. In the drawings:
Fig. 1 is a flowchart of one embodiment of a biometric recognition method according to an embodiment of the invention;
Fig. 2 is a flowchart of another embodiment of a biometric recognition method according to an embodiment of the invention;
Fig. 3 is a schematic structural diagram of one embodiment of a biometric recognition device according to an embodiment of the invention;
Fig. 4 is a schematic structural diagram of another embodiment of a biometric recognition device according to an embodiment of the invention.
Detailed description
Embodiments of the invention are described in detail below with reference to the drawings and examples, so that the way the invention applies technical means to solve technical problems and achieve technical effects can be fully understood and implemented.
The embodiments of the invention are mainly applied in the field of biometric recognition: both palmprint and palm vein features are acquired to characterize the biometric identity of a user to be identified, which improves the validity and accuracy of identification.
In the prior art, biometric recognition mostly relies on face, iris, or voice recognition, but these features are easily tampered with, so effective identification cannot be guaranteed.
The inventor found that palmprints and palm veins are human biometric features, differ between individuals, and can be used for biometric recognition. However, the inventor also found that the palmprint, being exposed, is easy to forge, while the palm veins yield relatively few features, so palm vein recognition alone has low accuracy. To overcome these drawbacks, the inventor conceived of fusing palmprint recognition and palm vein recognition to obtain a combined palmprint and palm vein feature, which both guarantees the recognition accuracy of the identification feature of the user to be identified and ensures that it cannot be forged. The technical solution of the invention is proposed accordingly.
In the embodiments of the invention, a palmprint image and a palm vein image of a user to be identified are acquired and fused to obtain a fused image. Network nodes are determined from the fused image, and different constraint conditions are used to determine multiple different complex networks; the feature to be identified is composed of the network features of these complex networks. The constructed complex networks are the basis for determining the feature to be identified, and since they are obtained from the pixels of the fused image, the identification information is guaranteed to carry the content of both biometric features, which improves the validity and accuracy of identification.
The embodiments of the invention are described in detail below with reference to the drawings.
As shown in Fig. 1, a flowchart of one embodiment of a biometric recognition method provided by an embodiment of the invention, the method can include the following steps:
101: Acquire a palmprint image and a palm vein image of a user to be identified.
The palmprint image and the palm vein image have the same size; for example, both can be images of pixel height h and pixel width w, with a pixel matrix of size w*h.
Optionally, the palmprint image and the palm vein image can be acquired with a multispectral palm image sensor, which can capture different types of palm images under different light sources: the palmprint image can be acquired under a visible light source, and the palm vein image under an infrared light source.
Optionally, the palmprint image and the palm vein image of the user to be identified can be acquired in succession within a short time interval, to ensure that the palm position is unchanged between the two images. The two images can also be acquired at different times, as long as the palm position does not change between acquisitions.
102: Fuse the palmprint image and the palm vein image to obtain a fused image.
The palmprint image and the palm vein image contain palmprint features and palm vein features respectively, so fusing the two images also fuses those features.
Fusing the palmprint image and the palm vein image can mean overlaying them pixel by pixel according to the position of each pixel, so that each position corresponds to two pixel values. Taking images of pixel height h and pixel width w as an example, the palmprint image and the palm vein image are each represented as a w*h two-dimensional matrix; after overlaying the two, the fused image can be expressed as a 2*w*h three-dimensional matrix.
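As a hedged illustration of this overlay step (the function and variable names here are invented for the sketch; the patent does not specify an implementation), two equally sized single-channel images can be stacked along a new leading axis to form the 2*w*h fused volume described above:

```python
import numpy as np

def fuse_images(palmprint, palm_vein):
    """Overlay two equally sized images pixel by pixel so each
    position carries one value from each modality."""
    assert palmprint.shape == palm_vein.shape
    # New leading axis of length 2: one slice per modality.
    return np.stack([palmprint, palm_vein], axis=0)

pp = np.zeros((4, 5), dtype=np.uint8)  # toy 4x5 "palmprint"
pv = np.ones((4, 5), dtype=np.uint8)   # toy 4x5 "palm vein"
fused = fuse_images(pp, pv)
print(fused.shape)  # (2, 4, 5)
```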
103: Take the pixels corresponding to the palmprint and the palm veins in the fused image as network nodes.
In the fused image, the pixel values corresponding to the palmprint and the palm veins differ greatly from those of the surrounding non-palmprint and non-vein pixels, so this difference in pixel values can be used to determine the pixels corresponding to the palmprint and the palm veins.
104: Based on multiple constraint conditions, build a complex network from the network nodes that satisfy each constraint condition, so as to obtain multiple complex networks.
The constraint conditions constrain the network nodes: when network nodes satisfy a constraint condition, a complex network can be built from those nodes.
Optionally, different constraint conditions are satisfied by different sets of network nodes, so the complex networks built from these different node sets also differ.
Because the network nodes are the pixels corresponding to the palmprint and the palm veins, the nodes of each complex network are effectively formed from those pixels, and the complex networks can therefore be used to determine features related to the palmprint and the palm veins.
Optionally, the network features of the multiple complex networks can be obtained; for example, a network feature can be a characteristic value such as the average degree, degree variance, or maximum degree.
105: Compose a feature to be identified from the network features of the multiple complex networks.
Optionally, the network features of the multiple complex networks can be concatenated in series into the feature to be identified. Taking a network feature represented as a 1*3 two-dimensional matrix as an example, if there are 10 complex networks, the serially composed feature to be identified is 10*3. In the embodiments of the invention, the palmprint image and the palm vein image are acquired to obtain their fused image, so the two biometric features are fused; using the pixels corresponding to the palmprint and the palm veins as network nodes, different complex networks are determined, i.e., multiple network associations between the palmprint and the palm veins are obtained, from which different network features can be determined. The feature to be identified composed of these network features has more comprehensive content and can improve the validity and accuracy of identification.
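The serial composition above can be sketched as follows (the 1*3 rows hold placeholder values, not features computed by the patent's method):

```python
import numpy as np

# Placeholder 1x3 feature per network, e.g. (average degree,
# degree variance, maximum degree) -- values illustrative only.
per_network = [np.array([[2.4, 0.8, 5.0]]) for _ in range(10)]

# Row-wise concatenation of 10 networks' features gives the
# 10x3 feature to be identified described in the text.
feature_to_identify = np.concatenate(per_network, axis=0)
print(feature_to_identify.shape)  # (10, 3)
```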
Fig. 2 is a flowchart of another embodiment of a biometric recognition method provided by an embodiment of the invention. In this embodiment, each constraint condition can require that the node distance between any two network nodes be less than a constraint distance, with different constraint conditions using different constraint distances. The method can include the following steps:
201: Acquire a palmprint image and a palm vein image of a user to be identified.
202: Fuse the palmprint image and the palm vein image to obtain a fused image.
203: Take the pixels corresponding to the palmprint and the palm veins in the fused image as network nodes.
204: Calculate the node distance between any two network nodes.
205: For each of the different constraint distances, build a complex network from the network nodes whose node distances are less than that constraint distance, so as to obtain multiple complex networks.
When the fused image is treated as a three-dimensional matrix, every pixel is arranged according to that matrix, so every pixel has a corresponding coordinate point, and the coordinate distance between any two network nodes can be determined.
Assuming the coordinate position of a pixel in the palmprint image is (X1, Y1, Z1) and the coordinate position of a pixel in the palm vein image is (X2, Y2, Z2), the coordinate distance between the two network nodes can be expressed as D and calculated according to the following equation:
D = √((X1 − X2)² + (Y1 − Y2)² + (Z1 − Z2)²)
Of course, besides the coordinate distance between a palmprint pixel and a palm vein pixel, the coordinate distance between two palmprint pixels and the coordinate distance between two palm vein pixels can also be included.
The constraint distance is a distance constant used to constrain the node distance between two network nodes, and it is estimated from the node distances. For different palmprint and vein images the node distances may differ, and different node distances may differ considerably, so for node distances that differ greatly, correspondingly different constraint distances would need to be determined. When a large number of palmprint and vein images must be handled, determining constraint distances one by one is quite complicated and unfavorable to wide application of the invention.
Therefore, optionally, calculating the node distance between any two network nodes can include:
calculating the coordinate distance between the two network nodes according to the pixel coordinates corresponding to each of them; and
normalizing the coordinate distance to obtain the node distance.
Normalization computes the ratio of the coordinate distance between the network nodes to the size of the fused image, giving the corresponding normalized distance. The normalization can be calculated according to the following equation:
d = D / √(2² + w² + h²)
where w is the width and h the height of the palmprint image and the palm vein image; because the fused image is obtained by overlaying the palmprint image and the palm vein image, the third dimension of the fused image is 2.
Normalizing the coordinate distances unifies them, and the constraint distances are likewise unified into the interval (0, 1]. Then, when a large number of palmprint and vein images must be handled, the same group of constraint distances can be used without repeatedly redefining them. This simplifies the calculation process and facilitates large-scale application of the invention.
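A minimal sketch of step 204, under the assumption (not preserved explicitly in the source) that the normalizer is the diagonal of the 2*w*h fused volume, which keeps every node distance within (0, 1]:

```python
import math

def node_distance(p1, p2, w, h):
    """Euclidean coordinate distance between two node coordinates,
    normalized by the diagonal of the 2 x w x h fused volume so the
    result falls in (0, 1] for distinct in-volume points."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))
    return d / math.sqrt(2 ** 2 + w ** 2 + h ** 2)

d = node_distance((0, 0, 0), (1, 3, 4), w=5, h=4)
print(round(d, 3))  # 0.76
```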
206: Compose a feature to be identified from the network features of the multiple complex networks.
The network nodes are formed by the palmprint and palm vein pixels. Each of the multiple complex networks consists of the edges between pairs of network nodes that satisfy the constraint distance, recording the distance relation between node pairs rather than the absolute spatial position of any node. Since the feature to be identified is composed of the network features of these complex networks, it expresses the relative positional relations between the palmprint and palm vein pixel pairs and is robust to errors such as relative rotation and displacement. That is, each time the palmprint image and the palm vein image are acquired to obtain the feature to be identified, the result is not affected by rotation, displacement, and the like, and a relatively stable feature to be identified can be obtained.
For example, suppose that in the first acquisition of the palmprint and palm vein images the fingers point toward 12 o'clock, while in the second acquisition they point toward 1 o'clock. Although the two acquisitions differ by 30°, the relative positional relations between the palmprint and palm vein pixel pairs are present in both, so the multiple complex networks formed are unchanged and the obtained feature to be identified is relatively stable.
In the embodiments of the invention, different complex networks are determined by the node distances between network nodes. By determining multiple different complex networks, each formed from the palmprint and palm vein pixels, multiple network features of the palmprint and palm veins can be determined, making the network features more accurate and yielding a higher recognition effect and accuracy.
As another embodiment, fusing the palmprint image and the palm vein image to obtain a fused image includes:
binarizing the palmprint image and the palm vein image, converting the pixels corresponding to the palmprint and the palm veins to a first value and the pixels corresponding to non-palmprint and non-vein regions to a second value; and
fusing the binarized palmprint image and palm vein image to obtain the fused image.
Optionally, taking the pixels corresponding to the palmprint and the palm veins in the fused image as network nodes includes:
taking the pixels in the fused image whose value is the first value as network nodes.
Binarizing the palmprint image and the vein image means extracting the palmprint from the palmprint image and the palm veins from the palm vein image, marking their corresponding pixels with the defined first value and marking the other non-palmprint and non-vein pixels with the defined second value, so that the palmprint in the palmprint image and the palm veins in the palm vein image can be clearly determined.
Optionally, to make the palmprint and the palm veins clearer, the first value can be 1 and the second value can be 0.
Optionally, the palmprint image and the palm vein image can be binarized with a binarization algorithm, such as LBP (Local Binary Patterns) or a mean-window filtering algorithm.
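As a hedged stand-in for those algorithms (a plain global threshold rather than LBP or mean-window filtering, purely for illustration), the binarization step can be sketched as:

```python
import numpy as np

def binarize(image, threshold=128):
    """Mark dark ridge/vein pixels with the first value (1) and
    the lighter background with the second value (0)."""
    return (image < threshold).astype(np.uint8)

img = np.array([[200, 50], [90, 210]], dtype=np.uint8)
print(binarize(img).tolist())  # [[0, 1], [1, 0]]
```

In practice the threshold (and the choice of which intensity range is foreground) would depend on the sensor and illumination.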
In the embodiments of the invention, before the palmprint image and the vein image are fused, they are first binarized, so that features such as the palmprint in the palmprint image and the palm veins in the palm vein image are retained while other useless features are rejected. The pixels belonging to the palmprint and the palm veins can thus be determined accurately, and using these accurately determined palmprint and palm vein pixels as network nodes forms accurate complex networks, from which a more accurate feature to be identified can be determined, further improving the accuracy and validity of the identification feature.
As another embodiment, composing a feature to be identified from the network features of the multiple complex networks includes:
determining the node degree of each network node in each complex network;
calculating the network feature of each complex network according to the node degrees;
combining the network features of the complex networks; and
taking the combined network features as the feature to be identified.
The node degree of a network node is the number of connections between that node and other network nodes. For example, a network node connected to 3 other network nodes has degree 3.
Determining the node degree of each network node in a complex network in fact determines the connection relations between each network node and the others, and the corresponding network feature can then be determined from these connection relations.
The network characterization for calculating any complex network can include:
Calculate the average degree, variance degree and/or maximal degree of any complex network.
The average degree refers to the average value of all-network node moderate;The variance degree refer to according to the average degree with And each node degree calculates the variance yields of the degree obtained;The maximal degree refers to the maximum of all-network node moderate.
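For instance, given a boolean adjacency matrix for one complex network, the three degree statistics could be computed as follows (a sketch; the patent does not prescribe a concrete data structure):

```python
import numpy as np

def degree_features(adjacency):
    """Return (average degree, degree variance, maximum degree) of one
    complex network, where adjacency[i][j] is truthy when network nodes
    i and j are connected."""
    degrees = np.asarray(adjacency, dtype=bool).sum(axis=1)  # node degrees
    return float(degrees.mean()), float(degrees.var()), int(degrees.max())
```

For a 3-node path network (node 1 connected to nodes 0 and 2) the degrees are 1, 2, 1, giving an average degree of 4/3, a degree variance of 2/9 and a maximum degree of 2.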
In the embodiment of the present invention, multiple network features of the fused image are calculated based on the node degree of each network node. Computing with node degrees determines the network feature of each complex network more accurately, yielding more accurate network features and thereby improving the validity and accuracy of identification.
As another embodiment, fusing the palm print image and the palm vein image to obtain the fused image includes:
denoising the palm print image and the palm vein image of the user to be identified; and
fusing the denoised palm print image and palm vein image to obtain the fused image.
Here, denoising the palm print image and the palm vein image of the user to be identified may mean filtering out the high-frequency components in those images.
Optionally, a noise reduction algorithm may be used to filter out the high-frequency components in the palm print image and the palm vein image of the user to be identified. The noise reduction algorithm may be, for example, a wavelet transform, a Kalman filtering algorithm, or a median filtering algorithm.
In the embodiment of the present invention, the palm print image and the palm vein image are denoised before they are fused, which makes the palm print in the palm print image and the palm vein in the palm vein image clearer. With the noise reduced, the pixels corresponding to the palm print and the palm vein are determined more accurately, so more accurate complex networks can be determined and a more accurate feature to be identified obtained, further improving the validity and accuracy of identification.
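Of the algorithms listed, median filtering is the simplest to sketch. The following illustrative NumPy version (the window size is an assumption) suppresses isolated high-frequency spikes of the kind the text describes:

```python
import numpy as np

def median_denoise(img, window=3):
    """Replace each pixel with the median of its local window, filtering
    out high-frequency noise such as isolated salt-and-pepper pixels."""
    pad = window // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + window, x:x + window])
    return out
```

A single bright noise pixel in an otherwise uniform region is replaced by the median of its neighbourhood, while extended structures such as palm lines (wider than half the window) survive.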
As shown in Fig. 3, a schematic structural diagram of an embodiment of a biometric recognition device of the embodiment of the present invention, the device may include the following modules:
Image capture module 301, configured to capture a palm print image and a palm vein image of a user to be identified.
The palm print image and the palm vein image have the same size; for example, both may be images of pixel height h and pixel width w, whose pixel matrix has size w*h.
Optionally, the palm print image and the palm vein image may be captured with a multispectral palm image sensor, which can capture different types of palm images under different light sources: for example, the palm print image may be captured under a visible light source, and the palm vein image under an infrared light source.
Optionally, the palm print image and the palm vein image of the user to be identified may be captured consecutively within a short time interval, to ensure that the position of the palm is the same in the two images. The two images may also be captured at different times, as long as the palm position does not change between the captures.
Image fusion module 302, configured to fuse the palm print image and the palm vein image to obtain a fused image.
The palm print image and the palm vein image contain the palm print feature and the palm vein feature respectively, so fusing the two images also fuses the two features.
Fusing the palm print image and the palm vein image may mean overlapping the two images pixel by pixel according to the position of each pixel, so that each position contains two pixel values. Taking a pixel height of h and a pixel width of w as an example, the palm print image and the palm vein image can each be represented as a w*h two-dimensional matrix; after the two are overlapped, the fused image can be represented as a 2*w*h three-dimensional matrix.
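In NumPy terms, the overlap described above is simply a stack along a new leading axis (the axis order used here is an implementation choice, not fixed by the text):

```python
import numpy as np

def fuse_images(palmprint, palm_vein):
    """Overlap two equally sized images pixel by pixel, so each position
    holds two pixel values, giving the 2-deep three-dimensional matrix
    described above."""
    if palmprint.shape != palm_vein.shape:
        raise ValueError("palm print and palm vein images must be the same size")
    return np.stack([palmprint, palm_vein], axis=0)
```

For h x w inputs the result has shape (2, h, w): index 0 holds the palm print plane and index 1 the palm vein plane at the same pixel position.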
Node determining module 303, configured to use the pixels corresponding to the palm print and the palm vein in the fused image as network nodes.
In the fused image, the pixel values of the palm print and the palm vein differ greatly from those of the surrounding normal skin, so this difference in pixel values can be used to determine the pixels corresponding to the palm print and the palm vein.
Network construction module 304, configured to build, based on multiple constraint conditions, a complex network from the network nodes satisfying each constraint condition, to obtain multiple complex networks.
The constraint conditions are mainly used to constrain the network nodes: when a network node satisfies a constraint condition, it can be used to build the complex network for that condition.
Optionally, different constraint conditions are satisfied by different network nodes, so the complex networks built from the different sets of network nodes also differ.
Since the network nodes are the pixels corresponding to the palm print and the palm vein, the network nodes of each complex network are in effect formed from those pixels, and the complex networks can therefore be used to determine the relevant features of the palm print and the palm vein.
Optionally, the network features of the multiple complex networks may be obtained; a network feature may be, for example, a characteristic value such as the average degree, degree variance or maximum degree.
Feature construction module 305, configured to form the feature to be identified from the network features of the multiple complex networks.
Optionally, the network features of the multiple complex networks may be concatenated in series into the feature to be identified. Taking the network feature of each complex network as a 1*3 two-dimensional matrix as an example, and assuming there are 10 complex networks, the serially formed feature to be identified is a 10*3 matrix.
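The series concatenation in the example can be sketched as follows (the feature values below are placeholders, not values from the text):

```python
import numpy as np

# Each complex network contributes a 1x3 feature row, for example
# (average degree, degree variance, maximum degree).
network_features = [np.array([[2.5, 0.4, 5.0]]) for _ in range(10)]

# Concatenating the 10 rows in series yields the 10*3 feature to be identified.
feature_to_identify = np.concatenate(network_features, axis=0)
```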
In the embodiment of the present invention, the palm print image and the palm vein image are captured to obtain their fused image, so that the two biometric features, palm print and palm vein, can be fused. With the pixels corresponding to the palm print and the palm vein as network nodes, different complex networks are determined, that is, multiple network associations between the palm print and the palm vein are obtained, from which different network features can be determined. The feature to be identified formed from the network features of the multiple complex networks is therefore more comprehensive in content, which can improve the validity and accuracy of identification.
As shown in Fig. 4, a structural flow chart of another embodiment of a biometric recognition device of the embodiment of the present invention, the device may include the following modules:
Image capture module 401, configured to capture a palm print image and a palm vein image of a user to be identified.
Image fusion module 402, configured to fuse the palm print image and the palm vein image to obtain a fused image.
Node determining module 403, configured to use the pixels corresponding to the palm print and the palm vein in the fused image as network nodes.
Distance calculation module 404, configured to calculate the node distance of any two network nodes.
If the fused image is regarded as a three-dimensional matrix, any pixel can be understood as being arranged according to that matrix, so every pixel has corresponding coordinates and the coordinate distance between any two network nodes can be determined.
Of course, besides the coordinate distance between a palm print pixel and a palm vein pixel, this also includes the coordinate distance between two palm print pixels and the coordinate distance between two palm vein pixels.
The constraint distance is a distance constant used to constrain the node distance between two network nodes, and is estimated from the node distances. For different palm print and palm vein images the node distances may differ, and the differences between node distances may be large; to handle node distances that differ greatly, correspondingly different constraint distances would have to be determined. When a large number of palm print and palm vein images must be processed, determining the constraint distances one by one is quite complicated and unfavorable to wide application of the present invention.
Optionally, the distance calculation module may include:
a distance calculation unit, configured to calculate the coordinate distance of any two network nodes according to the pixel coordinates corresponding to the two network nodes; and
a distance normalization unit, configured to normalize the coordinate distance to obtain the node distance.
Normalization takes the ratio of the coordinate distance between the network nodes to the size of the fused image, yielding the corresponding normalized distance.
Normalizing the coordinate distances unifies them, so that the constraint distances are unified into the interval (0, 1]. When a large number of palm print and palm vein images are processed, the same group of constraint distances can then be used, and the constraint distances need not be defined repeatedly. This simplifies the calculation process and facilitates large-scale application of the present invention.
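A minimal sketch of this normalization, assuming the Euclidean distance and the image diagonal as the size measure (the text only states a ratio of coordinate distance to fused-image size, so the exact divisor is an assumption):

```python
import numpy as np

def normalized_node_distance(p, q, width, height):
    """Divide the coordinate distance between two network nodes by the
    image diagonal, so every node distance falls in (0, 1] and one group
    of constraint distances can serve all images."""
    coord_dist = np.hypot(p[0] - q[0], p[1] - q[1])
    return coord_dist / np.hypot(width, height)
```

Two nodes at opposite corners of the image then map to the maximum value 1, and any closer pair maps strictly below it.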
Network construction module 405, configured to build, based on multiple constraint conditions, a complex network from the network nodes satisfying each constraint condition, to obtain multiple complex networks.
The constraint condition may include that the node distance of any network node is less than a constraint distance; the constraint distances of different constraint conditions are different.
The network construction module may include:
Network construction unit 4051, configured to build, based on different constraint distances, a complex network from the network nodes whose node distance to any network node is less than the given constraint distance, to obtain multiple complex networks.
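The multi-network construction could be sketched as follows, one adjacency matrix per constraint distance (the node coordinates and thresholds in the usage below are illustrative only):

```python
import numpy as np

def build_complex_networks(nodes, constraint_distances):
    """For each constraint distance, connect every pair of network nodes
    whose node distance is below that constraint; each threshold yields
    one complex network, here an adjacency matrix without self-loops."""
    nodes = np.asarray(nodes, dtype=float)
    diff = nodes[:, None, :] - nodes[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))     # pairwise node distances
    no_self = ~np.eye(len(nodes), dtype=bool)
    return [(dist < t) & no_self for t in constraint_distances]
```

A tight constraint distance links only nearby pixels, a loose one links almost everything, so the resulting networks encode the palm print / palm vein geometry at several scales, which is exactly why multiple constraint conditions yield complementary network features.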
Feature construction module 406, configured to form the feature to be identified from the network features of the multiple complex networks.
The network nodes are formed from the pixels of the palm print and the palm vein. The multiple complex networks are formed from the lines between pairs of network nodes satisfying the constraint distance; they record the distance relationship between any two network nodes rather than the absolute spatial position of any network node. Since the feature to be identified is formed from the network features of the multiple complex networks, it represents the relative position relationship of any two pixels of the palm print and the palm vein, and is therefore strongly robust to errors such as relative rotation and displacement. That is, each time the palm print image and the palm vein image are captured to obtain the feature to be identified, the result is not affected by rotation, displacement and the like, and a relatively stable feature to be identified can be obtained.
In the embodiment of the present invention, different complex networks are determined by the node distances between pairs of network nodes. Since each complex network is formed from the pixels of the palm print and the palm vein, multiple network features of the palm print and the palm vein can be determined. The network features are thus more accurate, so a higher recognition effect and accuracy can be obtained.
As another embodiment, the image fusion module may include:
Image conversion unit, configured to binarize the palm print image and the palm vein image, converting the pixels corresponding to the palm print and the palm vein into a first value and converting the pixel values of non-palm-print and non-palm-vein pixels into a second value.
First fusion unit, configured to fuse the binarized palm print image and palm vein image to obtain the fused image.
Optionally, the node determining module may include:
Node determining unit, configured to use the pixels in the fused image whose pixel value is the first value as network nodes.
Binarizing the palm print image and the palm vein image means extracting the palm print in the palm print image and the palm vein in the palm vein image, marking the corresponding pixels with the defined first value, and marking the other, non-palm-print and non-palm-vein pixels with the defined second value. The palm print in the palm print image and the palm vein in the palm vein image can thereby be determined clearly.
Optionally, to make the palm print and the palm vein clearer, the first value may be 1 and the second value may be 0.
Optionally, a binarization method may be used to binarize the palm print image and the palm vein image.
The binarization method may be, for example, LBP (Local Binary Patterns) or a mean-window filtering algorithm.
In the embodiment of the present invention, the palm print image and the palm vein image are first converted by binarization before they are fused. Features such as the palm print in the palm print image and the palm vein in the palm vein image are thereby retained, while useless features are rejected, so that the pixels belonging to the palm print and the palm vein can be determined accurately. Using these accurately determined palm print and palm vein pixels as network nodes produces accurate complex networks, from which the feature to be identified can be determined more accurately, further improving the accuracy and validity of the identified feature.
As another embodiment, the feature construction module may include:
First determining unit, configured to determine the node degree of each network node in any complex network.
Feature calculation unit, configured to calculate the network feature of the complex network according to the node degrees.
Feature combination unit, configured to combine the network features of the complex networks in the multiple complex networks.
Second determining unit, configured to use the combined network features as the feature to be identified.
The node degree of a network node is the number of connections between that network node and other network nodes. For example, if a network node is connected to 3 other network nodes, its degree is 3.
Determining the node degree of each network node in a complex network is in effect determining the connection relationship between each network node and the other network nodes, from which the corresponding network feature can be determined.
Calculating the network feature of any complex network may include:
calculating the average degree, degree variance and/or maximum degree of the complex network.
The average degree is the mean of the degrees of all network nodes; the degree variance is the variance of the degrees computed from the average degree and each node degree; the maximum degree is the largest degree among all network nodes.
In the embodiment of the present invention, multiple network features of the fused image are calculated based on the node degree of each network node. Computing with node degrees determines the network feature of each complex network more accurately, yielding more accurate network features and thereby improving the validity and accuracy of identification.
As another embodiment, the image fusion module may include:
Image noise reduction unit, configured to denoise the palm print image and the palm vein image of the user to be identified.
Second fusion unit, configured to fuse the denoised palm print image and palm vein image to obtain the fused image.
Here, denoising the palm print image and the palm vein image of the user to be identified may mean filtering out the high-frequency components in those images.
Optionally, a noise reduction algorithm may be used to filter out the high-frequency components in the palm print image and the palm vein image of the user to be identified. The noise reduction algorithm may be, for example, a wavelet transform, a Kalman filtering algorithm, or a median filtering algorithm.
In the embodiment of the present invention, the palm print image and the palm vein image are denoised before they are fused, which makes the palm print in the palm print image and the palm vein in the palm vein image clearer. With the noise reduced, the pixels corresponding to the palm print and the palm vein are determined more accurately, so more accurate complex networks can be determined and a more accurate feature to be identified obtained, further improving the validity and accuracy of identification.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces and memory.
The memory may include computer-readable media in the form of volatile memory, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette tape, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be accessed by a computing device and used to store information. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Certain terms are used throughout the description and claims to refer to particular components. Those skilled in the art will appreciate that hardware manufacturers may refer to the same component by different names. This description and the claims do not distinguish components by difference in name, but by difference in function. The term "comprising", as used throughout the description and claims, is open-ended and should be construed as "including but not limited to". "Substantially" means that, within an acceptable error range, a person skilled in the art can solve the technical problem and basically achieve the technical effect. In addition, the term "coupled" herein covers any means of direct or indirect electrical coupling; thus, if a first device is said to be coupled to a second device, the first device may be electrically coupled to the second device directly, or electrically coupled to it indirectly through other devices or coupling means. The subsequent description sets forth preferred embodiments for implementing the present invention; it is given for the purpose of illustrating the principles of the present invention and is not intended to limit the scope of the present invention, which is defined by the appended claims.
It should also be noted that the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that goods or a system including a series of elements include not only those elements but also other elements not expressly listed, or elements inherent to such goods or system. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the goods or system that include the element.
The foregoing has shown and described some preferred embodiments of the present invention, but as stated above, it should be understood that the present invention is not limited to the form disclosed herein, which should not be regarded as excluding other embodiments; the invention can be used in various other combinations, modifications and environments, and can be modified, within the scope contemplated herein, through the above teachings or the technology or knowledge of the related art. Changes and modifications made by those skilled in the art that do not depart from the spirit and scope of the present invention shall fall within the protection scope of the appended claims.

Claims (14)

  1. A biometric recognition method, characterized by comprising:
    capturing a palm print image and a palm vein image of a user to be identified;
    fusing the palm print image and the palm vein image to obtain a fused image;
    using pixels corresponding to the palm print and the palm vein in the fused image as network nodes;
    building, based on multiple constraint conditions, a complex network from the network nodes satisfying each constraint condition, to obtain multiple complex networks; and
    forming a feature to be identified from the network features of the multiple complex networks.
  2. The method according to claim 1, characterized in that the constraint condition includes that the node distance of any network node is less than a constraint distance, and the constraint distances of different constraint conditions are different;
    the method further comprises:
    calculating the node distance of any two network nodes;
    and building, based on multiple constraint conditions, a complex network from the network nodes satisfying each constraint condition to obtain multiple complex networks comprises:
    building, based on different constraint distances, a complex network from the network nodes whose node distance to any network node is less than the given constraint distance, to obtain multiple complex networks.
  3. The method according to claim 2, characterized in that calculating the node distance of any two network nodes comprises:
    calculating the coordinate distance of the two network nodes according to the pixel coordinates corresponding to the two network nodes; and
    normalizing the coordinate distance to obtain the node distance.
  4. The method according to claim 1, characterized in that fusing the palm print image and the palm vein image to obtain a fused image comprises:
    binarizing the palm print image and the palm vein image, converting the pixels corresponding to the palm print and the palm vein into a first value and converting the pixel values of non-palm-print and non-palm-vein pixels into a second value; and
    fusing the binarized palm print image and palm vein image to obtain the fused image.
  5. The method according to claim 4, characterized in that using pixels corresponding to the palm print and the palm vein in the fused image as network nodes comprises:
    using the pixels in the fused image whose pixel value is the first value as network nodes.
  6. The method according to claim 1, characterized in that forming a feature to be identified from the network features of the multiple complex networks comprises:
    determining the node degree of each network node in any complex network;
    calculating the network feature of the complex network according to the node degrees;
    combining the network features of the complex networks in the multiple complex networks; and
    using the combined network features as the feature to be identified.
  7. The method according to claim 1, characterized in that fusing the palm print image and the palm vein image to obtain a fused image comprises:
    denoising the palm print image and the palm vein image of the user to be identified; and
    fusing the denoised palm print image and palm vein image to obtain the fused image.
  8. A biometric recognition device, characterized by comprising:
    an image capture module, configured to capture a palm print image and a palm vein image of a user to be identified;
    an image fusion module, configured to fuse the palm print image and the palm vein image to obtain a fused image;
    a node determining module, configured to use pixels corresponding to the palm print and the palm vein in the fused image as network nodes;
    a network construction module, configured to build, based on multiple constraint conditions, a complex network from the network nodes satisfying each constraint condition, to obtain multiple complex networks; and
    a feature construction module, configured to form a feature to be identified from the network features of the multiple complex networks.
  9. The device according to claim 8, characterized in that the constraint condition includes that the node distance of any network node is less than a constraint distance, and the constraint distances of different constraint conditions are different;
    the device further comprises:
    a distance calculation module, configured to calculate the node distance of any two network nodes;
    and the network construction module comprises:
    a network construction unit, configured to build, based on different constraint distances, a complex network from the network nodes whose node distance to any network node is less than the given constraint distance, to obtain multiple complex networks.
  10. The device according to claim 9, characterized in that the distance calculation module comprises:
    a distance calculation unit, configured to calculate the coordinate distance of any two network nodes according to the pixel coordinates corresponding to the two network nodes; and
    a distance normalization unit, configured to normalize the coordinate distance to obtain the node distance.
  11. The device according to claim 8, characterized in that the image fusion module comprises:
    an image conversion unit, configured to binarize the palm print image and the palm vein image, converting the pixels corresponding to the palm print and the palm vein into a first value and converting the pixel values of non-palm-print and non-palm-vein pixels into a second value; and
    a first fusion unit, configured to fuse the binarized palm print image and palm vein image to obtain the fused image.
  12. The device according to claim 11, characterized in that the node determining module comprises:
    a node determining unit, configured to use the pixels in the fused image whose pixel value is the first value as network nodes.
  13. The device according to claim 8, characterized in that the feature construction module comprises:
    a first determining unit, configured to determine the node degree of each network node in any complex network;
    a feature calculation unit, configured to calculate the network feature of the complex network according to the node degrees;
    a feature combination unit, configured to combine the network features of the complex networks in the multiple complex networks; and
    a second determining unit, configured to use the combined network features as the feature to be identified.
  14. The device according to claim 8, characterized in that the image fusion module comprises:
    an image noise reduction unit, configured to denoise the palm print image and the palm vein image of the user to be identified; and
    a second fusion unit, configured to fuse the denoised palm print image and palm vein image to obtain the fused image.
CN201710637494.8A 2017-07-31 2017-07-31 Biological feather recognition method and device Active CN107403161B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710637494.8A CN107403161B (en) 2017-07-31 2017-07-31 Biological feather recognition method and device
PCT/CN2017/113585 WO2019024350A1 (en) 2017-07-31 2017-11-29 Biometric recognition method and apparatus

Publications (2)

Publication Number Publication Date
CN107403161A true CN107403161A (en) 2017-11-28
CN107403161B CN107403161B (en) 2019-07-05

Family

ID=60401655


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019024350A1 (en) * 2017-07-31 2019-02-07 歌尔科技有限公司 Biometric recognition method and apparatus
CN109344849A (en) * 2018-07-27 2019-02-15 广东工业大学 A kind of complex network image-recognizing method based on constitutional balance theory
CN109614988A (en) * 2018-11-12 2019-04-12 国家电网有限公司 A kind of biometric discrimination method and device
TWI678661B (en) * 2018-08-31 2019-12-01 中華電信股份有限公司 Palm print recognition apparatus and method having data extension
CN117542090A (en) * 2023-11-25 2024-02-09 一脉通(深圳)智能科技有限公司 Palm print vein recognition method based on fusion network and SIF characteristics

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100045788A1 (en) * 2008-08-19 2010-02-25 The Hong Kong Polytechnic University Method and Apparatus for Personal Identification Using Palmprint and Palm Vein
CN103116741A (en) * 2013-01-28 2013-05-22 天津理工大学 Capture and identification system for blending images of palm veins and palm prints
CN103200096A (en) * 2013-03-13 2013-07-10 南京理工大学 Heuristic routing method avoiding key nodes in complex network
CN106022218A (en) * 2016-05-06 2016-10-12 浙江工业大学 Palmprint and palm vein image-layer fusion method based on wavelet transform and Gabor filters
CN106548134A (en) * 2016-10-17 2017-03-29 沈阳化工大学 Palmprint and palm vein fusion recognition method combining GA-optimized SVM with normalization

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107403161B (en) * 2017-07-31 2019-07-05 歌尔科技有限公司 Biometric recognition method and device

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019024350A1 (en) * 2017-07-31 2019-02-07 歌尔科技有限公司 Biometric recognition method and apparatus
CN109344849A (en) * 2018-07-27 2019-02-15 广东工业大学 Complex network image recognition method based on structural balance theory
CN109344849B (en) * 2018-07-27 2022-03-11 广东工业大学 Complex network image recognition method based on structural balance theory
TWI678661B (en) * 2018-08-31 2019-12-01 中華電信股份有限公司 Palm print recognition apparatus and method having data extension
CN109614988A (en) * 2018-11-12 2019-04-12 国家电网有限公司 Biometric identification method and device
CN109614988B (en) * 2018-11-12 2020-05-12 国家电网有限公司 Biological identification method and device
CN117542090A (en) * 2023-11-25 2024-02-09 一脉通(深圳)智能科技有限公司 Palm print vein recognition method based on fusion network and SIF characteristics
CN117542090B (en) * 2023-11-25 2024-06-18 一脉通(深圳)智能科技有限公司 Palm print vein recognition method based on fusion network and SIF characteristics

Also Published As

Publication number Publication date
CN107403161B (en) 2019-07-05
WO2019024350A1 (en) 2019-02-07

Similar Documents

Publication Publication Date Title
CN107403161A (en) Biometric recognition method and device
Dibeklioğlu et al. Combining facial dynamics with appearance for age estimation
Dua et al. Biometric iris recognition using radial basis function neural network
Ming et al. A survey on anti-spoofing methods for facial recognition with rgb cameras of generic consumer devices
Yuan et al. Deep residual network with adaptive learning framework for fingerprint liveness detection
Raghavendra et al. Novel image fusion scheme based on dependency measure for robust multispectral palmprint recognition
Ullah et al. A Real‐Time Framework for Human Face Detection and Recognition in CCTV Images
CN108345818B (en) Face living body detection method and device
TW202006602A (en) Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
Seal et al. Human face recognition using random forest based fusion of à-trous wavelet transform coefficients from thermal and visible images
KR20160101973A (en) System and method for identifying faces in unconstrained media
Malgheet et al. Iris recognition development techniques: a comprehensive review
CN111222380B (en) Living body detection method and device and recognition model training method thereof
Karampidis et al. A comprehensive survey of fingerprint presentation attack detection
CN111178130A (en) Face recognition method, system and readable storage medium based on deep learning
Sabharwal et al. Recognition of surgically altered face images: an empirical analysis on recent advances
Li et al. Design and learn distinctive features from pore-scale facial keypoints
Benzaoui et al. A comprehensive survey on ear recognition: databases, approaches, comparative analysis, and open challenges
Sams et al. Hq-fingan: High-quality synthetic fingerprint generation using gans
Dong et al. FusionPID: A PID control system for the fusion of infrared and visible light images
CN111723762B (en) Face attribute identification method and device, electronic equipment and storage medium
Samatas et al. Biometrics: going 3D
Hizem et al. Face recognition from synchronised visible and near-infrared images
Mr et al. Developing a novel technique to match composite sketches with images captured by unmanned aerial vehicle
Suh et al. Face liveness detection for face recognition based on cardiac features of skin color image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant