CN115830686A - Biological recognition method, system, device and storage medium based on feature fusion - Google Patents


Info

Publication number: CN115830686A
Application number: CN202211601733.1A
Authority: CN (China)
Prior art keywords: fusion, image, feature, feature information, finger vein
Legal status: Pending (the legal status is an assumption and not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventor: 肖红梅
Current assignee: Yunzhi Intelligent Technology Guangzhou Co., Ltd. (the listed assignees may be inaccurate)
Original assignee: Yunzhi Intelligent Technology Guangzhou Co., Ltd.
Application filed by Yunzhi Intelligent Technology Guangzhou Co., Ltd.
Priority to CN202211601733.1A
Publication of CN115830686A

Landscapes

  • Collating Specific Patterns (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a biometric recognition method, system, device, and storage medium based on feature fusion. A face image and a finger vein image of a target object are acquired and preprocessed, and image features are extracted from the preprocessed face image and finger vein image to obtain face feature information and finger vein feature information. The face feature information and the finger vein feature information undergo a primary feature fusion to obtain first fusion feature information; the first fusion feature information, the face feature information, and the finger vein feature information then undergo a secondary feature fusion to obtain second fusion feature information. The second fusion feature information is fed into a trained classifier for feature classification, and the biometric recognition result of the target object is determined from the classification result. The method improves the efficiency and accuracy of biometric recognition of the target object, compensates for the shortcomings and application limitations of traditional single-modal biometric recognition, and expands the application scenarios of biometric recognition.

Description

Biological recognition method, system, device and storage medium based on feature fusion
Technical Field
The invention belongs to the technical field of biological identification, and particularly relates to a biological identification method, a biological identification system, a biological identification device and a storage medium based on feature fusion.
Background
In recent years, with the rapid development of information technology and the wide deployment of intelligent devices, identity recognition technologies such as fingerprint recognition, face recognition, iris recognition, and voice recognition have been applied throughout daily life. In practice, however, single-modal biometric recognition is affected by the external environment and limited by the single biometric feature itself, which greatly restricts its application scenarios and reduces recognition accuracy. For example, fingerprint acquisition and recognition accuracy suffers when the fingerprint is damaged or wet; face recognition efficiency is affected by aging and by the wearing of masks; and iris recognition accuracy is affected by ambient illumination and the wearing of glasses. How to compensate for the shortcomings of single biometric features and improve identity recognition accuracy has therefore become an urgent problem in the field of biometric recognition.
Disclosure of Invention
It is an object of the present invention to provide a biometric method, system, device and storage medium based on feature fusion to solve the above-mentioned problems in the prior art.
In order to achieve the purpose, the invention adopts the following technical scheme:
in a first aspect, a biometric identification method based on feature fusion is provided, including:
acquiring a face image and a finger vein image of a target object;
carrying out image preprocessing on the face image and the finger vein image to obtain a preprocessed face image and a preprocessed finger vein image;
respectively extracting image characteristics of the preprocessed face image and the preprocessed finger vein image to obtain face characteristic information and finger vein characteristic information;
performing primary feature fusion on the face feature information and the finger vein feature information to obtain first fusion feature information, and performing secondary feature fusion on the first fusion feature information, the face feature information and the finger vein feature information to obtain second fusion feature information;
importing the second fusion feature information into a trained classifier for feature classification to obtain a classification result;
and judging to obtain a biological recognition result of the target object according to the classification result.
Based on the above technical content, feature extraction is performed on the face image and the finger vein image of the target object, and bimodal feature fusion then produces bimodal fusion features that maximize the retained feature information. Feature classification is performed on the bimodal fusion features, and the biometric recognition result of the target object is obtained from the classification result. This improves the efficiency and accuracy of biometric recognition of the target object, compensates for the shortcomings and application limitations of traditional single-modal biometric recognition, and expands the application scenarios of biometric recognition.
In a possible design, the extracting the image features of the preprocessed face image and the preprocessed finger vein image to obtain the face feature information and the finger vein feature information respectively includes:
and importing the preprocessed face image and the finger vein image into a dual-channel convolution neural network model, and respectively carrying out dual-channel feature extraction on the preprocessed face image and the preprocessed finger vein image through the dual-channel convolution neural network model to obtain face feature information and finger vein feature information.
In one possible design, the performing two-channel feature extraction on the preprocessed face image and the preprocessed finger vein image respectively through a two-channel convolutional neural network model includes:
and respectively carrying out double-channel feature extraction on the preprocessed face image and the preprocessed finger vein image through a multilayer convolution layer and a pooling layer of a double-channel convolution neural network model.
In a possible design, the preliminary feature fusion is performed on the face feature information and the finger vein feature information to obtain first fusion feature information, including:
the face feature information and the finger vein feature information are imported into the two-channel convolutional neural network model, reduced in dimension by a Fusion Conv layer, and then processed by a Softmax layer to obtain their respective self-attention weights;
and performing preliminary feature fusion on the face feature information and the finger vein feature information according to respective self-attention weights to obtain first fusion feature information.
In a possible design, performing secondary feature fusion on the first fusion feature information, the face feature information and the finger vein feature information to obtain second fusion feature information includes: and performing secondary feature fusion on the first fusion feature information, the face feature information and the finger vein feature information based on a ResNet residual error structure to obtain second fusion feature information.
In one possible design, the classifier is a full connection layer obtained by training a two-channel convolutional neural network model through a classification sample.
In one possible design, the image preprocessing of the face image and the finger vein image includes: performing gray-level normalization, image denoising, and image enhancement on the face image, and performing ROI (Region of Interest) extraction, image enhancement, and image denoising on the finger vein image.
In a second aspect, a biometric identification system based on feature fusion is provided, which includes an acquisition unit, a preprocessing unit, a feature extraction unit, a feature fusion unit, a classification unit, and a determination unit, wherein:
an image acquisition unit for acquiring a face image and a finger vein image of a target object;
the preprocessing unit is used for preprocessing the face image and the finger vein image to obtain a preprocessed face image and a preprocessed finger vein image;
the feature extraction unit is used for respectively extracting image features of the preprocessed face image and the preprocessed finger vein image to obtain face feature information and finger vein feature information;
the feature fusion unit is used for carrying out primary feature fusion on the face feature information and the finger vein feature information to obtain first fusion feature information, and then carrying out secondary feature fusion on the first fusion feature information, the face feature information and the finger vein feature information to obtain second fusion feature information;
the classification unit is used for importing the second fusion characteristic information into a trained classifier for characteristic classification to obtain a classification result;
and the judging unit is used for judging to obtain the biological recognition result of the target object according to the classification result.
In a third aspect, there is provided a feature fusion based biometric device comprising:
a memory to store instructions;
a processor configured to read the instructions stored in the memory and execute the method of any of the first aspects according to the instructions.
In a fourth aspect, there is provided a computer readable storage medium having instructions stored thereon, which when executed on a computer, cause the computer to perform the method of any of the first aspects. Also, a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any of the first aspects is provided.
Beneficial effects: features are extracted from the face image and the finger vein image of the target object, and bimodal feature fusion through a two-channel convolutional neural network model produces bimodal fusion features that maximize the retained feature information. Feature classification is performed on the bimodal fusion features, and the biometric recognition result of the target object is determined from the classification result. This improves the efficiency and accuracy of biometric recognition of the target object, compensates for the shortcomings and application limitations of traditional single-modal biometric recognition, and expands the application scenarios of biometric recognition.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of the method steps of an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an apparatus according to an embodiment of the present invention.
Detailed Description
It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely illustrative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It is to be understood that, unless expressly stated or limited otherwise, the term "connected" is to be interpreted broadly: the connection may be fixed, detachable, or integral; mechanical or electrical; direct or through an intervening medium; or an internal communication between two elements. The specific meanings of the above terms in the embodiments can be understood by those of ordinary skill in the art according to the specific situation.
In the following description, specific details are provided to facilitate a thorough understanding of example embodiments. However, it will be understood by those of ordinary skill in the art that the example embodiments may be practiced without these specific details. For example, systems may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.
Example 1:
the embodiment provides a biometric identification method based on feature fusion, as shown in fig. 1, the method comprises the following steps:
s1, acquiring a face image and a finger vein image of a target object.
In a specific implementation, the face image of the target object can be captured by a corresponding camera, and the finger vein image by a corresponding finger vein terminal. Among biometric features, the face is the most natural and evident feature for individual recognition, while finger veins are in-vivo biometric features that are hard to damage, forge, or copy, and the vein pattern differs from person to person and from finger to finger. Using both the face image and the finger vein image of the target object for subsequent recognition therefore effectively improves recognition accuracy.
And S2, carrying out image preprocessing on the face image and the finger vein image to obtain a preprocessed face image and a preprocessed finger vein image.
In a specific implementation, the image preprocessing of the face image and the finger vein image is as follows. For the face image: gray-level normalization, image denoising, and image enhancement. Gray-level normalization removes the influence of lighting on subsequent processing, and denoising removes interference from non-face regions and sharpens the face region. Enhancement can be performed by wavelet transform: the face image is first decomposed into components of different scales, positions, and directions; the components to be emphasized are then amplified as required while unnecessary components are attenuated; finally, the inverse wavelet transform yields the enhanced face image. For the finger vein image: ROI (Region of Interest) extraction, image enhancement, and image denoising. ROI extraction removes excess useless background information. First, a Prewitt edge-detection operator detects the upper and lower finger edges in the vertical direction of the original finger vein image; spurious edges are removed by thresholding on connected-component size. The central axis of the finger is then fitted by least-squares linear regression, and the image is rotated to correct for the angle between the fitted line and the horizontal. Finally, internal tangents to the upper and lower finger edges are fitted, the finger joints are located from the brightness trend along the horizontal direction of the image (brightness peaks), and the finger vein region is cropped.
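The rotation-correction step above can be sketched as follows. The midline computed from precomputed upper and lower edge coordinates stands in for the Prewitt-based edge detection, which is not reproduced here:

```python
import numpy as np

def rotation_angle_from_edges(upper_edge, lower_edge):
    """Fit the finger's central axis by least squares and return the
    angle (in degrees) by which the image must be rotated so that the
    axis becomes horizontal.

    upper_edge, lower_edge: 1-D arrays of edge row-coordinates, one
    value per image column (as produced by vertical edge detection).
    """
    upper = np.asarray(upper_edge, dtype=float)
    lower = np.asarray(lower_edge, dtype=float)
    cols = np.arange(len(upper))
    midline = (upper + lower) / 2.0          # samples of the central axis
    slope, _ = np.polyfit(cols, midline, 1)  # least-squares line fit
    return np.degrees(np.arctan(slope))      # angle vs. the horizontal

# a finger tilted so its axis rises 1 row every 2 columns
cols = np.arange(100)
angle = rotation_angle_from_edges(10 + 0.5 * cols, 50 + 0.5 * cols)
```

The returned angle would then be passed to an image-rotation routine before the ROI crop.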
To obtain clear vein lines, CLAHE (Contrast-Limited Adaptive Histogram Equalization) is applied to the cropped ROI image, and a Gabor filter is applied after CLAHE to remove the noise amplified by the enhancement. Compared with the original image, the ROI image enhanced by CLAHE and Gabor filtering shows clearly visible vein lines.
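The "contrast-limited" part of CLAHE caps how much any single gray level can dominate the equalization. A minimal single-tile numpy sketch of the clipping idea follows; real CLAHE (e.g. OpenCV's `cv2.createCLAHE`) applies this per tile and bilinearly interpolates between tiles, and the Gabor filtering stage is omitted here:

```python
import numpy as np

def clip_limited_equalize(img, clip_limit=0.03, nbins=256):
    """Single-tile sketch of contrast-limited histogram equalization
    on a uint8 image: clip the normalized histogram, redistribute the
    clipped excess uniformly, then equalize via the resulting CDF.
    """
    hist, _ = np.histogram(img, bins=nbins, range=(0, 256))
    hist = hist.astype(float) / img.size
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess / nbins  # redistribute excess
    cdf = np.cumsum(hist)
    lut = np.round(255 * cdf).astype(np.uint8)  # gray-level lookup table
    return lut[img]

rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 140, size=(64, 64), dtype=np.uint8)
enhanced = clip_limited_equalize(low_contrast)
```

On the low-contrast input above, the equalized output spans a much wider gray range, which is the effect the patent relies on to make vein lines visible.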
And S3, respectively carrying out image feature extraction on the preprocessed face image and the preprocessed finger vein image to obtain face feature information and finger vein feature information.
In a specific implementation, the preprocessed face image and finger vein image are imported into a preset two-channel convolutional neural network model, which performs two-channel feature extraction on them to obtain the face feature information and the finger vein feature information. The two-channel convolutional neural network model comprises a feature extraction module, a feature fusion module, and a classification-recognition module. The feature extraction module performs two-channel feature extraction on the preprocessed face image and finger vein image through multiple convolutional layers and pooling layers. The feature extraction module can adopt the first five convolutional layers of an AlexNet network as its convolutional layers. AlexNet is a relatively simple CNN model; the whole network has 8 layers: 5 convolutional layers and 3 fully connected layers. The first convolutional layer receives a 224x224x3 image and applies 96 convolution kernels of size 11x11 to obtain a 55x55x96 feature map. A ReLU activation function follows each convolutional layer, and max pooling follows several of them; the fifth convolutional layer outputs a feature map of size 13x13x256. The first 16 convolutional layers of a VGG-19 network or the feature layers of a MobileNetV2 network may also be used.
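As a quick sanity check on the feature-map sizes quoted above, the standard convolution output-size formula reproduces the 224x224 -> 55x55 shape of the first AlexNet layer, assuming stride 4 and padding 2 (the configuration used in torchvision's AlexNet; the patent does not state these hyperparameters):

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a conv/pool layer:
    floor((in + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# AlexNet conv1: 224x224 input, 11x11 kernel, stride 4, padding 2 -> 55x55
conv1 = conv_out(224, kernel=11, stride=4, padding=2)
# 3x3 max pooling with stride 2 then shrinks 55 -> 27
pool1 = conv_out(conv1, kernel=3, stride=2)
```

The same formula, applied layer by layer, lets one verify the spatial dimensions of any of the extractor variants (AlexNet, VGG-19, MobileNetV2) mentioned in the text.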
And S4, performing primary feature fusion on the face feature information and the finger vein feature information to obtain first fusion feature information, and performing secondary feature fusion on the first fusion feature information, the face feature information and the finger vein feature information to obtain second fusion feature information.
In a specific implementation, after the face feature information and the finger vein feature information are imported into the feature fusion module of the two-channel convolutional neural network model, they are reduced in dimension by a Fusion Conv layer and then processed by a Softmax layer to obtain their respective self-attention weights. The feature fusion module performs a primary feature fusion of the face feature information and the finger vein feature information according to these self-attention weights to obtain the first fusion feature information, and then performs a secondary feature fusion of the first fusion feature information, the face feature information, and the finger vein feature information based on a ResNet residual structure to obtain the second fusion feature information.
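The two-stage fusion can be sketched in numpy as below. The scalar summary used to derive the attention scores is an assumption standing in for the Fusion Conv dimension reduction, which the patent does not fully specify; the residual second stage follows the ResNet pattern named in the text:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def two_stage_fusion(face_feat, vein_feat):
    """Sketch of the patent's two-stage fusion.

    Stage 1: softmax over a per-modality scalar summary (a hypothetical
    stand-in for the Fusion Conv dimension reduction) gives self-attention
    weights; the primary fusion is the weighted sum of the two maps.
    Stage 2: a ResNet-style residual combination adds the original
    modality features back onto the first-stage fusion.
    """
    scores = np.array([face_feat.mean(), vein_feat.mean()])
    w_face, w_vein = softmax(scores)                 # self-attention weights
    first = w_face * face_feat + w_vein * vein_feat  # primary fusion
    second = first + face_feat + vein_feat           # residual secondary fusion
    return first, second

face = np.ones((4, 4))
vein = np.ones((4, 4)) * 3.0
first, second = two_stage_fusion(face, vein)
```

With these inputs the softmax weights sum to 1 and the primary fusion lands between the two modality values, pulled toward the higher-scoring one.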
And S5, importing the second fusion feature information into the trained classifier to perform feature classification, and obtaining a classification result.
In a specific implementation, the classifier is a fully connected layer obtained by training the two-channel convolutional neural network model on classification samples. The fully connected layer is contained in the classification-recognition module: the second fusion feature information is imported into the classification-recognition module, and the fully connected layer performs feature classification to obtain the classification result.
A typical convolutional neural network model is mainly composed of convolutional layers, pooling layers, and fully connected layers. A convolutional layer slides a convolution kernel of a certain size (typically an n x n square) over the input image and performs a pixel-wise convolution to detect information in the image. The first convolutional layer extracts low-level features such as color and edges; the more convolutional layers an image passes through, the deeper and more complex the extracted features, and the more information they carry. Convolutional neural network models of different depths can be designed according to the input image and the actual requirements. The convolutional layers are responsible for extracting image features and are the core layers of the whole convolutional neural network model. The pooling layer compresses the feature map output by the convolutional layer and extracts its main features, greatly reducing data dimensionality; common choices are max pooling and average pooling. To avoid overfitting in the fusion module and to reduce the computation after fusion, the bimodal fusion method of this embodiment adopts adaptive mean pooling. The fully connected layer takes the features output by the convolutional and pooling layers and is ultimately used for classification and recognition. A ReLU activation function in the fully connected layer improves the performance of the CNN, and Dropout is used to prevent overfitting of the model.
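The adaptive mean pooling adopted above divides the feature map into an output-sized grid and averages each cell. A minimal numpy sketch, simplified to inputs whose side lengths are exact multiples of the target size (PyTorch's `AdaptiveAvgPool2d` handles the general case with variable-sized cells):

```python
import numpy as np

def adaptive_mean_pool2d(x, out_h, out_w):
    """Average-pool an (H, W) feature map down to (out_h, out_w).
    Simplified sketch: assumes H and W are exact multiples of the
    target size, so every pooling cell has the same shape.
    """
    h, w = x.shape
    assert h % out_h == 0 and w % out_w == 0
    return x.reshape(out_h, h // out_h, out_w, w // out_w).mean(axis=(1, 3))

feat = np.arange(16, dtype=float).reshape(4, 4)  # 4x4 feature map
pooled = adaptive_mean_pool2d(feat, 2, 2)        # -> 2x2
```

Because the output size is fixed regardless of the input size, the fused features always present the same dimensionality to the fully connected classifier, which is why this pooling also caps the post-fusion computation.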
And S6, judging to obtain a biological recognition result of the target object according to the classification result.
In specific implementation, after the classification recognition result is obtained through the two-channel convolutional neural network model, the biological recognition result of the target object can be judged according to the classification recognition result.
To verify the effectiveness of the bimodal feature extraction, fusion, and recognition method of this embodiment, and to show its advantage over single-modal biometric recognition, tests were run on the public finger vein datasets SDUMLA-FV and Finger Vein USM (FV-USM) and the public face dataset CASIA-WebFace. The SDUMLA-FV dataset contains 6 images each of the index, middle, and ring fingers of both hands of 106 individuals: 636 (106 x 6) finger classes and 3816 (106 x 6 x 6) images. The FV-USM dataset contains 6 images each of the index and middle fingers of both hands of 123 individuals: 492 (123 x 4) finger classes and 2952 (123 x 4 x 6) images; it provides cropped ROI images to facilitate finger vein processing. The CASIA-WebFace dataset is one of the most widely used public face recognition datasets, a collection of 494414 web face images in 10575 classes. In the tests, a number of face classes matching the number of finger vein classes was randomly selected from the face dataset. According to the test results, the recognition method based on bimodal feature fusion achieves a feature fusion recognition accuracy of 99.80% on SDUMLA-FV with CASIA-WebFace and 99.95% on FV-USM with CASIA-WebFace, a large improvement over the single-modal recognition accuracies.
Example 2:
the present embodiment provides a biometric identification system based on feature fusion, as shown in fig. 2, including an acquisition unit, a preprocessing unit, a feature extraction unit, a feature fusion unit, a classification unit, and a determination unit, wherein:
an image acquisition unit for acquiring a face image and a finger vein image of a target object;
the preprocessing unit is used for preprocessing the face image and the finger vein image to obtain a preprocessed face image and a preprocessed finger vein image;
the feature extraction unit is used for respectively extracting image features of the preprocessed face image and the preprocessed finger vein image to obtain face feature information and finger vein feature information;
the feature fusion unit is used for carrying out primary feature fusion on the face feature information and the finger vein feature information to obtain first fusion feature information, and then carrying out secondary feature fusion on the first fusion feature information, the face feature information and the finger vein feature information to obtain second fusion feature information;
the classification unit is used for importing the second fusion characteristic information into a trained classifier for characteristic classification to obtain a classification result;
and the judging unit is used for judging to obtain the biological recognition result of the target object according to the classification result.
Example 3:
the embodiment provides a biometric device based on feature fusion, as shown in fig. 3, at a hardware level, including:
the data interface is used for establishing data butt joint between the processor and the image acquisition device so as to acquire a corresponding face image and a corresponding finger vein image;
a memory to store instructions;
and the processor is used for reading the instructions stored in the memory and executing the biometric identification method based on the feature fusion in the embodiment 1 according to the instructions.
Optionally, the device further comprises an internal bus. The processor, the memory, and the display may be connected to each other via the internal bus, which may be an ISA (Industry Standard Architecture) bus, a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on.
The memory may include, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Flash memory, First-In-First-Out (FIFO) memory, and/or First-In-Last-Out (FILO) memory. The processor may be a general-purpose processor, including a Central Processing Unit (CPU) or a Network Processor (NP); it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
Example 4:
the present embodiment provides a computer-readable storage medium, which stores instructions that, when executed on a computer, cause the computer to execute the biometric identification method based on feature fusion in embodiment 1, wherein the computer-readable storage medium refers to a carrier for storing data, which may include but is not limited to a floppy disk, an optical disk, a hard disk, a flash Memory, a flash disk and/or a Memory Stick (Memory Stick), and the like, and the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable system. The present embodiment also provides a computer program product containing instructions that, when executed on a computer, cause the computer to perform the biometric identification method based on feature fusion of embodiment 1. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable system.
Finally, it should be noted that: the above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. The biological identification method based on feature fusion is characterized by comprising the following steps:
acquiring a face image and a finger vein image of a target object;
carrying out image preprocessing on the face image and the finger vein image to obtain a preprocessed face image and a preprocessed finger vein image;
respectively extracting image characteristics of the preprocessed face image and the preprocessed finger vein image to obtain face characteristic information and finger vein characteristic information;
performing primary feature fusion on the face feature information and the finger vein feature information to obtain first fusion feature information, and performing secondary feature fusion on the first fusion feature information, the face feature information and the finger vein feature information to obtain second fusion feature information;
importing the second fusion feature information into a trained classifier for feature classification to obtain a classification result;
and judging to obtain a biological recognition result of the target object according to the classification result.
2. The biometric identification method based on feature fusion of claim 1, wherein the image feature extraction is performed on the preprocessed face image and the preprocessed finger vein image respectively to obtain the face feature information and the finger vein feature information, and the method comprises the following steps:
and importing the preprocessed face image and the finger vein image into a preset double-channel convolution neural network model, and respectively carrying out double-channel feature extraction on the preprocessed face image and the finger vein image through the double-channel convolution neural network model to obtain face feature information and finger vein feature information.
3. The biometric identification method based on feature fusion according to claim 2, wherein performing two-channel feature extraction on the preprocessed face image and the preprocessed finger vein image respectively through the two-channel convolutional neural network model comprises:
performing two-channel feature extraction on the preprocessed face image and the preprocessed finger vein image respectively through multiple convolutional layers and pooling layers of the two-channel convolutional neural network model.
4. The biometric identification method based on feature fusion according to claim 2, wherein performing primary feature fusion on the face feature information and the finger vein feature information to obtain the first fusion feature information comprises:
importing the face feature information and the finger vein feature information into the two-channel convolutional neural network model, performing Fusion Conv dimensionality reduction on each, and then processing each through a Softmax layer to obtain respective self-attention weights; and
performing primary feature fusion on the face feature information and the finger vein feature information according to the respective self-attention weights to obtain the first fusion feature information.
5. The biometric identification method based on feature fusion according to claim 4, wherein performing secondary feature fusion on the first fusion feature information, the face feature information, and the finger vein feature information to obtain the second fusion feature information comprises: performing secondary feature fusion on the first fusion feature information, the face feature information, and the finger vein feature information based on a ResNet residual structure to obtain the second fusion feature information.
6. The biometric identification method based on feature fusion according to claim 2, wherein the classifier is a fully connected layer obtained by training the two-channel convolutional neural network model with classification samples.
7. The biometric identification method based on feature fusion according to claim 1, wherein performing image preprocessing on the face image and the finger vein image comprises: performing gray-level normalization, image denoising, and image enhancement on the face image, and performing ROI region extraction, image enhancement, and image denoising on the finger vein image.
8. A biometric identification system based on feature fusion, characterized by comprising an image acquisition unit, a preprocessing unit, a feature extraction unit, a feature fusion unit, a classification unit, and a determination unit, wherein:
the image acquisition unit is configured to acquire a face image and a finger vein image of a target object;
the preprocessing unit is configured to perform image preprocessing on the face image and the finger vein image to obtain a preprocessed face image and a preprocessed finger vein image;
the feature extraction unit is configured to perform image feature extraction on the preprocessed face image and the preprocessed finger vein image respectively to obtain face feature information and finger vein feature information;
the feature fusion unit is configured to perform primary feature fusion on the face feature information and the finger vein feature information to obtain first fusion feature information, and then perform secondary feature fusion on the first fusion feature information, the face feature information, and the finger vein feature information to obtain second fusion feature information;
the classification unit is configured to feed the second fusion feature information into a trained classifier for feature classification to obtain a classification result; and
the determination unit is configured to determine a biometric identification result of the target object according to the classification result.
9. A biometric identification device based on feature fusion, comprising:
a memory configured to store instructions; and
a processor configured to read the instructions stored in the memory and execute the method according to any one of claims 1-7 in accordance with the instructions.
10. A computer-readable storage medium having instructions stored thereon which, when executed on a computer, cause the computer to perform the method according to any one of claims 1-7.
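The per-modality extraction in claim 3 (multiple convolutional and pooling layers inside a two-channel network) can be illustrated with a minimal pure-Python sketch. The 1-D inputs, the single conv/pool pair, and the edge-detecting kernel are illustrative assumptions, not the patent's actual network architecture or weights:

```python
def conv1d(x, kernel):
    # Valid (no-padding) 1-D convolution: slide the kernel over the input.
    k = len(kernel)
    return [sum(x[i + j] * kernel[j] for j in range(k))
            for i in range(len(x) - k + 1)]

def maxpool1d(x, size=2):
    # Non-overlapping max pooling with the given window size.
    return [max(x[i:i + size]) for i in range(0, len(x) - size + 1, size)]

def extract_channel(signal, kernel):
    # One convolutional layer followed by one pooling layer for one channel;
    # the claimed model would stack several such pairs per modality.
    return maxpool1d(conv1d(signal, kernel))

# Each modality runs through its own channel with its own weights.
face_feat = extract_channel([1.0, 2.0, 3.0, 4.0, 5.0], [1.0, -1.0])
vein_feat = extract_channel([5.0, 4.0, 3.0, 2.0, 1.0], [1.0, -1.0])
```

A real implementation would use 2-D convolutions over images; the 1-D version only shows the layer ordering the claim describes.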
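The primary fusion in claim 4 weights each modality by a Softmax-derived self-attention weight. A minimal sketch follows, in which the two scalar scores stand in for the output of the Fusion Conv dimensionality reduction (an assumption, since the patent does not give that layer's form):

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of per-modality scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(face_feat, vein_feat, face_score, vein_score):
    # Turn the two scalar scores into self-attention weights, then
    # combine the feature vectors as a weighted element-wise sum.
    w_face, w_vein = softmax([face_score, vein_score])
    return [w_face * f + w_vein * v for f, v in zip(face_feat, vein_feat)]

# Equal scores give equal weights, so the fusion is the element-wise mean.
fused = attention_fuse([1.0, 2.0], [3.0, 4.0], 0.0, 0.0)
```

The softmax guarantees the two weights are positive and sum to 1, so the fused vector stays on the same scale as the inputs.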
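The secondary fusion in claim 5 uses a ResNet residual structure: the first fusion result travels the identity (skip) path while the raw modality features feed a transform branch. In this sketch the branch transform is a simple element-wise average; that choice is a placeholder, since the patent does not specify the branch:

```python
def residual_fuse(first_fused, face_feat, vein_feat):
    # ResNet-style residual structure: skip path carries the primary
    # fusion result; the branch transforms the raw modality features.
    # The element-wise average below is an illustrative assumption.
    branch = [(f + v) / 2.0 for f, v in zip(face_feat, vein_feat)]
    return [x + b for x, b in zip(first_fused, branch)]

second = residual_fuse([2.0, 3.0], [1.0, 2.0], [3.0, 4.0])
```

The skip connection lets the secondary fusion refine, rather than replace, the primary fusion output, which is the usual motivation for residual structures.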
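Claim 7 names gray-level normalization as the first preprocessing step for the face image. One common realization is min-max scaling to [0, 255]; both the min-max form and the target range are assumptions, as the patent only names the step:

```python
def normalize_gray(img):
    # Min-max gray-level normalization of a 2-D image to [0, 255].
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    if hi == lo:
        # Constant image: map everything to 0 to avoid division by zero.
        return [[0.0 for _ in row] for row in img]
    scale = 255.0 / (hi - lo)
    return [[(p - lo) * scale for p in row] for row in img]
```

Normalizing gray levels before feature extraction reduces the effect of uneven illumination across capture sessions.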
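Claims 1 and 6 end with a fully connected classifier whose output is judged into a recognition result. A minimal sketch of that judging step is below; the rejection threshold and the accept-or-reject rule are illustrative assumptions, since the patent only says the result is determined from the classification result:

```python
def decide(class_probs, identity_labels, threshold=0.8):
    # Accept the top-scoring enrolled identity only if its probability
    # clears a rejection threshold; otherwise report no match.
    best = max(range(len(class_probs)), key=lambda i: class_probs[i])
    if class_probs[best] >= threshold:
        return identity_labels[best]
    return None  # no enrolled identity matched confidently

result = decide([0.05, 0.92, 0.03], ["alice", "bob", "carol"])
```

A threshold like this is what separates identification (pick the best class) from verification-style rejection of unenrolled subjects.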
CN202211601733.1A 2022-12-13 2022-12-13 Biological recognition method, system, device and storage medium based on feature fusion Pending CN115830686A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211601733.1A CN115830686A (en) 2022-12-13 2022-12-13 Biological recognition method, system, device and storage medium based on feature fusion


Publications (1)

Publication Number Publication Date
CN115830686A (en) 2023-03-21

Family

ID=85547087


Country Status (1)

Country Link
CN (1) CN115830686A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090513A (en) * 2017-12-19 2018-05-29 天津科技大学 Multi-biological characteristic blending algorithm based on particle cluster algorithm and typical correlation fractal dimension
CN112465999A (en) * 2020-12-29 2021-03-09 天津科技大学 Attendance checking equipment and attendance checking system based on face and finger vein fusion recognition
CN112580590A (en) * 2020-12-29 2021-03-30 杭州电子科技大学 Finger vein identification method based on multi-semantic feature fusion network
CN114913610A (en) * 2022-06-15 2022-08-16 南京邮电大学 Multi-mode identification method based on fingerprints and finger veins


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhou Weibin et al., "Dual-modal biometric identification method based on feature fusion," Journal of Tianjin University of Science and Technology, vol. 37, no. 4, pp. 44-49 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117292443A (en) * 2023-09-25 2023-12-26 杭州名光微电子科技有限公司 Multi-mode recognition system and method for fusing human face and palm vein
CN117292443B (en) * 2023-09-25 2024-06-07 杭州名光微电子科技有限公司 Multi-mode recognition system and method for fusing human face and palm vein

Similar Documents

Publication Publication Date Title
TWI744283B (en) Method and device for word segmentation
CN110060237B (en) Fault detection method, device, equipment and system
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN110569756B (en) Face recognition model construction method, recognition method, device and storage medium
TW202006602A (en) Three-dimensional living-body face detection method, face authentication recognition method, and apparatuses
CN111209952A (en) Underwater target detection method based on improved SSD and transfer learning
WO2018145470A1 (en) Image detection method and device
CN111680690B (en) Character recognition method and device
CN109815797B (en) Living body detection method and apparatus
CN110400288B (en) Diabetic retinopathy identification method and device fusing binocular features
US11605210B2 (en) Method for optical character recognition in document subject to shadows, and device employing method
CN113011253B (en) Facial expression recognition method, device, equipment and storage medium based on ResNeXt network
CN111814682A (en) Face living body detection method and device
CN115830686A (en) Biological recognition method, system, device and storage medium based on feature fusion
CN115424093A (en) Method and device for identifying cells in fundus image
CN109712134B (en) Iris image quality evaluation method and device and electronic equipment
CN111881803B (en) Face recognition method based on improved YOLOv3
CN106940904A (en) Attendance system based on face recognition and speech recognition
CN113673396A (en) Spore germination rate calculation method and device and storage medium
CN113378609B (en) Agent proxy signature identification method and device
CN117392375A (en) Target detection algorithm for tiny objects
CN111639555A (en) Finger vein image noise accurate extraction and self-adaptive filtering denoising method and device
CN111163332A (en) Video pornography detection method, terminal and medium
CN114821194B (en) Equipment running state identification method and device
CN113051901B (en) Identification card text recognition method, system, medium and electronic terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination