CN114220130A - Non-contact gesture and palm print and palm vein fused identity recognition system and method - Google Patents
- Publication number: CN114220130A
- Application number: CN202111290863.3A
- Authority
- CN
- China
- Prior art keywords
- palm
- user
- gesture
- features
- light source
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Abstract
The invention discloses an identity recognition system and method that fuses non-contact gestures with palm prints and palm veins. The system comprises a housing, an imaging system, and a control system, the latter two located inside the housing. The imaging system comprises a light source module, an acquisition module, and a photoelectric switch; the control system comprises a processing and control module. The light source module includes a light source circuit board, and the acquisition module includes a camera fitted with an optical filter. The photoelectric switch activates the light source circuit board and the camera: when the switch detects an object, the light source is turned on and the camera begins acquiring images. The processing and control module comprises an interface circuit board, an embedded core board, a temperature sensor, and a touch screen.
Description
Technical Field
The invention belongs to the field of biometric recognition, and in particular relates to palm print recognition, palm vein recognition, and gesture authentication. It also relates to gesture recognition and control, and specifically to an identity recognition system and method that fuses non-contact gestures with palm prints and palm veins.
Background
Proving that "I am who I claim to be" has always been a central problem in identity authentication. Unlike traditional authentication based on passwords and similar secrets, biometric recognition has attracted increasing attention. Biometric technology is already widespread in daily life: face-based payment, fingerprint login, airport security checks, and so on all touch on everyday safety. However, as information technology reaches into every corner of life, personal information faces growing threats even as life becomes more convenient. Passwords and magnetic cards may be lost or forgotten, while faces and fingerprints may be stolen and forged, and then exploited by malicious actors. Meanwhile, with the COVID-19 pandemic still not subsided worldwide, people are pursuing safer and more hygienic means of identification, such as non-contact recognition.
No single-modality recognition method can accomplish identity recognition perfectly. Palm biometrics, including the palm's veins, palm prints, and palm shape, each have different strengths and weaknesses. Palm shape is easy to collect and widely accepted, but its low uniqueness leads to a high false-recognition rate. Palm prints are somewhat harder to collect but considerably more unique; their drawback is susceptibility to interference from the environment and from the user's occupation. Palm veins are the hardest of the three to collect, but they are least affected by the environment; moreover, as a biometric unique to a living body, palm veins are extremely difficult to counterfeit or forge, giving them very high security.
In addition, gesture-based biometric authentication has begun to enter public view in recent years. A gesture carries both physiological and behavioral characteristics of the human body: when different people perform the same gesture, identity can be authenticated from features such as palm shape and the motion trajectories of finger key points. Like language and writing, gestures are a way of expressing intent, and they are already widely used for purposes such as remote computer control; against the background of the global COVID-19 pandemic, demand for non-contact control methods has grown stronger. However, because gesture recognition places high demands on the recognition environment, techniques for remotely controlling curtains, lamps, and the like by gesture have not yet been widely adopted.
In the prior art, gesture authentication does not consider the influence of temperature on gestures, and no device exists that simultaneously collects palm print, palm vein, and gesture information and uses all of it for identity authentication. When palm print, palm vein, and gesture information are recognized, the extracted features are not encrypted, so security is low. No device integrates palm print recognition, palm vein recognition, and gesture authentication into a single unit capable of offline authentication. Moreover, existing palm print and palm vein recognition devices usually require the user to place a hand at a fixed position and hold it still, which makes them inconvenient to use.
Current gesture authentication operates under a single illumination condition, so the information it carries is insufficient and easily disturbed by complex environments, and there is a lack of edge devices that perform remote gesture recognition based on deep learning and thereby realize gesture control. Multi-modal palm biometric recognition that fuses palm print, palm vein, and gesture therefore offers advantages on multiple fronts: recognition performance is preserved while all three biometric features are collected by non-contact means, greatly enhancing the security and user-friendliness of the recognition method.
Disclosure of Invention
To address the slow recognition of multi-modal biometric features, the invention aims to provide a system that authenticates identity from a dynamic palm by simultaneously using gesture, palm print, and palm vein information.
The invention is realized by at least one of the following technical schemes.
The non-contact identity recognition system fusing gestures, palm prints, and palm veins comprises a housing, an imaging system, and a control system, the latter two located inside the housing. The imaging system comprises a light source module and an acquisition module; the control system comprises a processing and control module.
The light source module comprises a light source circuit board; the acquisition module comprises a camera. The light source circuit board and the camera are activated by a photoelectric switch: when the switch detects an object, the light source is turned on and the camera begins acquiring images.
The processing and control module receives the information gathered by the acquisition module, performs user identity recognition, and controls the corresponding device switches.
Preferably, the light source module comprises light sources with a wavelength of 800 nm or in the range of 850 nm to 980 nm, and the light sources include both infrared and visible light.
Preferably, the camera is provided with an optical filter corresponding to the wavelength of the light source.
The method for realizing the non-contact gesture and palm print and palm vein fused identity recognition system comprises the following steps:
(1) acquiring images of the palm print, the palm vein, and the gesture action with the acquisition module;
(2) extracting the texture features of the user's palm print and palm vein with a convolutional neural network in the processing and control module, classifying the different palm print and palm vein textures, and storing the features in a feature library; during recognition, matching the features extracted from the user against the feature library to obtain a matching score;
(3) classifying different gestures according to the user's gesture features to obtain a classification result; during recognition, extracting the user's palm region and feeding it into the trained convolutional neural network for gesture classification, thereby recognizing the user's gesture.
Preferably, step (2) comprises the steps of:
(21) cropping from the palm print image and the palm vein image the regions that contain the palm print and palm vein information;
(22) collecting the ROI images to form a palm vein data set and a palm print data set; training a corresponding convolutional neural network on each data set so that it extracts palm print or palm vein features from the user's palm and produces feature vectors of a fixed dimension suitable for classifying the palm; encrypting each feature vector with an asymmetric encryption algorithm immediately after extraction; and training two corresponding classifiers so that they classify the palm prints and palm veins of different people from the feature vectors, yielding the final classification results.
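The enroll-and-match flow of step (22) can be sketched as follows. `extract_features` is a toy stand-in for the trained convolutional neural network, the asymmetric-encryption step is omitted for brevity, and all names are illustrative rather than taken from the patent:

```python
import math

def extract_features(roi_image):
    # Toy stand-in for the trained CNN: maps an ROI image (a 2-D
    # list of pixel values) to a fixed-dimension feature vector.
    flat = [p / 255.0 for row in roi_image for p in row]
    n = max(1, len(flat) // 4)
    return [sum(flat[i * n:(i + 1) * n]) for i in range(4)]

def cosine_similarity(a, b):
    # Matching score between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Feature library built at enrollment time.
feature_library = {}

def enroll(user_id, roi_image):
    feature_library[user_id] = extract_features(roi_image)

def match(roi_image):
    # Return the best-matching enrolled user and the matching score.
    probe = extract_features(roi_image)
    return max(((uid, cosine_similarity(probe, f))
                for uid, f in feature_library.items()),
               key=lambda t: t[1])
```

In a real system the stored vectors would be encrypted before being written to the library and decrypted (or matched in the encrypted domain) at recognition time.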
Preferably, the gesture features include physiological features and behavioral features: the physiological features include the shape of the user's hand, and the behavioral features include the trajectory of the palm as the user performs a given gesture.
Preferably, video of the user's moving palm is collected under both infrared and visible light; a convolutional neural network extracts physiological and behavioral features from the video to obtain feature vectors of fixed dimension. The two feature vectors obtained under the two illumination conditions are spliced to form the user's gesture feature and then encrypted with an encryption algorithm; a classifier classifies the gestures of different people from the feature vectors to obtain the final classification result.
Preferably, the matching process is as follows: first obtain the palm print matching score s_pp^i, the palm vein matching score s_pv^i, and the gesture matching score s_ps^i; the three matching scores are then weighted to obtain the matching score s_i used for classification:

s_i = ω1·s_pp^i + ω2·s_pv^i + (1 − ω1 − ω2)·s_ps^i        (1)

where ω1 and ω2 denote the weights of the palm print and the palm vein.
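Equation (1) translates directly into code; the default weight values below are illustrative assumptions, not values specified by the patent:

```python
def fuse_scores(s_pp, s_pv, s_ps, w1=0.4, w2=0.4):
    # Score-level fusion per equation (1): weighted sum of the palm
    # print, palm vein, and gesture matching scores. The gesture
    # weight is implicitly 1 - w1 - w2.
    assert 0.0 <= w1 + w2 <= 1.0, "weights must leave room for the gesture score"
    return w1 * s_pp + w2 * s_pv + (1.0 - w1 - w2) * s_ps
```

With w1 = w2 = 0.25, for example, the gesture score receives half the total weight.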
Preferably, when images are collected, the user's palm temperature is measured by the temperature sensor: if the palm temperature falls below a lower threshold, recognition failure is reported; if it exceeds an upper threshold, the user's body temperature is considered abnormal and an alarm mechanism is triggered.
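The temperature gate described above can be sketched as follows; both threshold values are hypothetical placeholders, not figures from the patent:

```python
LOW_T = 30.0   # hypothetical lower threshold, degrees Celsius
HIGH_T = 37.5  # hypothetical upper threshold, degrees Celsius

def temperature_gate(palm_temp):
    # Below the lower threshold the presented object is likely a
    # spoof (e.g. a phone screen), so recognition fails; above the
    # upper threshold body temperature is abnormal, so the alarm
    # mechanism is triggered; otherwise recognition proceeds.
    if palm_temp < LOW_T:
        return "reject"
    if palm_temp > HIGH_T:
        return "alarm"
    return "proceed"
```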
Preferably, the processing and control module outputs a corresponding level according to the recognized user gesture, and controls a corresponding device switch.
Compared with the prior art, the invention has the beneficial effects that:
1. A single device collects all three characteristics, palm print, palm vein, and gesture, and the features of the three modalities are fused for identity authentication, improving its accuracy and robustness.
2. Considering the influence of temperature on gestures, a non-contact temperature sensor collects the palm temperature in real time during authentication, and the temperature information supervises the gesture-authentication process, improving its accuracy. After the gesture, palm print, and palm vein features are extracted, the feature vectors are encrypted, which improves the security of identity authentication and reduces the risk of counterfeiting and attack.
3. Gesture authentication and gesture recognition are realized under multiple illumination conditions, and fusing the information from these modes improves accuracy. The design of the light source and camera achieves clear imaging of the moving palm print and palm vein, improving the device's ease of use and its ability to image transient motion. The gesture recognition algorithm enables deep-learning-based gesture recognition and remote control on edge devices.
Drawings
FIG. 1 is an exploded view of an embodiment of an identification device;
FIG. 2 is a schematic structural diagram of an identification device according to an embodiment of the present invention;
FIG. 3 is a light source arrangement according to an embodiment of the invention;
FIG. 4 is a hardware block diagram of an embodiment of the invention;
FIG. 5 is an exemplary view of palm vein and palm print images according to an embodiment of the present invention;
FIG. 6 is an exemplary diagram of a gesture image according to an embodiment of the present invention;
FIG. 7 is a flow chart of a palm vein recognition algorithm according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a palm vein critical area in an embodiment of the invention;
FIG. 9 is a flowchart of a palm print recognition algorithm according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a cut-out of a palm print key area according to an embodiment of the present invention;
FIG. 11 is a flowchart of a gesture authentication algorithm according to an embodiment of the present invention;
FIG. 12 is a schematic view of a palm section in accordance with an embodiment of the present invention;
FIG. 13 is a flow chart of an embodiment of the invention;
Reference numerals: 1 - LED lamp bead; 2 - light source circuit board; 3 - temperature sensor; 4 - touch screen; 5 - photoelectric switch; 6 - camera; 7 - embedded core board; 8 - housing.
Detailed Description
Reference will now be made in detail to the present preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout.
As shown in fig. 1 and 2, a non-contact gesture and palm print and palm vein fused identity recognition system mainly comprises a housing 8, an imaging system and a control system, wherein the imaging system and the control system are positioned in the housing;
the imaging system comprises a light source module, an acquisition module and a photoelectric switch 5; the control system comprises a processing and control module, the processing and control module comprises an interface circuit board, an embedded core board 7, a temperature sensor 3 and an external touch screen, and the touch screen 4 is installed on the surface of the shell.
The interface circuit board provides interfaces for communicating with the various peripherals; it mainly carries copper pads onto which pins can be soldered. The embedded core board is a development board.
The interface circuit board carries two rows of copper pads (40 in total); two rows of headers are soldered onto them, and the embedded core board is inserted into the headers, connecting the core board and the interface circuit board.
Interface circuit board: a series of pins on the embedded core board can connect peripherals such as the temperature sensor and the screen, but the pin pitch is small, so the interface circuit board is designed to expand the core board's interfaces and connect it to each peripheral.
The temperature sensor gives the device both an identity-recognition function and an epidemic-prevention function: when the palm temperature exceeds a certain threshold, the device raises an alarm to assist epidemic prevention. In addition, the device accounts for gesture deformation caused by temperature, especially low temperature, when processing gesture information: at low temperatures, the weight of the gesture information is reduced and the weights of the palm print and palm vein information are increased. If the temperature sensed by the temperature sensor falls below a certain threshold, the device skips the recognition process and treats the presented object as an attack, such as a mobile phone screen.
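The temperature-dependent re-weighting described above can be sketched by adjusting the weights ω1 and ω2 of the score-fusion formula; all concrete numbers here are illustrative assumptions:

```python
def adjust_weights(palm_temp, w1=0.3, w2=0.3, cold_threshold=30.0):
    # At low palm temperatures gestures deform, so weight is shifted
    # away from the gesture score toward palm print and palm vein.
    # Returns (palm print weight, palm vein weight, gesture weight).
    if palm_temp < cold_threshold:
        w1, w2 = 0.45, 0.45  # gesture weight drops to 0.1
    return w1, w2, 1.0 - w1 - w2
```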
The light source module comprises a light source circuit board 2, which carries LED lamp beads 1 with wavelengths above 800 nm.
To produce a uniform, stable light field, the light sources are arranged on the light source circuit board 2 as shown in Fig. 3: No. 1 denotes a near-infrared LED and No. 2 a visible-light LED. The near-infrared and visible-light LEDs are laid out on the board in symmetric annular arrays, and within each type of LED, the angle between each edge LED and the central LED is approximately 60 degrees.
The acquisition module comprises a camera 6 fitted with an 850 nm band-pass filter; the camera 6 is a global-shutter CMOS camera. The light source circuit board 2 and the camera 6 are activated by the photoelectric switch 5: when the switch detects an object within a certain range, the light sources on the board are lit and the camera begins acquiring images.
The housing 8 measures 97 mm × 100 mm × 56 mm (length × width × height).
As shown in fig. 4, the camera 6 is connected to the processing and control module via USB 2.0 and to the light source circuit board 2 through an IO port. The touch screen 4 is connected to the processing and control module through a serial port; the processing and control module can output control information externally through interfaces including USB and IIC.
Preferably, the light source circuit board 2 is connected to the embedded core board through a power line. The touch screen is fixed on the shell through screws and nuts. The photoelectric switch is connected to the interface circuit board through a data power line and fixed on the interface circuit board through a nut. The camera 6 is connected to the interface circuit board through a data power line and fixed on the interface circuit board through a nut. The interface circuit board is fixedly connected with the equipment shell through a nut. The embedded core board is fixedly connected with the base on the interface circuit board through pins.
A non-contact gesture and palm print and palm vein fused identity recognition method comprises the following steps:
(1) collecting palm print and palm vein images by adopting the collecting module;
the invention adopts an image acquisition means of an active light source and a filter with a corresponding wavelength, a light source module can emit near infrared light and visible light with the wavelength of 850nm, the light irradiates the palm and is reflected back to the camera, and the filter with the corresponding wavelength is additionally arranged in the camera, so that the image obtained by the camera has no background interference and only palm images with different wavelengths.
Observing relatively deep vein information in the palm vein image requires a long-wavelength light source; as can be seen from fig. 5, a wavelength above 800 nm is needed to observe deep structures in the palm. Palm vein blood mainly contains oxygenated hemoglobin, and hemoglobin absorbs light of 800-850 nm most strongly: when the palm is illuminated in this wavelength range, the vein regions absorb more light, the contrast between vein and non-vein regions increases, and the vein regions in the resulting image become clearer.
For the palm print image, the palm print can be regarded as the furrowed region of the palm surface. Beneath the surface lies a fat layer, and beneath that, blood vessels and blood; because of their depth, the furrows lie relatively close to the blood vessels, so choosing light that blood readily absorbs distinguishes the furrowed regions from the non-furrowed regions. The imaging results of the device are shown in figs. 5a and 5b.
(2) Extracting the texture features of the user's palm print and palm vein, classifying the different palm print and palm vein textures, and storing the features in a feature library; during recognition, matching the features extracted from the user against the feature library to obtain a matching score. As shown in figs. 7 and 9, this comprises the following steps:
(21) cropping the region of the palm vein image that contains the palm vein information, a step known as ROI extraction and shown in fig. 10b; the region containing the palm vein information is shown in fig. 10a;
(22) acquiring enough ROI images to form a palm vein data set, and training a corresponding convolutional neural network on it so that the network extracts palm vein features from the user's palm and produces a feature vector of fixed dimension suitable for classifying the palm. After each extraction the feature vector is encrypted with an asymmetric encryption algorithm, and corresponding classifiers are trained to classify the palm veins of different people from the feature vectors, yielding the final classification result.
(23) Similarly, acquiring enough ROI images to form a palm print data set and training a convolutional neural network on it so that the network extracts from the palm print images a feature vector of fixed dimension for classifying the palm. After each extraction the feature vector is encrypted with an asymmetric encryption algorithm, and a classifier is trained to classify the palm prints of different people from the feature vectors, yielding the final classification result.
(3) Extracting the user's gesture features, which comprise physiological features and behavioral features: the physiological features include the shape of the user's hand, and the behavioral features include the trajectory of the palm as the user performs a given gesture.
For gesture acquisition, the camera must capture clear images with a simple background and little interference. Based on this imaging principle, the invention collects dynamic palm information with two light sources, a visible light source and an infrared light source, combined with filters of the corresponding wavelengths: the infrared source is an 850 nm LED array, and an 850 nm band-pass filter in the camera filters out ambient light while sufficient light intensity is provided. The camera is a global-shutter CMOS camera, which images moving objects better than a rolling-shutter camera; through the cooperation of light source, camera, and filter, the palm can be imaged clearly while in motion. As can be seen from figs. 8 and 10, the imaging system achieves good results: it obtains clear palm vein information when imaging the palm print and palm vein, and obtains two images when imaging a gesture, of which the visible-light image carries part of the palm texture information and the infrared image carries part of the palm vein image; combining the two further improves the gesture-authentication effect.
As shown in fig. 11, the identity recognition device collects video of the moving palm under infrared and visible light; the convolutional neural network extracts physiological and behavioral features from the video to obtain feature vectors of fixed dimension. The two feature vectors obtained under the two illumination conditions are first spliced to form the user's gesture feature and then encrypted with an encryption algorithm; a classifier is then trained to classify the gestures of different people from the feature vectors, yielding the final classification result.
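The splicing of the two illumination-condition feature vectors can be sketched as follows; `toy_extract` is a trivial stand-in for the convolutional feature extractor (one value per video frame), and the encryption step is omitted:

```python
def toy_extract(frames):
    # Hypothetical stand-in for the CNN: reduce each frame (a list
    # of pixel values) to its mean, giving a crude motion trajectory.
    return [sum(f) / len(f) for f in frames]

def gesture_feature(frames_visible, frames_infrared, extract=toy_extract):
    # Splice the feature vectors extracted from the visible-light
    # and infrared videos into a single gesture descriptor.
    return extract(frames_visible) + extract(frames_infrared)
```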
(4) And fusing the gesture feature, the palm print feature and the palm vein feature by adopting a fractional layer fusion method. When the user uses the recognition function, the extracted features of the user are matched with the features in the feature library to obtain matching scores, the matching scores of the three features are weighted according to a certain weight in order to fuse the three features, finally the gesture features of the score layer are fused with the palm vein features, and the matching scores s of the palm prints are respectively obtainedpp iMatching score of palm vein spv iMatching score s of gestureps iThen, the weights are added to obtain a matching score s for classificationiThe calculation formula is as follows:
s_i = ω_1 · s_pp^i + ω_2 · s_pv^i + (1 − ω_1 − ω_2) · s_ps^i        (1)
wherein ω_1 and ω_2 are preset weights for the palm print and the palm vein, respectively; they determine the importance of the palm print and palm vein features in the final recognition.
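Formula (1) can be sketched directly; the weights 0.4 and 0.4 below are illustrative placeholders, since the patent leaves ω_1 and ω_2 as preset values:

```python
def fuse_scores(s_pp: float, s_pv: float, s_ps: float,
                w1: float = 0.4, w2: float = 0.4) -> float:
    """Score-level fusion of palm print (s_pp), palm vein (s_pv) and
    gesture (s_ps) matching scores, per formula (1). w1/w2 are assumed."""
    return w1 * s_pp + w2 * s_pv + (1.0 - w1 - w2) * s_ps

print(fuse_scores(0.9, 0.8, 0.6))  # ≈ 0.8, i.e. 0.4·0.9 + 0.4·0.8 + 0.2·0.6
```

Because the gesture weight is defined as 1 − ω_1 − ω_2, the three weights always sum to one, keeping the fused score on the same scale as the individual scores.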
In a preferred embodiment, after a new feature is extracted it is compared with the features of all registered users in the feature library. The closest feature and its corresponding user name are found; if the similarity exceeds a set threshold, recognition is deemed successful and the person providing the new feature is authenticated as that registered user.
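A minimal sketch of this threshold-gated nearest match, assuming cosine similarity as the matching score and an invented two-user library:

```python
import numpy as np

def identify(new_feat, library, threshold=0.85):
    """Return the best-matching user name, or None when the top
    cosine similarity does not exceed the threshold (assumed metric)."""
    best_name, best_sim = None, -1.0
    for name, feat in library.items():
        sim = float(np.dot(new_feat, feat) /
                    (np.linalg.norm(new_feat) * np.linalg.norm(feat)))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name if best_sim > threshold else None

lib = {"alice": np.array([1.0, 0.0]), "bob": np.array([0.0, 1.0])}
print(identify(np.array([0.9, 0.1]), lib))  # alice
print(identify(np.array([0.7, 0.7]), lib))  # None (similarity ≈ 0.707)
```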
After the user passes identity authentication, the method can also recognize a specific gesture made by the user. This is achieved mainly through object detection followed by gesture classification: the user makes the specific gesture 7 to 15 cm above the camera, the device captures it, and the gesture is then recognized.
To recognize a particular gesture, the user's palm must first be detected, as shown in fig. 12. Once detected, the palm region is extracted and fed into the trained convolutional neural network for gesture classification, thereby recognizing the user's gesture.
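The detect-crop-classify pipeline can be sketched as below; `detect_palm` and `classify_gesture` are toy stand-ins for the patent's detector and trained CNN, whose real interfaces are not given:

```python
def detect_palm(frame):
    """Toy detector: return a bounding box (x, y, w, h). Here we simply
    assume the palm occupies the centre of the frame."""
    h, w = len(frame), len(frame[0])
    return (w // 4, h // 4, w // 2, h // 2)

def crop(frame, box):
    """Extract the detected palm region from the frame."""
    x, y, w, h = box
    return [row[x:x + w] for row in frame[y:y + h]]

def classify_gesture(roi):
    """Placeholder for the trained CNN; returns a gesture label."""
    return "fist"

frame = [[0] * 64 for _ in range(64)]  # dummy 64x64 grayscale image
roi = crop(frame, detect_palm(frame))
print(len(roi), len(roi[0]), classify_gesture(roi))  # 32 32 fist
```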
After the gesture is recognized, the processing and control module outputs control information to drive various external devices (motors, drivers, etc.). As shown in fig. 6, a user can control an external device (e.g., a curtain or a gate) by making a particular gesture (e.g., waving a hand or making a fist). For example, after the user makes a specific gesture and recognition finishes, the invention outputs a high or low level according to the recognized gesture, driving the motor or driver so that the curtain or the gate opens.
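A hedged sketch of mapping a recognized gesture to an output level; the gesture names, device names, and levels are invented for illustration, not taken from the patent:

```python
# Hypothetical gesture-to-output mapping: (device, output level).
GESTURE_ACTIONS = {
    "wave": ("curtain_motor", 1),  # high level -> open curtain
    "fist": ("gate_driver", 1),    # high level -> open gate
}

def control_signal(gesture: str):
    """Return (device, level) for a recognized gesture, or None
    when the gesture has no assigned action."""
    return GESTURE_ACTIONS.get(gesture)

print(control_signal("wave"))     # ('curtain_motor', 1)
print(control_signal("unknown"))  # None
```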
The invention can also measure temperature in real time, without contact, via the non-contact temperature sensor. If the user's palm temperature is too low, the gesture may change and recognition accuracy may suffer. Moreover, an abnormally low temperature may indicate an attack, i.e., the object being recognized is a deliberately crafted artificial hand, whose temperature is clearly lower than that of a real palm. Temperature detection therefore lets the invention resist such attacks: if the temperature read by the sensor is too low, an artificial hand is assumed to be attacking the device, the recognition process is skipped, and the device returns to its initial state. If the palm temperature is too high, the user's body temperature is abnormal, and an alarm mechanism is triggered to assist epidemic prevention.
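The temperature gate can be sketched as follows; the 30 °C and 37.5 °C thresholds are illustrative assumptions, not values stated in the patent:

```python
LOW_C, HIGH_C = 30.0, 37.5  # assumed thresholds for illustration

def temperature_gate(palm_temp_c: float) -> str:
    """Gate recognition on the measured palm temperature."""
    if palm_temp_c < LOW_C:
        return "reject"   # possible artificial-hand attack: skip recognition
    if palm_temp_c > HIGH_C:
        return "alarm"    # abnormal body temperature: epidemic-prevention alert
    return "proceed"      # continue with palm and gesture recognition

print(temperature_gate(25.0), temperature_gate(33.0), temperature_gate(38.2))
# reject proceed alarm
```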
In this embodiment, the convolutional neural network is trained on a server using the PyTorch framework. The deployment platform is the embedded system designed herein, and the forward-inference framework is ncnn. The embedded deployment side is written in C++, with OpenCV version 3.4 and ncnn version 20210322. The model obtained from server training is converted from the PyTorch framework to the ncnn framework, a Linux system is installed on the embedded core board, and the convolutional neural network under ncnn is invoked from C++ to perform feature extraction.
The work flow of the whole system of the invention is shown in fig. 13 and can be divided into three states: registration, deletion, and verification.
Registration state: when a new user registers, the user name is entered according to the prompts. The palm print and palm vein acquisition stage then begins; in this stage the user only needs to hold the palm in the acquisition area for a short time, in any posture, while palm print and palm vein images are captured. When this finishes, the video acquisition stage for gesture features starts automatically; here the user must make a fixed gesture in the acquisition area while the gesture video is recorded. After acquisition completes, the main interface opens automatically.
Deletion state: to delete a user from the device, the user name of the user to be deleted is entered according to the prompts, and that user is removed from the database.
Verification state: after registration, the user must first pass temperature detection. If the temperature is too low, verification failure is prompted; if it is too high, the epidemic-prevention alarm is triggered. Once temperature detection passes, the user authenticates the palm print and palm vein simply by waving a palm freely above the device. After that succeeds, the user repeats the gesture used at registration to pass gesture authentication. When both layers of authentication pass, the invention outputs the authentication result, and the user can begin using the gesture recognition function.
When using the recognition function, the user makes a gesture, and once the result is recognized the controller outputs control information to the various peripherals; for example, the user can switch a lamp on and off with a gesture.
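The verification workflow described above can be sketched as a small decision chain; the function names (`check_temperature`, `match_palm`, `match_gesture`) are invented stand-ins for the patent's modules:

```python
def verify(user, check_temperature, match_palm, match_gesture):
    """Run the layered verification: temperature gate, then palm print
    and palm vein matching, then the registered-gesture check."""
    status = check_temperature(user)
    if status == "low":
        return "verification failed"  # possible artificial-hand attack
    if status == "high":
        return "epidemic alarm"       # abnormal body temperature
    if not match_palm(user):          # palm print + palm vein layer
        return "verification failed"
    if not match_gesture(user):       # gesture used at registration
        return "verification failed"
    return "authenticated"

# Dummy checks standing in for the real sensor and matchers.
print(verify("u1", lambda u: "ok", lambda u: True, lambda u: True))
# authenticated
```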
The preferred embodiments of the invention disclosed above are intended to be illustrative only. The preferred embodiments are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention. The invention is limited only by the claims and their full scope and equivalents.
Claims (10)
1. The non-contact gesture and palm print and palm vein fused identity recognition system is characterized by comprising a shell, an imaging system and a control system, wherein the imaging system and the control system are positioned in the shell; the imaging system comprises a light source module and an acquisition module; the control system comprises a processing and control module;
the light source module comprises a light source; the acquisition module comprises a camera; the light source module and the camera are activated by a photoelectric switch: when the photoelectric switch detects an object, the light source of the light source module is turned on and the camera starts to acquire images;
and the processing and control module receives the acquired information of the acquisition module to realize the identity recognition of the user and control the corresponding equipment switch.
2. The system for identifying a hand gesture fused with a palm print and a palm vein in a non-contact manner according to claim 1, wherein the light source module comprises a light source with a wavelength of 800nm or between 850nm and 980nm, and the light source comprises infrared light and visible light.
3. The system according to claim 1, wherein the camera is provided with a filter corresponding to the wavelength of the light source.
4. The method for realizing the non-contact gesture and palm print and palm vein fused identity recognition system of claim 1, which comprises the following steps:
(1) acquiring images of the palm print, the palm vein and the gesture action by adopting the acquisition module;
(2) extracting the texture characteristics of the palm print and the palm vein of the user by a convolutional neural network in the processing and control module, classifying the texture characteristics of different palm prints and palm veins respectively, and storing the characteristics into a characteristic library; when the user identifies, matching the extracted features of the user with the features in the feature library to obtain a matching score;
(3) classifying different gestures according to the gesture characteristics of the user to finally obtain a classification result; when the user identifies, extracting the palm area of the user, inputting the palm area of the user into the trained convolutional neural network for gesture classification, and realizing the gesture identification of the user.
5. The system for identifying a non-contact gesture fused with a palm print and a palm vein according to claim 4, wherein the step (2) comprises the following steps:
(21) intercepting areas of palm print information and palm vein information contained in the palm print image and the palm vein image;
(22) collecting an ROI image to form a palm vein data set and a palm print data set, training a corresponding convolutional neural network by using the palm vein data set and the palm print data set, enabling the convolutional neural network to extract palm print features and palm vein features from a palm of a user, and obtaining feature vectors of corresponding dimensions capable of classifying the palm from the palm print features and the palm vein features; after the feature vector is extracted each time, encrypting the feature vector by using an asymmetric encryption algorithm; training two corresponding classifiers to enable the classifiers to classify the palmprint and the palm vein of different people according to the feature vectors, and finally obtaining classification results.
6. The system of claim 4, wherein the gesture features comprise physiological features and behavioral features, the physiological features comprise the shape of the hand of the user, and the behavioral features comprise the trajectory of the palm movement of the user when performing a certain gesture.
7. The system for identifying the non-contact gesture fused with the palm print and the palm vein according to claim 4, wherein a video of palm movement of a user is collected under infrared light and visible light, a convolutional neural network extracts physiological features and behavior features from the video to obtain feature vectors of corresponding dimensions, after the feature vectors under two illumination conditions are obtained, the two feature vectors are spliced to obtain gesture features of the user, and after the splicing, encryption is performed by using an encryption algorithm; the classifier classifies the gestures of different people according to the feature vectors to finally obtain a classification result.
8. The system of claim 4, wherein the matching process comprises: first obtaining, respectively, the palm print matching score s_pp^i, the palm vein matching score s_pv^i, and the gesture matching score s_ps^i, and weighting the matching scores of the three features to obtain the matching score s_i used for classification:
s_i = ω_1 · s_pp^i + ω_2 · s_pv^i + (1 − ω_1 − ω_2) · s_ps^i        (1)
wherein ω_1 and ω_2 represent the weights of the palm print and the palm vein, respectively.
9. The system according to claim 4, wherein when the image is collected, the palm temperature of the user is detected by the temperature sensor, and if the palm temperature is lower than a threshold value, the recognition is failed, and if the palm temperature is higher than the threshold value, the body temperature of the user is abnormal, and an alarm mechanism is triggered.
10. The system of claim 4, wherein the processing and control module outputs a corresponding level according to the recognized gesture of the user to control the corresponding device switch.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111290863.3A CN114220130A (en) | 2021-11-02 | 2021-11-02 | Non-contact gesture and palm print and palm vein fused identity recognition system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114220130A true CN114220130A (en) | 2022-03-22 |
Family
ID=80696603
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111290863.3A Pending CN114220130A (en) | 2021-11-02 | 2021-11-02 | Non-contact gesture and palm print and palm vein fused identity recognition system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114220130A (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115328319A (en) * | 2022-10-13 | 2022-11-11 | 华南理工大学 | Intelligent control method and device based on light-weight gesture recognition |
CN116738411A (en) * | 2023-06-02 | 2023-09-12 | 广州广电运通智能科技有限公司 | Multi-mode registration method and identity recognition method based on biological feature recognition |
CN116738411B (en) * | 2023-06-02 | 2024-04-19 | 广州广电运通智能科技有限公司 | Multi-mode registration method and identity recognition method based on biological feature recognition |
CN116405211A (en) * | 2023-06-07 | 2023-07-07 | 深圳市乐凡信息科技有限公司 | Multiple encryption method, device, equipment and storage medium based on biological characteristics |
CN116405211B (en) * | 2023-06-07 | 2023-09-01 | 深圳市乐凡信息科技有限公司 | Multiple encryption method, device, equipment and storage medium based on biological characteristics |
CN117350737A (en) * | 2023-11-29 | 2024-01-05 | 深圳市盛思达通讯技术有限公司 | Payment method and system based on palmprint recognition |
CN117350737B (en) * | 2023-11-29 | 2024-05-14 | 深圳市盛思达通讯技术有限公司 | Payment method and system based on palmprint recognition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114220130A (en) | Non-contact gesture and palm print and palm vein fused identity recognition system and method | |
JP5609970B2 (en) | Control access to wireless terminal functions | |
Buciu et al. | Biometrics systems and technologies: A survey | |
Delac et al. | A survey of biometric recognition methods | |
Itqan et al. | User identification system based on finger-vein patterns using convolutional neural network | |
US11068701B2 (en) | Apparatus and method for vehicle driver recognition and applications of same | |
KR102160137B1 (en) | Apparatus and Method for Recognizing Fake Face By Using Minutia Data Variation | |
CN112232159A (en) | Fingerprint identification method, device, terminal and storage medium | |
Ghosh et al. | Symptoms-based biometric pattern detection and recognition | |
Perwej | An Empirical Investigation of Human Identity Verification Methods | |
CN112232157B (en) | Fingerprint area detection method, device, equipment and storage medium | |
Venkatesh et al. | A new multi-spectral iris acquisition sensor for biometric verification and presentation attack detection | |
Singha et al. | Study and analysis on biometrics and face recognition methods | |
Manocha et al. | Palm vein recognition for human identification using NN | |
Engelsma et al. | Fingerprint match in box | |
KR20070118806A (en) | Method of detecting face for embedded system | |
CN112232152B (en) | Non-contact fingerprint identification method and device, terminal and storage medium | |
Gupta | Advances in multi modal biometric systems: a brief review | |
Deokar et al. | Literature Survey of Biometric Recognition Systems | |
Bakshi et al. | Biometric Technology: A Look and Survey at Face Recogntion | |
KR20070097629A (en) | Embedded system for recognizing face | |
Rabie et al. | Multi-modal biometrics for real-life person-specific emotional human-robot-interaction | |
CN212569821U (en) | Non-contact fingerprint acquisition device | |
WO2017116331A1 (en) | Stereo palm vein detection method and biometric identification system operating in compliance with said method | |
Teotia et al. | Cnn-Based Palm Vein Confirmation In An Integrated Biometrics Id Configuration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB03 | Change of inventor or designer information | ||

Inventors after change: Qiao Yitao, Kang Wenxiong, Wan Hao, Xie Yukang, Han Xu
Inventors before change: Kang Wenxiong, Qiao Yitao, Wan Hao, Xie Yukang, Han Xu