CN115223278B - Intelligent door lock based on face recognition and unlocking method - Google Patents

Intelligent door lock based on face recognition and unlocking method

Info

Publication number
CN115223278B
CN115223278B (application CN202210833452.2A)
Authority
CN
China
Prior art keywords
video image
information
sound
sensor data
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210833452.2A
Other languages
Chinese (zh)
Other versions
CN115223278A (en)
Inventor
龙文瑞
卢英桂
龙梁容
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Niuzhi Technology Co ltd
Original Assignee
Shenzhen Niuzhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Niuzhi Technology Co ltd
Priority to CN202210833452.2A
Publication of CN115223278A
Application granted
Publication of CN115223278B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 Individual registration on entry or exit
    • G07C9/00174 Electronically operated locks; Circuits therefor; Non-mechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys
    • G07C9/00563 Electronically operated locks; Circuits therefor; Non-mechanical keys therefor, e.g. passive or active electrical keys or other data carriers without mechanical keys using personal physical data of the operator, e.g. fingerprints, retinal images, voice patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

The intelligent door lock based on face recognition comprises an information acquisition end, an information processing end and an intelligent unlocking module. The information acquisition end acquires video images and sensor data; the information processing end performs matrix vectorization on the acquired video images and sensor data; the intelligent unlocking module processes the output of the information processing end with a convolutional neural network and unlocks when a set threshold is met, otherwise the lock remains in the locked state. Features are extracted from the video images and sensor data acquired within a time period T to form a matrix vector H, which is input into the trained convolutional neural network. The convolutional neural network adopts a loss function Q; when computing the loss function, the per-sample term L_i is combined in product form with the other function terms, which greatly enhances the training effect, improves the recognition accuracy and strengthens the security of the door lock.

Description

Intelligent door lock based on face recognition and unlocking method
Technical Field
The invention relates to the technical field of information security, and in particular to an intelligent door lock based on face recognition and an unlocking method.
Background
With the development of technology, intelligent access control systems have entered people's daily lives. Intelligent, safe and low-cost access control is one of the research hotspots of the smart home. As smart city construction advances, research on the safety and convenience of intelligent products receives more and more attention. Intelligent door locks that are opened and closed by artificial-intelligence-based recognition, together with device management and system maintenance under complex operating conditions, are becoming increasingly popular.
However, existing face recognition methods for intelligent door locks mainly consider a single face image and make little use of comprehensive information such as the human trunk, gait and sound; face recognition under unconstrained conditions therefore performs relatively poorly because it lacks robustness to changes in illumination, pose, expression and image quality. Earlier holistic methods predict under a distribution assumed in advance and obtain a low-dimensional feature to describe the face, but such holistic methods cannot capture local facial changes. Subsequent face recognition methods based on local features gradually came to prominence, but manually designed features often lack robustness. Work that introduces attention mechanisms into deep convolutional neural networks and optimizes the loss function to improve a model's face feature extraction capability remains limited, and typically applies the Euclidean distance or the triangular distance in isolation.
Disclosure of Invention
In order to solve the above technical problems, an intelligent door lock based on face recognition and an unlocking method are provided, which significantly raise the automatic working level of the door lock, greatly improve its safety and convenience, and enhance the user experience. The intelligent door lock based on face recognition comprises an information acquisition end, an information processing end and an intelligent unlocking module. The information acquisition end acquires video images and sensor data; the information processing end performs matrix vectorization on the acquired video images and sensor data; the intelligent unlocking module processes the output of the information processing end with a convolutional neural network and unlocks when a set threshold is met, otherwise the lock remains in the locked state. The sensor data comprise sound information obtained by a sound sensor: the sound information y_n(t) consists of a speaking-voice component x_n(t) and a footstep-sound component v_n(t), the sound source information is s(t), the sound source feedback parameter is g_n (n = 1, 2), t is time, and the sound information satisfies y_n(t) = s(t)*g_n + v_n(t) = x_n(t) + v_n(t). Matrix vectorization specifically means extracting features from the video images and sensor data acquired within the time period T to form a matrix vector H, which is input into the trained convolutional neural network. The convolutional neural network adopts a loss function Q containing the per-sample term:
L_i = y_i log(p_i) + (1 - y_i)(1 - log(p_i))
where p_i denotes the probability that the i-th video image sample is the homeowner's face and trunk, y_i denotes the feature parameter of the homeowner's face and trunk, the video image is divided into N segments, s denotes the amplitude redundancy value that determines the homeowner's face and trunk feature parameters, m denotes the angle redundancy value, and cos θ_i denotes the cosine of the angle between the i-th video image sample vector and the homeowner's own feature vector.
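As a concrete illustration, the per-sample term L_i and its combination with cos θ_i can be sketched in Python as below. This is a minimal sketch under stated assumptions: the text does not give the full closed form of Q, so the way s, m and cos θ_i enter the product with L_i (an ArcFace-style scaled angular margin is used here) and the aggregation over the N segments are illustrative assumptions, not the patented formula.

```python
import numpy as np

def per_sample_term(y_i: float, p_i: float) -> float:
    """L_i = y_i*log(p_i) + (1 - y_i)*(1 - log(p_i)), exactly as written in the text."""
    return y_i * np.log(p_i) + (1.0 - y_i) * (1.0 - np.log(p_i))

def margin_factor(cos_theta_i: float, s: float, m: float) -> float:
    """Assumed: an ArcFace-style scaled angular margin s*cos(theta_i + m)
    used as the factor multiplied with L_i."""
    theta = np.arccos(np.clip(cos_theta_i, -1.0, 1.0))
    return s * np.cos(theta + m)

def loss_Q(y, p, cos_theta, s=30.0, m=0.5):
    """Aggregate the product-form terms over the N video segments (mean assumed)."""
    terms = [per_sample_term(yi, pi) * margin_factor(ci, s, m)
             for yi, pi, ci in zip(y, p, cos_theta)]
    return float(np.mean(terms))

# toy example with N = 3 segments
print(loss_Q(y=[1, 1, 0], p=[0.9, 0.8, 0.2], cos_theta=[0.95, 0.90, 0.10]))
```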
Preferably, the convolutional neural network consists of a three-layer network structure, wherein the first layer is P-Net, the second layer is R-Net and the third layer is O-Net; P-Net constructs a 5-layer feature pyramid from the input data, and O-Net outputs the final face and trunk candidate boxes, face and trunk confidences and key-point information.
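A minimal sketch of the 5-layer image pyramid that P-Net is described as building, together with the three-stage hand-off, is given below. The scale factor, minimum size and the placeholder stage functions are assumptions made for illustration; they are not the patented network weights.

```python
import cv2
import numpy as np

def build_pyramid(img, levels=5, scale=0.7):
    """Build a 5-level image pyramid by repeated downscaling (scale factor assumed)."""
    pyramid = [img]
    for _ in range(levels - 1):
        h, w = pyramid[-1].shape[:2]
        pyramid.append(cv2.resize(pyramid[-1],
                                  (max(12, int(w * scale)), max(12, int(h * scale)))))
    return pyramid

# Placeholder stage functions: the real P-Net/R-Net/O-Net are small CNNs;
# these stubs only show the data handed from one stage to the next.
def p_net(pyramid):
    # coarse face/trunk candidate boxes proposed over every pyramid level
    return [(0, 0, 48, 48)]

def r_net(img, proposals):
    # rejects false candidates and refines the remaining boxes
    return proposals

def o_net(img, refined):
    # final face/trunk boxes, confidences and key-point locations
    return refined, [0.99], [np.zeros((5, 2))]

if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    print(o_net(frame, r_net(frame, p_net(build_pyramid(frame)))))
```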
Preferably, the information processing end uses a CSI camera connected through the Mobile Industry Processor Interface (MIPI) to a Jetson Nano embedded system provided with a deep learning library, and is further connected through a GPIO interface.
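On a Jetson Nano this connection is typically exercised as sketched below. The GStreamer pipeline string, the relay pin number and the use of OpenCV and the Jetson.GPIO library describe a common setup and are assumptions, not requirements of the patent.

```python
import cv2
import Jetson.GPIO as GPIO  # available on standard Jetson Nano images

LOCK_PIN = 12  # assumed board pin driving the lock relay

# Typical CSI camera pipeline on Jetson Nano (nvarguscamerasrc reads the MIPI/CSI camera)
PIPELINE = (
    "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1 ! "
    "nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink"
)

def open_csi_camera():
    """Open the CSI camera through the GStreamer backend of OpenCV."""
    return cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)

def set_lock(unlock: bool):
    """Drive the lock relay through the GPIO interface."""
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(LOCK_PIN, GPIO.OUT)
    GPIO.output(LOCK_PIN, GPIO.HIGH if unlock else GPIO.LOW)
```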
Preferably, the matrix vectorization includes digitizing the acquired video image data and the sensor data, specifically encoding the pixels, coordinates and acquisition time of the video image data and the speaking-voice, footstep-sound and footstep-frequency values of the sensor data to form a vector matrix.
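A minimal sketch of this encoding is given below; the field order, the fixed-length pixel sampling, the normalisation and the padding scheme are assumptions made purely for illustration.

```python
import numpy as np

def encode_video(frames, coords, times, n_pixels=256):
    """One row per frame: subsampled pixel values, box coordinates, acquisition time."""
    rows = []
    for frame, (x, y, w, h), t in zip(frames, coords, times):
        flat = frame.astype(np.float32).ravel()
        idx = np.linspace(0, flat.size - 1, n_pixels).astype(int)  # fixed-length sample
        rows.append(np.concatenate([flat[idx] / 255.0, [x, y, w, h, t]]))
    return np.stack(rows)

def encode_sound(speech, footsteps, step_freq):
    """One row holding speaking-voice samples, footstep-sound samples and footstep frequency."""
    return np.concatenate([speech, footsteps, [step_freq]])[None, :]

def build_H(frames, coords, times, speech, footsteps, step_freq):
    """Form the matrix vector H for one time period T (zero padding assumed)."""
    v = encode_video(frames, coords, times)
    a = encode_sound(speech, footsteps, step_freq)
    width = max(v.shape[1], a.shape[1])
    pad = lambda m: np.pad(m, ((0, 0), (0, width - m.shape[1])))
    return np.vstack([pad(v), pad(a)])
```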
Preferably, unlocking when the set threshold is met and otherwise keeping the locked state means that the confidence of the calculation result is output through the convolutional network: when the similarity between the matrix vector H and the matrix vector stored on the data server meets the set threshold, the lock is unlocked; otherwise it remains locked.
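The unlock decision itself reduces to a similarity test against the enrolled vector held on the data server. A minimal sketch follows; cosine similarity and the threshold value are illustrative assumptions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def decide(network_output_vec, enrolled_vec, threshold=0.8):
    """Unlock when the similarity to the stored homeowner vector meets the set threshold."""
    if cosine_similarity(network_output_vec, enrolled_vec) >= threshold:
        return "unlock"
    return "stay locked"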
The invention also discloses an intelligent door lock unlocking method based on face recognition, characterized by comprising the following steps: S1: information acquisition, in which the information acquisition end acquires video images and sensor data; S2: information processing, in which the information processing end performs matrix vectorization on the acquired video images and sensor data; S3: the intelligent unlocking module processes the output of the information processing end with a convolutional neural network and unlocks when the set threshold is met, otherwise the lock remains in the locked state;
wherein the sensor data comprise sound information obtained by a sound sensor: the sound information y_n(t) consists of a speaking-voice component x_n(t) and a footstep-sound component v_n(t), the sound source information is s(t), the sound source feedback parameter is g_n (n = 1, 2), t is time, and the sound information satisfies y_n(t) = s(t)*g_n + v_n(t) = x_n(t) + v_n(t). The sound control system further comprises a sound quality distortion control module. Matrix vectorization specifically means extracting features from the video images and sensor data acquired within the time period T to form a matrix vector H, which is input into the trained convolutional neural network. The convolutional neural network adopts a loss function Q containing the per-sample term:
L_i = y_i log(p_i) + (1 - y_i)(1 - log(p_i))
where p_i denotes the probability that the i-th video image sample is the homeowner's face and trunk, y_i denotes the feature parameter of the homeowner's face and trunk, the video image is divided into N segments, s denotes the amplitude redundancy value that determines the homeowner's face and trunk feature parameters, m denotes the angle redundancy value, and cos θ_i denotes the cosine of the angle between the i-th video image sample vector and the homeowner's own feature vector.
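The sound model restated above can be checked with a short numeric sketch; the waveforms and feedback parameters below are synthetic values chosen only to show that y_n(t) = s(t)*g_n + v_n(t) and x_n(t) = s(t)*g_n describe the same observation.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)         # one second of samples
s = np.sin(2 * np.pi * 220 * t)         # synthetic sound-source signal s(t)
g = [0.8, 0.5]                           # assumed feedback parameters g_1, g_2
v = [0.05 * np.random.randn(t.size),     # footstep-sound components v_1(t), v_2(t)
     0.05 * np.random.randn(t.size)]

for n in range(2):
    x_n = s * g[n]                       # speaking-voice component x_n(t) = s(t)*g_n
    y_n = s * g[n] + v[n]                # observed sound information y_n(t)
    assert np.allclose(y_n, x_n + v[n])  # y_n(t) = x_n(t) + v_n(t) holds by construction
```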
Preferably, the convolutional neural network consists of a three-layer network structure, wherein the first layer is P-Net, the second layer is R-Net and the third layer is O-Net; P-Net constructs a 5-layer feature pyramid from the input data, and O-Net outputs the final face and trunk candidate boxes, face and trunk confidences and key-point information.
Preferably, the information processing end uses a CSI camera connected through the Mobile Industry Processor Interface (MIPI) to a Jetson Nano embedded system provided with a deep learning library, and is further connected through a GPIO interface.
Preferably, the matrix vectorization includes digitizing the acquired video image data and the sensor data, specifically encoding the pixels, coordinates and acquisition time of the video image data and the speaking-voice, footstep-sound and footstep-frequency values of the sensor data to form a vector matrix.
Preferably, unlocking when the set threshold is met and otherwise keeping the locked state means that the confidence of the calculation result is output through the convolutional network: when the similarity between the matrix vector H and the matrix vector stored on the data server meets the set threshold, the lock is unlocked; otherwise it remains locked.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects:
the door lock has the advantages that the problem that the door lock in the traditional technology is low in automation level is solved, footstep sound and speaking sound of a homeowner are creatively added into the feature vector, judgment is carried out through a convolutional neural network, and the safety and accuracy of the door lock are greatly improved. In addition, the sensor data includes sound information obtained by a sound sensor, sound information y i (t) from the speaking voice component x i (t) and step sound component v i (t) composition. The method comprises the steps of performing feature extraction on video images and sensor data in an acquired time period T to form a matrix vector H, and inputting the matrix vector H into a trained convolutional neural network; the convolutional neural network adopts a loss function as Q; in calculating the loss function, the parameter L is used for i And the product form of the function greatly enhances the training effect and remarkably improves the judgment accuracy of the convolutional neural network.
Drawings
FIG. 1 is a system diagram of an intelligent door lock unlocking method based on face recognition;
Detailed Description
As understood by those skilled in the art and as stated in the background, the face recognition methods used in intelligent door locks in the conventional technology mainly consider a single face image and make little use of comprehensive information such as the human trunk, gait and sound; face recognition under unconstrained conditions therefore performs relatively poorly because it lacks robustness to changes in illumination, pose, expression and image quality. Earlier holistic methods predict under a distribution assumed in advance and obtain a low-dimensional feature to describe the face, but such methods cannot capture local facial changes. Subsequent face recognition methods based on local features gradually came to prominence, but manually designed features often lack robustness. Work that introduces attention mechanisms into deep convolutional neural networks and optimizes the loss function to improve a model's face feature extraction capability remains limited, and typically applies the Euclidean distance or the triangular distance in isolation. In order to make the above objects, features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying figures.
Embodiment one:
Fig. 1 is a system diagram of an intelligent door lock unlocking method based on face recognition. In some embodiments, an intelligent door lock based on face recognition comprises an information acquisition end, an information processing end and an intelligent unlocking module. The information acquisition end acquires video images and sensor data; the information processing end performs matrix vectorization on the acquired video images and sensor data; the intelligent unlocking module processes the output of the information processing end with a convolutional neural network and unlocks when a set threshold is met, otherwise the lock remains in the locked state. The sensor data comprise sound information obtained by a sound sensor: the sound information y_n(t) consists of a speaking-voice component x_n(t) and a footstep-sound component v_n(t), the sound source information is s(t), the sound source feedback parameter is g_n (n = 1, 2), t is time, and the sound information satisfies y_n(t) = s(t)*g_n + v_n(t) = x_n(t) + v_n(t). The sound control system further comprises a sound quality distortion control module. Matrix vectorization specifically means extracting features from the video images and sensor data acquired within the time period T to form a matrix vector H, which is input into the trained convolutional neural network. The convolutional neural network adopts a loss function Q containing the per-sample term:
L_i = y_i log(p_i) + (1 - y_i)(1 - log(p_i))
where p_i denotes the probability that the i-th video image sample is the homeowner's face and trunk, y_i denotes the feature parameter of the homeowner's face and trunk, the video image is divided into N segments, s denotes the amplitude redundancy value that determines the homeowner's face and trunk feature parameters, m denotes the angle redundancy value, and cos θ_i denotes the cosine of the angle between the i-th video image sample vector and the homeowner's own feature vector.
In some embodiments, the convolutional neural network consists of a three-layer network structure: the first layer is P-Net, the second layer is R-Net and the third layer is O-Net; P-Net constructs a 5-layer feature pyramid from the input data, and O-Net outputs the final face and trunk candidate boxes, face and trunk confidences and key-point information.
In some embodiments, the information processing end uses a CSI camera connected through the Mobile Industry Processor Interface (MIPI) to a Jetson Nano embedded system provided with a deep learning library, and is further connected through a GPIO interface.
In some embodiments, the matrix vectorization includes digitizing the acquired video image data and the sensor data, specifically encoding the pixels, coordinates and acquisition time of the video image data and the speaking-voice, footstep-sound and footstep-frequency values of the sensor data to form a vector matrix.
In some embodiments, unlocking when the set threshold is met and otherwise keeping the locked state means that the confidence of the calculation result is output through the convolutional network: when the similarity between the matrix vector H and the matrix vector stored on the data server meets the set threshold, the lock is unlocked; otherwise it remains locked.
Embodiment two:
The invention also discloses an intelligent door lock unlocking method based on face recognition, characterized by comprising the following steps: S1: information acquisition, in which the information acquisition end acquires video images and sensor data; S2: information processing, in which the information processing end performs matrix vectorization on the acquired video images and sensor data; S3: the intelligent unlocking module processes the output of the information processing end with a convolutional neural network and unlocks when the set threshold is met, otherwise the lock remains in the locked state;
wherein the sensor data comprise sound information obtained by a sound sensor: the sound information y_n(t) consists of a speaking-voice component x_n(t) and a footstep-sound component v_n(t), the sound source information is s(t), the sound source feedback parameter is g_n (n = 1, 2), t is time, and the sound information satisfies y_n(t) = s(t)*g_n + v_n(t) = x_n(t) + v_n(t). The sound control system further comprises a sound quality distortion control module. Matrix vectorization specifically means extracting features from the video images and sensor data acquired within the time period T to form a matrix vector H, which is input into the trained convolutional neural network. The convolutional neural network adopts a loss function Q containing the per-sample term:
L_i = y_i log(p_i) + (1 - y_i)(1 - log(p_i))
where p_i denotes the probability that the i-th video image sample is the homeowner's face and trunk, y_i denotes the feature parameter of the homeowner's face and trunk, the video image is divided into N segments, s denotes the amplitude redundancy value that determines the homeowner's face and trunk feature parameters, m denotes the angle redundancy value, and cos θ_i denotes the cosine of the angle between the i-th video image sample vector and the homeowner's own feature vector.
In some embodiments, the convolutional neural network consists of a three-layer network structure: the first layer is P-Net, the second layer is R-Net and the third layer is O-Net; P-Net constructs a 5-layer feature pyramid from the input data, and O-Net outputs the final face and trunk candidate boxes, face and trunk confidences and key-point information.
In some embodiments, the information processing end uses a CSI camera connected through the Mobile Industry Processor Interface (MIPI) to a Jetson Nano embedded system provided with a deep learning library, and is further connected through a GPIO interface.
In some embodiments, the matrix vectorization includes digitizing the acquired video image data and the sensor data, specifically encoding the pixels, coordinates and acquisition time of the video image data and the speaking-voice, footstep-sound and footstep-frequency values of the sensor data to form a vector matrix; in some embodiments, the video image feature data are extracted using the wavelet transform and the Fourier transform to form a matrix feature vector, as sketched below.
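A minimal sketch of such wavelet- and Fourier-based feature extraction is shown below, assuming the PyWavelets and NumPy libraries and grayscale frames; the wavelet family, decomposition level and the choice of magnitude statistics are illustrative assumptions.

```python
import numpy as np
import pywt

def frame_features(gray_frame, wavelet="haar", level=2, n_freq=64):
    """Concatenate wavelet sub-band energies with low-frequency Fourier magnitudes."""
    frame = gray_frame.astype(np.float32)
    # wavelet sub-band energies
    coeffs = pywt.wavedec2(frame, wavelet, level=level)
    energies = [float(np.mean(np.square(coeffs[0])))]           # approximation band
    for (cH, cV, cD) in coeffs[1:]:                              # detail bands per level
        energies += [float(np.mean(np.square(c))) for c in (cH, cV, cD)]
    # low-frequency Fourier magnitudes, normalised to the peak
    spectrum = np.abs(np.fft.fft2(frame)).ravel()
    fourier = spectrum[:n_freq] / (spectrum.max() + 1e-12)
    return np.concatenate([np.asarray(energies), fourier])

def video_feature_matrix(gray_frames):
    """Stack per-frame features into the matrix feature vector used downstream."""
    return np.stack([frame_features(f) for f in gray_frames])
```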
In some embodiments, unlocking when the set threshold is met and otherwise keeping the locked state means that the confidence of the calculation result is output through the convolutional network: when the similarity between the matrix vector and the matrix vector stored on the data server meets the set threshold, the lock is unlocked; otherwise it remains locked.
The intelligent door lock and unlocking method based on face recognition solve the problem that door locks in the traditional technology have a low level of automation: the homeowner's footstep sound and speaking voice are creatively added into the feature vector and the judgment is made by a convolutional neural network, greatly improving the safety and accuracy of the door lock. In addition, the sensor data include sound information obtained by a sound sensor, and the sound information y_i(t) consists of a speaking-voice component x_i(t) and a footstep-sound component v_i(t). Features are extracted from the video images and sensor data acquired within the time period T to form a matrix vector H, which is input into the trained convolutional neural network. The convolutional neural network adopts the loss function Q; when computing the loss function, the per-sample term L_i is combined in product form with the other function terms, which greatly enhances the training effect and remarkably improves the judgment accuracy of the convolutional neural network.
It will be apparent to those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product, and that the present application may therefore take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects.
Although the present invention is disclosed above, it is not limited thereto. Various changes and modifications may be made by those skilled in the art without departing from the spirit and scope of the invention, and the scope of the invention should therefore be determined by the appended claims.

Claims (4)

1. An intelligent door lock based on face recognition, characterized by comprising an information acquisition end, an information processing end and an intelligent unlocking module; the information acquisition end acquires video images and sensor data; the information processing end performs matrix vectorization on the acquired video images and sensor data; the intelligent unlocking module processes the output of the information processing end with a convolutional neural network and unlocks when a set threshold is met, otherwise the lock remains in the locked state; the sensor data comprise sound information obtained by a sound sensor, the sound information y_i(t) consisting of a speaking-voice component x_i(t) and a footstep-sound component v_i(t), the sound source information being s(t) and the sound source feedback parameter being g_i (i = 1, 2), and the sound information satisfying y_i(t) = s(t)*g_i + v_i(t) = x_i(t) + v_i(t), (i = 1, 2); the sound control system further comprises a sound quality distortion control module; the matrix vectorization extracts features from the video images and sensor data acquired within the time period T to form a matrix vector H, which is input into the trained convolutional neural network; the convolutional neural network adopts a loss function Q containing the per-sample term:
L_i = y_i log(p_i) + (1 - y_i)(1 - log(p_i))
where p_i denotes the probability that the i-th video image sample is the homeowner's face and trunk, y_i denotes the feature parameter of the homeowner's face and trunk, the video image is divided into N segments, s denotes the amplitude redundancy value that determines the homeowner's face and trunk feature parameters, m denotes the angle redundancy value, and cos θ_i denotes the cosine of the angle between the i-th video image sample vector and the homeowner's own feature vector;
the matrix vectorization comprises digitizing the acquired video image data and the sensor data, specifically encoding the pixels, coordinates and acquisition time of the video image data and the speaking-voice, footstep-sound and footstep-frequency values of the sensor data to form a vector matrix;
the lock is unlocked when the set threshold is met and otherwise remains locked, that is, the confidence of the calculation result is output through the convolutional network, and when the similarity between the matrix vector and the matrix vector stored on the data server meets the set threshold the lock is unlocked, otherwise it is locked;
the convolutional neural network consists of a three-layer network structure, wherein the first layer is P-Net, the second layer is R-Net and the third layer is O-Net; P-Net constructs a 5-layer feature pyramid from the input data, and O-Net outputs the final face and trunk candidate boxes, face and trunk confidences and key-point information.
2. The intelligent door lock based on face recognition according to claim 1, wherein the information processing end uses a CSI camera connected through the Mobile Industry Processor Interface (MIPI) to a Jetson Nano embedded system provided with a deep learning library, and is further connected through a GPIO interface.
3. An intelligent door lock unlocking method based on face recognition, characterized by comprising the following steps: S1: information acquisition, in which the information acquisition end acquires video images and sensor data; S2: information processing, in which the information processing end performs matrix vectorization on the acquired video images and sensor data; S3: the intelligent unlocking module processes the output of the information processing end with a convolutional neural network and unlocks when the set threshold is met, otherwise the lock remains in the locked state;
wherein the sensor data comprise sound information obtained by a sound sensor, the sound information y_i(t) consisting of a speaking-voice component x_i(t) and a footstep-sound component v_i(t), the sound source information being s(t) and the sound source feedback parameter being g_i (i = 1, 2), and the sound information satisfying y_i(t) = s(t)*g_i + v_i(t) = x_i(t) + v_i(t), (i = 1, 2); the sound control system further comprises a sound quality distortion control module; the matrix vectorization extracts features from the video images and sensor data acquired within the time period T to form a matrix vector H, which is input into the trained convolutional neural network; the convolutional neural network adopts a loss function Q containing the per-sample term:
L_i = y_i log(p_i) + (1 - y_i)(1 - log(p_i))
where p_i denotes the probability that the i-th video image sample is the homeowner's face and trunk, y_i denotes the feature parameter of the homeowner's face and trunk, the video image is divided into N segments, s denotes the amplitude redundancy value that determines the homeowner's face and trunk feature parameters, m denotes the angle redundancy value, and cos θ_i denotes the cosine of the angle between the i-th video image sample vector and the homeowner's own feature vector;
the matrix vectorization comprises digitizing the acquired video image data and the sensor data, specifically encoding the pixels, coordinates and acquisition time of the video image data and the speaking-voice, footstep-sound and footstep-frequency values of the sensor data to form a vector matrix; the lock is unlocked when the set threshold is met and otherwise remains locked, that is, the confidence of the calculation result is output through the convolutional network, and when the similarity between the matrix vector and the matrix vector stored on the data server meets the set threshold the lock is unlocked, otherwise it is locked;
the convolutional neural network consists of a three-layer network structure, wherein the first layer is P-Net, the second layer is R-Net and the third layer is O-Net; P-Net constructs a 5-layer feature pyramid from the input data, and O-Net outputs the final face and trunk candidate boxes, face and trunk confidences and key-point information.
4. The intelligent door lock unlocking method based on face recognition according to claim 3, wherein the information processing end uses a CSI camera connected through the Mobile Industry Processor Interface (MIPI) to a Jetson Nano embedded system provided with a deep learning library, and is further connected through a GPIO interface.
CN202210833452.2A 2022-07-15 2022-07-15 Intelligent door lock based on face recognition and unlocking method Active CN115223278B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210833452.2A CN115223278B (en) 2022-07-15 2022-07-15 Intelligent door lock based on face recognition and unlocking method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210833452.2A CN115223278B (en) 2022-07-15 2022-07-15 Intelligent door lock based on face recognition and unlocking method

Publications (2)

Publication Number Publication Date
CN115223278A CN115223278A (en) 2022-10-21
CN115223278B (en) 2023-08-01

Family

ID=83611415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210833452.2A Active CN115223278B (en) 2022-07-15 2022-07-15 Intelligent door lock based on face recognition and unlocking method

Country Status (1)

Country Link
CN (1) CN115223278B (en)

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102034288B (en) * 2010-12-09 2012-06-20 江南大学 Multiple biological characteristic identification-based intelligent door control system
CN105426875A (en) * 2015-12-18 2016-03-23 武汉科技大学 Face identification method and attendance system based on deep convolution neural network
CN107680229B (en) * 2017-10-23 2018-10-23 西安科技大学 The control method of access control system based on phonetic feature and recognition of face
CN109447199A (en) * 2018-10-16 2019-03-08 山东大学 A kind of multi-modal criminal's recognition methods and system based on step information
CN111311809A (en) * 2020-02-21 2020-06-19 南京理工大学 Intelligent access control system based on multi-biological-feature fusion
CN111401257B (en) * 2020-03-17 2022-10-04 天津理工大学 Face recognition method based on cosine loss under non-constraint condition
CN112149638B (en) * 2020-10-23 2022-07-01 贵州电网有限责任公司 Personnel identity recognition system construction and use method based on multi-modal biological characteristics
CN113807164A (en) * 2021-07-29 2021-12-17 四川天翼网络服务有限公司 Face recognition method based on cosine loss function

Also Published As

Publication number Publication date
CN115223278A (en) 2022-10-21

Similar Documents

Publication Publication Date Title
CN106599866B (en) Multi-dimensional user identity identification method
CN106919903B (en) robust continuous emotion tracking method based on deep learning
CN108804453B (en) Video and audio recognition method and device
CN108875645B (en) Face recognition method under complex illumination condition of underground coal mine
CN107133590B (en) A kind of identification system based on facial image
CN112131975B (en) Face illumination processing method based on Retinex decomposition and generation of confrontation network
CN112766217B (en) Cross-modal pedestrian re-identification method based on disentanglement and feature level difference learning
CN110717423B (en) Training method and device for emotion recognition model of facial expression of old people
CN107122725B (en) Face recognition method and system based on joint sparse discriminant analysis
CN109325472B (en) Face living body detection method based on depth information
CN108073875A (en) A kind of band noisy speech identifying system and method based on monocular cam
CN111507239A (en) Local feature face recognition method based on image pyramid
Mei et al. Learn a compression for objection detection-vae with a bridge
CN114581965A (en) Training method of finger vein recognition model, recognition method, system and terminal
CN115223278B (en) Intelligent door lock based on face recognition and unlocking method
CN116883900A (en) Video authenticity identification method and system based on multidimensional biological characteristics
CN115374854A (en) Multi-modal emotion recognition method and device and computer readable storage medium
CN111709312B (en) Local feature face recognition method based on combined main mode
Abidin et al. Wavelet based approach for facial expression recognition
CN111104868B (en) Cross-quality face recognition method based on convolutional neural network characteristics
Liu Research on improved fingerprint image compression and texture region segmentation algorithm
CN111898452A (en) Video monitoring networking system
Lin et al. A Lightweight Embedding Probability Estimation Algorithm Based on LBP for Adaptive Steganalysis
CN110633385A (en) Retrieval and compression method of medical image
CN116758617B (en) Campus student check-in method and campus check-in system under low-illuminance scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant