CN112307973B - Living body detection method, living body detection system, electronic device, and storage medium


Publication number
CN112307973B
Authority
CN
China
Prior art keywords
neural network
living body
auxiliary
output result
sending
Prior art date
Legal status
Active
Application number
CN202011196654.8A
Other languages
Chinese (zh)
Other versions
CN112307973A (en
Inventor
何超超
浦贵阳
程耀
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Hangzhou Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Hangzhou Information Technology Co Ltd
Priority to CN202011196654.8A
Publication of CN112307973A
Application granted
Publication of CN112307973B

Classifications

    • G06V 40/168 - Feature extraction; face representation (human faces)
    • G06V 40/172 - Classification, e.g. identification (human faces)
    • G06V 40/45 - Detection of the body part being alive (spoof/liveness detection)
    • G06N 3/045 - Combinations of networks (neural network architectures)
    • G06N 3/08 - Learning methods (neural networks)
    • Y02D 10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The embodiments of the invention relate to the field of face recognition and disclose a living body detection method, system, electronic device, and storage medium. The invention is applied to an electronic device having at least two image acquisition devices, where the at least two image acquisition devices capture images of the same position. The method comprises the following steps: simultaneously acquiring at least two pictures through the image acquisition devices; sending the pictures respectively to at least one main neural network and at least one auxiliary neural network, wherein the output result of the main neural network is fused with the output result of the auxiliary neural network; outputting living body characteristics through the main neural network and the auxiliary neural network, and sending the living body characteristics to a full connection layer; and outputting the category of the living body characteristics through the full connection layer and the softmax layer as the result of the living body detection. Living body detection with this multi-input-stream neural network architecture achieves both higher accuracy and higher speed.

Description

Living body detection method, living body detection system, electronic device, and storage medium
Technical Field
The embodiments of the invention relate to the field of face recognition, and in particular to a living body detection method, a living body detection system, an electronic device, and a storage medium.
Background
Face recognition is widely applied in scenarios such as face-based payment, access control, attendance checking, and identity verification. A face recognition system generally comprises functional modules such as face photo acquisition, face detection, face feature extraction, and face search/comparison. When the collected photo is a printed face image, a face displayed on an electronic screen, or a face wearing a mask, the system can easily be spoofed by a forged face. To prevent such counterfeit-face fraud, a face living body detection module is added to the face recognition system to improve its security. Its main function is to judge whether the face to be recognized belongs to a real person and to give a confidence score that it does.
However, the related art usually adopts a living body detection method with a single-input-stream deep neural network architecture: the input picture is one of an RGB picture, a depth map, a near-infrared map, and so on, and the network structure is relatively simple. Because only a single type of picture information is used, the accuracy of living body detection is difficult to improve beyond a certain point.
Disclosure of Invention
The invention aims to provide a living body detection method, system, electronic device, and storage medium that achieve both higher accuracy and higher speed of living body detection through a multi-input-stream neural network architecture.
In order to solve the above technical problem, an embodiment of the present invention provides a living body detection method, comprising the following steps:
simultaneously acquiring at least two pictures through the image acquisition devices;
sending the pictures respectively to at least one main neural network and at least one auxiliary neural network, wherein the output result of the main neural network is fused with the output result of the auxiliary neural network;
outputting living body characteristics through the main neural network and the auxiliary neural network, and sending the living body characteristics to a full connection layer;
and outputting the category of the living body characteristics through the full connection layer and the softmax layer as the result of the living body detection.
Embodiments of the present invention also provide a living body detection system, comprising:
a picture acquisition module, configured to simultaneously acquire at least two pictures through the image acquisition devices;
a living body detection module, configured to send the pictures respectively to at least one main neural network and at least one auxiliary neural network, wherein the output result of the main neural network is fused with the output result of the auxiliary neural network, and to output living body characteristics through the main and auxiliary neural networks and send them to a full connection layer;
and a result output module, configured to output the category of the living body characteristics through the full connection layer and the softmax layer.
An embodiment of the present invention also provides an electronic device, including:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform a liveness detection method.
Embodiments of the present invention also provide a computer storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the living body detection method.
Compared with the prior art, the embodiments acquire pictures with multiple devices and detect them with multiple networks, which yields higher accuracy than the traditional approach of detecting with a single picture. Different neural networks can be set as the main and auxiliary neural networks as required; in particular, a higher-accuracy network model and a higher-speed network model can be assigned to the main and auxiliary neural networks respectively, so that living body detection combines accuracy with speed and improves both. Moreover, several neural networks can be arranged as needed to suit various application scenarios.
In addition, the living body detection method of the present invention comprises, after the at least two pictures are simultaneously acquired by the image acquisition devices: detecting whether a picture contains a face image; and, if it does, sending the face image to the main neural network. Screening the pictures before living body detection, so that only pictures containing a face undergo further detection, reduces the number of detections and increases processing speed.
In addition, in the living body detection method of the present invention, during the fusion of the output results of the main and auxiliary neural networks, both networks comprise several Blocks, and the method comprises: fusing the output result of each Block in the main neural network with the output result of the same-stage Block in the auxiliary neural network; and sending the fused result to the next Block in the main neural network. Because the output of every Block in the main neural network is fused with that of the auxiliary neural network, the result combines the advantages of the network models used in both, making living body detection faster and more efficient.
In addition, in the living body detection method of the present invention, outputting living body characteristics through the main and auxiliary neural networks and sending them to the full connection layer comprises: fusing the characteristics output by the main neural network with the characteristics output by the auxiliary neural network; and sending the fusion result to the full connection layer. Fusing the characteristics output by the main and auxiliary networks yields the final result of the whole network model for living body detection, which can then be compared with the preset categories.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, in which like reference numerals refer to similar elements; the figures are not to scale unless otherwise specified.
FIG. 1 is a first flowchart of the living body detection method provided in the first embodiment of the present invention;
FIG. 2 is a second flowchart of the living body detection method provided in the first embodiment of the present invention;
FIG. 3 is a flowchart of the training of the living body detection model provided in the second embodiment of the present invention;
FIG. 4 is a schematic diagram of the living body detection system provided in the third embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate, however, that although numerous technical details are set forth to give a thorough understanding of the present application, the claimed technical solutions can also be implemented without these details, with various changes and modifications based on the following embodiments.
The following division into embodiments is for convenience of description and does not limit the specific implementation of the present invention; the embodiments may be combined and cross-referenced as long as they do not contradict one another.
A first embodiment of the present invention relates to a method of detecting a living body. The specific process is shown in FIG. 1.
Step 101, at least two pictures are simultaneously acquired through the image acquisition device.
In this embodiment, at least two image acquisition devices are provided on the living body detection apparatus; each may be a camera or another device capable of image acquisition. The devices are fixed relative to each other and can acquire pictures simultaneously.
Specifically, the acquired pictures may be color (RGB) pictures, near-infrared (NIR) pictures, depth maps, and so on. In practice at least one RGB picture is generally used, but the type of acquired picture is not limited here.
Further, before the pictures enter the main and auxiliary neural networks, face detection needs to be performed on them; if no face is present in a picture, subsequent living body detection is unnecessary. The specific flow is shown in fig. 2:
step 201, sending the first picture to a face detection module.
Specifically, the face detection module uses an existing deep neural network for face detection, such as MTCNN, YOLO, or RetinaNet.
Step 202, the face detection module sends the face image in the first picture to the main neural network.
Step 203, the face detection module sends the position coordinates of the face in the first picture to the O-Net module.
Specifically, the O-Net module comprises the O-Net architecture and other logic functions; it is an independent logic module that processes the second picture, i.e., regresses the face image in it.
Step 204, the O-Net module regresses the face image in the second picture using the position coordinates.
Step 205, the O-Net module sends the face image in the second picture to the auxiliary neural network.
Specifically, the main neural network receives the first picture processed by the face detection module, and the auxiliary neural network receives the second picture processed by the O-Net module.
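The routing of steps 201-205 can be sketched in pure Python. This is a toy illustration, not the patent's implementation: `detect` and `regress` are hypothetical stand-ins for the face detection module (e.g. MTCNN) and the O-Net module.

```python
def preprocess(pic1, pic2, detect, regress):
    """Route a picture pair into the main/auxiliary streams (steps 201-205).

    `detect` returns (face_image, position_coordinates) or None;
    `regress` crops the face in the second picture from those coordinates.
    """
    result = detect(pic1)              # step 201: face detection on picture 1
    if result is None:
        return None                    # no face: skip living body detection
    face1, coords = result
    face2 = regress(pic2, coords)      # steps 203-204: O-Net regression
    return {"main_input": face1, "aux_input": face2}

# Toy stand-ins for the detector and the O-Net regressor
detect = lambda p: (("crop", p), (10, 20, 50, 60)) if "face" in p else None
regress = lambda p, c: ("crop", p, c)

routed = preprocess("face_rgb", "nir", detect, regress)    # both streams fed
skipped = preprocess("empty_rgb", "nir", detect, regress)  # None, skipped
```

The early `None` return reflects the screening described above: pictures without a face never reach the neural networks.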
It should be noted that, in practice, the main neural network is generally a higher-accuracy network and the auxiliary neural network a higher-speed one, because the result of each Block in the main neural network must be fused with the result of the corresponding Block in the auxiliary neural network: the auxiliary network cannot be slower than the main one, or it would slow down the main network and the whole living body detection. For this reason, the O-Net module, whose regression is comparatively fast, is used ahead of the auxiliary neural network, which speeds up the auxiliary stream's processing.
Step 102, sending the pictures respectively to at least one main neural network and at least one auxiliary neural network, wherein the output result of the main neural network is fused with the output result of the auxiliary neural network.
In this embodiment, the main neural network generally adopts a higher-accuracy neural network architecture and the auxiliary neural network a faster one, but different architectures can be used depending on the application scenario of the living body detection method; no specific limitation is made here.
For example, the main neural network adopts a BlockA structure based on a Residual Network (ResNet), and the auxiliary neural network adopts a BlockB structure based on MobileNet:
specifically, 4 convolutional layers are generally used in block a, residual is added to the outputs of the 2 nd and 4 th convolutions, 6 convolutional layers are generally used in block b, and residual is added after the 3 rd convolutional layer, so that the problem of network degradation is effectively solved.
Furthermore, the convolutions in BlockB extract features with a separated depthwise (dw) and pointwise (pw) structure, which has far fewer parameters and much lower computational cost than conventional convolution.
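The parameter saving of the depthwise-plus-pointwise split can be checked with simple arithmetic; the channel and kernel sizes below are arbitrary examples, not values from the patent:

```python
def standard_conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise k x k (one filter per input channel) plus pointwise 1 x 1."""
    return c_in * k * k + c_in * c_out

std = standard_conv_params(128, 128, 3)   # 147456 weights
sep = separable_conv_params(128, 128, 3)  # 17536 weights
# The separable form needs roughly 1/8 of the parameters in this case.
```

In general the saving factor approaches k * k as the output channel count grows, which is why MobileNet-style blocks suit the speed-oriented auxiliary stream.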
In this embodiment, the output of each BlockA structure in the main neural network is fused with the output result of the same-stage BlockB, and the fused result is sent to the next BlockA structure; for example, the output of BlockA_1 is fused with the output of BlockB_1 and sent to BlockA_2.
Specifically, the fusion may be concatenation along the channel dimension (concat) or addition of corresponding elements (element-wise add); other feature fusion methods used in convolutional neural networks may also be applied, and no specific limitation is made here.
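The two fusion modes differ in the shape of the result; a toy sketch on 1-D "feature vectors" (integer values chosen purely for clarity, real networks fuse multi-channel tensors):

```python
def concat_fuse(a, b):
    """Channel-dimension concatenation: the channel count is the sum."""
    return a + b  # list concatenation

def add_fuse(a, b):
    """Element-wise addition: channel counts must match."""
    assert len(a) == len(b), "element-wise add needs equal channel counts"
    return [x + y for x, y in zip(a, b)]

main_feat = [2, 5, 1]  # toy output of a BlockA stage
aux_feat = [1, 3, 4]   # toy output of the same-stage BlockB

concat_fuse(main_feat, aux_feat)  # 6 channels: [2, 5, 1, 1, 3, 4]
add_fuse(main_feat, aux_feat)     # still 3 channels: [3, 8, 5]
```

Concatenation preserves both streams' information at the cost of widening the next Block's input, while element-wise addition keeps the channel count fixed.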
Step 103, outputting living body characteristics through the main neural network and the auxiliary neural network, and sending the living body characteristics to a full connection layer.
In this embodiment, the main and auxiliary neural networks each output characteristics, and these characteristics are fused; the fusion method may be concatenation.
Step 104, outputting the category of the living body characteristics through the full connection layer and the softmax layer as the living body detection result.
In this embodiment, the neural network cascades a full connection layer and a softmax layer to process the characteristics output by the network.
Specifically, the full connection layer reduces the influence of feature position on classification, and the softmax layer outputs the category of the characteristics.
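The softmax classification stage can be sketched in pure Python. The class names and logit values here are illustrative assumptions, not the patent's exact label set:

```python
import math

def softmax(logits):
    """Numerically stable softmax: shift by the max, exponentiate, normalize."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, classes):
    """Return the highest-probability class and its probability,
    i.e. the category plus the real-person confidence score."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return classes[best], probs[best]

# Toy logits standing in for the full connection layer's output
classes = ["live person", "printed photo", "screen photo", "mask"]
label, confidence = classify([3.2, 0.4, 0.1, -1.0], classes)
```

Because softmax outputs a probability distribution, the winning class's probability doubles as the confidence information mentioned in the background section.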
The steps of the above method are divided for clarity of description; in implementation they may be combined into one step, or a step may be split into several, as long as the same logical relationship is preserved, and all such variants are within the scope of this patent. Adding insignificant modifications to the algorithm or process, or introducing insignificant design changes without changing the core design, is also within the scope of this patent.
A second embodiment of the present invention relates to training of a living body detection model, as shown in fig. 3, including:
Step 301, acquiring a picture through an image acquisition device.
In this embodiment, the acquired pictures include living body pictures and non-living-body pictures: a living body picture captures a real, live person, while non-living-body pictures include, but are not limited to, printed face pictures, face pictures displayed on an electronic screen, masked faces, faces of cartoon characters, and the like. Since the purpose of this embodiment is to train the living body detection model, the more training samples there are, the better the training effect and the higher the accuracy of living body detection.
Step 302, classifying and labeling the acquired pictures.
In this embodiment, each captured picture is labeled with the corresponding category according to the captured object. For example, the categories may be divided into live person, printed photograph, photograph on an electronic screen, photograph of a worn mask, and other non-living objects.
Specifically, the labels may be one-hot vectors, i.e., the value is 1 at the category corresponding to the picture and 0 elsewhere.
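One-hot labeling as just described is a one-liner per picture; the category list below is an illustrative assumption based on the examples above:

```python
CLASSES = ["live person", "printed photo", "screen photo", "mask", "other non-live"]

def one_hot(label, classes=CLASSES):
    """Return a vector with 1 at the picture's category and 0 elsewhere."""
    vec = [0] * len(classes)
    vec[classes.index(label)] = 1
    return vec

one_hot("printed photo")  # [0, 1, 0, 0, 0]
```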
Step 303, preprocessing the pictures to construct a training set.
In this embodiment, the preprocessing is as shown in step 201 and steps 203-204 and is not repeated here.
The first picture processed in step 201 and the second picture processed in step 204 are added to the training set, and the category of this picture pair is labeled.
Step 304, training the network model through the training set.
In this embodiment, the architecture of the network model is shown in step 102, which is not described herein.
Specifically, the features output by the main and auxiliary neural networks are fused, the feature category obtained by the network model is output through the full connection layer and the softmax layer, and the loss function is computed from this output and the pre-labeled category, as shown in formula (1):
L = -Σᵢ yᵢ · log(Oᵢ)    (1)
where yᵢ is the (one-hot) category label of the input picture pair and Oᵢ is the category output of the softmax layer.
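Formula (1) is the standard cross-entropy loss; a minimal pure-Python rendering (the `eps` guard against log(0) is an implementation detail, not part of the formula):

```python
import math

def cross_entropy(y, o, eps=1e-12):
    """L = -sum_i y_i * log(O_i): one-hot label y, softmax output o."""
    return -sum(yi * math.log(oi + eps) for yi, oi in zip(y, o))

y = [1, 0, 0, 0, 0]                 # one-hot label: live person
o = [0.9, 0.04, 0.03, 0.02, 0.01]   # softmax output of the network
loss = cross_entropy(y, o)          # -log(0.9), about 0.105
```

With a one-hot label only the true class's term survives, so the loss shrinks toward 0 as the network assigns more probability to the correct category.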
Further, the network model is trained by stochastic gradient descent (SGD), and the network model with its optimal parameters is stored.
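The SGD update rule used for this training can be illustrated on a toy one-parameter problem; the quadratic objective is purely illustrative, not the patent's loss:

```python
def sgd_step(params, grads, lr=0.01):
    """One SGD update: w <- w - lr * dL/dw for each parameter."""
    return [w - lr * g for w, g in zip(params, grads)]

# Toy problem: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3)
w = [0.0]
for _ in range(500):
    grad = [2 * (w[0] - 3.0)]
    w = sgd_step(w, grad, lr=0.1)
# w[0] has converged to the minimizer 3.0
```

In real training the gradients come from backpropagating the cross-entropy loss through both streams, but the per-parameter update is exactly this rule.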
A third embodiment of the present invention relates to a living body detection system, as shown in fig. 4, including:
a picture acquiring module 401, configured to acquire at least two pictures simultaneously through the image acquisition device.
In this embodiment, the picture acquisition module 401 comprises an image acquisition submodule and a face recognition submodule.
Specifically, the image acquisition submodule comprises at least two image acquisition devices, which are fixed relative to each other, so that the acquired pictures show the same position.
The face recognition submodule comprises the face detection module and the O-Net module: the face detection module processes the first picture and sends it to the main neural network, while the O-Net module processes the second picture and sends it to the auxiliary neural network.
The living body detection module 402 is configured to send the picture to at least one main neural network and at least one auxiliary neural network, respectively, where an output result of the main neural network is fused with an output result of the auxiliary neural network; outputting the living body characteristics through the main neural network and the auxiliary neural network, and sending the living body characteristics to a full connection layer;
specifically, the living body detection module comprises at least one main neural network and at least one auxiliary neural network, and a full connection layer and a softmax layer which are cascaded with the neural networks, wherein each layer is connected in a cascaded mode.
A result output module 403, configured to output the category of the living body feature as the result of the living body detection through the full connection layer and the softmax layer.
Specifically, the result output module outputs the result of the living body test in accordance with a category set in advance.
It should be understood that this embodiment is a system example corresponding to the first embodiment, and may be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related-art details mentioned in the present embodiment can also be applied to the first embodiment.
It should be noted that, in practical applications, one logical unit may be one physical unit, part of one physical unit, or a combination of several physical units. In addition, in order to highlight the innovative part of the present invention, units less closely related to solving the technical problem proposed by the invention are not introduced in this embodiment, which does not mean that no other units exist.
A fourth embodiment of the present invention relates to an electronic device, as shown in fig. 5, comprising:
at least one processor 501; and a memory 502 communicatively coupled to the at least one processor 501; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform any of the liveness detection methods.
The memory and the processor are connected by a bus, which may comprise any number of interconnected buses and bridges linking the various circuits of the one or more processors and the memory. The bus may also link various other circuits, such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore not described further here. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or several elements, such as several receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor is transmitted over a wireless medium through an antenna, which also receives data and forwards it to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory may be used to store data used by the processor in performing operations.
Those skilled in the art will understand that all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the relevant hardware. The program is stored in a storage medium and includes several instructions that cause a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (9)

1. A living body detection method, applied to an electronic device having at least two image acquisition devices, wherein the at least two image acquisition devices acquire images of the same position, the method comprising:
simultaneously acquiring at least two pictures through the image acquisition device;
respectively sending the pictures to at least one main neural network and at least one auxiliary neural network, wherein the output result of the main neural network is fused with the output result of the auxiliary neural network;
outputting the living body characteristics through the main neural network and the auxiliary neural network, and sending the living body characteristics to a full connection layer;
outputting the category of the living body characteristic as a result of the living body detection through the full connection layer and the softmax layer;
in the process of fusing the output result of the main neural network with the output result of the auxiliary neural network, the main neural network and the auxiliary neural network both comprise a plurality of Blocks; the method comprises the following steps:
fusing the output result of each Block in the main neural network with the output result of the Block at the same stage in the auxiliary neural network;
and sending the fused result to the next Block in the main neural network.
2. The living body detection method according to claim 1, comprising, after said simultaneously acquiring at least two pictures through said image acquisition devices:
detecting whether the first picture has a face image;
and if the first picture has the face image, the face image is sent to the main neural network.
3. The living body detection method according to claim 2, wherein, when the first picture has the face image, sending the face image to the main neural network further comprises:
sending the position coordinates of the face.
4. The living body detection method according to claim 3, comprising, after the sending of the position coordinates of the face:
regressing the face image in the second picture by combining the position coordinates;
and sending the face image in the second picture to the auxiliary neural network.
5. The living body detection method according to claim 1, wherein said sending the pictures respectively to at least one main neural network and at least one auxiliary neural network comprises:
training the main neural network and the auxiliary neural network according to preset categories, wherein the categories at least comprise a live-person living body category.
6. The living body detection method according to claim 1, wherein the outputting of the living body features through the main neural network and the auxiliary neural network and the sending to the fully connected layer comprises:
fusing the features output by the main neural network with the features output by the auxiliary neural network;
and sending the fusion result to the fully connected layer.
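The final classification step of claims 1 and 6 can be sketched as: fuse the two feature vectors (concatenation is assumed here; the claim does not fix the operation), apply a fully connected layer, and take a softmax over the classes. The random weights and the two-class live/attack setup are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
main_feat = rng.standard_normal((1, 4))   # output of the main neural network
aux_feat = rng.standard_normal((1, 4))    # output of the auxiliary neural network

fused = np.concatenate([main_feat, aux_feat], axis=-1)  # claim 6: fuse features
w_fc = rng.standard_normal((8, 2)) * 0.1                # fully connected layer
logits = fused @ w_fc
probs = softmax(logits)                                 # softmax layer
label = ["live", "attack"][int(probs.argmax())]         # detection result
print(probs.shape)                                      # (1, 2)
```

The softmax output is a probability distribution over the preset classes, so the predicted category doubles as the living body detection result.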
7. A living body detection system, comprising:
an image acquisition module, configured to simultaneously acquire at least two pictures through an image acquisition device;
a living body detection module, configured to send the pictures to at least one main neural network and at least one auxiliary neural network respectively, wherein the output result of the main neural network is fused with the output result of the auxiliary neural network; and to output living body features through the main neural network and the auxiliary neural network and send the living body features to a fully connected layer;
a result output module, configured to output the category of the living body features as the result of the living body detection through the fully connected layer and a softmax layer;
wherein the living body detection module is specifically configured to, in fusing the output result of the main neural network with the output result of the auxiliary neural network, the main neural network and the auxiliary neural network each comprising a plurality of Blocks:
fuse the output result of each Block in the main neural network with the output result of the same-level Block in the auxiliary neural network;
and send the fused result to the next Block in the main neural network.
8. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the living body detection method of any one of claims 1 to 6.
9. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the living body detection method according to any one of claims 1 to 6.
CN202011196654.8A 2020-10-30 2020-10-30 Living body detection method, living body detection system, electronic device, and storage medium Active CN112307973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011196654.8A CN112307973B (en) 2020-10-30 2020-10-30 Living body detection method, living body detection system, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN112307973A CN112307973A (en) 2021-02-02
CN112307973B true CN112307973B (en) 2023-04-18

Family

ID=74333508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011196654.8A Active CN112307973B (en) 2020-10-30 2020-10-30 Living body detection method, living body detection system, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN112307973B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101169867A (en) * 2007-12-04 2008-04-30 北京中星微电子有限公司 Image dividing method, image processing apparatus and system
CN103136893A (en) * 2013-01-24 2013-06-05 浙江工业大学 Tunnel fire early-warning controlling method based on multi-sensor data fusion technology and system using the same
CN108229325A (en) * 2017-03-16 2018-06-29 北京市商汤科技开发有限公司 Method for detecting human face and system, electronic equipment, program and medium
CN108345818A (en) * 2017-01-23 2018-07-31 北京中科奥森数据科技有限公司 A kind of human face in-vivo detection method and device
CN109034102A (en) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 Human face in-vivo detection method, device, equipment and storage medium
CN110705365A (en) * 2019-09-06 2020-01-17 北京达佳互联信息技术有限公司 Human body key point detection method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019100436A1 (en) * 2017-11-22 2019-05-31 Zhejiang Dahua Technology Co., Ltd. Methods and systems for face recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Dual-station radar-based living body detection and localisation; Chao Yan et al.; The Journal of Engineering; 2019-12-31; pp. 7880-7884 *
Research on the application of neural network methods in securities market prediction; Yang Nan et al.; Financial Economy; 2016-09-30; pp. 63-64 *

Similar Documents

Publication Publication Date Title
CN108596277B (en) Vehicle identity recognition method and device and storage medium
CN108351967A (en) Multi-face detection method, device, server, system and storage medium
CN108073898B (en) Method, device and equipment for identifying human head area
CN111462128B (en) Pixel-level image segmentation system and method based on multi-mode spectrum image
CN205810112U (en) Vehicle License Plate Recognition System
CN105574550A (en) Vehicle identification method and device
CN112418360B (en) Convolutional neural network training method, pedestrian attribute identification method and related equipment
CN107633237A (en) Image background segmentation method, device, equipment and medium
CN105590089A (en) Face recognition method and device
CN111126411B (en) Abnormal behavior identification method and device
CN113642639A (en) Living body detection method, living body detection device, living body detection apparatus, and storage medium
JP7059889B2 (en) Learning device, image generator, learning method, and learning program
CN116863297A (en) Monitoring method, device, system, equipment and medium based on electronic fence
CN113792686B (en) Vehicle re-identification method based on visual representation of invariance across sensors
CN112766176B (en) Training method of lightweight convolutional neural network and face attribute recognition method
CN112597995B (en) License plate detection model training method, device, equipment and medium
CN108154199B (en) High-precision rapid single-class target detection method based on deep learning
CN112232236B (en) Pedestrian flow monitoring method, system, computer equipment and storage medium
CN112307973B (en) Living body detection method, living body detection system, electronic device, and storage medium
You et al. Tampering detection and localization base on sample guidance and individual camera device convolutional neural network features
CN115909408A (en) Pedestrian re-identification method and device based on Transformer network
CN110378241A (en) Crop growthing state monitoring method, device, computer equipment and storage medium
CN111860100B (en) Pedestrian number determining method and device, electronic equipment and readable storage medium
CN113642353B (en) Training method of face detection model, storage medium and terminal equipment
CN110263788B (en) Method and system for quickly identifying vehicle passing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant