CN113420747B - Face recognition method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113420747B
CN113420747B (application CN202110979814.4A)
Authority
CN
China
Prior art keywords
fusion, geometric, features, facial, feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110979814.4A
Other languages
Chinese (zh)
Other versions
CN113420747A (en)
Inventor
王凌云
郑玉玲
王梓凝
蔺志峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengfang Financial Technology Co ltd
Original Assignee
Chengfang Financial Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengfang Financial Technology Co ltd
Priority to CN202110979814.4A
Publication of CN113420747A
Application granted
Publication of CN113420747B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a face recognition method and device, electronic equipment and a storage medium. The method comprises the following steps: acquiring an original face image, and determining facial texture features and geometric features of the five sense organs of the face in the original face image; performing feature fusion on the facial texture features and the geometric features of the five sense organs to obtain facial fusion features; and carrying out face recognition based on the facial fusion features of the original face image. The embodiment solves the problem that face recognition relying only on facial texture features is easily disturbed by image noise or affected by face pose, and achieves the technical effects of resisting noise interference to a certain degree and recognizing faces effectively under different face poses.

Description

Face recognition method and device, electronic equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of deep learning, in particular to a face recognition method, a face recognition device, electronic equipment and a storage medium.
Background
Face recognition technology is currently used widely in scenarios such as face-scan payment, face-scan login, face-scan unlocking, face-scan check-in, unmanned retail, and security control in public places. Since 2014, deep neural networks have developed vigorously: face recognition models based on deep learning have repeatedly set new records on the LFW (Labeled Faces in the Wild) dataset, the best of which surpass human performance, and face recognition based on deep neural networks has become the mainstream method in the field.
Currently, mainstream face recognition algorithms based on deep neural networks usually align the input face picture when training the face embedding vector. Although the prior art can resist the influence of different poses to a certain extent, extracting texture features from a face picture after affine transformation alone can neither resist noise interference nor accurately identify faces under different poses; in particular, it cannot resist spoofing attacks in which noise is added to the image so that the network fails to capture the texture features.
Disclosure of Invention
The embodiment of the invention provides a face recognition method, a face recognition device, electronic equipment and a storage medium, and aims to achieve the technical effects of resisting noise interference to a certain degree and effectively performing face recognition under different face postures.
In a first aspect, an embodiment of the present invention provides a face recognition method, where the face recognition method includes:
acquiring an original face image, and determining facial texture features and geometric features of five sense organs of a face part in the original face image;
performing feature fusion on the facial texture features and the geometric features of the five sense organs to obtain facial fusion features;
and carrying out face recognition based on the face fusion characteristics of the original face image.
In a second aspect, an embodiment of the present invention further provides a face recognition apparatus combining geometric features of five sense organs, where the face recognition apparatus includes:
the original face image acquisition module is used for acquiring an original face image and determining facial texture features and geometric features of five sense organs of a face part in the original face image;
the feature fusion module is used for performing feature fusion on the facial texture features and the geometric features of the five sense organs to obtain facial fusion features;
and the face recognition module is used for carrying out face recognition based on the face fusion characteristics of the original face image.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a storage device for storing one or more programs,
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the face recognition method provided by any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the face recognition method provided in any embodiment of the present invention.
According to the technical scheme of the embodiment of the invention, the geometrical characteristics of five sense organs and the facial texture characteristics are determined by acquiring the original face image, and the facial texture characteristics and the geometrical characteristics of the five sense organs are subjected to characteristic fusion, so that the problem that the face recognition is easily interfered by image noise or easily influenced by face postures only by using the facial texture characteristics is solved, and the technical effects of resisting noise interference to a certain degree and effectively carrying out face recognition under different face postures are achieved.
Drawings
In order to more clearly illustrate the technical solutions of the exemplary embodiments of the present invention, a brief description is given below of the drawings used in describing the embodiments. It should be understood that the drawings described are only for a part of the embodiments of the present invention and not for all embodiments, and that other drawings may be derived by those skilled in the art without inventive step.
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of geometric features of facial features according to a first embodiment of the present invention;
fig. 3 is a schematic flow chart of a face recognition method according to a second embodiment of the present invention;
fig. 4 is a schematic diagram of a geometric angle between facial features according to a second embodiment of the present invention;
fig. 5 is a schematic view of a face recognition process according to a second embodiment of the present invention;
fig. 6 is a schematic flow chart illustrating fusion feature extraction of a face recognition method according to a second embodiment of the present invention;
fig. 7 is a schematic structural diagram of a face recognition apparatus according to a third embodiment of the present invention;
fig. 8 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe operations (or steps) as a sequential process, many of the operations can be performed in parallel or concurrently, and the order of the operations may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not shown in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
Example one
Fig. 1 is a schematic flow chart of a face recognition method according to an embodiment of the present invention. The method is applicable to face recognition scenarios and, in particular, can effectively resist attacks in which noise is added to the image.
As shown in fig. 1, the method of the embodiment may specifically include:
s110, acquiring an original face image, and determining facial texture features and geometric features of five sense organs of a face part in the original face image.
The original face image may be a face image acquired by an image acquisition device, for example, the original face image may be a face image acquired by a monitoring device, a camera, or the like.
The facial texture feature may be a visual feature reflecting homogeneous patterns in the face image; it captures the slowly or periodically varying surface structure of the face. Facial texture features may be obtained, for example, by a convolutional neural network.
The facial texture features of the face in the original face image may be determined by first aligning the original face image and then extracting the features with a Convolutional Neural Network (CNN) feature extraction network.
Optionally, on the basis of any optional technical solution in the embodiment of the present invention, determining the facial texture features of the face in the original face image specifically includes: if the original face image is a frontal face image, inputting the frontal face image into a pre-trained convolutional neural network to obtain the facial texture features of the face in the original face image.
Optionally, if the original face image is not a frontal face image, it may first be transformed into a frontal face image through affine transformation, and the frontal face image is then input into the pre-trained convolutional neural network to obtain the facial texture features of the face in the original face image.
The geometric features of the five sense organs may be geometric features of the facial organs (eyes, nose, mouth, ears, eyebrows) in the face plane; for example, they may include an eye distance ratio, an eye-nose ratio, a nose shape ratio, a lip distance ratio, an ear-nose ratio, a lip-nose ratio, and the like.
The determining of the geometric features of the five sense organs of the face part in the original face image may be determining the location of the five sense organs of the face by performing keypoint detection on the original face image, and then calculating the geometric features of the five sense organs.
Optionally, on the basis of any optional technical solution in the embodiment of the present invention, the geometric features of five sense organs include at least one of an eye distance ratio, an eye nose ratio, a nose shape ratio, a lip distance ratio, and a lip nose ratio.
Fig. 2 is a schematic diagram of the geometric features of the five sense organs; it shows only one alternative of the embodiment of the present invention, and the embodiment is not limited to these specific geometric features. As shown in Fig. 2, the eye distance ratio may be eye_sd/eye_ld, the eye-nose ratio may be eye_sd/nose_hd, the nose shape ratio may be nose_hd/nose_ld, the lip distance ratio may be philtrum_ld/chin_ld, and the lip-nose ratio may be philtrum_ld/nose_ld.
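As a concrete illustration, the five ratios are simple quotients of landmark distances. The sketch below assumes the distances of Fig. 2 have already been measured; the names (eye_sd, eye_ld, nose_hd, nose_ld, philtrum_ld, chin_ld) follow the figure labels, but their precise geometric definitions (e.g. eye_sd = inner-eye distance, eye_ld = outer-eye distance) are assumptions, since the text does not spell them out.

```python
def facial_ratios(eye_sd, eye_ld, nose_hd, nose_ld, philtrum_ld, chin_ld):
    """Compute the five geometric ratios of the five sense organs from
    pre-measured landmark distances (names follow Fig. 2; their exact
    meanings are assumed, not confirmed by the text)."""
    return {
        "eye_distance_ratio": eye_sd / eye_ld,
        "eye_nose_ratio": eye_sd / nose_hd,
        "nose_shape_ratio": nose_hd / nose_ld,
        "lip_distance_ratio": philtrum_ld / chin_ld,
        "lip_nose_ratio": philtrum_ld / nose_ld,
    }
```

Because every feature is a ratio of distances on the same face, the features are invariant to the overall scale of the image, which is one reason they complement scale-sensitive texture features.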
And S120, performing feature fusion on the facial texture features and the geometric features of the five sense organs to obtain facial fusion features.
Here, the feature fusion may be a weighted fusion of the facial texture features and the geometric features of the five sense organs. For example, the two kinds of features may first be processed by different weight matrices and then fused according to those matrices; the weight-processed facial texture features and geometric features of the five sense organs may be combined by addition and/or convolution and/or matrix multiplication.
And S130, carrying out face recognition based on the face fusion characteristics of the original face image.
The face recognition based on the face fusion features of the original face image can be realized by classifying the fusion features through a classifier and then carrying out face recognition.
According to the technical scheme of this embodiment, the geometric features of the five sense organs and the facial texture features are determined from the acquired original face image and then fused. This solves the problem that face recognition relying only on facial texture features is easily disturbed by image noise or affected by face pose, and achieves the technical effects of resisting noise interference to a certain degree and recognizing faces effectively under different face poses.
Example two
Fig. 3 is a schematic flow chart of a face recognition method according to a second embodiment of the present invention. On the basis of any optional technical solution in the embodiment of the present invention, determining the geometric features of the five sense organs of the face in the original face image includes: extracting key points from the original face image to obtain face key points; calculating the Euler angles of the face pose of the original face image based on the face key points; and calculating, based on the Euler angles, the geometric features of the five sense organs when the face of the original face image is turned to the frontal view.
Optionally, the feature fusing the facial texture features and the geometric features of the five sense organs includes: calculating the geometric included angle of the facial features of the original facial image based on the facial key points; determining a fusion guide vector for guiding the facial texture features and the geometric features of the five sense organs to perform feature fusion based on the geometric included angle and the Euler angle; and performing feature fusion on the original face image based on the facial texture features, the geometric features of the five sense organs and the fusion guide vector.
As shown in fig. 3, the method of the present embodiment may specifically include:
s210, obtaining an original face image, and determining facial texture features of a face part in the original face image.
And S220, extracting key points of the original face image to obtain face key points.
The face key points may be the five sense organs of the face or other points that facilitate face localization and recognition, for example points used to characterize the positions of the nose, eyes, mouth, ears and eyebrows. Specifically, they may be obtained by performing face key-point detection on the original face image.
And S230, calculating the Euler angle of the face pose of the original face image based on the face key points.
The face pose may be the angle information of the face orientation; for example, it may be represented by a rotation matrix, a rotation vector, a quaternion, or Euler angles.
Wherein the Euler angles comprise a yaw angle, a pitch angle and a roll angle.
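The patent does not specify how the Euler angles are computed from the key points (a PnP solve against a 3D face model is the usual approach); the sketch below is only a crude 2D approximation from five landmarks, and all landmark names and heuristics in it are this example's own assumptions.

```python
import numpy as np

def head_pose_euler(landmarks):
    """Rough yaw/pitch/roll estimate (degrees) from five 2D landmarks:
    {'left_eye', 'right_eye', 'nose', 'mouth_left', 'mouth_right'}.
    Illustrative only; not the algorithm named by the patent."""
    le = np.asarray(landmarks['left_eye'], float)
    re = np.asarray(landmarks['right_eye'], float)
    nose = np.asarray(landmarks['nose'], float)
    ml = np.asarray(landmarks['mouth_left'], float)
    mr = np.asarray(landmarks['mouth_right'], float)

    # Roll: tilt of the inter-ocular line.
    dx, dy = re - le
    roll = np.degrees(np.arctan2(dy, dx))

    # Yaw: horizontal nose offset from the eye midpoint,
    # scaled by the inter-ocular distance.
    eye_mid = (le + re) / 2
    yaw = np.degrees(np.arctan2(nose[0] - eye_mid[0], np.linalg.norm(re - le)))

    # Pitch: vertical nose offset relative to the eye-mouth span.
    mouth_mid = (ml + mr) / 2
    span = mouth_mid[1] - eye_mid[1]
    pitch = np.degrees(np.arctan2(nose[1] - eye_mid[1] - 0.5 * span, abs(span)))
    return yaw, pitch, roll
```

On a symmetric frontal face all three angles come out near zero, which matches the intuition that the frontal pose is the reference orientation.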
S240, calculating geometric characteristics of five sense organs when the face of the original face image turns to the front face based on the Euler angles.
Calculating, based on the Euler angles, the geometric features of the five sense organs when the face of the original face image is turned to the frontal view may proceed as follows: first compute the geometric features of the five sense organs from the key points of the original face image, then apply spatial transformations (cos/sin/tan and the like) according to the spatial Euler angles to obtain the geometric features the five sense organs would have at the frontal pose. Fig. 2 shows a schematic diagram of the geometric features of the five sense organs when the face is turned to the frontal view.
And S250, calculating the geometric included angle of the facial features of the original facial image based on the facial key points.
The geometric included angle of the facial five sense organs may be a planar angle formed by the five sense organs in the planar projection, for example the angle between the nose, eyes and mouth, or the angle between the nose and the two eyes. Fig. 4 is a schematic diagram of these geometric included angles; they may be ∠CAD, ∠CAB, ∠CBA, ∠CBE, ∠CDE, ∠CED, and the like.
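Each such angle can be computed from three key points with elementary vector geometry; the point labels in Fig. 4 (A, B, C, …) stand for specific landmarks, so the helper below takes the vertex and the two other points directly.

```python
import numpy as np

def angle_at(vertex, p1, p2):
    """Planar angle (degrees) at `vertex` formed by the rays toward p1
    and p2, e.g. the angle at the nose tip between the two eye centres."""
    v1 = np.asarray(p1, float) - np.asarray(vertex, float)
    v2 = np.asarray(p2, float) - np.asarray(vertex, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against tiny floating-point overshoot outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

For instance, an angle such as ∠CAB would be `angle_at(A, C, B)` once the landmark coordinates for A, B and C are known.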
S260, determining a fusion guide vector for guiding the facial texture feature and the geometric feature of the five sense organs to perform feature fusion based on the geometric included angle and the Euler angle.
The fusion guide vector can be understood as indicating the fusion strength of the facial texture features and the geometric features of the five sense organs. For example, different fusion strengths are determined according to the geometric included angles and the Euler angles, so that the fusion adapts to faces at different angles or with different facial geometry. The fused features are then more favorable for face recognition: the proportion of features distorted by environmental influences is reduced, and recognition accuracy is improved.
Optionally, the determining a fusion guidance vector for guiding feature fusion of the facial texture features and the geometric features of the five sense organs based on the geometric angle and the euler angle includes: and splicing the geometric included angle and the Euler angle to obtain a fusion guide vector for guiding the facial texture feature and the geometric feature of the five sense organs to perform feature fusion.
The splicing may directly concatenate the geometric included angles and the Euler angles into the fusion guide vector; for example, k geometric included angles of the five sense organs may be spliced with the 3 Euler angles to form a (k+3)-dimensional fusion guide vector.
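The splicing step itself is plain concatenation; a minimal sketch (the choice of k = 6 angles in the example is arbitrary):

```python
import numpy as np

def build_guide_vector(geometric_angles, euler_angles):
    """Concatenate k geometric included angles of the five sense organs
    with the 3 pose Euler angles (yaw, pitch, roll) into a
    (k + 3)-dimensional fusion guide vector."""
    geometric_angles = np.asarray(geometric_angles, dtype=np.float32)
    euler_angles = np.asarray(euler_angles, dtype=np.float32)
    assert euler_angles.shape == (3,), "expected yaw, pitch, roll"
    return np.concatenate([geometric_angles, euler_angles])
```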
S270, performing feature fusion on the original face image based on the facial texture features, the geometric features of the five sense organs and the fusion guide vector to obtain face fusion features.
The feature fusion may be a weighted fusion of the facial texture features and the geometric features of the five sense organs according to the fusion guide vector. For example, the two kinds of features are first processed by different weight matrices and then fused according to the fusion guide vector: the facial fusion features may be obtained by adding and/or convolving and/or matrix-multiplying the weight-processed texture and geometric features, and then combining the result with the fusion guide vector by addition and/or convolution and/or matrix multiplication.
Optionally, on the basis of any optional technical solution in the embodiment of the present invention, the performing feature fusion on the original face image based on the facial texture features, the geometric features of the five sense organs, and the fusion guide vector includes:
step one, respectively determining a first fusion weight corresponding to the facial texture feature and a second fusion weight corresponding to the geometric feature of the five sense organs based on the fusion guide vector.
The first fusion weight may be determined from the fusion guide vector and the facial texture features; for example, it may be obtained by adding, convolving or matrix-multiplying the fusion guide vector with the facial texture features, or through multi-layer convolution and activation-function operations.
The second fusion weight may be determined from the fusion guide vector and the geometric features of the five sense organs; for example, it may be obtained by adding, convolving or matrix-multiplying the fusion guide vector with the geometric features of the five sense organs, or through multi-layer convolution and activation-function operations.
And secondly, performing feature fusion on the original face image based on the facial texture features, the first fusion weight, the geometric features of five sense organs and the second fusion weight.
Feature fusion of the original face image based on the facial texture features, the first fusion weight, the geometric features of the five sense organs and the second fusion weight may be carried out by operations such as addition, convolution and/or matrix multiplication among these quantities, or through multi-layer convolution and activation functions.
Specifically: calculate the product of the facial texture features, their corresponding first basic weight, and the first fusion weight to obtain the facial texture features to be fused; calculate the product of the geometric features of the five sense organs, their corresponding second basic weight, and the second fusion weight to obtain the geometric features of the five sense organs to be fused; and perform feature fusion on the original face image based on the facial texture features to be fused and the geometric features of the five sense organs to be fused.
The first basic weight may be a weight matrix that operates on the facial texture features to further extract them; for example, it may be matrix-multiplied with the facial texture features.
The second basic weight may likewise be a weight matrix that operates on the geometric features of the five sense organs to further extract them; for example, it may be matrix-multiplied with the geometric features of the five sense organs.
The facial texture features to be fused are the facial texture features after weighting by the fusion guide vector; they are subsequently fused with the geometric features of the five sense organs to be fused to obtain the fusion features.
The geometric features of the five sense organs to be fused are, correspondingly, the geometric features after weighting by the fusion guide vector; they are subsequently fused with the facial texture features to be fused to obtain the fusion features.
Feature fusion based on the facial texture features to be fused and the geometric features of the five sense organs to be fused may use operations such as addition and/or convolution and/or matrix multiplication; for example, the two kinds of features are transformed to the same dimension and then added element-wise.
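The fusion step above can be sketched as follows. The patent leaves the exact operators open (addition, convolution and matrix multiplication are all permitted), so the choice here of matrix projection, scalar scaling by the fusion weights, and element-wise addition is one possible reading, not the definitive implementation; all shapes are assumptions.

```python
import numpy as np

def fuse_features(texture, geometric, w_base_tex, w_base_geo, a_tex, a_geo):
    """Weighted fusion sketch: each feature is projected to a common
    dimension by its basic weight matrix, scaled by its (scalar) fusion
    weight, and the two results are added element-wise."""
    # Facial texture features to be fused.
    tex_to_fuse = a_tex * (w_base_tex @ texture)
    # Geometric features of the five sense organs to be fused.
    geo_to_fuse = a_geo * (w_base_geo @ geometric)
    return tex_to_fuse + geo_to_fuse
```

In practice `a_tex` and `a_geo` would come from the softmax step described below (Example two), and the basic weight matrices would be learned jointly with the rest of the network.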
Optionally, on the basis of any optional technical solution in the embodiment of the present invention, determining the first and second fusion weights based on the fusion guide vector includes: step one, determining a guided facial texture weight corresponding to the facial texture features and a guided geometric weight corresponding to the geometric features of the five sense organs, based on the fusion guide vector and its corresponding guide weight matrix, the facial texture features and their corresponding first texture weight matrix, and the geometric features of the five sense organs and their corresponding first geometric weight matrix.
The guidance weight matrix may be a weight matrix that is operated with the fusion guidance vector to extract the feature of the fusion guidance vector, and may be a weight matrix that is obtained by performing matrix multiplication on the fusion guidance vector and the guidance weight matrix to extract the feature of the fusion guidance vector, for example.
The first texture weight matrix may be a weight matrix that is operated with the facial texture feature to extract the feature of the facial texture feature, for example, the first texture weight matrix may be a weight matrix that is matrix-multiplied with the facial texture feature to extract the feature of the facial texture feature.
The first geometric weight matrix may be a weight matrix for extracting the geometric features of the five sense organs by performing an operation on the geometric features of the five sense organs, for example, the first geometric weight matrix may be a weight matrix for extracting the geometric features of the five sense organs by performing a matrix multiplication on the geometric features of the five sense organs and the first geometric weight matrix.
Step two, inputting the guided facial texture weight and the guided geometric weight of the five sense organs into a softmax layer, and outputting the first fusion weight corresponding to the facial texture features and the second fusion weight corresponding to the geometric features of the five sense organs.
Here, the guided facial texture weight is the product of (the fusion guide vector times its guide weight matrix) and (the facial texture features times their first texture weight matrix); the guided geometric weight of the five sense organs is the product of (the fusion guide vector times its guide weight matrix) and (the geometric features of the five sense organs times their first geometric weight matrix).
The softmax layer maps one vector into another whose elements each lie between 0 and 1 and sum to 1; through it, the guided facial texture weight and the guided geometric weight of the five sense organs are normalized into the first fusion weight and the second fusion weight.
Wherein the guiding facial texture weight may be a vector obtained by performing a matrix product calculation on a first product vector and a second product vector, and the first product vector may be obtained by multiplying the fusion guiding vector by a guiding weight matrix corresponding to the fusion guiding vector; the second product vector may be obtained by multiplying the facial texture feature by a first texture weight matrix corresponding to the facial texture feature.
The geometric weight of the guiding five sense organs can be a vector obtained by performing matrix product calculation on a first product vector and a third product vector, and the first product vector can be obtained by multiplying the fusion guiding vector by a guiding weight matrix corresponding to the fusion guiding vector; the third product vector may be obtained by multiplying the geometric feature of the five sense organs by a first geometric weight matrix corresponding to the geometric feature of the five sense organs.
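Taken together, the passage above describes an attention-style weighting: project the guide vector and each feature, score each modality by a dot product, and softmax-normalise the two scores. A minimal sketch, with all matrix shapes being this example's assumptions:

```python
import numpy as np

def fusion_weights(guide, texture, geometric, w_guide, w_tex, w_geo):
    """Derive the two fusion weights. Each modality's score is the dot
    product between the projected guide vector (the first product
    vector) and its projected feature; softmax then yields weights in
    (0, 1) that sum to 1."""
    u = w_guide @ guide                # first product vector
    s_tex = u @ (w_tex @ texture)      # guided facial texture weight (score)
    s_geo = u @ (w_geo @ geometric)    # guided geometric weight (score)
    scores = np.array([s_tex, s_geo])
    e = np.exp(scores - scores.max())  # numerically stable softmax
    a_tex, a_geo = e / e.sum()
    return a_tex, a_geo
```

Because the weights depend on the guide vector, a large yaw angle (for instance) can shift weight toward whichever modality remains reliable at that pose, which is the "strength-adjustable" fusion the patent describes.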
And S280, carrying out face recognition based on the face fusion characteristics of the original face image.
According to the technical scheme of this embodiment, the geometric included angles of the facial five sense organs are calculated and spliced with the Euler angles to obtain the fusion guide vector, which guides the fusion of the facial texture features and the geometric features of the five sense organs into the fusion features of the face image. This solves the problem of how to fuse the two kinds of features, and achieves the technical effect of fusing the geometric features of the five sense organs and the facial texture features with adjustable strength through the fusion guide vector.
Fig. 5 is a schematic view of a face recognition process according to an embodiment of the present invention; an alternative face recognition scheme of the embodiment is described below with fig. 5 as an example.
Step 1: input the original face image, i.e., the face picture to be recognized.
Step 2: detect the face bounding box in the original face image.
Step 3: extract key points from the original face image to obtain the face key points.
Step 4: calculate the geometric angles of the facial features of the original image, and calculate the Euler angles of the face pose (yaw, pitch, and roll).
Step 5: correct and align the face pose, and calculate, based on the Euler angles of the face pose, the geometric features of the five sense organs when the face is turned to the frontal view, including the eye distance ratio, eye-nose ratio, nose shape ratio, lip distance ratio, and lip-nose ratio.
Step 6: splice the geometric angles of the facial features and the Euler angles of the face pose to form the fusion guidance vector.
Step 7: calculate the geometric features of the five sense organs from the face key points.
Step 9: guide the fusion strength of the geometric features of the five sense organs according to the geometric angles and the Euler angles, generate an embedding feature extraction network that fuses the facial texture features and the geometric features with adjustable strength, and extract the fusion features.
Step 10: compare faces based on the fusion features of the original face image (i.e., the input face image).
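As a minimal sketch of step 7 above, the distance ratios listed in step 5 can be computed from 2D key points. The landmark names, coordinates, and the choice of the inter-eye distance as normalizer are all illustrative assumptions; the patent does not fix a landmark scheme:

```python
import numpy as np

# Hypothetical 2D landmarks (x, y) on an aligned frontal face.
landmarks = {
    "left_eye":    np.array([30.0, 40.0]),
    "right_eye":   np.array([70.0, 40.0]),
    "nose_tip":    np.array([50.0, 60.0]),
    "mouth_left":  np.array([35.0, 80.0]),
    "mouth_right": np.array([65.0, 80.0]),
}

def dist(a, b):
    # Euclidean distance between two named landmarks.
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

# Normalize by the inter-eye distance (an illustrative choice).
norm = dist("left_eye", "right_eye")

ratios = {
    "eye_distance_ratio": dist("left_eye", "right_eye") / norm,
    "eye_nose_ratio":     dist("left_eye", "nose_tip") / norm,
    "lip_distance_ratio": dist("mouth_left", "mouth_right") / norm,
}
```

Because the ratios are scale-invariant, they describe the face shape independently of image resolution.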
Fig. 6 is a schematic flow chart of fusion feature extraction in a face recognition method according to an embodiment of the present invention; an alternative scheme of fusion feature extraction is described below with fig. 6 as an example.
As shown in fig. 6, a frontal image is obtained by aligning the face image to be recognized, and the aligned face image is input into a pre-trained CNN feature extraction network to obtain the facial texture feature of the face image (i.e., facial texture feature F in fig. 6). Feature fusion is then performed on the facial texture feature, the geometric feature G of the five sense organs of the face picture, and the fusion guidance vector (i.e., fusion guidance vector A in fig. 6) to obtain the face fusion feature E, and face recognition is performed based on E.
Specifically, the geometric feature G of the five sense organs is transformed by the first geometric weight matrix Wk2 and the second basis weight matrix Wv2 into two vectors of dimension n; the facial texture feature F is transformed by the first texture weight matrix Wk1 and the first basis weight matrix Wv1 into two vectors of dimension n; and the fusion guidance vector A is transformed by the guiding weight matrix Wq into a vector of dimension n.
The guiding geometric weight V2 of the five sense organs may be calculated by the following formula:

V2 = (Wq A) * (Wk2 G)

where V2 denotes the guiding geometric weight of the five sense organs, A denotes the fusion guidance vector of the face image, Wq denotes the guiding weight matrix corresponding to the fusion guidance vector, * denotes matrix multiplication, Wq A denotes the fusion guidance vector A processed by the guiding weight matrix Wq (e.g., obtained by matrix-multiplying Wq with A), Wk2 G denotes the geometric feature G processed by the first geometric weight matrix Wk2 (e.g., obtained by multiplying Wk2 with G), and G denotes the geometric feature of the five sense organs of the face picture.
The guiding facial texture weight V1 may be calculated by the following formula:

V1 = (Wq A) * (Wk1 F)

where V1 denotes the guiding facial texture weight corresponding to the facial texture feature, A denotes the fusion guidance vector of the face image, Wq denotes the guiding weight matrix corresponding to the fusion guidance vector, * denotes matrix multiplication, Wq A denotes the fusion guidance vector A processed by the guiding weight matrix Wq (e.g., obtained by matrix-multiplying Wq with A), Wk1 F denotes the facial texture feature F processed by the first texture weight matrix Wk1 (e.g., obtained by multiplying Wk1 with F), and F denotes the facial texture feature of the face image.
The first fusion weight Z1 and the second fusion weight Z2 may be calculated by the following formula:

Z1, Z2 = softmax(V1, V2)

where softmax(V1, V2) denotes inputting the guiding facial texture weight and the guiding geometric weight of the five sense organs into the softmax layer, which outputs the first fusion weight Z1 corresponding to the facial texture feature and the second fusion weight Z2 corresponding to the geometric feature of the five sense organs.
The fusion feature E may be calculated by the following formula:

E = Z2 * (Wv2 G) + Z1 * (Wv1 F)

where Z1 denotes the first fusion weight corresponding to the facial texture feature, Z2 denotes the second fusion weight corresponding to the geometric feature of the five sense organs, Wv1 denotes the first basis weight matrix corresponding to the facial texture feature, Wv2 denotes the second basis weight matrix corresponding to the geometric feature of the five sense organs, G denotes the geometric feature of the five sense organs of the face picture, F denotes the facial texture feature of the face picture, Wv2 G denotes the geometric feature G processed by the second basis weight matrix Wv2 (e.g., obtained by multiplying Wv2 with G), Wv1 F denotes the facial texture feature F processed by the first basis weight matrix Wv1 (e.g., obtained by multiplying Wv1 with F), + denotes matrix addition, and * denotes multiplication, which has higher precedence than +.
The vectors of dimension n mentioned above are the results of these matrix multiplications; n is not specifically limited here and simply denotes the dimension of the vector obtained by multiplying the two matrices.
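The formulas above can be sketched end to end with random stand-in weights. In the patented scheme Wq, Wk1, Wk2, Wv1, and Wv2 would be learned; the dimensions and the dot-product reading of the guiding scores below are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # output dimension of each weight matrix; not fixed by the text

A = rng.normal(size=6)    # fusion guidance vector (geometric angles + Euler angles)
F = rng.normal(size=128)  # facial texture feature from the CNN
G = rng.normal(size=5)    # geometric feature of the five sense organs (the ratios)

# Learned matrices in the scheme; random stand-ins here.
Wq  = rng.normal(size=(n, 6))
Wk1 = rng.normal(size=(n, 128))
Wv1 = rng.normal(size=(n, 128))
Wk2 = rng.normal(size=(n, 5))
Wv2 = rng.normal(size=(n, 5))

q = Wq @ A
V1 = q @ (Wk1 @ F)  # guiding facial texture weight, read here as a dot product
V2 = q @ (Wk2 @ G)  # guiding geometric weight of the five sense organs

s = np.array([V1, V2])
Z1, Z2 = np.exp(s - s.max()) / np.exp(s - s.max()).sum()  # softmax over the two scores

E = Z2 * (Wv2 @ G) + Z1 * (Wv1 @ F)  # fusion feature of dimension n
```

The fused embedding E blends the texture and geometric branches in proportions Z1 and Z2 that the guidance vector controls, which is the "strength-adjustable" fusion described above.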
In the technical scheme of this alternative embodiment, the geometric features of the five sense organs are extracted and fused with the facial texture features for face recognition. This addresses the weakness of the prior art in recognizing the geometric structure of the five sense organs, enhances the accuracy of face recognition, and improves robustness against attacks based on texture-feature noise; in addition, the new fusion embedding network proposed in this alternative embodiment has a certain tolerance to pose.
EXAMPLE III
Fig. 7 is a schematic structural diagram of a face recognition device combining geometric features of the five sense organs according to a third embodiment of the present invention. The device may be implemented in software and/or hardware and may be configured in a terminal and/or a server to implement the face recognition method provided by the embodiments of the present invention. As shown in fig. 7, the apparatus may specifically include: an original face image acquisition module 310, a feature fusion module 320 and a face recognition module 330.
The original face image obtaining module 310 is configured to obtain an original face image, and determine facial texture features and geometric features of five sense organs of a face part in the original face image;
the feature fusion module 320 is configured to perform feature fusion on the facial texture features and the geometric features of the five sense organs to obtain face fusion features;
the face recognition module 330 is configured to perform face recognition based on the face fusion features of the original face image.
In this technical scheme, the geometric features of the five sense organs and the facial texture features are determined from the acquired original face image and then fused. This solves the problem that face recognition based on facial texture features alone is easily disturbed by image noise or affected by face pose, achieving a certain degree of noise resistance and effective face recognition under different face poses.
On the basis of any optional technical solution in the embodiment of the present invention, optionally, the original face image obtaining module 310 includes: a key point extraction unit, an Euler angle calculation unit, and a face-frontalization geometric feature calculation unit.
The key point extraction unit is used for extracting key points of the original face image to obtain face key points; the Euler angle calculation unit is used for calculating the Euler angles of the face pose of the original face image based on the face key points; and the face-frontalization geometric feature calculation unit is used for calculating the geometric features of the five sense organs of the original face image when the face is turned to the frontal view, based on the Euler angles.
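For the Euler angle calculation unit, one common approach recovers yaw, pitch, and roll from a head-pose rotation matrix. The axis convention below (R = Rz(roll) · Ry(yaw) · Rx(pitch)) is an illustrative assumption; the patent does not specify one:

```python
import numpy as np

def rx(a): c, s = np.cos(a), np.sin(a); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
def ry(a): c, s = np.cos(a), np.sin(a); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def rz(a): c, s = np.cos(a), np.sin(a); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euler_from_rotation(R):
    # Assumes R = rz(roll) @ ry(yaw) @ rx(pitch) and |yaw| < 90 degrees.
    yaw = np.arcsin(-R[2, 0])
    pitch = np.arctan2(R[2, 1], R[2, 2])
    roll = np.arctan2(R[1, 0], R[0, 0])
    return np.degrees([yaw, pitch, roll])
```

In practice the rotation matrix itself would come from fitting the detected key points to a 3D face model; that fitting step is outside this sketch.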
Based on any of the optional technical solutions in the embodiments of the present invention, optionally, the geometric features of the five sense organs include at least one of an eye distance ratio, an eye nose ratio, a nose shape ratio, a lip distance ratio and a lip nose ratio.
On the basis of any optional technical solution in the embodiment of the present invention, optionally, the feature fusion module 320 includes: a geometric angle calculation unit of the five sense organs, a fusion guide vector determination unit and a feature fusion unit.
The geometric included angle calculation unit of the facial features is used for calculating the geometric included angle of the facial features of the original facial image based on the facial key points; a fusion guide vector determination unit, configured to determine a fusion guide vector for guiding feature fusion of the facial texture features and the geometric features of the five sense organs based on the geometric angle and the euler angle; and the feature fusion unit is used for performing feature fusion on the original face image based on the facial texture features, the geometric features of the five sense organs and the fusion guide vector.
On the basis of any optional technical scheme in the embodiment of the present invention, optionally, the fusion guidance vector determining unit is configured to splice the geometric angle and the euler angle to obtain a fusion guidance vector for guiding feature fusion of the facial texture feature and the geometric feature of five sense organs.
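The splicing performed by this unit can be read as simple concatenation. The angle values and the number of geometric angles below are hypothetical placeholders:

```python
import numpy as np

# Hypothetical geometric angles of the facial features (degrees) and the
# Euler angles of the face pose (yaw, pitch, roll); values are placeholders.
facial_angles = np.array([42.0, 55.0, 61.0])
euler_angles = np.array([12.0, -4.0, 1.5])

# Splice the two into one fusion guidance vector A.
A = np.concatenate([facial_angles, euler_angles])
```

The resulting vector carries both the face's internal geometry and its pose, which is what lets it modulate the fusion strength of the two feature branches.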
On the basis of any optional technical solution in the embodiment of the present invention, optionally, the feature fusion unit includes: the first and second fusion weights determine the sub-units and the feature fusion sub-units.
Wherein the first fusion weight and the second fusion weight determine the subunit: for determining, based on the fusion guidance vector, a first fusion weight corresponding to facial textural features and a second fusion weight corresponding to the geometric features of the five sense organs, respectively; and the feature fusion subunit is used for performing feature fusion on the original face image based on the facial texture features, the first fusion weight, the geometric features of the five sense organs and the second fusion weight.
On the basis of any optional technical solution in the embodiment of the present invention, optionally, the first fusion weight and the second fusion weight determining subunit is configured to:
determining a guiding facial texture weight corresponding to the facial texture feature and a guiding facial geometric weight corresponding to the facial geometric feature based on the fused guide vector, a guiding weight matrix corresponding to the fused guide vector, the facial texture feature, a first texture weight matrix corresponding to the facial texture feature, the facial geometric feature and a first geometric weight matrix corresponding to the facial geometric feature;
inputting the guide facial texture weight and the guide facial geometric weight into a softmax layer, and outputting a first fusion weight corresponding to a facial texture feature and a second fusion weight corresponding to the facial geometric feature;
wherein the guiding facial texture weight is a product of the fused guiding vector and a guiding weight matrix corresponding to the fused guiding vector and a product of the facial texture feature and a first texture weight matrix corresponding to the facial texture feature; the geometric weight of the five sense organs is the product of the fused guide vector and a guide weight matrix corresponding to the fused guide vector and the product of the geometric feature of the five sense organs and a first geometric weight matrix corresponding to the geometric feature of the five sense organs.
On the basis of any optional technical solution in the embodiment of the present invention, optionally, the feature fusion subunit is configured to:
performing product calculation on the product of the facial texture features and first basic weight corresponding to the facial texture features and the first fusion weight to obtain facial texture features to be fused;
calculating the product of the geometric feature of the five sense organs and a second basic weight corresponding to the geometric feature of the five sense organs and the second fusion weight to obtain the geometric feature of the five sense organs to be fused;
and performing feature fusion on the original face image based on the facial texture features to be fused and the geometric features of the five sense organs to be fused.
On the basis of any optional technical solution in the embodiment of the present invention, optionally, the original face image obtaining module 310 is configured to:
and if the original face image is the face front image, inputting the face front image into a convolutional neural network trained in advance to obtain the face texture characteristics of the face part in the original face image.
The device can execute the method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example four
Fig. 8 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention, as shown in fig. 8, the electronic device includes a processor 410, a memory 420, an input device 430, and an output device 440; the number of the processors 410 in the device may be one or more, and one processor 410 is taken as an example in fig. 8; the processor 410, the memory 420, the input device 430 and the output device 440 in the apparatus may be connected by a bus or other means, and fig. 8 illustrates the connection by a bus as an example.
The memory 420, which is a computer-readable storage medium, may be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the face recognition method in the embodiment of the present invention. The processor 410 executes various functional applications of the device and data processing by executing software programs, instructions, and modules stored in the memory 420.
The memory 420 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 420 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 420 may further include memory located remotely from processor 410, which may be connected to devices through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may be used to receive entered numeric or character information and to generate signal inputs relating to user settings and function control of the apparatus. The output device 440 may include a display device such as a display screen.
EXAMPLE five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, where the computer-executable instructions are executed by a computer processor to perform a face recognition method, and the method includes: acquiring an original face image, and determining facial texture features and geometric features of five sense organs of a face part in the original face image; performing feature fusion on the facial texture features and the geometric features of the five sense organs to obtain facial fusion features; and carrying out face recognition based on the face fusion characteristics of the original face image.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for embodiments of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (9)

1. A face recognition method, comprising:
acquiring an original face image, and determining facial texture features and geometric features of five sense organs of a face part in the original face image;
performing feature fusion on the facial texture features and the geometric features of the five sense organs to obtain facial fusion features;
carrying out face recognition based on the face fusion characteristics of the original face image;
the determining the geometric features of the five sense organs of the face part in the original face image comprises the following steps:
extracting key points of the original face image to obtain face key points;
calculating an Euler angle of a face pose of the original face image based on the face key points;
calculating geometric features of five sense organs when the face of the original face image turns to the front face based on the Euler angles;
the feature fusion of the facial texture features and the geometric features of the five sense organs comprises:
calculating the geometric included angle of the facial features of the original facial image based on the facial key points;
determining a fusion guide vector for guiding the facial texture features and the geometric features of the five sense organs to perform feature fusion based on the geometric included angle and the Euler angle;
performing feature fusion on the original face image based on the facial texture features, the geometric features of the five sense organs and the fusion guide vector;
the determining a fusion guiding vector for guiding the facial texture feature and the geometric feature of five sense organs to perform feature fusion based on the geometric angle and the euler angle comprises:
and splicing the geometric included angle and the Euler angle to obtain a fusion guide vector for guiding the facial texture feature and the geometric feature of the five sense organs to perform feature fusion.
2. The method of claim 1, wherein the geometric features of the five sense organs include at least one of an eye distance ratio, an eye nose ratio, a nose shape ratio, a lip distance ratio, and a lip nose ratio.
3. The method of claim 1, wherein the feature fusing the original face image based on the facial texture features, the geometric features of the five sense organs and the fusion guidance vector comprises:
respectively determining a first fusion weight corresponding to facial texture features and a second fusion weight corresponding to geometric features of the five sense organs based on the fusion guide vector;
performing feature fusion on the original face image based on the facial texture features, the first fusion weight, the geometric features of five sense organs and the second fusion weight.
4. The method according to claim 3, wherein the determining a first fusion weight corresponding to facial textural features and a second fusion weight corresponding to geometric features of five sense organs, respectively, based on the fusion guidance vector comprises:
determining a guiding facial texture weight corresponding to the facial texture feature and a guiding facial geometric weight corresponding to the facial geometric feature based on the fused guide vector, a guiding weight matrix corresponding to the fused guide vector, the facial texture feature, a first texture weight matrix corresponding to the facial texture feature, the facial geometric feature and a first geometric weight matrix corresponding to the facial geometric feature;
inputting the guide facial texture weight and the guide facial geometric weight into a softmax layer, and outputting a first fusion weight corresponding to a facial texture feature and a second fusion weight corresponding to the facial geometric feature;
wherein the guiding facial texture weight is a product of the fused guiding vector and a guiding weight matrix corresponding to the fused guiding vector and a product of the facial texture feature and a first texture weight matrix corresponding to the facial texture feature; the geometric weight of the five sense organs is the product of the fused guide vector and a guide weight matrix corresponding to the fused guide vector and the product of the geometric feature of the five sense organs and a first geometric weight matrix corresponding to the geometric feature of the five sense organs.
5. The method of claim 3, wherein said feature fusing the original facial image based on the facial texture features, the first fusion weight, the geometric features of five sense organs, and the second fusion weight comprises:
performing product calculation on the product of the facial texture features and first basic weight corresponding to the facial texture features and the first fusion weight to obtain facial texture features to be fused;
calculating the product of the geometric feature of the five sense organs and a second basic weight corresponding to the geometric feature of the five sense organs and the second fusion weight to obtain the geometric feature of the five sense organs to be fused;
and performing feature fusion on the original face image based on the facial texture features to be fused and the geometric features of the five sense organs to be fused.
6. The method of claim 1, wherein determining facial texture features of a face portion of the original facial image comprises:
and if the original face image is the face front image, inputting the face front image into a convolutional neural network trained in advance to obtain the face texture characteristics of the face part in the original face image.
7. A face recognition apparatus, comprising:
the original face image acquisition module is used for acquiring an original face image and determining facial texture features and geometric features of five sense organs of a face part in the original face image;
the feature fusion module is used for performing feature fusion on the facial texture features and the geometric features of the five sense organs to obtain facial fusion features;
the face recognition module is used for carrying out face recognition based on the face fusion characteristics of the original face image;
the original face image acquisition module comprises: a key point extraction unit, an Euler angle calculation unit, and a face-frontalization geometric feature calculation unit;
the key point extraction unit is used for extracting key points of the original face image to obtain face key points; the Euler angle calculation unit is used for calculating the Euler angles of the face pose of the original face image based on the face key points; the face-frontalization geometric feature calculation unit is used for calculating the geometric features of the five sense organs of the original face image when the face is turned to the frontal view, based on the Euler angles;
the feature fusion module includes: a geometric included angle calculation unit, a fusion guide vector determination unit and a feature fusion unit of the five sense organs;
the geometric included angle calculation unit of the facial features is used for calculating the geometric included angle of the facial features of the original facial image based on the facial key points; a fusion guide vector determination unit, configured to determine a fusion guide vector for guiding feature fusion of the facial texture features and the geometric features of the five sense organs based on the geometric angle and the euler angle; the feature fusion unit is used for performing feature fusion on the original face image based on the facial texture features, the geometric features of the five sense organs and the fusion guide vector;
and the fusion guide vector determining unit is used for splicing the geometric included angle and the Euler angle to obtain a fusion guide vector for guiding the facial texture feature and the geometric feature of the five sense organs to perform feature fusion.
8. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the face recognition method as claimed in any one of claims 1-6.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the face recognition method according to any one of claims 1 to 6.
CN202110979814.4A 2021-08-25 2021-08-25 Face recognition method and device, electronic equipment and storage medium Active CN113420747B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110979814.4A CN113420747B (en) 2021-08-25 2021-08-25 Face recognition method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113420747A CN113420747A (en) 2021-09-21
CN113420747B (en) 2021-11-23

Family

ID=77719898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110979814.4A Active CN113420747B (en) 2021-08-25 2021-08-25 Face recognition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113420747B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107958479A (en) * 2017-12-26 2018-04-24 南京开为网络科技有限公司 A kind of mobile terminal 3D faces augmented reality implementation method
CN108416323A (en) * 2018-03-27 2018-08-17 百度在线网络技术(北京)有限公司 The method and apparatus of face for identification
CN109446980A (en) * 2018-10-25 2019-03-08 华中师范大学 Expression recognition method and device
CN109657612A (en) * 2018-12-19 2019-04-19 苏州纳智天地智能科技有限公司 A kind of quality-ordered system and its application method based on facial image feature
CN110361003A (en) * 2018-04-09 2019-10-22 中南大学 Information fusion method, device, computer equipment and computer readable storage medium
CN112613416A (en) * 2020-12-26 2021-04-06 中国农业银行股份有限公司 Facial expression recognition method and related device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10339367B2 (en) * 2016-03-29 2019-07-02 Microsoft Technology Licensing, Llc Recognizing a face and providing feedback on the face-recognition process

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
3D face verification across pose based on euler rotation and tensors; Ammar Chouchane et al.; Multimedia Tools and Applications; 2017-12-19; pp. 1-26 *
PFLD: A Practical Facial Landmark Detector; 糖心他爸; https://zhuanlan.zhihu.com/p/73546427; 2019-07-16; pp. 1-8 *

Similar Documents

Publication Publication Date Title
CN111126304B (en) Augmented reality navigation method based on indoor natural scene image deep learning
US11393256B2 (en) Method and device for liveness detection, and storage medium
CN109145759B (en) Vehicle attribute identification method, device, server and storage medium
CN108846440B (en) Image processing method and device, computer readable medium and electronic equipment
US20220172518A1 (en) Image recognition method and apparatus, computer-readable storage medium, and electronic device
WO2020107930A1 (en) Camera pose determination method and apparatus, and electronic device
CN110070564B (en) Feature point matching method, device, equipment and storage medium
JP2021120864A (en) Method and device for detecting obstacle, electronic apparatus, storage medium and computer program
WO2022041830A1 (en) Pedestrian re-identification method and device
CN110826521A (en) Driver fatigue state recognition method, system, electronic device, and storage medium
WO2023016271A1 (en) Attitude determining method, electronic device, and readable storage medium
CN111612842B (en) Method and device for generating pose estimation model
US11508157B2 (en) Device and method of objective identification and driving assistance device
CN109670444B (en) Attitude detection model generation method, attitude detection device, attitude detection equipment and attitude detection medium
CN110532965B (en) Age identification method, storage medium and electronic device
JP2019117577A (en) Program, learning processing method, learning model, data structure, learning device and object recognition device
CN112446322B (en) Eyeball characteristic detection method, device, equipment and computer readable storage medium
CN112001285B (en) Method, device, terminal and medium for processing beauty images
CN114757301A (en) Vehicle-mounted visual perception method and device, readable storage medium and electronic equipment
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
CN116012913A (en) Model training method, face key point detection method, medium and device
CN116310684A (en) Method for detecting three-dimensional target based on multi-mode feature fusion of Transformer
CN113420747B (en) Face recognition method and device, electronic equipment and storage medium
CN116343143A (en) Target detection method, storage medium, road side equipment and automatic driving system
WO2022107548A1 (en) Three-dimensional skeleton detection method and three-dimensional skeleton detection device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant