CN116343313A - Face recognition method based on eye features

Face recognition method based on eye features

Info

Publication number: CN116343313A (application CN202310618256.8A); granted as CN116343313B
Authority: CN (China)
Prior art keywords: eye, color distribution, pixel, features, value
Priority/filing date: 2023-05-30
Legal status: Granted; Active
Inventors: 陈莉明, 郑婕, 张创迪, 刘志行, 杨超超, 罗小萱
Assignee (original and current): Leshan Normal University
Other languages: Chinese (zh)

Classifications

    • G06V40/172 Human faces — Classification, e.g. identification
    • G06V10/56 Extraction of image or video features relating to colour
    • G06V40/171 Human faces — Local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V40/193 Eye characteristics, e.g. of the iris — Preprocessing; feature extraction
    • G06V40/197 Eye characteristics, e.g. of the iris — Matching; classification
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a face recognition method based on eye features, belonging to the technical field of image processing. Eye contour features and color distribution features are extracted from the captured face image and are respectively compared with stored facial features to obtain a matching degree; when the matching degree is higher than a matching threshold, face recognition succeeds. The method performs comprehensive face recognition by using the differences between individuals in each eye contour and in the color distribution of the imaged eyes.

Description

Face recognition method based on eye features
Technical Field
The invention relates to the technical field of image processing, in particular to a face recognition method based on eye features.
Background
In existing face recognition methods, a face image is collected and processed, facial feature data are extracted, and a neural network classifies the feature data to realize face recognition. In many situations, however, a person exposes only the eyes while the rest of the face is blocked by a mask or veil; a large amount of facial information is then hidden, face recognition cannot be performed, and identity authentication fails.
Disclosure of Invention
Aiming at the above defects in the prior art, the present invention provides a face recognition method based on eye features, which solves the problem that face recognition cannot be performed when a large amount of facial information is hidden.
In order to achieve the above aim, the invention adopts the following technical scheme: a face recognition method based on eye features comprises the following steps:
s1, intercepting an eye image on a face image;
s2, extracting eye contour features and color distribution features from the eye images respectively;
s3, matching the eye contour features and the color distribution features with the stored facial image features respectively to obtain matching degree;
and S4, when the matching degree is higher than the matching threshold, the face recognition is successful.
Further, the step S2 includes the following sub-steps:
s21, filtering the eye image to obtain a filtered image;
s22, partitioning the filtered image to obtain a plurality of eye subareas;
s23, extracting color distribution values from each eye subarea, and constructing a color distribution matrix of each eye subarea;
s24, obtaining color distribution characteristics according to the color distribution matrix of the eye subareas;
s25, carrying out graying treatment on the filtered eye image to obtain a gray image;
s26, extracting contour features from the gray level map to obtain eye contour features.
The beneficial effects of the above further scheme are: the invention filters the eye image to remove the influence of abnormal pixel points, then partitions the image and extracts color distribution values in each eye subregion, constructing a color distribution matrix that expresses the color distribution within one region; the filtered eye image is converted to grayscale and the contour is extracted, which reduces the data volume while retaining the effective data.
Further, the filtering formula in S21 is:

[formula not reproduced: published as an image in the original]

wherein $g_i$ is the pixel value of the $i$-th pixel of the filtered image, $a$ is the filtering factor, $p_i$ is the pixel value of the $i$-th pixel on the eye image, $p_{i,j}$ is the pixel value of the $j$-th surrounding pixel centered on the $i$-th pixel of the eye image, $|\cdot|$ is the absolute value operation, and $\arctan$ is the arctangent function.
The beneficial effects of the above further scheme are: the invention abandons the sliding-window filtering mode and instead takes each pixel point on the eye image as a center, using the characteristics of the arctangent function. When the difference between the pixel value $p_i$ and the surrounding pixel values is large, the surrounding pixel values are taken into account during filtering, so that noise points can be filtered out; when the difference is small, the filtered value $g_i$ is essentially unaffected by the surrounding pixels, preserving the original characteristics of the point. The filtering factor $a$ is an adjustable parameter, and the filtering effect can be adjusted by tuning it.
Further, the color distribution matrix in S23 is:

$$D=\begin{bmatrix} d_{1,1} & d_{1,2} & d_{1,3} & d_{1,4} \\ \vdots & \vdots & \vdots & \vdots \\ d_{n,1} & d_{n,2} & d_{n,3} & d_{n,4} \\ \vdots & \vdots & \vdots & \vdots \\ d_{M,1} & d_{M,2} & d_{M,3} & d_{M,4} \end{bmatrix}$$

wherein $D$ is the color distribution matrix, $d_{n,1}$, $d_{n,2}$, $d_{n,3}$ and $d_{n,4}$ are respectively the first, second, third and fourth class color distribution values of the $n$-th color component in the eye subregion ($n = 1, \ldots, M$), and $M$ is the number of color components;

the calculation formulas of the color distribution values are:

[four formulas not reproduced: published as images in the original]

wherein $c_{n,j}$ is the $n$-th color component of the $j$-th pixel in the eye subregion and $N$ is the number of pixels in the eye subregion.
The beneficial effects of the above further scheme are: the invention represents the color distribution characteristics through four classes of color distribution values of a plurality of color components, thereby characterizing in depth how each person's eyes appear after imaging.
Further, the step S26 includes the following substeps:
s261, calculating gray characteristic values of all pixel points on a gray map, and taking the pixel points as pixel points to be contoured when the gray characteristic values are larger than a gray characteristic threshold value;
s262, eliminating isolated pixel points to be contoured according to the position distribution of the pixel points to be contoured to obtain eye contour feature pixel points;
s263, partitioning all the eye contour feature pixel points according to the eye contour feature pixel point positions to obtain contour sub-areas;
s264, calculating the eye contour feature of each contour subarea.
The beneficial effects of the above further scheme are: the method comprises the steps of firstly calculating the gray characteristic value of each pixel point, when the gray characteristic value is larger than a gray threshold value, determining the pixel point as a suspected outline pixel point, deleting isolated points according to whether the suspected outline pixel points are continuous in position, filtering the suspected outline pixel points, and partitioning to obtain the eye outline characteristics of the subareas.
Further, the calculation formula of the gray characteristic value in S261 is:

[three formulas not reproduced: published as images in the original]

wherein $T_i$ is the gray characteristic value of the $i$-th pixel point on the gray map, $h_i$ is the gray value of the $i$-th pixel on the gray map, $h_{i,j}$ is the gray value of the $j$-th pixel surrounding the $i$-th pixel on the gray map, $d_{\max}$ is the maximum gray distance value, $d_{\min}$ is the minimum gray distance value, $h_{i,1}, \ldots, h_{i,8}$ are the gray values of the 1st to 8th pixels surrounding the $i$-th pixel, $|\cdot|$ is the absolute value operation, $\max$ takes the maximum of a sequence, and $\min$ takes the minimum of a sequence.
The beneficial effects of the above further scheme are: the invention takes each pixel point on the gray level graph as the center, calculates the gray level characteristic value of each pixel point, the gray level characteristic value is used for evaluating the difference between the gray level characteristic value and the peripheral pixel points, when the gray level difference total value between the center pixel point and the peripheral pixel points is larger, the position where the gray level value is suspected to be obviously changed is indicated, therefore, the point can be a pixel point on the outline, the difference between the maximum gray level distance value and the minimum gray level distance value is found by judging the gray level distance value of the pixel point and the peripheral pixel point, if the difference between the maximum gray level distance value and the minimum gray level distance value is larger, the pixel point on the outline is further embodied, and the pixel point with the gray level characteristic value larger than the gray level characteristic threshold value is directly selected according to the gray level characteristic values of all the pixel points, so as to obtain the outline pixel point.
Further, the formula for calculating the eye contour feature in S264 is:

[formula not reproduced: published as an image in the original]

wherein $E$ is the eye contour feature, $L_k$ is the distance from the $k$-th eye contour feature pixel in the contour subregion to the fixed eye contour feature pixel, $K$ is the number of eye contour feature pixels in the contour subregion, $L_{\max}$ is the maximum of the distances $L_k$, and $L_{\min}$ is the minimum of the distances $L_k$.
The beneficial effects of the above further scheme are: the fixed eye contour feature pixel point is a pixel point in the contour sub-region, which is used as a reference pixel point, the distances from other pixel points to the reference pixel point are calculated, and according to each pixel point
Figure SMS_73
And->
Figure SMS_74
The difference between them shows the distance distribution and at the same time according to the maximum distance +.>
Figure SMS_75
And->
Figure SMS_76
Difference, sum->
Figure SMS_77
And->
Figure SMS_78
The difference reflects the situation among the maximum distance, the average distance and the minimum distance, and further highlights the position distribution situation of the pixel points on each contour sub-region.
Further, the step S3 includes the following sub-steps:
s31, expanding each color distribution matrix in the color distribution characteristics to form a color distribution vector;
s32, calculating the similarity between the color distribution vector and the color distribution vector in the stored facial image characteristics to obtain a first similarity;
s33, calculating the similarity of the eye contour features and the eye contour features in the stored facial image features to obtain a second similarity;
s34, calculating the matching degree according to the first similarity and the second similarity.
Further, the formula for obtaining the first similarity in S32 is:

[formula not reproduced: published as an image in the original]

wherein $s^{(1)}_i$ is the $i$-th first similarity, $V_i$ is the $i$-th color distribution vector, $V'_i$ is the $i$-th stored color distribution vector, $\cdot$ denotes dot multiplication, $\times$ denotes cross multiplication, and $\|\cdot\|$ is the two-norm operator;

the formula for obtaining the second similarity in S33 is:

[formula not reproduced: published as an image in the original]

wherein $s^{(2)}_j$ is the $j$-th second similarity, $E_j$ is the $j$-th eye contour feature, and $E'_j$ is the $j$-th stored eye contour feature.
Further, the formula for calculating the matching degree in S34 is:

[formula not reproduced: published as an image in the original]

wherein $P$ is the matching degree, $s^{(1)}_i$ is the $i$-th first similarity, $s^{(2)}_j$ is the $j$-th second similarity, $n_1$ is the number of first similarities, $n_2$ is the number of second similarities, $u$ is the first similarity constant, counting how many of the $n_1$ first similarities $s^{(1)}_i$ are greater than 0.5, and $v$ is the second similarity constant, counting how many of the $n_2$ second similarities $s^{(2)}_j$ are greater than 0.5.
The beneficial effects of the above further scheme are: the invention compares each color distribution vector with the stored color distribution vector, calculates a plurality of first similarities, compares each eye contour feature with the stored eye contour feature, calculates a plurality of second similarities, each similarity represents the similarity of each part,
Figure SMS_109
similarity for counting color distribution vectors, < >>
Figure SMS_110
For counting the similarity of the eye contour features, +.>
Figure SMS_111
and />
Figure SMS_112
The larger the number of similar parts, the higher the similarity, and the higher the matching degree.
In summary, the invention has the following beneficial effects: an eye image is intercepted from the photographed face image, eye contour features and color distribution features are extracted from it, and the two kinds of features are respectively compared with the stored facial features to obtain a matching degree; when the matching degree is higher than the matching threshold, face recognition succeeds.
Drawings
Fig. 1 is a flowchart of a face recognition method based on eye features.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the invention by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of the embodiments; for those skilled in the art, any invention that makes use of the inventive concept falls within the protection scope defined by the appended claims.
As shown in fig. 1, a face recognition method based on eye features includes the following steps:
s1, intercepting an eye image on a face image;
in this embodiment, the screenshot eye image may adopt a neural network, and the eye position is selected through a neural network frame, so that the face image is intercepted according to the eye position selected by the frame.
The neural network has a loss function of:

[formula not reproduced: published as an image in the original]

wherein $Loss$ is the loss function, $\ln$ is the logarithmic function, $\lambda$ is the loss factor, $w$ is the width of the eye image output by the neural network, $h$ is the height of the eye image output by the neural network, $w_t$ is the target width, $h_t$ is the target height, $N_t$ is the number of target feature pixels, and $N_o$ is the number of feature pixels in the eye image output by the neural network.
The invention penalizes the difference between the width-to-height ratio of the output eye image and the target ratio, which avoids eye images of inconsistent proportions caused by face images of different sizes. It further considers the number of feature pixels in the intercepted area: the closer that number is to the number of target feature pixels, the more correct the intercepted area. When the number of feature pixels in the intercepted area equals the number of target feature pixels and the size of the intercepted area accords with the target size, the interception of the eye image is successful.
In this embodiment, the neural network may be a YOLO neural network.
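For illustration, the following is a minimal sketch of intercepting the eye region, under stated assumptions: the patent trains a YOLO-style network with the loss above, whereas this sketch substitutes OpenCV's bundled Haar eye cascade as the detector, and the union-box cropping in `crop_eye_region` (including the margin `m`) is a hypothetical stand-in rather than the patent's trained network.

```python
import cv2

def crop_eye_region(face_bgr):
    """Locate the eyes on a face image and crop one band covering both.

    Stand-in for the patent's YOLO-based eye detector: OpenCV's bundled
    Haar eye cascade supplies candidate eye boxes, and the crop is the
    union of the detected boxes plus a small margin.
    """
    gray = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(boxes) == 0:
        return None  # no eyes detected
    x0 = min(x for x, y, w, h in boxes)
    y0 = min(y for x, y, w, h in boxes)
    x1 = max(x + w for x, y, w, h in boxes)
    y1 = max(y + h for x, y, w, h in boxes)
    m = 5  # small margin around the union box (assumed value)
    h_img, w_img = gray.shape
    return face_bgr[max(0, y0 - m):min(h_img, y1 + m),
                    max(0, x0 - m):min(w_img, x1 + m)]
```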
S2, extracting eye contour features and color distribution features from the eye images respectively;
the step S2 comprises the following sub-steps:
s21, filtering the eye image to obtain a filtered image;
the filtering formula in S21 is:
Figure SMS_123
wherein ,
Figure SMS_125
for filtering the picture->
Figure SMS_127
Pixel value of each pixel, +.>
Figure SMS_130
For the filtering factor +.>
Figure SMS_126
For the first part on the eye image>
Figure SMS_128
Pixel value of each pixel, +.>
Figure SMS_131
For +.>
Figure SMS_132
A peripheral part centered on the pixel of (2)>
Figure SMS_124
Pixel values of the pixel points, ||is absolute value operation, and +.>
Figure SMS_129
As an arctangent function.
The invention abandons the sliding filtering mode and instead takes each pixel point on the eye image as a center, using the characteristics of the arctangent function. When the difference between the pixel value $p_i$ and the surrounding pixel values is large, the surrounding pixel values are taken into account during filtering, so that noise points can be filtered out; when the difference is small, the filtered value $g_i$ is essentially unaffected by the surrounding pixels, preserving the original characteristics of the point. The filtering factor $a$ is an adjustable parameter, and the filtering effect can be adjusted by tuning it.
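Because the published filtering formula is available only as an image, the following is a minimal sketch of one plausible reading of the behaviour described above: the arctangent of the local difference controls how far each pixel moves toward the mean of its 8 surrounding pixels. The function `arctan_filter`, the blending scheme, and the default factor are assumptions, not the patent's exact formula; for a color eye image it would be applied per channel.

```python
import numpy as np

def arctan_filter(img, a=0.05):
    """One plausible reading of the arctangent filter in S21 (2-D array).

    The absolute difference between each pixel and its 8-neighbour mean is
    passed through arctan and normalized to [0, 1); that weight decides how
    strongly the neighbour mean replaces the centre value. Small differences
    leave the pixel almost untouched; large ones (noise) pull it toward the
    neighbourhood mean. `a` is the adjustable filtering factor.
    """
    img = img.astype(np.float64)
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    # Stack the 8 neighbours of every pixel.
    shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
              if (dy, dx) != (0, 0)]
    nbrs = np.stack([padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                     for dy, dx in shifts])
    mean_nbr = nbrs.mean(axis=0)
    # arctan(a*|diff|) lies in [0, pi/2) -> weight in [0, 1).
    weight = np.arctan(a * np.abs(mean_nbr - img)) / (np.pi / 2)
    return (1.0 - weight) * img + weight * mean_nbr
```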
S22, partitioning the filtered image to obtain a plurality of eye subareas;
s23, extracting color distribution values from each eye subarea, and constructing a color distribution matrix of each eye subarea;
the color distribution matrix in S23 is:
Figure SMS_137
wherein ,
Figure SMS_146
is a color distribution matrix>
Figure SMS_140
For a first class of color distribution values of the 1 st color component in the eye subregion,
Figure SMS_142
for the second type of color distribution value of the 1 st color component in the eye subregion +.>
Figure SMS_141
A third class of color distribution values for the 1 st color component in the eye subregion>
Figure SMS_143
A fourth class of color distribution values for the 1 st color component in the eye subregion>
Figure SMS_147
Is the eye subregion->
Figure SMS_150
First class color component of individual color componentsCloth value (I)>
Figure SMS_148
Is the eye subregion->
Figure SMS_152
Second class of color distribution values of the individual color components, respectively>
Figure SMS_139
Is the eye subregion->
Figure SMS_144
Third class of color distribution values of the individual color components, respectively>
Figure SMS_154
Is the eye subregion->
Figure SMS_158
Fourth class of color distribution values of the individual color components, respectively>
Figure SMS_156
Is the eye subregion->
Figure SMS_159
Color distribution values of the first type of the individual color components, respectively>
Figure SMS_149
Is the eye subregion->
Figure SMS_155
Second class of color distribution values of the individual color components, respectively>
Figure SMS_153
Is the eye subregion->
Figure SMS_157
Third class of color distribution values of the individual color components, respectively>
Figure SMS_138
Is the eye subregion->
Figure SMS_145
Fourth class of color distribution values of the individual color components, respectively>
Figure SMS_151
Is the number of color components;
the calculation formula of the color distribution value is:
Figure SMS_160
,/>
Figure SMS_161
Figure SMS_162
,/>
Figure SMS_163
wherein ,
Figure SMS_164
is the eye subregion->
Figure SMS_165
The +.>
Figure SMS_166
Color component->
Figure SMS_167
Is the number of pixels in the ocular subregion.
The invention represents the color distribution characteristics through four classes of color distribution values of a plurality of color components, thereby characterizing in depth how each person's eyes appear after imaging.
S24, obtaining color distribution characteristics according to the color distribution matrix of the eye subareas;
in this embodiment, the color distribution feature is composed of a plurality of color distribution matrices.
S25, carrying out graying treatment on the filtered eye image to obtain a gray image;
s26, extracting contour features from the gray level map to obtain eye contour features.
The invention filters the eye image to remove the influence of abnormal pixel points, then partitions the image and extracts color distribution values in each eye subregion, constructing a color distribution matrix that expresses the color distribution within one region; the filtered eye image is converted to grayscale and the contour is extracted, which reduces the data volume while retaining the effective data.
The step S26 comprises the following substeps:
s261, calculating gray characteristic values of all pixel points on a gray map, and taking the pixel points as pixel points to be contoured when the gray characteristic values are larger than a gray characteristic threshold value;
the calculation formula of the gray characteristic value in S261 is as follows:
Figure SMS_168
Figure SMS_169
Figure SMS_170
wherein ,
Figure SMS_171
is the (th) on the gray level diagram>
Figure SMS_177
Gray characteristic value of each pixel point, +.>
Figure SMS_181
Is the (th) on the gray level diagram>
Figure SMS_173
Gray value of each pixel, +.>
Figure SMS_176
Is gray scaleOn the figure +.>
Figure SMS_180
The +.>
Figure SMS_184
Gray value of each pixel, +.>
Figure SMS_174
Is the maximum gray distance value,/>
Figure SMS_178
Is the minimum gray distance value,/>
Figure SMS_182
Is the (th) on the gray level diagram>
Figure SMS_185
The gray value of the 1 st pixel around the pixel,
Figure SMS_172
is the (th) on the gray level diagram>
Figure SMS_175
Gray value of 8 th pixel around each pixel, absolute value operation,/-for the 8 th pixel>
Figure SMS_179
For maximum value of the sequence, +.>
Figure SMS_183
For the minimum of the sequence.
Taking each pixel point on the gray map as a center, the invention calculates a gray characteristic value that evaluates the difference between that pixel and its surrounding pixels. When the total gray difference between the center pixel and the surrounding pixels is large, the gray value changes markedly at that position, so the point may lie on a contour. The gray distance values between the pixel and its surrounding pixels are also examined: a large difference between the maximum and the minimum gray distance value further indicates a contour pixel. Pixels whose gray characteristic value exceeds the gray characteristic threshold are then selected directly as contour candidate pixels.
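The following is a minimal sketch of S261 under the reading described above: the gray characteristic value combines the total absolute gray difference to the 8 surrounding pixels with the spread between the largest and smallest of those differences. The exact published formula is an image, so this particular combination (sum plus spread) is an assumption.

```python
import numpy as np

def gray_feature_values(gray):
    """Gray characteristic value of every pixel of a gray map (S261).

    For each pixel, the absolute gray differences to its 8 surrounding
    pixels are computed; the feature is their sum plus the spread between
    the maximum and minimum difference (an assumed combination, since the
    published formula is an image).
    """
    g = gray.astype(np.float64)
    padded = np.pad(g, 1, mode="edge")
    h, w = g.shape
    shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
              if (dy, dx) != (0, 0)]
    dists = np.stack([np.abs(padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] - g)
                      for dy, dx in shifts])
    d_max, d_min = dists.max(axis=0), dists.min(axis=0)
    return dists.sum(axis=0) + (d_max - d_min)

# Pixels to be contoured: gray characteristic value above a threshold.
# candidates = gray_feature_values(gray_img) > threshold
```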
S262, eliminating isolated pixel points to be contoured according to the position distribution of the pixel points to be contoured to obtain eye contour feature pixel points;
s263, partitioning all the eye contour feature pixel points according to the eye contour feature pixel point positions to obtain contour sub-areas;
s264, calculating the eye contour feature of each contour subarea.
The method first calculates the gray characteristic value of each pixel point; when the gray characteristic value is larger than the gray threshold, the pixel is taken as a suspected contour pixel. Isolated points are then deleted according to whether the suspected contour pixels are positionally continuous, and the remaining pixels are partitioned to obtain the eye contour features of the subregions.
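A short sketch of S262 follows: a candidate contour pixel with no candidate among its 8 neighbours is treated as isolated and removed. The one-neighbour criterion is an assumed reading of "continuous in position".

```python
import numpy as np

def remove_isolated(mask):
    """Drop isolated candidate contour pixels (S262).

    `mask` is a boolean array of candidate pixels; a candidate with no
    candidate among its 8 neighbours is treated as noise and removed.
    """
    m = mask.astype(np.uint8)
    padded = np.pad(m, 1)
    h, w = m.shape
    neighbour_count = sum(
        padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0))
    return mask & (neighbour_count > 0)
```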
The formula for calculating the eye contour features in S264 is:

[formula not reproduced: published as an image in the original]

wherein $E$ is the eye contour feature, $L_k$ is the distance from the $k$-th eye contour feature pixel in the contour subregion to the fixed eye contour feature pixel, $K$ is the number of eye contour feature pixels in the contour subregion, $L_{\max}$ is the maximum of the distances $L_k$, and $L_{\min}$ is the minimum of the distances $L_k$.
The fixed eye contour feature pixel is a pixel in the contour subregion that serves as a reference pixel; the distances from the other pixels to this reference pixel are calculated. The difference between each distance $L_k$ and the mean distance reveals the distance distribution, while the differences of the maximum distance $L_{\max}$ and the minimum distance $L_{\min}$ from the mean reflect the relationship among the maximum, average and minimum distances, further highlighting the position distribution of the pixels in each contour subregion.
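For S264, whose published formula is likewise an image, the sketch below computes the distances from the contour-feature pixels of one subregion to a fixed reference pixel and combines their mean, maximum and minimum, the quantities the description names; the exact combination returned by `contour_subregion_feature` is an assumption.

```python
import numpy as np

def contour_subregion_feature(points):
    """Eye contour feature of one contour subregion (S264).

    `points` is an (n, 2) array of contour-feature pixel coordinates with
    n >= 2; the first point serves as the fixed reference pixel. The
    feature combines the mean distance to the reference with the
    deviations of the extreme distances from that mean (the published
    formula is an image, so this combination is assumed).
    """
    pts = np.asarray(points, dtype=np.float64)
    d = np.linalg.norm(pts[1:] - pts[0], axis=1)  # distances to reference
    mean = d.mean()
    return np.array([mean, d.max() - mean, mean - d.min()])
```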
S3, matching the eye contour features and the color distribution features with the stored facial image features respectively to obtain matching degree;
the stored facial image features include: a stored eye profile feature and a stored color distribution feature. The process of obtaining the stored eye profile features and the stored color distribution features is the same as the process of obtaining the eye profile features and the color distribution features described in the present invention.
The step S3 comprises the following substeps:
s31, expanding each color distribution matrix in the color distribution characteristics to form a color distribution vector;
s32, calculating the similarity between the color distribution vector and the color distribution vector in the stored facial image characteristics to obtain a first similarity;
the formula for obtaining the first similarity in S32 is as follows:
Figure SMS_201
wherein ,
Figure SMS_204
is->
Figure SMS_205
First similarity, < >>
Figure SMS_207
Is->
Figure SMS_203
Color distribution vector->
Figure SMS_206
For store->
Figure SMS_208
Color distribution vector->
Figure SMS_209
For dot multiplication, ->
Figure SMS_202
Is cross-multiplied, and is a two-norm operator;
s33, calculating the similarity of the eye contour features and the eye contour features in the stored facial image features to obtain a second similarity;
the formula for obtaining the second similarity in S33 is:
Figure SMS_210
wherein ,
Figure SMS_211
is->
Figure SMS_212
Second similarity, ++>
Figure SMS_213
Is->
Figure SMS_214
Eye contour features->
Figure SMS_215
For store->
Figure SMS_216
Eye profile features.
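The similarity formulas in S32 and S33 are published as images; given that they are built from a dot product and two-norms, the sketch below uses cosine similarity as a stand-in, with `np.ravel` performing the matrix-to-vector expansion of S31.

```python
import numpy as np

def similarity(u, v):
    """Similarity of two feature arrays (stand-in for S32/S33).

    Cosine similarity: the dot product of the flattened features divided
    by the product of their two-norms. The exact published formulas are
    images, so this is an assumed reading built from the same ingredients.
    """
    u = np.ravel(u).astype(np.float64)  # S31: expand matrix to vector
    v = np.ravel(v).astype(np.float64)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```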
S34, calculating the matching degree according to the first similarity and the second similarity.
The formula for calculating the matching degree in S34 is:

[formula not reproduced: published as an image in the original]

wherein $P$ is the matching degree, $s^{(1)}_i$ is the $i$-th first similarity, $s^{(2)}_j$ is the $j$-th second similarity, $n_1$ is the number of first similarities, $n_2$ is the number of second similarities, $u$ is the first similarity constant, counting how many of the $n_1$ first similarities $s^{(1)}_i$ are greater than 0.5, and $v$ is the second similarity constant, counting how many of the $n_2$ second similarities $s^{(2)}_j$ are greater than 0.5.
The invention compares each color distribution vector with the stored color distribution vectors to calculate a plurality of first similarities, and compares each eye contour feature with the stored eye contour features to calculate a plurality of second similarities; each similarity characterizes the similarity of one part. The constant $u$ counts the similar color distribution vectors and $v$ counts the similar eye contour features; the larger $u$ and $v$, the more parts are similar and the higher the matching degree.
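Finally, a sketch of S34 under an assumed aggregation: the published formula counts how many first and second similarities exceed 0.5 ($u$ and $v$) out of $n_1$ and $n_2$ comparisons; the fraction-based average below shares that behaviour (it grows with $u$ and $v$) without claiming to be the exact formula.

```python
import numpy as np

def matching_degree(first_sims, second_sims):
    """Matching degree from the first and second similarities (S34).

    Counts how many first similarities (u) and second similarities (v)
    exceed 0.5 out of n1 and n2 comparisons, then averages the two
    fractions. The aggregation is an assumed reading of the published
    image formula.
    """
    s1 = np.asarray(first_sims, dtype=np.float64)
    s2 = np.asarray(second_sims, dtype=np.float64)
    u = np.count_nonzero(s1 > 0.5)  # first similarity constant
    v = np.count_nonzero(s2 > 0.5)  # second similarity constant
    return 0.5 * (u / s1.size + v / s2.size)

# Usage: recognition succeeds when matching_degree(...) > match_threshold.
```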
And S4, when the matching degree is higher than the matching threshold, the face recognition is successful.
In summary, the beneficial effects of the embodiment of the invention are as follows: an eye image is intercepted from the photographed face image, eye contour features and color distribution features are extracted from it, and the two kinds of features are respectively compared with the stored facial features to obtain a matching degree; when the matching degree is higher than the matching threshold, face recognition succeeds.
The above is only a preferred embodiment of the present invention and is not intended to limit it; those skilled in the art can make various modifications and variations. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. The face recognition method based on the eye features is characterized by comprising the following steps of:
s1, intercepting an eye image on a face image;
s2, extracting eye contour features and color distribution features from the eye images respectively;
s3, matching the eye contour features and the color distribution features with the stored facial image features respectively to obtain matching degree;
and S4, when the matching degree is higher than the matching threshold, the face recognition is successful.
2. The method for face recognition based on eye features according to claim 1, wherein the step S2 comprises the following sub-steps:
s21, filtering the eye image to obtain a filtered image;
s22, partitioning the filtered image to obtain a plurality of eye subareas;
s23, extracting color distribution values from each eye subarea, and constructing a color distribution matrix of each eye subarea;
s24, obtaining color distribution characteristics according to the color distribution matrix of the eye subareas;
s25, carrying out graying treatment on the filtered eye image to obtain a gray image;
s26, extracting contour features from the gray level map to obtain eye contour features.
3. The face recognition method based on the eye features according to claim 2, wherein the filtering formula in S21 is:

[formula not reproduced: published as an image in the original]

wherein $g_i$ is the pixel value of the $i$-th pixel of the filtered image, $a$ is the filtering factor, $p_i$ is the pixel value of the $i$-th pixel on the eye image, $p_{i,j}$ is the pixel value of the $j$-th surrounding pixel centered on the $i$-th pixel of the eye image, $|\cdot|$ is the absolute value operation, and $\arctan$ is the arctangent function.
4. The method for face recognition based on eye features according to claim 2, wherein the color distribution matrix in S23 is:

$$D=\begin{bmatrix} d_{1,1} & d_{1,2} & d_{1,3} & d_{1,4} \\ \vdots & \vdots & \vdots & \vdots \\ d_{n,1} & d_{n,2} & d_{n,3} & d_{n,4} \\ \vdots & \vdots & \vdots & \vdots \\ d_{M,1} & d_{M,2} & d_{M,3} & d_{M,4} \end{bmatrix}$$

wherein $D$ is the color distribution matrix, $d_{n,1}$, $d_{n,2}$, $d_{n,3}$ and $d_{n,4}$ are respectively the first, second, third and fourth class color distribution values of the $n$-th color component in the eye subregion ($n = 1, \ldots, M$), and $M$ is the number of color components;

the calculation formula of the color distribution value is:

[four formulas not reproduced: published as images in the original]

wherein $c_{n,j}$ is the $n$-th color component of the $j$-th pixel in the eye subregion, $N$ is the number of pixels in the eye subregion, and $d_{n,1}$ is the first class color distribution value of the $n$-th color component in the eye subregion.
5. The method for face recognition based on eye features according to claim 2, wherein S26 comprises the following sub-steps:
s261, calculating gray characteristic values of all pixel points on a gray map, and taking the pixel points as pixel points to be contoured when the gray characteristic values are larger than a gray characteristic threshold value;
s262, eliminating isolated pixel points to be contoured according to the position distribution of the pixel points to be contoured to obtain eye contour feature pixel points;
s263, partitioning all the eye contour feature pixel points according to the eye contour feature pixel point positions to obtain contour sub-areas;
s264, calculating the eye contour feature of each contour subarea.
6. The eye feature-based face recognition method of claim 5, wherein the calculation formula of the gray feature value in S261 is:

[three formulas not reproduced: published as images in the original]

wherein $T_i$ is the gray characteristic value of the $i$-th pixel point on the gray map, $h_i$ is the gray value of the $i$-th pixel on the gray map, $h_{i,j}$ is the gray value of the $j$-th pixel surrounding the $i$-th pixel on the gray map, $d_{\max}$ is the maximum gray distance value, $d_{\min}$ is the minimum gray distance value, $h_{i,1}, \ldots, h_{i,8}$ are the gray values of the 1st to 8th pixels surrounding the $i$-th pixel, $|\cdot|$ is the absolute value operation, $\max$ takes the maximum of a sequence, and $\min$ takes the minimum of a sequence.
7. The method for face recognition based on eye features according to claim 5, wherein the formula for calculating the eye contour features in S264 is:

[formula not reproduced: published as an image in the original]

wherein $E$ is the eye contour feature, $L_k$ is the distance from the $k$-th eye contour feature pixel in the contour subregion to the fixed eye contour feature pixel, $K$ is the number of eye contour feature pixels in the contour subregion, $L_{\max}$ is the maximum of the distances $L_k$, and $L_{\min}$ is the minimum of the distances $L_k$.
8. The method for face recognition based on eye features according to claim 4, wherein the step S3 comprises the following sub-steps:
s31, expanding each color distribution matrix in the color distribution characteristics to form a color distribution vector;
s32, calculating the similarity between the color distribution vector and the color distribution vector in the stored facial image characteristics to obtain a first similarity;
s33, calculating the similarity of the eye contour features and the eye contour features in the stored facial image features to obtain a second similarity;
s34, calculating the matching degree according to the first similarity and the second similarity.
9. The method for face recognition based on eye features of claim 8, wherein the formula for obtaining the first similarity in S32 is:

[formula not reproduced: published as an image in the original]

wherein $s^{(1)}_i$ is the $i$-th first similarity, $V_i$ is the $i$-th color distribution vector, $V'_i$ is the $i$-th stored color distribution vector, $\cdot$ denotes dot multiplication, $\times$ denotes cross multiplication, and $\|\cdot\|$ is the two-norm operator;

the formula for obtaining the second similarity in S33 is:

[formula not reproduced: published as an image in the original]

wherein $s^{(2)}_j$ is the $j$-th second similarity, $E_j$ is the $j$-th eye contour feature, and $E'_j$ is the $j$-th stored eye contour feature.
10. The eye feature-based face recognition method of claim 9, wherein the formula for calculating the matching degree in S34 is:

[formula not reproduced: published as an image in the original]

wherein $P$ is the matching degree, $s^{(1)}_i$ is the $i$-th first similarity, $s^{(2)}_j$ is the $j$-th second similarity, $n_1$ is the number of first similarities, $n_2$ is the number of second similarities, $u$ is the first similarity constant, counting how many of the $n_1$ first similarities $s^{(1)}_i$ are greater than 0.5, and $v$ is the second similarity constant, counting how many of the $n_2$ second similarities $s^{(2)}_j$ are greater than 0.5.
CN202310618256.8A 2023-05-30 Face recognition method based on eye features — Active, granted as CN116343313B

Priority Applications (1)

Application number CN202310618256.8A; priority date and filing date 2023-05-30; title: Face recognition method based on eye features

Publications (2)

CN116343313A, published 2023-06-27
CN116343313B, published (granted) 2023-08-11

Family

ID=86891577

Country Status (1)

Country: CN — CN116343313B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140037152A1 (en) * 2011-04-20 2014-02-06 Institute Of Automation, Chinese Academy Of Sciences Identity recognition based on multiple feature fusion for an eye image
CN105550671A (en) * 2016-01-28 2016-05-04 北京麦芯科技有限公司 Face recognition method and device
CN106503644A (en) * 2016-10-19 2017-03-15 西安理工大学 Glasses attribute detection method based on edge projection and color characteristic
US20170286754A1 (en) * 2016-03-29 2017-10-05 Microsoft Technology Licensing, Llc Recognizing A Face And Providing Feedback On The Face-Recognition Process
US20190050631A1 (en) * 2016-02-26 2019-02-14 Nec Corporation Face recognition system, face recognition method, and storage medium
WO2020258119A1 (en) * 2019-06-27 2020-12-30 深圳市汇顶科技股份有限公司 Face recognition method and apparatus, and electronic device
CN114220143A (en) * 2021-11-26 2022-03-22 华南理工大学 Face recognition method for wearing mask
US20220343680A1 (en) * 2021-05-25 2022-10-27 Beijing Baidu Netcom Scinece Technology Co., Ltd. Method for face liveness detection, electronic device and storage medium
CN115953823A (en) * 2023-03-13 2023-04-11 成都运荔枝科技有限公司 Face recognition method based on big data


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
廖艺捷, 刘昊阳, 崔文豪: "Improved face recognition pattern based on integral projection", Information & Computer (Theoretical Edition), no. 15, pp. 127-130 *
赵广谱 et al.: "Eye state recognition method combining dynamic and static analysis based on skin-color threshold segmentation", Proceedings of the 22nd China Annual Conference on System Simulation Technology and its Application, pp. 416-421 *
陈莉明: "Improved face recognition pattern based on integral projection", Journal of Huazhong University of Science and Technology (Natural Science Edition), 2018, pp. 127-130 *

Also Published As

Publication number Publication date
CN116343313B (en) 2023-08-11

Similar Documents

Publication Title
US7092554B2 (en) Method for detecting eye and mouth positions in a digital image
KR20180109665A (en) A method and apparatus of image processing for object detection
CN109543548A (en) A kind of face identification method, device and storage medium
CN109685045B (en) Moving target video tracking method and system
CN109492642B (en) License plate recognition method, license plate recognition device, computer equipment and storage medium
CN111914748B (en) Face recognition method, device, electronic equipment and computer readable storage medium
CN106570447B (en) Based on the matched human face photo sunglasses automatic removal method of grey level histogram
CN109725721B (en) Human eye positioning method and system for naked eye 3D display system
US10586098B2 (en) Biometric method
CN108710837A (en) Cigarette smoking recognition methods, device, computer equipment and storage medium
CN111145086A (en) Image processing method and device and electronic equipment
Ling et al. Image quality assessment for free viewpoint video based on mid-level contours feature
US12014498B2 (en) Image enhancement processing method, device, equipment, and medium based on artificial intelligence
CN111709305B (en) Face age identification method based on local image block
CN111161276A (en) Iris normalized image forming method
CN115862121B (en) Face quick matching method based on multimedia resource library
CN115082326A (en) Processing method for deblurring video, edge computing equipment and central processor
CN116343313B (en) Face recognition method based on eye features
CN109145875B (en) Method and device for removing black frame glasses in face image
CN116486452A (en) Face recognition method and system
He The influence of image enhancement algorithm on face recognition system
CN112418085B (en) Facial expression recognition method under partial shielding working condition
CN115578781A (en) Method for detecting and identifying iris by removing shielding and readable storage medium
CN111626150B (en) Commodity identification method
CN114155590A (en) Face recognition method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant