CN114005151A - Method for collecting and labeling face angle samples - Google Patents


Info

Publication number
CN114005151A
CN114005151A
Authority
CN
China
Prior art keywords
angle
point
face
distance
eye
Prior art date
Legal status
Granted
Application number
CN202010736655.0A
Other languages
Chinese (zh)
Other versions
CN114005151B (en
Inventor
田凤彬
于晓静
Current Assignee
Beijing Ingenic Semiconductor Co Ltd
Original Assignee
Beijing Ingenic Semiconductor Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Ingenic Semiconductor Co Ltd filed Critical Beijing Ingenic Semiconductor Co Ltd
Priority to CN202010736655.0A priority Critical patent/CN114005151B/en
Publication of CN114005151A publication Critical patent/CN114005151A/en
Application granted granted Critical
Publication of CN114005151B publication Critical patent/CN114005151B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for collecting and labeling face angle samples, which comprises the following steps. S1, collect face samples. S2, generate the face sample annotation points: seven points are labeled manually, namely the left ear hole, set as point 1; the right ear hole, set as point 2; the center of the bottom of the chin, set as point 3; the left eye corner, set as point 4; the midpoint of the line segment joining the two eye corners, set as point 5; the right eye corner, set as point 6; and the lower corner of the lower lip, set as point 7. S3, calculate the angles of the face angle sample from the labeled points to generate angle annotation information: determine the calculation parameters; traverse all face pictures of one person to obtain the maximum distance between the two eye corners and the maximum distance from the center of the eye-corner line to the chin, which give the two true lengths of that person's three-dimensional face; then calculate the angles by the sine theorem on the projected triangles, obtaining the three angle values of one face. Because the labeling is performed algorithmically, the method reduces the errors of manually labeled angles.

Description

Method for collecting and labeling face angle samples
Technical Field
The invention relates to the technical field of face recognition, in particular to a method for collecting and labeling face angle samples.
Background
In today's society, neural network technology in the field of artificial intelligence is developing rapidly. In the field of face recognition, the prior art labels the angle of a sample by manual observation. Manual angle labeling, however, introduces large errors and is strongly affected by human factors.
Furthermore, the manual labeling of the prior art also includes the following commonly used technical terms:
1. face angle: refers to the angle formed by the human face in three spatial directions.
2. Marking points: refers to the coordinate information of the point and other attributes of the point, such as whether the point is present, the presence flag is 1, the absence flag is 0, and the point is the left eye corner or the chin bottom point.
3. Volunteers: a person voluntarily participating in the collection activity who is willing to contribute to the project.
Disclosure of Invention
In order to solve the above problems, the present invention aims to perform the labeling algorithmically, thereby reducing the errors of manual angle labeling.
Specifically, the invention provides a method for collecting and labeling a face angle sample, which comprises the following steps:
s1, collecting a face sample;
s2, generating the face sample annotation points: seven points are labeled manually, namely the left ear hole, set as point 1; the right ear hole, set as point 2; the center of the bottom of the chin, set as point 3; the left eye corner, set as point 4; the midpoint of the line segment joining the two eye corners, set as point 5; the right eye corner, set as point 6; and the lower corner of the lower lip, set as point 7;
s3, calculating the angles of the face angle sample from the annotation points to generate angle annotation information: determine the calculation parameters; traverse all pictures of the same face, computing the distance between the two eye corners (point 4 to point 6) to obtain its maximum, and the distance from the center of the eye-corner line to the chin (point 5 to point 3) to obtain its maximum; these two maxima over all poses in the image plane are defined as the two true lengths of that person's three-dimensional face; then calculate the angles by the sine theorem on the projected triangles, obtaining the three angle values of one face.
The step S1 further includes:
s1.1, the person to be collected wears no glasses; the camera is 3 meters from the person and level with the nose of the face; with the head as the pivot, the person rotates the head from left to right, tilts it from bottom to top, rotates from lower left to upper right, from lower right to upper left, from left to right while looking up, from left to right while looking down, and tilts it to one side while rising from bottom to top; n images are collected at each pose, and 1000 people are collected;
s1.2, after images are collected without glasses, the person randomly wears one of myopia glasses, sunglasses, a hat, a scarf, or a mask, and images are collected again in the manner of s1.1; the illumination level is random and the background is random.
The step S2 further includes:
if the face is a side face, the occluded ear hole is given the data values of the visible ear hole and the orientation of the occluded ear is marked; if the corner of one eye is occluded, its data are set to the values of the visible eye and the orientation of the occluded eye is marked; if sunglasses are worn or the hair is left long, the points are labeled by estimation.
The determining the calculation parameters in the step S3 further includes:
A) the line connecting the two eye corners keeps its horizontal direction in the image plane; when it rotates about the vertical plane its length is unchanged, and a length change appears only along the vertical direction of the plane; this change is used to calculate the angle in that direction, which is named yaw; the line segment between point 4 and point 6 is denoted disteye; when the face is turned so far sideways that only one eye is visible, the change of the distance from that eye to the ear hole on the corresponding side reflects the change in the yaw direction, while the values in the other two directions are unchanged; the distance from point 6 to point 2 or from point 4 to point 1 is denoted eareye, specifically eareye62 and eareye41;
B) the line segment from the midpoint of the eye line to the chin changes length along the horizontal direction of the image plane; this property is used to calculate the angle in that direction, named pitch; the line segment from point 5 to point 3 is denoted eyechin; when eyechin is at its maximum length, the angle is 0;
C) for the line segment from the lower corner of the lower lip to the center of the chin bottom: the longer the segment, the larger the elevation (look-up) angle; the shorter the segment, the larger the depression (look-down) angle; this segment is denoted lipchin, and the sign of the pitch direction is determined from the change of the lipchin value;
D) the in-plane angle is calculated directly from the angle between the eye-corner line and the horizontal; this direction is named roll.
The method for implementing the step S3 further includes:
s3.1, placing the pictures collected for each person into one folder; faces of 1000 people are collected, generating 1000 folders;
s3.2, setting the longest disteye in each folder as max_disteye, the longest eareye value as max_eareye, the longest eyechin as max_eyechin, and the lipchin value of the picture with max_eyechin as lipchin_0;
s3.3, calculating the minimum non-zero disteye under each folder as min_disteye, with corresponding eareye value eareye_60, and calculating the angles for the two cases,
A = arccos(min_disteye/max_disteye),
B = arcsin(eareye_60/max_eareye),
then correcting the angle, the correction factor being correct = (B - A)/(90 - B);
and S3.4, calculating the space angle of each face.
Said step S3.4 further comprises:
the angle in each direction of the face ranges from -90 to 90; for each picture in each folder, the values of disteye, eareye, eyechin and lipchin are computed as dist_disteye, dist_eareye, dist_eyechin and dist_lipchin; the maxima are those of the folder, namely max_disteye, max_eareye, max_eyechin and lipchin_0; the maxima differ from person to person.
In S3, calculating the angles by the projected-triangle sine theorem to obtain the three angle values of a face further includes:
calculation of the yaw direction:
if eareye62 > eareye41, the yaw direction is positive, otherwise negative; the angle value is: if dist_disteye > 0, angle_yaw = arccos(dist_disteye/max_disteye); otherwise,
angle_yaw = arcsin(dist_eareye/max_eareye) - correct × (arcsin(dist_eareye/max_eareye) - B); the sign of the yaw direction is then applied to angle_yaw;
calculation of the pitch direction:
if dist_lipchin < lipchin_0, the angle is negative, otherwise positive; the angle is computed as angle_pitch = arccos(dist_eyechin/max_eyechin); the sign of the pitch direction is then applied to angle_pitch;
calculation of the roll direction:
the coordinates of annotation point 4 and annotation point 6 are used directly, set as (x1, y1) and (x2, y2) respectively; the angle is
angle_roll = arctan((y2 - y1)/(x2 - x1))
and the value is signed, i.e. positive or negative;
one face angle is then the triple: angle_yaw, angle_pitch, angle_roll.
Calculation of the roll direction: if there is an eye that is occluded, a calculation is made using the annotation point 5 and the eye corner coordinates of the other eye that is not occluded.
Thus, the present application has the advantages that the method is simple, effective and easy to operate; it effectively reduces face-angle labeling errors and eliminates the influence of human factors.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention.
FIG. 1 is a schematic flow diagram of the process of the present invention.
Fig. 2(a) is a schematic diagram of the angular directions of the human face in step S1 of the method of the present invention.
Fig. 2(b) is a schematic diagram of the human face looking down at various angular directions in step S1 of the method of the present invention.
Fig. 2(c) is a schematic diagram of the top view of the face in each angular direction in step S1 of the method of the present invention.
FIG. 3 is a schematic diagram of seven-point labeling of face sample labeling in the method of the present invention.
Fig. 4 is a schematic diagram of face direction naming in the method of the present invention.
Fig. 5 is a specific flowchart of the method for implementing step S3 in the method of the present invention.
Detailed Description
In order that the technical contents and advantages of the present invention can be more clearly understood, the present invention will now be described in further detail with reference to the accompanying drawings.
As shown in fig. 1, the present invention relates to a method for collecting and labeling face angle samples, which comprises the following steps:
s1, collecting a face sample;
s2, generating a face sample annotation point: the seven-point marking adopts manual marking, the seven points refer to the position of the left ear hole and are set as point 1; the right ear hole position is set as point 2; the center position of the bottom of the chin is set as a point 3; the left eye angular position of the human eye, set to point 4; the central position of the two eye connecting line segment is set as point 5; the right canthus position, set as point 6; and the lower corner of the lower lip, set to point 7;
s3, calculating the angle of the face angle sample through the mark points to generate angle mark information: determining a calculation parameter; traversing and calculating the distance between the canthus of two eyes of all pictures of the same human face, namely the distance between a point 4 and a point 6 to obtain the maximum distance, traversing the distance between the center of the line connecting the canthus of the two eyes and the chin, namely the distance between a point 5 and a point 3 to obtain the maximum distance, and obtaining two real distances of the three-dimensional human face, wherein the two distances are two maximum distances of various postures of the human face in an image plane and are defined as two real lengths of the three-dimensional human face of the human; and calculating angles through a projection triangle sine theorem to obtain three angle values of one face.
The technical solution of the present invention can be further described as follows:
and S1, collecting a face sample.
S1.1, the camera is 3 meters from the person and level with the nose of the face; the person keeps the waist and neck straight and, with the head as the pivot, rotates from left to right, tilts from bottom to top, and moves through the other postures (for example, rotating from lower left to upper right, from lower right to upper left, from left to right while looking up, from left to right while looking down, and tilting to one side while rising from bottom to top); schematic diagrams of the specific face angle directions are shown in FIGS. 2(a)-(c). The people collected are volunteers; n images are collected per person per pose, and 1000 people are collected;
S1.2, samples are first collected without glasses, and then collected again with one of myopia glasses, sunglasses, a hat, a scarf or a mask worn at random. The illumination level and the background are random.
And S2, generating the face sample annotation points. The seven points are labeled manually. They are the two ear holes, the center of the chin bottom, the left eye corner, the midpoint of the line segment joining the two eyes, the right eye corner, and the lower corner of the lower lip; the labeled positions are shown in fig. 3. The left ear hole is point 1, the right ear hole is point 2, the center of the chin bottom is point 3, the left eye corner is point 4, the midpoint between the two eyes is point 5, the right eye corner is point 6, and the lower corner of the lower lip is point 7. If the face is a side face, the occluded ear hole is given the data values of the visible ear hole, and the orientation of the occluded ear is marked at the same time. If sunglasses are worn or the hair is left long, the points are labeled by estimation.
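As an illustration only, one seven-point annotation as described above can be held in a small per-point record. The `LabelPoint` class, field names and dict layout below are assumptions made for this sketch, not a data format defined by the patent.

```python
# A minimal sketch of one seven-point face annotation, assuming a simple
# (x, y, visible) record per point; names are illustrative, not the
# patent's actual storage format.
from dataclasses import dataclass

@dataclass
class LabelPoint:
    x: float
    y: float
    visible: int  # 1 = point visible, 0 = occluded (position estimated)

# Point indices as in the text: 1 left ear hole, 2 right ear hole,
# 3 chin-bottom center, 4 left eye corner, 5 midpoint of the eye line,
# 6 right eye corner, 7 lower corner of the lower lip.
def make_annotation(coords):
    """coords: {index: (x, y, visible)} -> {index: LabelPoint}."""
    return {k: LabelPoint(*v) for k, v in coords.items()}

ann = make_annotation({
    1: (10.0, 50.0, 1), 2: (90.0, 50.0, 1), 3: (50.0, 95.0, 1),
    4: (30.0, 40.0, 1), 5: (50.0, 40.0, 1), 6: (70.0, 40.0, 1),
    7: (50.0, 80.0, 1),
})
```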
The principles of the present invention may be described as follows:
1) determining a calculation parameter;
A) The line connecting the two eye corners keeps its horizontal direction in the image plane; when it rotates about the vertical plane its length is unchanged, and a length change appears only along the vertical direction of the plane. This change is used to calculate the angle in that direction, which is named yaw, as shown in FIG. 4. The line segment between point 4 and point 6 is denoted disteye.
When the face is turned so far sideways that only one eye is visible, the change of the distance from that eye to the ear hole on the corresponding side reflects the change in the yaw direction, while the values in the other two directions are unchanged. The distance from point 6 to point 2 or from point 4 to point 1 is denoted eareye, specifically eareye62 and eareye41.
B) The line segment from the midpoint of the eye line to the chin changes length along the horizontal direction of the image plane, and this property is used to calculate the angle in that direction, named pitch, as shown in FIG. 4. The line segment from point 5 to point 3 is denoted eyechin. The angle is 0 when eyechin is at its maximum length.
C) For the line segment from the lower corner of the lower lip to the center of the chin bottom: the longer the segment, the larger the elevation (look-up) angle; the shorter the segment, the larger the depression (look-down) angle. This is caused by the face being approximately spherical. This segment is denoted lipchin; its value determines the sign of the pitch direction.
D) The in-plane angle is calculated directly from the angle between the eye-corner line and the horizontal. This direction is named roll, as shown in FIG. 4.
2) The true lengths need to be known. Traverse all pictures of the same face, computing the distance between the two eye corners (point 4 to point 6) to obtain its maximum, and the distance from the center of the eye-corner line to the chin (point 5 to point 3) to obtain its maximum. These two maxima over all poses in the image plane are the two true lengths of that person's three-dimensional face. 3) Calculate the angles by the sine theorem on the projected triangles, obtaining the three angle values.
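The traversal in 2) can be sketched as follows. The list-of-dicts input format is an assumption for illustration; the patent only specifies which point-pair distances are maximized.

```python
# Sketch of the traversal step: over all annotated images of one person,
# take the maximum point-4-to-point-6 distance (disteye) and the maximum
# point-5-to-point-3 distance (eyechin) as the two "true" face lengths.
import math

def dist(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def true_lengths(images):
    """images: list of {point_index: (x, y)} annotations for one person."""
    max_disteye = max(dist(im[4], im[6]) for im in images)
    max_eyechin = max(dist(im[5], im[3]) for im in images)
    return max_disteye, max_eyechin
```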
As shown in fig. 5, a specific method for implementing step S3 in the present invention can be described as follows:
S3.1, the pictures collected for each volunteer are placed in one folder; faces of 1000 volunteers are collected, generating 1000 folders.
S3.2, the longest distance between the two eye corners in each folder is max_disteye, the longest eareye value is max_eareye, the longest distance from the eye-line center to the chin is max_eyechin, and the lipchin value of the picture with max_eyechin is lipchin_0.
S3.3, in each folder, compute the minimum non-zero eye-corner distance min_disteye, with corresponding eareye value eareye_60, and compute the angles for the two cases,
A = arccos(min_disteye/max_disteye),
B = arcsin(eareye_60/max_eareye). The angle is then corrected, the correction factor being correct = (B - A)/(90 - B).
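Step S3.3 can be sketched as below. That "(B - A)/(90 - B)" defines a multiplicative correction factor, and that the angles are taken in degrees, are both our reading of the text rather than statements it makes explicitly.

```python
# Sketch of step S3.3: A and B are the two angle estimates at the
# near-profile image (minimum non-zero disteye, with eareye value
# eareye_60); correct = (B - A) / (90 - B) is read as a correction
# factor. Angles in degrees -- an assumption.
import math

def correction_factor(min_disteye, max_disteye, eareye_60, max_eareye):
    A = math.degrees(math.acos(min_disteye / max_disteye))
    B = math.degrees(math.asin(eareye_60 / max_eareye))
    return (B - A) / (90.0 - B)
```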
And S3.4, calculating the space angle of each face.
The angle in each direction of the face, shown in figures 2(a) to 2(c), ranges from -90 to 90. For each picture in each folder, the values of disteye, eareye, eyechin and lipchin are computed as dist_disteye, dist_eareye, dist_eyechin and dist_lipchin. The maxima are those of the folder: max_disteye, max_eareye, max_eyechin (and lipchin_0). The maxima differ from person to person.
Calculation of the yaw direction. The yaw direction is positive if eareye62 > eareye41, and negative otherwise. The angle value is: if dist_disteye > 0, angle_yaw = arccos(dist_disteye/max_disteye); otherwise,
angle_yaw = arcsin(dist_eareye/max_eareye) - correct × (arcsin(dist_eareye/max_eareye) - B). The sign of the yaw direction is then applied to angle_yaw.
Calculation of the pitch direction. If dist_lipchin < lipchin_0, the angle is negative, otherwise positive. The angle is computed as
angle_pitch = arccos(dist_eyechin/max_eyechin). The sign of the pitch direction is then applied to angle_pitch.
Calculation of the roll direction. The coordinates of annotation point 4 and annotation point 6 are used directly (if an eye is occluded, annotation point 5 and the corner of the other eye are used). Let the coordinate points be (x1, y1) and (x2, y2) respectively; the angle is
angle_roll = arctan((y2 - y1)/(x2 - x1))
and the value is signed, i.e. positive or negative.
One face angle is then the triple: angle_yaw, angle_pitch, angle_roll.
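The yaw, pitch and roll steps above can be sketched end to end as follows. Names mirror the text (disteye, eareye, eyechin, lipchin); in the yaw fallback branch, reading the garbled original as angle_yaw = arcsin(r) - correct × (arcsin(r) - B), and taking r from the larger of the two ear-eye distances, are assumptions. Angles are in degrees.

```python
# Hedged sketch of the full yaw/pitch/roll computation for one image,
# given the per-person maxima and correction factor from steps S3.2-S3.3.
import math

def face_angles(pts, max_disteye, max_eyechin, max_eareye,
                lipchin_0, correct, B):
    """pts: {1..7: (x, y)} -> (angle_yaw, angle_pitch, angle_roll)."""
    d = lambda a, b: math.hypot(pts[a][0] - pts[b][0], pts[a][1] - pts[b][1])
    dist_disteye, dist_eyechin = d(4, 6), d(5, 3)
    eareye62, eareye41 = d(6, 2), d(4, 1)

    # Yaw: magnitude from the eye-corner distance when it is non-zero,
    # else from the ear-eye distance with the correction factor.
    if dist_disteye > 0:
        yaw = math.degrees(math.acos(min(dist_disteye / max_disteye, 1.0)))
    else:
        a = math.degrees(math.asin(min(max(eareye62, eareye41) / max_eareye, 1.0)))
        yaw = a - correct * (a - B)
    if eareye62 <= eareye41:   # sign: positive when eareye62 > eareye41
        yaw = -yaw

    # Pitch: magnitude from eyechin, sign from lipchin vs its frontal value.
    pitch = math.degrees(math.acos(min(dist_eyechin / max_eyechin, 1.0)))
    if d(7, 3) < lipchin_0:
        pitch = -pitch

    # Roll: directly from the slope of the eye-corner line.
    (x1, y1), (x2, y2) = pts[4], pts[6]
    roll = math.degrees(math.atan2(y2 - y1, x2 - x1))
    return yaw, pitch, roll
```

For a frontal pose (dist_disteye and dist_eyechin at their maxima, lipchin equal to lipchin_0, level eye line), all three angles come out near 0, as expected from the definitions above.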
The above description is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes may be made to the embodiment of the present invention by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A method for acquiring and labeling face angle samples is characterized by comprising the following steps:
s1, collecting a face sample;
s2, generating the face sample annotation points: seven points are labeled manually, namely the left ear hole, set as point 1; the right ear hole, set as point 2; the center of the bottom of the chin, set as point 3; the left eye corner, set as point 4; the midpoint of the line segment joining the two eye corners, set as point 5; the right eye corner, set as point 6; and the lower corner of the lower lip, set as point 7;
s3, calculating the angle of the face angle sample through the mark points to generate angle mark information:
determining the calculation parameters; traversing all pictures of the same face and computing the distance between the two eye corners (point 4 to point 6) to obtain its maximum, and the distance from the center of the eye-corner line to the chin (point 5 to point 3) to obtain its maximum; these two maxima over all poses in the image plane are defined as the two true lengths of that person's three-dimensional face; and calculating the angles by the sine theorem on the projected triangles to obtain the three angle values of one face.
2. The method for acquiring and labeling human face angle samples according to claim 1, wherein the step S1 further comprises:
s1.1, the person to be collected wears no glasses; the camera is 3 meters from the person and level with the nose of the face; with the head as the pivot, the person rotates the head from left to right, tilts it from bottom to top, rotates from lower left to upper right, from lower right to upper left, from left to right while looking up, from left to right while looking down, and tilts it to one side while rising from bottom to top; n images are collected at each pose, and 1000 people are collected;
s1.2, after images are collected without glasses, the person randomly wears one of myopia glasses, sunglasses, a hat, a scarf, or a mask, and images are collected again in the manner of s1.1; the illumination level is random and the background is random.
3. The method for acquiring and labeling human face angle samples according to claim 1, wherein the step S2 further comprises:
if the face is a side face, the occluded ear hole is given the data values of the visible ear hole and the orientation of the occluded ear is marked; if the corner of one eye is occluded, its data are set to the values of the visible eye and the orientation of the occluded eye is marked; if sunglasses are worn or the hair is left long, the points are labeled by estimation.
4. The method for acquiring and labeling human face angle samples according to claim 1, wherein the determining the calculation parameters in step S3 further comprises:
A) the line connecting the two eye corners keeps its horizontal direction in the image plane; when it rotates about the vertical plane its length is unchanged, and a length change appears only along the vertical direction of the plane; this change is used to calculate the angle in that direction, which is named yaw; the line segment between point 4 and point 6 is denoted disteye; when the face is turned so far sideways that only one eye is visible, the change of the distance from that eye to the ear hole on the corresponding side reflects the change in the yaw direction, while the values in the other two directions are unchanged; the distance from point 6 to point 2 or from point 4 to point 1 is denoted eareye, specifically eareye62 and eareye41;
B) the line segment from the midpoint of the eye line to the chin changes length along the horizontal direction of the image plane; this property is used to calculate the angle in that direction, named pitch; the line segment from point 5 to point 3 is denoted eyechin; when eyechin is at its maximum length, the angle is 0;
C) for the line segment from the lower corner of the lower lip to the center of the chin bottom: the longer the segment, the larger the elevation (look-up) angle; the shorter the segment, the larger the depression (look-down) angle; this segment is denoted lipchin, and the sign of the pitch direction is determined from the change of the lipchin value;
D) the in-plane angle is calculated directly from the angle between the eye-corner line and the horizontal; this direction is named roll.
5. The method for acquiring and labeling human face angle samples according to claim 4, wherein the method for implementing the step S3 further comprises:
s3.1, placing the pictures collected for each person into one folder; faces of 1000 people are collected, generating 1000 folders;
s3.2, setting the longest disteye in each folder as max_disteye, the longest eareye value as max_eareye, the longest eyechin as max_eyechin, and the lipchin value of the picture with max_eyechin as lipchin_0;
s3.3, calculating the minimum non-zero disteye under each folder as min_disteye, with corresponding eareye value eareye_60, and calculating the angles for the two cases,
A = arccos(min_disteye/max_disteye),
B = arcsin(eareye_60/max_eareye),
then correcting the angle, the correction factor being correct = (B - A)/(90 - B);
and S3.4, calculating the space angle of each face.
6. The method for acquiring and labeling human face angle samples according to claim 5, wherein the step S3.4 further comprises:
the angle in each direction of the face ranges from -90 to 90; for each picture in each folder, the values of disteye, eareye, eyechin and lipchin are computed as dist_disteye, dist_eareye, dist_eyechin and dist_lipchin; the maxima are those of the folder, namely max_disteye, max_eareye, max_eyechin and lipchin_0; the maxima differ from person to person.
7. The method as claimed in claim 6, wherein in S3, calculating the angles by the projected-triangle sine theorem to obtain the three angle values of a face further comprises:
calculation of the yaw direction:
if eareye62 > eareye41, the yaw direction is positive, otherwise negative; the angle value is: if dist_disteye > 0, angle_yaw = arccos(dist_disteye/max_disteye); otherwise,
angle_yaw = arcsin(dist_eareye/max_eareye) - correct × (arcsin(dist_eareye/max_eareye) - B); the sign of the yaw direction is then applied to angle_yaw;
calculation of the pitch direction:
if dist_lipchin < lipchin_0, the angle is negative, otherwise positive; the angle is computed as angle_pitch = arccos(dist_eyechin/max_eyechin); the sign of the pitch direction is then applied to angle_pitch;
calculation of the roll direction:
the coordinates of annotation point 4 and annotation point 6 are used directly, set as (x1, y1) and (x2, y2) respectively; the angle is
angle_roll = arctan((y2 - y1)/(x2 - x1))
and the value is signed, i.e. positive or negative;
one face angle is then the triple: angle_yaw, angle_pitch, angle_roll.
8. The method of claim 7, wherein the calculation of the roll direction further comprises: if one eye is occluded, performing the calculation using annotation point 5 and the eye-corner coordinates of the other, unoccluded eye.
CN202010736655.0A 2020-07-28 2020-07-28 Face angle sample collection and labeling method Active CN114005151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010736655.0A CN114005151B (en) 2020-07-28 2020-07-28 Face angle sample collection and labeling method


Publications (2)

Publication Number Publication Date
CN114005151A true CN114005151A (en) 2022-02-01
CN114005151B CN114005151B (en) 2024-05-03

Family

ID=79920479


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009176005A (en) * 2008-01-24 2009-08-06 Toyota Motor Corp Characteristic point detection method for face image and its device
CN104951767A (en) * 2015-06-23 2015-09-30 安阳师范学院 Three-dimensional face recognition technology based on correlation degree
WO2019232866A1 (en) * 2018-06-08 2019-12-12 平安科技(深圳)有限公司 Human eye model training method, human eye recognition method, apparatus, device and medium
CN111259739A (en) * 2020-01-09 2020-06-09 浙江工业大学 Human face pose estimation method based on 3D human face key points and geometric projection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant