WO2017124929A1 - Method, apparatus, and computer storage medium for determining a face rotation angle - Google Patents

Method, apparatus, and computer storage medium for determining a face rotation angle

Info

Publication number
WO2017124929A1
WO2017124929A1 (PCT/CN2017/070607)
Authority
WO
WIPO (PCT)
Prior art keywords
face
symmetric
position information
feature points
line segment
Prior art date
Application number
PCT/CN2017/070607
Other languages
English (en)
French (fr)
Inventor
汪铖杰 (WANG Chengjie)
Original Assignee
Tencent Technology (Shenzhen) Company Limited (腾讯科技(深圳)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited
Priority to EP17740960.4A priority Critical patent/EP3407245B1/en
Priority to KR1020187015392A priority patent/KR102144489B1/ko
Priority to JP2018527759A priority patent/JP6668475B2/ja
Publication of WO2017124929A1 publication Critical patent/WO2017124929A1/zh
Priority to US15/944,656 priority patent/US10713812B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/166Detection; Localisation; Normalisation using acquisition arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • G06V40/165Detection; Localisation; Normalisation using facial parts and geometric relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/169Holistic features and representations, i.e. based on the facial image taken as a whole
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/171Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present invention relates to the field of face recognition technologies, and in particular, to a method, device, and computer storage medium for determining a rotation angle of a face.
  • Face recognition technology recognizes a face image from an image taken by a camera.
  • In practice, a face may be tilted up or down, or turned left or right, so the face in the image captured by the camera forms a certain angle with the face in a frontal face image.
  • Face recognition therefore needs to determine the face rotation angle in the image, so that the face image can be recognized from the image according to that angle.
  • In an existing method, the face rotation angle is determined as follows: face images are captured in advance by a camera while the face moves through different rotation directions, the texture features of the face images in each rotation direction are analyzed separately, and a correspondence between each rotation direction and the texture features of its face images is established.
  • For a face image to be determined, its texture features are analyzed, the texture features in the correspondence most similar to them are searched for, the face rotation direction corresponding to the most similar texture features is obtained, and the face rotation angle of the face image is estimated from that rotation direction and the texture features.
  • This texture-feature-based method can only determine an approximate rotation angle, not a specific face rotation angle; moreover, texture feature analysis is a complicated process, and an inaccurate analysis easily leads to a wrong face rotation angle.
  • An example provides a method, apparatus, and computer storage medium for determining the angle of rotation of a face.
  • the technical solution is as follows:
  • an embodiment of the present invention provides a method for determining a rotation angle of a face, the method comprising:
  • acquiring first position information of a plurality of preset facial feature points in the face image to be determined, where the number of the facial feature points is an odd number, comprising a plurality of pairs of symmetric facial feature points and a first facial feature point, and the plurality of facial feature points are not coplanar;
  • acquiring, according to the first position information of the facial feature points included in each pair of the plurality of pairs, first position information of the symmetric midpoint of that pair; and
  • determining a face rotation angle of the face image to be determined according to the first position information of the symmetric midpoint of each pair of facial feature points and the first position information of the first facial feature point.
  • an embodiment of the present invention further provides an apparatus for determining a rotation angle of a face, the apparatus comprising:
  • a first acquiring module configured to acquire first position information of a plurality of preset facial feature points in the face image to be determined, where the number of the facial feature points is an odd number, comprising a plurality of pairs of symmetric facial feature points and a first facial feature point, and the plurality of facial feature points are not coplanar;
  • a second acquiring module configured to acquire, according to the first position information of the facial feature points included in each pair of the plurality of pairs acquired by the first acquiring module, first position information of the symmetric midpoint of that pair; and
  • a first determining module configured to determine a face rotation angle of the face image to be determined according to the first position information of the symmetric midpoint of each pair of facial feature points acquired by the second acquiring module and the first position information of the first facial feature point acquired by the first acquiring module.
  • An embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to perform the method for determining a face rotation angle according to the embodiments of the present invention.
  • FIG. 1 is a flow chart of a method for determining a rotation angle of a face provided in an embodiment of the present invention
  • 2A is a flow chart of a method for determining a rotation angle of a face provided in an embodiment of the present invention
  • 2B is a schematic diagram showing partial feature point markings in a face image to be determined provided in an embodiment of the present invention
  • 2C is a flowchart of a method for determining a face pitch angle in a face image to be determined provided in an embodiment of the present invention
  • 2D is a flowchart of a method for determining a face side rotation angle in a to-be-determined face image according to an embodiment of the present invention
  • FIG. 3A is a flowchart of a method for determining a correspondence between a line segment ratio and a preset face rotation angle according to an embodiment of the present invention
  • FIG. 3B is a schematic diagram of a partial feature point mark in a first face image provided in another embodiment of the present invention.
  • FIG. 3C is a flowchart of a method for establishing a correspondence between a first ratio and a preset face pitch angle according to an embodiment of the present invention
  • FIG. 3D is a flowchart of a method for establishing a correspondence between a third ratio and a preset face pitch angle according to an embodiment of the present invention
  • FIG. 3E is a schematic diagram of partial feature points in the frontal face image when the first face image is a frontal face image provided in an embodiment of the present invention.
  • FIG. 3F is a flowchart of a method for establishing a correspondence between a second ratio and a preset face rotation angle according to an embodiment of the present invention
  • 4A is a block diagram showing the structure of a device for determining a face rotation angle provided in an embodiment of the present invention
  • 4B is a structural block diagram of an apparatus for acquiring first position information of a symmetric midpoint of each pair of facial feature points according to another embodiment of the present invention
  • 4C is a structural block diagram of an apparatus for determining a face elevation angle of a face image to be determined according to another embodiment of the present invention.
  • 4D is a structural block diagram of an apparatus for determining a face side rotation angle of a face image to be determined according to another embodiment of the present invention.
  • 4E is a block diagram showing the structure of an apparatus for calculating a fourth line segment according to another embodiment of the present invention.
  • 4F is a structural block diagram of an apparatus for determining a rotation angle of a human face according to another embodiment of the present invention.
  • 4G is a structural block diagram of an apparatus for establishing a correspondence between a first ratio and a preset face pitch angle according to another embodiment of the present invention.
  • 4H is a structural block diagram of an apparatus for establishing a correspondence between a third ratio and a preset face pitch angle according to still another embodiment of the present invention.
  • 4I is a structural block diagram of an apparatus for establishing a correspondence between a second ratio and a preset face rotation angle according to still another embodiment of the present invention.
  • Figure 5 is a block diagram showing the structure of an electronic device provided in some embodiments of the present invention.
  • the embodiments of the present invention will be further described in detail below with reference to the accompanying drawings.
  • The "electronic devices" mentioned herein may include smart phones, tablets, smart TVs, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
  • FIG. 1 is a flow chart of a method for determining a face rotation angle provided in an embodiment of the present invention. Referring to Figure 1, the method includes:
  • step 101: first position information of a plurality of preset facial feature points in the to-be-determined face image is obtained, where the number of the facial feature points is an odd number, comprising a plurality of pairs of symmetric facial feature points and a first facial feature point, and the plurality of facial feature points are not coplanar.
  • step 102 the first location information of the symmetric midpoint of each pair of facial feature points is obtained according to the first location information of the facial feature points included in each pair of facial feature points of the plurality of pairs of facial feature points.
  • step 103 the face rotation angle of the face image to be determined is determined according to the first position information of the symmetric midpoint of each pair of face feature points and the first position information of the first face feature point.
  • The method for determining a face rotation angle first acquires a plurality of preset pairs of symmetric facial feature points and a first facial feature point; acquires the first position information of the symmetric midpoint of each pair according to the first position information of the facial feature points included in that pair; calculates a preset line segment ratio from the first position information of the symmetric midpoints of the pairs and the first position information of the first facial feature point; and determines the face rotation angle of the face image to be determined from the correspondence between the preset line segment ratio and the face rotation angle. This solves the problem that a specific face rotation angle cannot be determined. Because the correspondence between the preset line segment ratio and the face rotation angle is a relatively accurate correspondence between ratios and angles, the method provided by the embodiment of the present invention improves the accuracy of the determined face rotation angle.
  • FIG. 2A is a flow chart of a method for determining a face rotation angle provided in an embodiment of the present invention.
  • The method acquires a plurality of preset pairs of symmetric facial feature points and a first facial feature point, obtains their coordinate position information in the face image to be determined, and determines the face rotation angle of that image based on the coordinate position information.
  • the method includes:
  • step 200 a plurality of face feature points preset in the face image to be determined are detected.
  • The plurality of preset facial feature points are points that are easy to recognize in a human face; they are located on the contours of facial organs, typically at turning points of those contours.
  • For example, the preset feature points may be inner eye corners, outer eye corners, mouth corners, eyebrow tails, eyebrow heads, or the nose tip.
  • The inner and outer eye corners are turning points of the eye contour; the mouth corners are turning points of the mouth contour; the eyebrow tail and eyebrow head are turning points of the eyebrow contour; and the nose tip is a turning point of the nose contour.
  • Some of the preset feature points are left-right symmetric; for example, the two inner eye corners, the two outer eye corners, the two eyebrow tails, the two eyebrow heads, and the two mouth corners of a human face are left-right symmetric.
  • The number of preset facial feature points is an odd number, for example 5 or 7, comprising a plurality of pairs of symmetric facial feature points and one remaining first facial feature point; the facial feature points are not coplanar.
  • For example, there may be five facial feature points: a first pair of symmetric facial feature points, a second pair of symmetric facial feature points, and one remaining first facial feature point. In this embodiment, the first pair may be the two inner eye corners, the second pair may be the two mouth corners, and the remaining first facial feature point is the nose tip.
  • This step may be: first detecting the face region in the face image to be determined by a face detection technique, and then detecting, by a facial feature point detection technique, the first pair of symmetric facial feature points (the two inner eye corners), the second pair of symmetric facial feature points (the two mouth corners), and the remaining first facial feature point (the nose tip) in that face region.
  • FIG. 2B is a feature point marker diagram of the face image to be determined provided by this embodiment.
  • The marked points include not only the detected facial feature points but also the symmetric midpoints and other points formed by them, which are described later.
  • The detected first pair of symmetric facial feature points, i.e., the two inner eye corners, are denoted C' and D'; the detected second pair of symmetric facial feature points, i.e., the two mouth corners, are denoted E' and F'; and the remaining first facial feature point, i.e., the nose tip, is denoted N'.
  • step 201 the first position information of the preset plurality of facial feature points in the image to be determined is acquired.
  • The first position information is the coordinate position of a facial feature point in a Cartesian coordinate system when the face image to be determined is placed in a two-dimensional or three-dimensional Cartesian coordinate system.
  • A two-dimensional Cartesian coordinate system represents a plane through two mutually perpendicular axes, x and y, so the first position information of a facial feature point obtained in it has the coordinate form (x, y); a three-dimensional Cartesian coordinate system represents space through three mutually perpendicular axes, x, y, and z, so the first position information obtained in it has the coordinate form (x, y, z).
  • The terminal acquires the coordinate positions of the preset first pair of symmetric facial feature points (the two inner eye corners) as C'(x1, y1) and D'(x2, y2), the coordinate positions of the second pair of symmetric facial feature points (the two mouth corners) as E'(x3, y3) and F'(x4, y4), and the coordinate position of the remaining first facial feature point (the nose tip) as N'(x5, y5).
  • For example, the obtained coordinate positions of the five facial feature points are: C'(0, 0), D'(2, 2), E'(1, -2), F'(2, -1), N'(1.5, 0).
  • step 202: the first position information of the first symmetric midpoint of the first pair of symmetric facial feature points is obtained according to the first position information of each facial feature point included in the first pair.
  • The first pair of symmetric facial feature points are the two inner eye corners, whose coordinates are C'(x1, y1) and D'(x2, y2).
  • The first symmetric midpoint of the first pair is the midpoint of the line segment C'D' formed by C'(x1, y1) and D'(x2, y2); this midpoint is denoted A'(x6, y6).
  • The coordinate position of A'(x6, y6) is obtained by the midpoint formula, as in equations (1) and (2): x6 = (x1 + x2) / 2 (1), y6 = (y1 + y2) / 2 (2).
  • the first positional information of the first symmetric midpoint of the first pair of symmetric face feature points is A'(1,1).
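As a minimal illustration (using the example coordinates from step 201; Python is used here only as a sketch, not as part of the patent), the midpoint computation of equations (1)-(2), and likewise (5)-(6) for the second pair, is:

```python
def midpoint(p, q):
    # Midpoint of the segment joining two 2-D points, per equations (1)-(2)
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# Example coordinates from the text: inner eye corners C', D' and mouth corners E', F'
C, D = (0.0, 0.0), (2.0, 2.0)
E, F = (1.0, -2.0), (2.0, -1.0)

A = midpoint(C, D)  # first symmetric midpoint A'
B = midpoint(E, F)  # second symmetric midpoint B'
print(A, B)  # (1.0, 1.0) (1.5, -1.5)
```

This reproduces the A'(1, 1) given above and the B'(1.5, -1.5) obtained in step 203 below.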
  • step 203: the first position information of the second symmetric midpoint of the second pair of symmetric facial feature points is obtained according to the first position information of each facial feature point included in the second pair.
  • The second pair of symmetric facial feature points are the two mouth corners, whose coordinates are E'(x3, y3) and F'(x4, y4). The second symmetric midpoint of the second pair is the midpoint of the line segment E'F' formed by E'(x3, y3) and F'(x4, y4); as shown in FIG. 2B, this midpoint is denoted B'(x7, y7). Its coordinate position is obtained by the midpoint formula, as in equations (5) and (6): x7 = (x3 + x4) / 2 (5), y7 = (y3 + y4) / 2 (6).
  • the first position information of the second symmetric midpoint of the second pair of symmetric face feature points is B' (1.5, -1.5).
  • step 204: a face pitch angle of the face image to be determined is determined according to the first position information of the symmetric midpoint of each pair of facial feature points and the first position information of the first facial feature point.
  • the face rotation angle of the face image to be determined may be a face pitch angle and a face side rotation angle.
  • the following describes the method of determining the elevation angle of the face and the side rotation angle of the face.
  • the method can include:
  • step 204a: the length of the first line segment formed by the first symmetric midpoint and the first facial feature point is calculated according to the first position information of the first symmetric midpoint and the first position information of the first facial feature point.
  • The first symmetric midpoint of the first pair is A'(x6, y6) and the first position information of the first facial feature point is N'(x5, y5); the length of the first line segment A'N' formed by them is calculated from the two-point distance formula, as in equation (9): |A'N'| = sqrt((x6 - x5)^2 + (y6 - y5)^2) (9).
  • step 204b: the length of the second line segment formed by the second symmetric midpoint and the first facial feature point is calculated according to the first position information of the second symmetric midpoint and the first position information of the first facial feature point.
  • The second symmetric midpoint of the second pair is B'(x7, y7) and the first position information of the first facial feature point is N'(x5, y5); the length of the second line segment B'N' formed by them is calculated from the two-point distance formula, as in equation (10): |B'N'| = sqrt((x7 - x5)^2 + (y7 - y5)^2) (10).
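Equations (9) and (10) are both instances of the two-point distance formula. A short sketch with the running example coordinates (carried over from step 201 for illustration only):

```python
import math

def dist(p, q):
    # Euclidean distance between two 2-D points (equations (9) and (10))
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Running example: midpoints A', B' and nose tip N'
A1, B1, N1 = (1.0, 1.0), (1.5, -1.5), (1.5, 0.0)

first_segment = dist(A1, N1)   # |A'N'|, equation (9)
second_segment = dist(B1, N1)  # |B'N'|, equation (10)
first_ratio = first_segment / second_segment
print(round(first_segment, 4), round(second_segment, 4), round(first_ratio, 4))
# → 1.118 1.5 0.7454
```

The quotient of the two lengths is the first ratio used in step 204c.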
  • step 204c: according to the first ratio between the length of the first line segment and the length of the second line segment, the face pitch angle of the to-be-determined face image is obtained from the correspondence between the first ratio and the face pitch angle.
  • If the first ratio calculated in this step is not found among the first ratios included in the previously established correspondence between the first ratio and the face pitch angle, the first ratio in the correspondence that is closest to the calculated first ratio is determined, and the face pitch angle corresponding to that closest first ratio is used as the face pitch angle corresponding to the calculated first ratio.
  • Determining the first ratio closest to the calculated first ratio may be accomplished as follows: each first ratio included in the previously established correspondence is subtracted from the first ratio calculated in this step to obtain a ratio difference; the absolute value of each difference is taken; the absolute values are compared to find the minimum; and the first ratio in the correspondence that yields the minimum absolute value is determined to be the closest first ratio.
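The nearest-match rule just described (subtract, take absolute values, pick the minimum) can be sketched as follows; the table entries are invented for illustration and are not values from the patent:

```python
def lookup_angle(ratio, table):
    # Return the angle whose stored ratio is closest in absolute difference
    # to the computed ratio; an exact hit has difference zero and wins outright.
    closest = min(table, key=lambda r: abs(r - ratio))
    return table[closest]

# Hypothetical correspondence between first ratios and face pitch angles (degrees)
correspondence = {0.50: -30, 0.65: -15, 0.80: 0, 0.95: 15, 1.10: 30}
print(lookup_angle(0.7454, correspondence))  # 0.7454 is nearest to 0.80 → prints 0
```

The same pattern serves the analogous lookups in steps 205b and 205d.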
  • step 205 the face side rotation angle of the to-be-determined face image is determined according to the first position information of the symmetric midpoint of each pair of face feature points and the first position information of the first face feature point.
  • the method can include:
  • step 205a: the first vertical distance from the first facial feature point to the third line segment is calculated according to the first position information of the first symmetric midpoint, the first position information of the second symmetric midpoint, and the first position information of the first facial feature point.
  • The first position information of the first symmetric midpoint is A'(x6, y6), the first position information of the second symmetric midpoint is B'(x7, y7), and the first position information of the first facial feature point is N'(x5, y5), as in step 201. The third line segment is the line segment A'B' formed by A'(x6, y6) and B'(x7, y7). The first vertical distance from the first facial feature point to the third line segment may be obtained as follows: the straight line through A' and B' is denoted line c; the general equation of line c is calculated from the first position information of A' and B'; and the first vertical distance of N' to line c is then calculated from the first position information of N'. The line c through A'(x6, y6) and B'(x7, y7) can also be obtained by other methods, which are not described here.
  • Since line c passes through A'(x6, y6) and B'(x7, y7), the distance d from the first facial feature point N'(x5, y5) to line c is also the vertical distance from N'(x5, y5) to the third line segment A'B'; the first vertical distance is therefore the distance d.
  • The length of the third line segment is obtained by the two-point distance formula, as in equation (14): |A'B'| = sqrt((x6 - x7)^2 + (y6 - y7)^2) (14).
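Step 205a's computation (the general equation of line c through A' and B', then the point-to-line distance, plus equation (14)) can be sketched with the running example coordinates; this is an illustration, not the patent's implementation:

```python
import math

def point_to_line_distance(p, a, b):
    # Line c through a and b in general form:
    #   (b_y - a_y)*x + (a_x - b_x)*y + (b_x*a_y - a_x*b_y) = 0
    A = b[1] - a[1]
    B = a[0] - b[0]
    C = b[0] * a[1] - a[0] * b[1]
    # Perpendicular distance from p to line c
    return abs(A * p[0] + B * p[1] + C) / math.hypot(A, B)

# Running example: A', B', N'
A1, B1, N1 = (1.0, 1.0), (1.5, -1.5), (1.5, 0.0)
d = point_to_line_distance(N1, A1, B1)                    # first vertical distance
third_segment = math.hypot(A1[0] - B1[0], A1[1] - B1[1])  # |A'B'|, equation (14)
print(round(d, 4), round(third_segment, 4))  # → 0.2942 2.5495
```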
  • step 205b: a corresponding third ratio is obtained from the correspondence between the face pitch angle and the third ratio according to the face pitch angle.
  • The previously established correspondence between the third ratio and the face pitch angle is queried, according to the face pitch angle determined in step 204, for the third ratio corresponding to that face pitch angle; this third ratio is denoted e.
  • If the face pitch angle determined in step 204 is not found among the face pitch angles in the correspondence, the face pitch angle in the correspondence closest to it is determined, and the third ratio corresponding to that closest face pitch angle is used as the third ratio corresponding to the face pitch angle.
  • Determining the closest face pitch angle may be accomplished as follows: each face pitch angle included in the previously established correspondence between the third ratio and the face pitch angle is subtracted from the face pitch angle determined in step 204 to obtain a pitch angle difference; the absolute value of each difference is taken; the absolute values are compared to find the minimum; and the face pitch angle in the correspondence that yields the minimum absolute value is determined to be the face pitch angle closest to the one calculated in step 204.
  • The third ratio is a ratio of two values: the first value is the length of the line segment formed, in the face image to be determined, by the symmetric midpoint of the first pair of facial feature points and the symmetric midpoint of the second pair, i.e., the third line segment A'B'; the second value is the length of the line segment AB formed, in the frontal face image, by the symmetric midpoint A of the first pair of facial feature points and the symmetric midpoint B of the second pair.
  • step 205c: the length of the fourth line segment is calculated based on the third ratio and the length of the third line segment.
  • The third ratio is the ratio of the third line segment to the fourth line segment, so the length of the fourth line segment is the length of the third line segment divided by the third ratio, as in equation (16): |AB| = |A'B'| / e (16).
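Equation (16) and the second ratio of step 205d reduce to two divisions. In this sketch the third ratio e is a hypothetical table value chosen for illustration (the patent's correspondence tables are not given here):

```python
import math

third_segment = math.sqrt(6.5)   # |A'B'| from the running example (equation (14))
d = 0.75 / math.sqrt(6.5)        # first vertical distance from step 205a
e = 0.9                          # hypothetical third ratio for the looked-up pitch angle

fourth_segment = third_segment / e   # equation (16): |AB| = |A'B'| / e
second_ratio = d / fourth_segment    # input to the side-rotation-angle lookup
print(round(fourth_segment, 4), round(second_ratio, 4))  # → 2.8328 0.1038
```

The second ratio is then used in step 205d to look up the face side rotation angle.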
  • step 205d according to the second ratio between the first vertical distance and the length of the fourth line segment, the face side rotation angle of the to-be-determined face image is obtained from the correspondence relationship between the second ratio and the face side rotation angle.
  • this step may be performed as follows: calculate the second ratio between the first vertical distance d and the length of the fourth line segment AB; according to this second ratio, query the previously established correspondence between the second ratio and the face side rotation angle (for the process of establishing this correspondence, refer to subsequent steps 304a-304b); find in the correspondence the face side rotation angle corresponding to a second ratio equal to the calculated second ratio; and determine that face side rotation angle as the face side rotation angle of the face image to be determined.
  • if the second ratio calculated in this step is not found among all the second ratios included in the previously established correspondence between the second ratio and the face side rotation angle, then the second ratio closest to the calculated second ratio is determined from all the second ratios in the correspondence, and the face side rotation angle corresponding to that closest second ratio is used as the face side rotation angle for the second ratio calculated in this step.
  • determining the second ratio closest to the second ratio calculated in this step from all the second ratios in the correspondence may be accomplished as follows:
  • subtract the second ratio calculated in this step from each second ratio included in the previously established correspondence between the second ratio and the face side rotation angle, obtaining a second ratio difference for each entry; take the absolute value of each difference; compare the resulting absolute values to find the smallest one; the second ratio (among those included in the correspondence between the second ratio and the face side rotation angle) corresponding to the smallest absolute value is then determined as the second ratio closest to the second ratio calculated in this step.
  • step 206: a fifth line segment is determined according to the first position information of each face feature point in a pair of symmetric face feature points, and the angle between the fifth line segment and the horizontal line is calculated to obtain the face rotation angle of the face image to be determined.
  • the fifth line segment refers to the line segment formed by the two face feature points of any one pair of symmetric face feature points among the preset face feature points. In this embodiment, the two points constituting the fifth line segment may therefore be the first pair of symmetric face feature points, that is, the two inner eye corners, or the second pair of symmetric face feature points, that is, the two mouth corners.
  • the face rotation angle here is the angle by which the face rotates to the left or right within the image plane while the front of the face still faces forward.
  • if the pair of symmetric face feature points is the two inner eye corners C'(x1, y1) and D'(x2, y2), the fifth line segment is the line segment C'D' determined by points C'(x1, y1) and D'(x2, y2).
  • the fifth line segment may also be determined by the two mouth corners E'(x3, y3) and F'(x4, y4); in that case the fifth line segment is the line segment E'F' determined by points E'(x3, y3) and F'(x4, y4).
  • the value of θ can be obtained by taking the inverse cosine of the cosine of θ described above; the specific calculation is given by formula (18):
  • θ is the face rotation angle of the face image to be determined.
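  • the angle-with-horizontal computation of formula (18) can be sketched as follows (an illustrative Python snippet under the assumption that the cosine is taken against the horizontal direction; the function name is hypothetical):

```python
import math

def roll_angle(p1, p2):
    """Angle, in degrees, between the segment joining a pair of symmetric
    feature points (e.g. the two inner eye corners) and the horizontal."""
    (x1, y1), (x2, y2) = p1, p2
    length = math.hypot(x2 - x1, y2 - y1)  # |C'D'|, two-point distance
    cos_theta = abs(x2 - x1) / length      # cosine against the horizontal
    return math.degrees(math.acos(cos_theta))

print(roll_angle((0.0, 0.0), (10.0, 10.0)))  # → 45.0
```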
  • step 206 may be executed directly after step 201 is performed.
  • in summary, the method for determining a face rotation angle provided in this embodiment first obtains a preset plurality of pairs of symmetric face feature points and a first face feature point; obtains the first position information of the symmetric midpoint of each pair of face feature points according to the first position information of the face feature points included in that pair; determines a preset line segment ratio according to the first position information of the symmetric midpoint of each pair of face feature points and the first position information of the first face feature point; and queries the correspondence between preset line segment ratios and face rotation angles according to the line segment ratio to determine the face rotation angle of the face image to be determined. This solves the problem that the face rotation angle cannot otherwise be determined. Since the correspondence between preset line segment ratios and face rotation angles is a relatively accurate correspondence between ratio and angle (why it is relatively accurate is discussed below), the method improves the accuracy of determining the face rotation angle.
  • a line segment here refers to a segment formed by connecting midpoints of two of the preset plurality of face feature points, or a vertical segment formed by the perpendicular distance from one face feature point to another straight line.
  • correspondences between three sets of line segment ratios and preset face rotation angles are established:
  • the first group correspondence is the correspondence between the first ratio and the preset face pitch angle;
  • the second group correspondence is the correspondence between the third ratio and the preset face pitch angle;
  • the third group correspondence is the correspondence between the second ratio and the preset face side rotation angle.
  • step 301: second position information of the preset plurality of face feature points in the first face image is obtained.
  • the preset plurality of face feature points here have the same meaning as the preset plurality of face feature points described above and are not described again here.
  • the first face image is a face image captured after the face has rotated by the preset face rotation angle.
  • the preset face rotation angle includes a preset face pitch angle and a preset face side rotation angle. The preset face pitch angles and face side rotation angles may be a preset series of discrete rotation angles in which the difference between every two adjacent angles is the same and small (for example 1°, 2°, or another small value). This ensures that the correspondence between line segment ratios and preset face rotation angles established in the following process is comprehensive, so that an accurate line segment ratio and an accurate face rotation angle can be obtained when the correspondence is queried.
  • the second position information is obtained by placing the face in a three-dimensional Cartesian coordinate system (defined by three mutually perpendicular coordinate axes x, y, and z; a common way of representing space). The three-dimensional Cartesian coordinate system may be any such coordinate system, so the second position information of a face feature point acquired in it has the coordinate form (x, y, z).
  • the preset face rotation angle can be described as follows: the face pitch angle refers to the rotation angle obtained when the frontal face rotates about the y-axis while not rotating about the x-axis or the z-axis; the face side rotation angle refers to the rotation angle obtained when the frontal face rotates about the z-axis while not rotating about the x-axis or the y-axis.
  • the preset face rotation angles can be obtained as follows: the initial face rotation angle is set to 0°, that is, the frontal face faces forward with no rotation; the second preset rotation angle is the sum of the first preset face rotation angle and the difference between two adjacent rotation angles; the third preset rotation angle is the sum of the second preset face rotation angle and the difference between two adjacent rotation angles; all the preset face rotation angles are obtained in turn in this way, and the number of preset face rotation angles is the ratio of 360 to the preset difference between two adjacent rotation angles.
  • the second position information of the preset plurality of face feature points in the first face image may be obtained as follows: first, place the face as a frontal face; rotate the frontal face by the preset angle difference between every two adjacent face rotation angles; after each rotation by one preset angle difference, capture the face image, acquire the coordinates of the preset plurality of face feature points in the captured image, and record these coordinate positions as the second position information. For example, if the difference between two adjacent face rotation angles is 1°, the frontal face is first rotated by 1° and photographed to obtain a face image, and the coordinate positions of the preset plurality of face feature points in that image are acquired; the face is then rotated by another 1° and photographed again, and the coordinate positions of the preset plurality of face feature points in the new image are acquired; the above steps are repeated until the face has been rotated through all the preset angles and the coordinate positions of the face feature points at each preset angle have been obtained.
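  • the sampling procedure above can be sketched as follows (an illustrative snippet, not the patent's implementation: it rotates a set of 3D feature points about the y-axis in fixed increments and records the coordinates at each step; the function names and the step size are hypothetical):

```python
import math

def rotate_about_y(point, angle_deg):
    """Rotate a 3D point about the y-axis by the given angle in degrees."""
    x, y, z = point
    a = math.radians(angle_deg)
    return (x * math.cos(a) + z * math.sin(a), y,
            -x * math.sin(a) + z * math.cos(a))

def sample_rotations(points, step_deg=1):
    """Return {angle: rotated feature points} for one full revolution,
    sampled every step_deg degrees."""
    table = {}
    for angle in range(0, 360, step_deg):
        table[angle] = [rotate_about_y(p, angle) for p in points]
    return table

table = sample_rotations([(1.0, 0.0, 0.0)], step_deg=90)
print(table[90])  # the point lands near (0, 0, -1)
```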
  • FIG. 3B shows the plurality of face feature points in the first face image in this embodiment.
  • the two inner eye corners of the first pair of symmetric face feature points are denoted G' and H' respectively;
  • the two mouth corners of the second pair of symmetric face feature points are denoted I' and J' respectively;
  • the remaining first face feature point is the nose tip, denoted O'.
  • the second position coordinates of the preset plurality of face feature points in the first face image are: G'(x9, y9, z9), H'(x10, y10, z10), I'(x11, y11, z11), J'(x12, y12, z12), O'(x13, y13, z13).
  • step 302: a correspondence between the first ratio and the preset face pitch angle is established according to the second position information of the preset plurality of face feature points.
  • the first group correspondence is the correspondence between the first ratio and the preset face pitch angle.
  • FIG. 3C is a flow chart of the method for establishing the correspondence between the first ratio and the preset face pitch angle; the method includes:
  • step 302a: the second position information of the fifth symmetric midpoint of the first pair of symmetric face feature points is obtained according to the second position information of each face feature point included in the first pair of symmetric face feature points.
  • the first pair of symmetric face feature points are the two inner eye corners G'(x9, y9, z9) and H'(x10, y10, z10). Still referring to FIG. 3B, the fifth symmetric midpoint of the first pair of symmetric face feature points is the midpoint of the line segment G'H' formed by points G'(x9, y9, z9) and H'(x10, y10, z10); denoting this midpoint as K', the coordinate position of K'(x14, y14, z14) is obtained by the midpoint formula.
  • the specific calculation is given by formulas (20), (21), and (22):
  • step 302b: the second position information of the sixth symmetric midpoint of the second pair of symmetric face feature points is obtained according to the second position information of each face feature point included in the second pair of symmetric face feature points.
  • the second pair of symmetric face feature points are the two mouth corners I'(x11, y11, z11) and J'(x12, y12, z12). Still referring to FIG. 3B, the sixth symmetric midpoint of the second pair of symmetric face feature points is the midpoint of the line segment formed by points I'(x11, y11, z11) and J'(x12, y12, z12); denoting this midpoint as L', the coordinate position of L'(x15, y15, z15) is obtained by the midpoint formula.
  • the specific calculation is given by formulas (23), (24), and (25):
  • step 302c: the length of the sixth line segment formed by the fifth symmetric midpoint and the first face feature point is calculated according to the second position information of the fifth symmetric midpoint and the second position information of the first face feature point.
  • the second position information of the fifth symmetric midpoint is K'(x14, y14, z14), and the first face feature point is the nose tip O'(x13, y13, z13); the sixth line segment formed by the fifth symmetric midpoint and the first face feature point is the line segment K'O' formed by K'(x14, y14, z14) and O'(x13, y13, z13). The length of the sixth line segment is calculated by the distance formula between two points; the specific calculation is given by formula (26):
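  • the midpoint formulas (20)-(22) and the two-point distance formula (26) used in steps 302a-302c can be sketched for 3D coordinates as follows (an illustrative snippet; the example coordinates are hypothetical):

```python
import math

def midpoint(p, q):
    """Component-wise midpoint of two 3D points, formulas (20)-(22)."""
    return tuple((a + b) / 2 for a, b in zip(p, q))

def distance(p, q):
    """Euclidean distance between two 3D points, formula (26)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

G, H = (0.0, 0.0, 0.0), (4.0, 2.0, 6.0)  # e.g. the two inner eye corners
K = midpoint(G, H)                        # fifth symmetric midpoint K'
print(K)                                  # → (2.0, 1.0, 3.0)
O = (2.0, 1.0, 0.0)                       # e.g. a nose-tip point
print(distance(K, O))                     # → 3.0
```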
  • step 302d: the length of the seventh line segment formed by the sixth symmetric midpoint and the first face feature point is calculated according to the second position information of the sixth symmetric midpoint and the second position information of the first face feature point.
  • the second position information of the sixth symmetric midpoint is L'(x15, y15, z15), and the first face feature point is the nose tip O'(x13, y13, z13); the seventh line segment formed by the sixth symmetric midpoint and the first face feature point is the line segment L'O' formed by L'(x15, y15, z15) and O'(x13, y13, z13). The length of the seventh line segment is calculated by the distance formula between two points; the specific calculation is given by formula (27):
  • step 302e: a correspondence between the first ratio, namely the ratio between the sixth line segment and the seventh line segment, and the preset face pitch angle is established.
  • the sixth line segment is K'O' and the seventh line segment is L'O';
  • the first ratio is the ratio of K'O' to L'O'.
  • the correspondence between the first ratio and the preset face pitch angle is obtained as follows: rotate the face and stop the rotation when the face rotation angle equals the first preset face pitch angle;
  • at the first preset face pitch angle, calculate the ratio of K'O' to L'O' to obtain the first value of the first ratio, and store the correspondence between this first value and the first preset face pitch angle; continue rotating the face until the face rotation angle equals the second preset face pitch angle, calculate the ratio of K'O' to L'O' at the second preset face pitch angle to obtain the second value of the first ratio, and store the correspondence between this second value and the second preset face pitch angle; repeat the above steps until the correspondences between all first ratios and the preset face pitch angles have been stored.
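  • the table-building loop in step 302e can be sketched as follows (a hypothetical illustration: `ratio_at_angle` stands in for measuring K'O'/L'O' at a given pitch angle, and the toy linear measurement is not from the patent):

```python
def build_first_ratio_table(pitch_angles, ratio_at_angle):
    """Store one (first ratio -> pitch angle) entry per preset angle.
    ratio_at_angle(angle) -> K'O'/L'O' measured at that pitch angle."""
    return {ratio_at_angle(angle): angle for angle in pitch_angles}

# Toy stand-in measurement: pretend the ratio grows linearly with angle.
table = build_first_ratio_table([0, 10, 20], lambda a: 1.0 + a / 10)
print(table[2.0])  # → 10
```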
  • step 303: a correspondence between the third ratio and the preset face pitch angle is established according to the second position information of the preset plurality of face feature points.
  • the second group correspondence is the correspondence between the third ratio and the preset face pitch angle.
  • FIG. 3D is a flow chart of the method for establishing the correspondence between the third ratio and the preset face pitch angle; the method includes:
  • step 303a: third position information of each face feature point included in the first pair of symmetric face feature points in the frontal face image, and third position information of each face feature point included in the second pair of symmetric face feature points in the frontal face image, are acquired.
  • FIG. 3E shows the plurality of face feature points in the frontal face image of this embodiment, that is, the first face image when it is a frontal face image.
  • the first pair of symmetric face feature points are the two inner eye corners, denoted G and H respectively; their third position information in the frontal face image is G(x16, y16, z16) and H(x17, y17, z17).
  • the second pair of symmetric face feature points are the two mouth corners, denoted I and J respectively; their third position information in the frontal face image is I(x20, y20, z20) and J(x21, y21, z21).
  • step 303b: the third position information of the seventh symmetric midpoint of the first pair of symmetric face feature points is obtained according to the third position information of each face feature point included in the first pair of symmetric face feature points.
  • the first pair of symmetric face feature points are the two inner eye corners G(x16, y16, z16) and H(x17, y17, z17);
  • the seventh symmetric midpoint of the first pair of symmetric face feature points is the midpoint of the line segment formed by points G(x16, y16, z16) and H(x17, y17, z17); denoting this midpoint as K, the coordinate position of K(x18, y18, z18) is obtained by the midpoint formula. The specific calculation is given by formulas (28), (29), and (30):
  • step 303c: the third position information of the eighth symmetric midpoint of the second pair of symmetric face feature points is obtained according to the third position information of each face feature point included in the second pair of symmetric face feature points.
  • the second pair of symmetric face feature points are the two mouth corners I(x20, y20, z20) and J(x21, y21, z21);
  • the eighth symmetric midpoint of the second pair of symmetric face feature points is the midpoint of the line segment IJ formed by points I(x20, y20, z20) and J(x21, y21, z21); denoting this midpoint as L, the coordinate position of L(x22, y22, z22) is obtained by the midpoint formula. The specific calculation is given by formulas (31), (32), and (33):
  • step 303d: the length of the eighth line segment formed by the fifth symmetric midpoint and the sixth symmetric midpoint is calculated according to the second position information of the fifth symmetric midpoint and the second position information of the sixth symmetric midpoint.
  • the second position information of the fifth symmetric midpoint is K'(x14, y14, z14), and the second position information of the sixth symmetric midpoint is L'(x15, y15, z15); the length of the eighth line segment K'L' formed by the fifth symmetric midpoint K'(x14, y14, z14) and the sixth symmetric midpoint L'(x15, y15, z15) is the distance from point K'(x14, y14, z14) to point L'(x15, y15, z15), calculated by the distance formula between two points. The specific calculation is given by formula (34):
  • step 303e: the length of the ninth line segment formed by the seventh symmetric midpoint and the eighth symmetric midpoint is calculated according to the third position information of the seventh symmetric midpoint and the third position information of the eighth symmetric midpoint.
  • the third position information of the seventh symmetric midpoint is K(x18, y18, z18), and the third position information of the eighth symmetric midpoint is L(x22, y22, z22); the length of the ninth line segment KL formed by the seventh symmetric midpoint K(x18, y18, z18) and the eighth symmetric midpoint L(x22, y22, z22) is the distance from point K(x18, y18, z18) to point L(x22, y22, z22), calculated by the distance formula between two points.
  • the specific calculation is given by formula (35):
  • step 303f: a correspondence between the third ratio, namely the ratio between the eighth line segment and the ninth line segment, and the preset face pitch angle is established.
  • the eighth line segment is K'L';
  • the ninth line segment is KL;
  • the third ratio is the ratio of K'L' to KL.
  • the correspondence between the third ratio and the preset face pitch angle is obtained as follows: rotate the face and stop the rotation when the face rotation angle equals the first preset face pitch angle;
  • at the first preset face pitch angle, calculate the ratio of K'L' to KL to obtain the first value of the third ratio, and store the correspondence between this first value and the first preset face pitch angle; continue rotating the face until the face rotation angle equals the second preset face pitch angle, calculate the ratio of K'L' to KL at the second preset face pitch angle to obtain the second value of the third ratio, and store the correspondence between this second value and the second preset face pitch angle; repeat the above steps until the correspondences between all third ratios and the preset face pitch angles have been stored.
  • step 304: a correspondence between the second ratio and the preset face side rotation angle is established according to the second position information of the preset plurality of face feature points.
  • the third group correspondence is the correspondence between the second ratio and the preset face side rotation angle.
  • FIG. 3F is a flow chart of the method for establishing the correspondence between the second ratio and the preset face side rotation angle; the method includes:
  • step 304a: the second vertical distance from the first face feature point to the eighth line segment is calculated according to the second position information of the fifth symmetric midpoint, the second position information of the sixth symmetric midpoint, and the second position information of the first face feature point.
  • from step 302a, the second position information of the fifth symmetric midpoint is K'(x14, y14, z14); from step 302b, the second position information of the sixth symmetric midpoint is L'(x15, y15, z15); and from step 301, the second position information of the first face feature point is O'(x13, y13, z13). The second vertical distance from the first face feature point O'(x13, y13, z13) to the eighth line segment K'L' is calculated by the following process:
  • according to the second position information K'(x14, y14, z14) of the fifth symmetric midpoint and the second position information L'(x15, y15, z15) of the sixth symmetric midpoint, calculate the straight line a passing through point K'(x14, y14, z14) and point L'(x15, y15, z15), as follows:
  • the straight line a passing through point K'(x14, y14, z14) and point L'(x15, y15, z15) can also be calculated by other methods, which are not described here.
  • the distance b from the first face feature point O'(x13, y13, z13) to the straight line a is calculated according to the point-to-line distance formula; the specific calculation is given by formula (38):
  • since the straight line a passes through point K'(x14, y14, z14) and point L'(x15, y15, z15), the distance b from the first face feature point O'(x13, y13, z13) to the straight line a is the second vertical distance from the first face feature point O'(x13, y13, z13) to the eighth line segment K'L'; the second vertical distance is therefore the distance b.
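  • the point-to-line distance used in step 304a can be sketched as follows (an illustrative snippet that computes the same quantity with the cross-product form |K'O' × K'L'| / |K'L'| rather than the patent's formula (38); the example coordinates are hypothetical):

```python
import math

def point_line_distance(o, k, l):
    """Distance from point o to the line through points k and l in 3D."""
    kl = [l[i] - k[i] for i in range(3)]  # direction vector of line a
    ko = [o[i] - k[i] for i in range(3)]  # vector from k to the point
    cross = (kl[1] * ko[2] - kl[2] * ko[1],
             kl[2] * ko[0] - kl[0] * ko[2],
             kl[0] * ko[1] - kl[1] * ko[0])
    return (math.sqrt(sum(c * c for c in cross))
            / math.sqrt(sum(c * c for c in kl)))

# O' at (0, 3, 4), line a along the x-axis: the distance is 5.
print(point_line_distance((0.0, 3.0, 4.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
```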
  • step 304b: a correspondence between the second ratio, namely the ratio between the second vertical distance and the ninth line segment, and the preset face side rotation angle is established.
  • the second vertical distance is b;
  • the ninth line segment is KL;
  • the second ratio is the ratio of b to KL.
  • the correspondence between the second ratio and the preset face side rotation angle is obtained as follows: rotate the face and stop the rotation when the face rotation angle equals the first preset face side rotation angle;
  • at the first preset face side rotation angle, calculate the ratio of b to KL to obtain the first value of the second ratio, and store the correspondence between this first value and the first preset face side rotation angle;
  • continue rotating the face until the face rotation angle equals the second preset face side rotation angle, calculate the ratio of b to KL at the second preset face side rotation angle to obtain the second value of the second ratio, and store the correspondence between this second value and the second preset face side rotation angle; repeat the above steps until the correspondences between all second ratios and the preset face side rotation angles have been stored.
  • in summary, the method for establishing a correspondence between a line segment ratio and a preset face rotation angle provided in this embodiment rotates a frontal-face 3D model by a preset angle increment in a three-dimensional Cartesian coordinate system; after each rotation by the preset angle, the coordinate information of the face feature points is acquired, and the correspondence between the line segment ratio and the preset face rotation angle is established from the acquired coordinate information. Since the preset angle increment is small, the line segment ratios and face rotation angles in the correspondence are relatively accurate; and since the correspondence is established in advance, the line segment ratio or face rotation angle can be obtained directly from the correspondence when determining a face rotation angle, which reduces the time required and increases the efficiency of determining the face rotation angle.
  • the device for determining a face rotation angle includes, but is not limited to: a first acquiring module 401, a second acquiring module 402, and a first determining module 403.
  • the first acquiring module 401 is configured to acquire first position information of a preset plurality of face feature points in the face image to be determined, where the number of face feature points is odd, the feature points include a plurality of pairs of symmetric face feature points and a first face feature point, and the plurality of face feature points are not coplanar;
  • the second acquiring module 402 is configured to acquire first position information of the symmetric midpoint of each pair of face feature points according to the first position information of the face feature points included in each pair of face feature points among the plurality of pairs acquired by the first acquiring module 401;
  • the first determining module 403 is configured to determine the face rotation angle of the face image to be determined according to the first position information of the symmetric midpoint of each pair of face feature points acquired by the second acquiring module 402 and the first position information of the first face feature point acquired by the first acquiring module 401.
  • in summary, the device for determining a face rotation angle provided in this embodiment first acquires a preset plurality of pairs of symmetric face feature points and a first face feature point; acquires the first position information of the symmetric midpoint of each pair of face feature points according to the first position information of the face feature points included in that pair; calculates a preset line segment ratio according to the first position information of the symmetric midpoint of each pair of face feature points and the first position information of the first face feature point; and queries the correspondence between preset line segment ratios and face rotation angles according to the line segment ratio to determine the face rotation angle of the face image to be determined. This solves the problem that the face rotation angle cannot otherwise be determined; since the correspondence between preset line segment ratios and face rotation angles is a relatively accurate correspondence between ratio and angle, the device achieves the effect of improving the accuracy of determining the face rotation angle.
  • the plurality of face feature points comprises five feature points: a first pair of symmetric face feature points, a second pair of symmetric face feature points, and a remaining first face feature point.
  • FIG. 4B is a structural block diagram of a second obtaining module 402 according to an embodiment of the present invention.
  • the second acquiring module 402 includes, but is not limited to: a first acquiring submodule 4021 and a second acquiring submodule 4022.
  • the first acquiring submodule 4021 is configured to acquire first position information of the first symmetric midpoint of the first pair of symmetric face feature points according to the first position information of each face feature point included in the first pair of symmetric face feature points acquired by the first acquiring module 401;
  • the second acquiring submodule 4022 is configured to acquire first position information of the second symmetric midpoint of the second pair of symmetric face feature points according to the first position information of each face feature point included in the second pair of symmetric face feature points acquired by the first acquiring module 401.
  • FIG. 4C is a structural block diagram of a first determining module 403 according to an embodiment of the present invention.
  • the first determining module 403 includes, but is not limited to: a first calculation submodule 4031, a second calculation submodule 4032, and a third acquiring submodule 4033.
  • the first calculation submodule 4031 is configured to calculate the length of the first line segment formed by the first symmetric midpoint and the first face feature point according to the first position information of the first symmetric midpoint acquired by the first acquiring submodule 4021 and the first position information of the first face feature point acquired by the first acquiring module 401;
  • the second calculation submodule 4032 is configured to calculate the length of the second line segment formed by the second symmetric midpoint and the first face feature point according to the first position information of the second symmetric midpoint acquired by the second acquiring submodule 4022 and the first position information of the first face feature point acquired by the first acquiring module 401;
  • the third acquiring submodule 4033 is configured to obtain the face pitch angle of the face image to be determined from the correspondence between the first ratio and the face pitch angle, according to the first ratio between the length of the first line segment calculated by the first calculation submodule 4031 and the length of the second line segment calculated by the second calculation submodule 4032.
  • FIG. 4D is a structural block diagram of a first determining module 403 according to another embodiment of the present invention.
  • the first determining module 403 includes, but is not limited to: a third calculation submodule 4034, a fourth calculation submodule 4035, and a fourth acquiring submodule 4036.
  • the third calculation submodule 4034 is configured to calculate the first vertical distance between the first face feature point and the third line segment, and the length of the third line segment, according to the first position information of the first symmetric midpoint acquired by the first acquiring submodule 4021, the first position information of the second symmetric midpoint acquired by the second acquiring submodule 4022, and the first position information of the first face feature point acquired by the first acquiring module 401; the third line segment is the line segment formed by the first symmetric midpoint and the second symmetric midpoint.
  • the fourth calculation submodule 4035 is configured to calculate the length of the fourth line segment according to the length of the third line segment calculated by the third calculation submodule 4034 and the face pitch angle of the face image to be determined acquired by the third acquiring submodule 4033; the fourth line segment is the line segment between the third symmetric midpoint and the fourth symmetric midpoint, where the third symmetric midpoint is the symmetric midpoint of the first pair of face feature points in the frontal face image and the fourth symmetric midpoint is the symmetric midpoint of the second pair of face feature points in the frontal face image.
  • the fourth acquiring submodule 4036 is configured to obtain the face side rotation angle of the face image to be determined from the correspondence between the second ratio and the face side rotation angle, according to the second ratio between the first vertical distance calculated by the third calculation submodule 4034 and the length of the fourth line segment calculated by the fourth calculation submodule 4035.
  • FIG. 4E is a structural block diagram of the fourth calculation submodule 4035, which calculates the fourth line segment, according to another embodiment of the present invention. The fourth calculation submodule 4035 includes, but is not limited to, an acquiring unit 4035a and a calculating unit 4035b.
  • The acquiring unit 4035a is configured to acquire the corresponding third ratio from the correspondence between the face pitch angle and the third ratio, according to the face pitch angle obtained by the third acquiring submodule 4033.
  • The calculating unit 4035b is configured to calculate the length of the fourth line segment according to the third ratio obtained by the acquiring unit 4035a and the length of the third line segment calculated by the third calculation submodule 4034.
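The side-rotation path in FIGS. 4D and 4E can be sketched similarly: the perpendicular distance from the first facial feature point to the third line segment is divided by the length of the fourth line segment, which is itself recovered from the third segment via a pitch-dependent third ratio. All coordinates, the toy third ratio, and the lookup table below are hypothetical illustrations, not values from the patent.

```python
import math

def point_to_line_distance(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    ax, ay = a; bx, by = b; px, py = p
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay)

def yaw_from_second_ratio(second_ratio, table):
    # Nearest-neighbour lookup in a (second ratio -> side angle) table.
    return min(table.items(), key=lambda kv: abs(kv[0] - second_ratio))[1]

# Hypothetical symmetric midpoints (third line segment m1-m2) and nose tip.
m1, m2 = (50.0, 40.0), (50.0, 80.0)
nose_tip = (58.0, 60.0)

third_len = math.hypot(m2[0] - m1[0], m2[1] - m1[1])   # 40.0
dist = point_to_line_distance(nose_tip, m1, m2)        # 8.0

# The fourth segment is the frontal-face counterpart of the third one;
# its length is recovered via a third ratio that depends on the pitch
# angle already determined (toy value of 1.0 here).
third_ratio = 1.0
fourth_len = third_ratio * third_len

second_ratio = dist / fourth_len                       # 0.2
table = {0.0: 0.0, 0.2: 15.0, 0.4: 30.0}               # toy table
print(yaw_from_second_ratio(second_ratio, table))      # -> 15.0
```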
  • FIG. 4F is a structural block diagram of an apparatus for determining a face rotation angle according to another embodiment of the present invention. As shown in FIG. 4F, the apparatus further includes, but is not limited to:
  • a determining and calculating module 404, configured to determine a fifth line segment according to the first position information of the two facial feature points of any pair of symmetric facial feature points, and to calculate the angle between the fifth line segment and the horizontal as the face rotation angle of the face image to be determined.
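The in-plane rotation handled by module 404 reduces to the angle between the segment through a symmetric pair and the horizontal. A minimal sketch, with hypothetical eye coordinates and the usual image convention that the y-axis points downward:

```python
import math

def roll_angle(p, q):
    # Angle, in degrees, between the segment through a symmetric
    # feature-point pair (e.g. the two eyes) and the horizontal axis.
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

# Hypothetical eye positions in image coordinates (y grows downward).
left_eye, right_eye = (30.0, 40.0), (70.0, 50.0)
print(round(roll_angle(left_eye, right_eye), 1))  # -> 14.0
```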
  • The apparatus further includes:
  • a third acquiring module 405, configured to acquire second position information of the preset plurality of facial feature points in a first face image, where the first face image is a face image captured after the face has rotated by a preset face rotation angle;
  • an establishing module 406, configured to establish the correspondence between the line segment ratio and the preset face rotation angle according to the second position information of the preset plurality of facial feature points acquired by the third acquiring module 405.
  • The preset face rotation angle includes a preset face pitch angle.
  • FIG. 4G is a structural block diagram of the establishing module 406, which establishes the correspondence between the first ratio and the preset face pitch angle, according to another embodiment of the present invention. As shown in FIG. 4G, the establishing module 406 includes a fifth acquiring submodule 4061, a sixth acquiring submodule 4062, a fifth calculation submodule 4063, a sixth calculation submodule 4064, and a first establishing submodule 4065.
  • The fifth acquiring submodule 4061 is configured to obtain second position information of the fifth symmetric midpoint of the first pair of symmetric facial feature points, according to the second position information of each facial feature point included in the first pair of symmetric facial feature points acquired by the third acquiring module 405.
  • The sixth acquiring submodule 4062 is configured to obtain second position information of the sixth symmetric midpoint of the second pair of symmetric facial feature points, according to the second position information of each facial feature point included in the second pair of symmetric facial feature points acquired by the third acquiring module 405.
  • The fifth calculation submodule 4063 is configured to calculate the length of a sixth line segment formed by the fifth symmetric midpoint and the first facial feature point, according to the second position information of the fifth symmetric midpoint obtained by the fifth acquiring submodule 4061 and the second position information of the first facial feature point acquired by the third acquiring module 405.
  • The sixth calculation submodule 4064 is configured to calculate the length of a seventh line segment formed by the sixth symmetric midpoint and the first facial feature point, according to the second position information of the sixth symmetric midpoint obtained by the sixth acquiring submodule 4062 and the second position information of the first facial feature point acquired by the third acquiring module 405.
  • The first establishing submodule 4065 is configured to establish the correspondence between the first ratio and the preset face pitch angle, where the first ratio is the ratio between the length of the sixth line segment calculated by the fifth calculation submodule 4063 and the length of the seventh line segment calculated by the sixth calculation submodule 4064.
  • FIG. 4H is a structural block diagram of the establishing module 406, which establishes the correspondence between the third ratio and the preset face pitch angle, according to another embodiment of the present invention. The establishing module 406 includes, but is not limited to, a seventh acquiring submodule 406a, an eighth acquiring submodule 406b, a ninth acquiring submodule 406c, a seventh calculation submodule 406d, an eighth calculation submodule 406e, and a second establishing submodule 406f.
  • The seventh acquiring submodule 406a is configured to acquire third position information of each facial feature point included in the first pair of symmetric facial feature points and the second pair of symmetric facial feature points.
  • The eighth acquiring submodule 406b is configured to obtain third position information of the seventh symmetric midpoint of the first pair of symmetric facial feature points, according to the third position information of each facial feature point included in the first pair of symmetric facial feature points acquired by the seventh acquiring submodule 406a.
  • The ninth acquiring submodule 406c is configured to obtain third position information of the eighth symmetric midpoint of the second pair of symmetric facial feature points, according to the third position information of each facial feature point included in the second pair of symmetric facial feature points acquired by the seventh acquiring submodule 406a.
  • The seventh calculation submodule 406d is configured to calculate the length of an eighth line segment formed by the fifth symmetric midpoint and the sixth symmetric midpoint, according to the second position information of the fifth symmetric midpoint obtained by the fifth acquiring submodule 4061 and the second position information of the sixth symmetric midpoint obtained by the sixth acquiring submodule 4062.
  • The eighth calculation submodule 406e is configured to calculate the length of a ninth line segment formed by the seventh symmetric midpoint and the eighth symmetric midpoint, according to the third position information of the seventh symmetric midpoint obtained by the eighth acquiring submodule 406b and the third position information of the eighth symmetric midpoint obtained by the ninth acquiring submodule 406c.
  • The second establishing submodule 406f is configured to establish the correspondence between the third ratio and the preset face pitch angle, where the third ratio is the ratio between the length of the eighth line segment calculated by the seventh calculation submodule 406d and the length of the ninth line segment calculated by the eighth calculation submodule 406e.
  • The preset face rotation angle includes a preset face side rotation angle.
  • FIG. 4I is a structural block diagram of the establishing module 406, which establishes the correspondence between the second ratio and the preset face side rotation angle, according to another embodiment of the present invention. The establishing module 406 further includes, but is not limited to, a ninth calculation submodule 4066 and a third establishing submodule 4067.
  • The ninth calculation submodule 4066 is configured to calculate the second vertical distance from the first facial feature point to the eighth line segment, according to the second position information of the fifth symmetric midpoint obtained by the fifth acquiring submodule 4061, the second position information of the sixth symmetric midpoint obtained by the sixth acquiring submodule 4062, and the second position information of the first facial feature point acquired by the third acquiring module 405.
  • The third establishing submodule 4067 is configured to establish the correspondence between the second ratio and the preset face side rotation angle, where the second ratio is the ratio between the second vertical distance calculated by the ninth calculation submodule 4066 and the length of the ninth line segment calculated by the eighth calculation submodule 406e.
  • The apparatus for determining the face rotation angle can be implemented by an electronic device. In practical applications, the first acquiring module 401, the second acquiring module 402, the first determining module 403, the determining and calculating module 404, the third acquiring module 405, and the establishing module 406 in the apparatus, as well as the submodules included in each module, can be implemented by a central processing unit (CPU), a digital signal processor (DSP), a micro control unit (MCU), or a field-programmable gate array (FPGA) in the device.
  • In summary, the apparatus for determining a face rotation angle first acquires the preset multiple pairs of symmetric facial feature points and the first facial feature point; acquires first position information of the symmetric midpoint of each pair of facial feature points according to the first position information of the facial feature points included in each pair; calculates a preset line segment ratio according to the first position information of the symmetric midpoint of each pair and the first position information of the first facial feature point; and queries the preset correspondence between line segment ratios and face rotation angles with that ratio to determine the face rotation angle of the face image to be determined. This solves the problem that the face rotation angle cannot be determined. Because the preset correspondence between the line segment ratio and the face rotation angle is a relatively accurate correspondence between ratio and angle, the method provided by the embodiments of the present invention improves the accuracy of determining the face rotation angle.
  • To establish the correspondence between the line segment ratio and the preset face rotation angle, a frontal-face 3D model is rotated in a three-dimensional Cartesian coordinate system by a preset angular step; coordinate information of the facial feature points is acquired after each rotation, and the correspondence between the line segment ratio and the preset face rotation angle is established from the acquired coordinate information. Since the preset step is small, the line segment ratios and face rotation angles in the correspondence are accurate; and since the correspondence is established in advance, the line segment ratio or face rotation angle can be obtained directly from the correspondence while determining the face rotation angle, reducing the time required and improving efficiency.
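The table-building procedure just described (rotate a frontal 3D face model in small angular steps and record the segment ratio at each step) can be sketched as follows. The 3D model coordinates, the choice of pitch as rotation about the x-axis, and the orthographic projection are all simplifying assumptions for illustration, not the patent's concrete model.

```python
import math

def rotate_x(p, deg):
    # Rotate a 3D point about the x-axis (pitch) by `deg` degrees.
    x, y, z = p
    t = math.radians(deg)
    return (x, y * math.cos(t) - z * math.sin(t),
            y * math.sin(t) + z * math.cos(t))

def project(p):
    # Orthographic projection onto the image plane (drop z).
    return (p[0], p[1])

def build_pitch_table(model, step=5, max_angle=60):
    # For each pitch angle, project the two symmetric midpoints and the
    # nose tip of the 3D model and store (first ratio -> pitch angle).
    table = {}
    for angle in range(-max_angle, max_angle + 1, step):
        m1 = project(rotate_x(model["mid_eyes"], angle))
        m2 = project(rotate_x(model["mid_mouth"], angle))
        nose = project(rotate_x(model["nose_tip"], angle))
        d1 = math.hypot(m1[0] - nose[0], m1[1] - nose[1])
        d2 = math.hypot(m2[0] - nose[0], m2[1] - nose[1])
        table[d1 / d2] = angle
    return table

# Hypothetical frontal-face 3D model: midpoints lie in the z=0 plane,
# the nose tip protrudes toward the camera.
model = {"mid_eyes": (0.0, -20.0, 0.0),
         "mid_mouth": (0.0, 20.0, 0.0),
         "nose_tip": (0.0, 0.0, 15.0)}
table = build_pitch_table(model)
print(table[1.0])  # -> 0 (frontal face: equal distances above/below the nose)
```

At determination time the measured ratio is matched against the nearest key of such a precomputed table, which is why a small angular step yields an accurate angle.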
  • When the apparatus provided in the above embodiments determines a face rotation angle, the division into the above functional modules is only an example. In practical applications, the functions may be allocated to different functional modules as needed; that is, the internal structure of the electronic device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus provided in the above embodiments belongs to the same concept as the method embodiments for determining a face rotation angle; its specific implementation process is described in detail in the method embodiments and is not repeated here.
  • The electronic device 500 is configured to implement the method for determining a face rotation angle provided by the foregoing embodiments. The electronic device 500 of the present invention may include one or more of the following components: a processor for executing computer program instructions to complete various processes and methods; random access memory (RAM) and read-only memory (ROM) for storing information and program instructions; memory for storing data and information; I/O devices; interfaces; antennas; and the like.
  • Specifically, the electronic device 500 may include a radio frequency (RF) circuit 510, a memory 520, an input unit 530, a display unit 540, a sensor 550, an audio circuit 560, a Wi-Fi (Wireless Fidelity) module 570, a processor 580, a power supply 582, a camera 590, and other components.
  • Those skilled in the art can understand that the structure shown in FIG. 5 does not constitute a limitation on the terminal, which may include more or fewer components than those illustrated, combine certain components, or use a different arrangement of components.
  • The RF circuit 510 can be used to receive and transmit signals during information transmission and reception or during a call. Specifically, after downlink information from a base station is received, it is passed to the processor 580 for processing; in addition, uplink data is sent to the base station. Generally, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 510 can also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and the like.
  • the memory 520 can be used to store software programs and modules, and the processor 580 executes various functional applications and data processing of the electronic device 500 by running software programs and modules stored in the memory 520.
  • The memory 520 may mainly include a storage program area and a storage data area. The storage program area may store an operating system, an application required by at least one function (such as a sound playing function or an image playing function), and the like; the storage data area may store data (such as audio data or a phone book) created according to the use of the electronic device 500.
  • In addition, the memory 520 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • the input unit 530 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the electronic device 500.
  • the input unit 530 may include a touch panel 531 and other input devices 532.
  • The touch panel 531, also referred to as a touch screen, can collect touch operations by the user on or near it (such as operations performed on or near the touch panel 531 by the user with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connecting device according to a preset program. Optionally, the touch panel 531 can include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 580, and can receive commands from the processor 580 and execute them.
  • In addition, the touch panel 531 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave.
  • the input unit 530 may also include other input devices 532. Specifically, other input devices 532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • the display unit 540 can be used to display information input by the user or information provided to the user and various menus of the electronic device 500.
  • the display unit 540 can include a display panel 541.
  • the display panel 541 can be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
  • Further, the touch panel 531 can cover the display panel 541. When the touch panel 531 detects a touch operation on or near it, it transmits the operation to the processor 580 to determine the type of the touch event, and the processor 580 then provides a corresponding visual output on the display panel 541 according to the type of the touch event. Although in FIG. 5 the touch panel 531 and the display panel 541 are shown as two independent components implementing the input and output functions of the electronic device 500, in some embodiments the touch panel 531 and the display panel 541 may be integrated to implement the input and output functions of the electronic device 500.
  • The electronic device 500 may also include at least one type of sensor 550, such as a gyro sensor, a magnetic induction sensor, a light sensor, a motion sensor, or another sensor. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 541 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 541 and/or the backlight when the electronic device 500 moves to the ear. As one type of motion sensor, an acceleration sensor can detect the magnitude of acceleration in each direction (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that identify the posture of the electronic device (such as switching between horizontal and vertical screens, related games, and magnetometer attitude calibration), for vibration-recognition-related functions (such as a pedometer or tapping), and the like. Other sensors that can be configured in the electronic device 500, such as a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described herein.
  • The audio circuit 560, the speaker 561, and the microphone 562 can provide an audio interface between the user and the electronic device 500. The audio circuit 560 can transmit the electrical signal converted from the received audio data to the speaker 561, which converts it into a sound signal for output. On the other hand, the microphone 562 converts the collected sound signal into an electrical signal, which is received by the audio circuit 560 and converted into audio data; the audio data is then processed by the processor 580 and sent, for example via the RF circuit 510, to another terminal, or output to the memory 520 for further processing.
  • Wi-Fi is a short-range wireless transmission technology. Through the Wi-Fi module 570, the electronic device 500 can help users send and receive e-mail, browse web pages, access streaming media, and so on; it provides users with wireless broadband Internet access. Although FIG. 5 shows the Wi-Fi module 570, it can be understood that it is not an essential part of the electronic device 500 and may be omitted as needed without changing the essence of the disclosure.
  • The processor 580 is the control center of the electronic device 500. It connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device 500 and processes data by running or executing software programs and/or modules stored in the memory 520 and invoking data stored in the memory 520. Optionally, the processor 580 may include one or more processing units; preferably, the processor 580 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 580.
  • The electronic device 500 also includes a power supply 582 (such as a battery) that supplies power to the various components. Preferably, the power supply can be logically coupled to the processor 580 through a power management system, so that functions such as charging, discharging, and power consumption are managed through the power management system.
  • The camera 590 is generally composed of a lens, an image sensor, an interface, a digital signal processor, a CPU, a display screen, and the like. The lens is fixed above the image sensor, and the focus can be changed by manually adjusting the lens. The image sensor is equivalent to the "film" of a conventional camera and is the heart of image capture. The interface is used to connect the camera to the electronic device motherboard via a cable or a board-to-board connector, and to send the acquired image to the memory 520. The digital signal processor processes the acquired image through mathematical operations, converts the collected analog image into a digital image, and sends it through the interface to the memory 520.
  • Although not shown, the electronic device 500 may further include a Bluetooth module and the like, and details are not described herein.
  • In addition to the one or more processors 580, the electronic device 500 includes a memory and one or more programs, where the one or more programs are stored in the memory and are configured to be executed by the one or more processors. The one or more programs include instructions for performing the following operations:
  • determining the face rotation angle of the face image to be determined according to the first position information of the symmetric midpoint of each pair of facial feature points and the first position information of the first facial feature point.
  • The memory of the electronic device 500 further includes instructions for performing the following operations:
  • The plurality of facial feature points includes five facial feature points: a first pair of symmetric facial feature points, a second pair of symmetric facial feature points, and a remaining first facial feature point.
  • the memory of the electronic device 500 further includes an instruction for performing the following operations:
  • acquiring the first position information of the symmetric midpoint of each pair of facial feature points according to the first position information of the facial feature points included in each pair of the multiple pairs of facial feature points includes:
  • the memory of the electronic device 500 further includes an instruction for performing the following operations:
  • determining a face rotation angle of the to-be-determined face image includes:
  • the memory of the electronic device 500 further includes an instruction for performing the following operations:
  • the memory of the electronic device 500 further includes an instruction for performing the following operations:
  • determining a face rotation angle of the to-be-determined face image includes:
  • the memory of the electronic device 500 further includes an instruction for performing the following operations:
  • calculating a length of the fourth line segment according to the length of the third line segment and the face pitch angle of the face image to be determined includes:
  • a length of the fourth line segment is calculated based on the third ratio and the length of the third line segment.
  • the memory of the electronic device 500 further includes an instruction for performing the following operations:
  • the method further includes:
  • the memory of the electronic device 500 further includes an instruction for performing the following operations:
  • the method further includes:
  • the memory of the electronic device 500 further includes an instruction for performing the following operations:
  • the preset face rotation angle includes a preset face pitch angle
  • the memory of the electronic device 500 further includes an instruction for performing the following operations:
  • the establishing a correspondence between the line segment ratio and the preset face rotation angle according to the second position information of the preset plurality of facial feature points further includes:
  • the memory of the electronic device 500 further includes an instruction for performing the following operations:
  • the preset face rotation angle includes a preset face side rotation angle
  • the establishing a correspondence between the line segment ratio and the preset face rotation angle according to the second position information of the preset plurality of facial feature points further includes:
  • In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners.
  • The device embodiments described above are merely illustrative. The division of the units is only a logical function division; in actual implementation there may be other division manners, for example: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The mutual coupling, direct coupling, or communication connection between the components shown or discussed may be indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
  • The units described above as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • In addition, each functional unit in each embodiment of the present invention may be integrated into one processing unit, each unit may serve separately as one unit, or two or more units may be integrated into one unit; the integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional units.
  • Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by a program instructing relevant hardware. The foregoing program may be stored in a computer-readable storage medium, and when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes various media that can store program code, such as a mobile storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
  • Alternatively, if the above integrated unit of the present invention is implemented in the form of a software function module and sold or used as a standalone product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention may be embodied, in essence or in part, in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the various embodiments of the present invention. The foregoing storage medium includes various media that can store program code, such as a mobile storage device, a ROM, a RAM, a magnetic disk, or an optical disk.
  • The technical solutions of the embodiments of the present invention solve the problem that the face rotation angle cannot be determined. Since the correspondence between the preset line segment ratio and the face rotation angle is a relatively accurate correspondence between ratio and angle, the method for determining the face rotation angle provided by the embodiments of the present invention greatly improves the accuracy of determining the face rotation angle.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Quality & Reliability (AREA)

Abstract

A method, apparatus, and computer storage medium for determining a face rotation angle. The method includes: acquiring first position information of a preset plurality of facial feature points in a face image to be determined, where the number of the facial feature points is odd, the facial feature points include multiple pairs of symmetric facial feature points and one first facial feature point, and the facial feature points are not coplanar (101); acquiring first position information of the symmetric midpoint of each pair of facial feature points according to the first position information of the facial feature points included in each pair of the multiple pairs of facial feature points (102); and determining the face rotation angle of the face image to be determined according to the first position information of the symmetric midpoint of each pair of facial feature points and the first position information of the first facial feature point (103).

Description

Method, Apparatus, and Computer Storage Medium for Determining a Face Rotation Angle

Technical Field

The present invention relates to the field of face recognition technology, and in particular to a method, an apparatus, and a computer storage medium for determining a face rotation angle.
Background

Face recognition technology recognizes a face image from an image captured by a camera. When the camera captures a face, the head may move up, down, to the left, or to the right, so that the face in the captured image deviates from the face in a frontal face image by a certain angle; this angle is the face rotation angle. Face recognition technology needs to determine the face rotation angle in the image, because only with this angle can the face image be recognized from the image.

At present, the face rotation angle is determined as follows: in advance, the face performs head movements in different rotation directions, the camera captures face images in the different rotation directions, the texture features of the face image in each rotation direction are analyzed separately, and each rotation direction is associated with the texture features of the face image in that direction to form a correspondence. When the face rotation angle of a face image needs to be determined, the texture features of that face image are analyzed, the texture features most similar to them are found in the correspondence, the face rotation direction corresponding to the most similar texture features is obtained, and the face rotation angle of the face image is estimated from the rotation direction and the texture features.

The existing texture-feature-based method can only determine an approximate face rotation angle rather than a specific one; moreover, texture feature analysis is a complex process, and an incorrect face rotation angle is easily determined because the texture feature analysis is inaccurate.
Summary of the Invention

To solve the problem in the prior art that the face rotation angle cannot be determined accurately, embodiments of the present invention provide a method, an apparatus, and a computer storage medium for determining a face rotation angle. The technical solutions are as follows:

In a first aspect, an embodiment of the present invention provides a method for determining a face rotation angle, the method including:

acquiring first position information of a preset plurality of facial feature points in a face image to be determined, where the number of the facial feature points is odd, the facial feature points include multiple pairs of symmetric facial feature points and one first facial feature point, and the facial feature points are not coplanar;

acquiring first position information of the symmetric midpoint of each pair of facial feature points according to the first position information of the facial feature points included in each pair of the multiple pairs of facial feature points; and

determining the face rotation angle of the face image to be determined according to the first position information of the symmetric midpoint of each pair of facial feature points and the first position information of the first facial feature point.

In a second aspect, an embodiment of the present invention further provides an apparatus for determining a face rotation angle, the apparatus including:

a first acquiring module, configured to acquire first position information of a preset plurality of facial feature points in a face image to be determined, where the number of the facial feature points is odd, the facial feature points include multiple pairs of symmetric facial feature points and one first facial feature point, and the facial feature points are not coplanar;

a second acquiring module, configured to acquire first position information of the symmetric midpoint of each pair of facial feature points according to the first position information of the facial feature points included in each pair of the multiple pairs of facial feature points acquired by the first acquiring module; and

a first determining module, configured to determine the face rotation angle of the face image to be determined according to the first position information of the symmetric midpoint of each pair of facial feature points acquired by the second acquiring module and the first position information of the first facial feature point acquired by the first acquiring module.

In a third aspect, an embodiment of the present invention further provides a computer storage medium storing computer-executable instructions for performing the method for determining a face rotation angle described in the embodiments of the present invention.

The technical solutions provided by the embodiments of the present invention bring the following beneficial effects:

First, the preset multiple pairs of symmetric facial feature points and one first facial feature point are acquired; the first position information of the symmetric midpoint of each pair of facial feature points is acquired according to the first position information of the facial feature points included in each pair; a preset line segment ratio is calculated according to the first position information of the symmetric midpoint of each pair of facial feature points and the first position information of the first facial feature point; and the preset correspondence between line segment ratios and face rotation angles is queried with the calculated ratio to determine the face rotation angle of the face image to be determined. This solves the problem that the face rotation angle cannot be determined. Because the preset correspondence between the line segment ratio and the face rotation angle is a relatively accurate correspondence between ratio and angle, the method for determining a face rotation angle provided by the embodiments of the present invention greatly improves the accuracy of determining the face rotation angle.
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明一个实施例中提供的确定人脸转动角度的方法流程图;
图2A是本发明一个实施例中提供的确定人脸转动角度的方法流程图;
图2B是本发明一个实施例中提供的待确定人脸图像中的部分特征点标记示意图;
图2C是本发明一个实施例中提供的确定待确定人脸图像中的人脸俯仰角的方法流程图;
图2D是本发明一个实施例中提供的确定待确定人脸图像中的人脸侧转角的方法流程图;
图3A是本发明一个实施例中提供的确定线段比值与预设人脸转动角度的对应关系的方法流程图;
图3B是本发明另一个实施例中提供的第一人脸图像中的部分特征点标记示意图;
图3C是本发明一个实施例中提供的建立第一比值与预设人脸俯仰角之间的对应关系的过程的方法流程图;
图3D是本发明一个实施例中提供的建立第三比值与预设人脸俯仰角之间的对应关系的过程的方法流程图;
图3E是本发明一个实施例中提供的第一人脸图像为正面人脸图像时该正面人脸图像中的部分特征点示意图;
图3F是本发明一个实施例中提供的建立第二比值与预设人脸侧转角之间的对应关系的过程的方法流程图;
图4A是本发明一个实施例中提供的确定人脸转动角度装置的结构方框图;
图4B是本发明另一个实施例中提供的获取每对人脸特征点的对称中点的第一位置信息的装置的结构方框图;
图4C是本发明另一个实施例提供的确定待确定人脸图像的人脸俯仰角的装置的结构方框图;
图4D是本发明另一个实施例提供的确定待确定人脸图像的人脸侧转角的装置的结构方框图;
图4E是本发明另一个实施例提供的一种计算第四线段的装置的结构方框图;
图4F是本发明另一个实施例提供的确定人脸转动角度的装置的结构方框图;
图4G是本发明另一个实施例提供的建立第一比值与预设人脸俯仰角的对应关系的装置的结构方框图;
图4H是本发明再一个实施例提供的第三比值与预设人脸俯仰角的对 应关系的装置的结构方框图;
图4I是本发明再一个实施例提供的建立第二比值与预设人脸侧转角的对应关系的装置的结构方框图;
图5是本发明部分实施例中提供的电子设备的结构方框图。
具体实施方式
为使本发明的目的、技术方案和优点更加清楚,下面将结合附图对本发明实施方式作进一步地详细描述。文中所讲的“电子设备”可以包括智能手机、平板电脑、智能电视、电子书阅读器、MP3播放器(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)、MP4(Moving Picture Experts Group Audio Layer IV,动态影像专家压缩标准音频层面4)播放器、膝上型便携计算机和台式计算机等等。
图1是本发明一个实施例中提供的确定人脸转动角度的方法流程图。参见图1,该方法包括:
在步骤101中,获取预设的多个人脸特征点在待确定人脸图像中的第一位置信息,该多个人脸特征点的数目为奇数,包括多对对称的人脸特征点和一个第一人脸特征点,该多个人脸特征点不共面。
在步骤102中,根据该多对人脸特征点中的每对人脸特征点包括的人脸特征点的第一位置信息,获取该每对人脸特征点的对称中点的第一位置信息。
在步骤103中,根据该每对人脸特征点的对称中点的第一位置信息和该第一人脸特征点的第一位置信息,确定该待确定人脸图像的人脸转动角度。
综上所述,本实施例提供的确定人脸转动角度的方法,首先获取预设的多对对称的人脸特征点和一个第一人脸特征点;根据该多对人脸特征点中的每对人脸特征点包括的人脸特征点的第一位置信息,获取该每对人脸 特征点的对称中点的第一位置信息;根据该每对人脸特征点的对称中点的第一位置信息和该第一人脸特征点的第一位置信息,计算预设的线段比值,根据该线段比值查询预设的线段比值与人脸转动角度的对应关系,确定该待确定人脸图像的人脸转动角度;解决了不能确定人脸转动角度的问题;由于该预设的线段比值与人脸转动角度的对应关系是一个比较精确的线段比值与角度的对应关系,所以本发明实施例提供的确定人脸转动角度的方法达到了提高确定人脸转动角度精确性的效果。
图2A是本发明一个实施例中提供的确定人脸转动角度的方法流程图。该方法通过获取预设的多对对称的人脸特征点和一个第一人脸特征点,并获取该预设的多对对称的人脸特征点和一个第一人脸特征点在待确定人脸图像中的坐标位置信息;根据该坐标位置信息,确定该待确定人脸图像的人脸转动角度。参见图2A,该方法包括:
在步骤200中,检测待确定的人脸图像中预设的多个人脸特征点。
其中,预设的多个人脸特征点是选取在人脸中容易识别的点,预设的人脸特征点位于人脸器官的轮廓上,可以是人脸器官轮廓的转折点。例如,预设特征点可以是内眼角、外眼角、嘴角、眉梢、眉头或鼻尖等,内眼角和外眼角都是眼睛轮廓的转折点,嘴角是嘴轮廓的转折点,眉梢和眉头是眉毛轮廓的转折点,鼻尖是鼻子轮廓的转折点。预设的多个特征点中有部分特征点具有左右对称性,例如,像人脸中的两个内眼角、两个外眼角、两个眉梢、两个眉头、两个嘴角都具有左右对称性。
其中,预设的多个人脸特征点的数目为奇数,例如,该数目可以为5或7等,包括多对对称的人脸特征点和剩下的一个第一人脸特征点,该多个人脸特征点不共面。
作为一种实施方式,在本实施例中,该多个人脸特征点可以包括五个,该五个人脸特征点包括第一对对称的人脸特征点、第二对对称的人脸特征 点和剩下的一个第一人脸特征点,其中,在本实施例中,该第一对对称的人脸特征点可以为2个内眼角,该第二对对称的人脸特征点可以为2个嘴角,剩下的一个第一人脸特征点为鼻尖。
本步骤可以为:首先,通过人脸检测技术,检测该待确定人脸图像中的人脸部分,然后通过人脸特征点检测技术在人脸部分中检测第一对对称的人脸特征点,也即2个内眼角、第二对对称的人脸特征点,也即2个嘴角和剩下的一个第一人脸特征点,也即鼻尖。
当然在检测完预设的人脸特征点后,可以对检测到的人脸特征点进行标记。参见图2B,图2B是本实施例提供的待确定人脸图像的特征点标记图(其中,图中特征点不仅包括检测到的人脸特征点,还包括由检测到的对称的人脸特征点构成的对称中点和其它点,将在后述内容中说明)。如图2B所示,将检测到的预设的第一对对称的人脸特征点,也即2个内眼角,分别记为C’和D’,将检测到的预设的第二对对称的人脸特征点,也即2个嘴角,分别记为E’和F’,将检测到的预设的剩下的一个第一人脸特征点,也即鼻尖,记为N’。
在步骤201中,获取预设的多个人脸特征点在待确定人脸图像中的第一位置信息。
其中,第一位置信息是指将该待确定人脸图像放置于二维直角坐标系或三维直角坐标时,该人脸特征点在该直角坐标系中的坐标位置。该二维直角坐标系是通过两根坐标轴,分别为x,y轴,x,y轴为两个相互垂直的轴向,表示平面的一种方法,所以通过该二维直角坐标系获取到的该人脸特征点的第一位置信息的坐标形式为(x,y);该三维直角坐标系是通过三根坐标轴,分别为x,y,z轴,x,y,z轴为两两相互垂直的三个轴向,表示空间的一种方法,所以通过该三维直角坐标系获取到的该人脸特征点的第一位置信息的坐标形式为(x,y,z)。
检测到人脸特征点后,自动获取人脸特征点的坐标,该人脸特征点坐标形式为(x,y),并将定位后的坐标输出到终端,这样终端就可以直接获取定位后的人脸特征点的坐标位置。例如,终端获取到预设的第一对对称的人脸特征点(2个内眼角)的坐标位置分别为C’(x1,y1)和D’(x2,y2),第二对对称的人脸特征点(2个嘴角)的坐标位置分别为E’(x3,y3),F’(x4,y4),剩下的一个第一人脸特征点(鼻尖)的坐标位置为N’(x5,y5)。示例性的,假设获取到的该5个人脸特征点的坐标位置分别为:C’(0,0)、D’(2,2)、E’(1,-2),F’(2,-1)、N’(1.5,0)。
在步骤202中,根据该第一对对称的人脸特征点包括的每个人脸特征点的第一位置信息,获取该第一对对称的人脸特征点的第一对称中点的第一位置信息。
仍以上述例子为例,则该第一对对称的人脸特征点为2个内眼角,该2个内眼角坐标分别为C’(x1,y1)和D’(x2,y2),则该第一对对称的人脸特征点的第一对称中点为点C’(x1,y1)和D’(x2,y2)构成的线段C’D’的中点,如图2B所示,将该中点记为A’(x6,y6),则A’(x6,y6)的坐标位置可通过中点计算公式得到,具体计算如下式(1)和(2):
x6=(x1+x2)/2        (1)
y6=(y1+y2)/2        (2)
例如,当C’(x1,y1)和D’(x2,y2)的坐标位置分别为:C’(0,0)、D’(2,2)时,点A’(x6,y6)的坐标位置通过下式(3)和(4)计算得到:
x6=(0+2)/2=1        (3)
y6=(0+2)/2=1        (4)
所以,该第一对对称的人脸特征点的第一对称中点的第一位置信息为A’(1,1)。
在步骤203中,根据该第二对对称的人脸特征点包括的每个人脸特征点的第一位置信息,获取该第二对对称的人脸特征点的第二对称中点的第一位置信息。
例如,该第二对对称的人脸特征点为2个嘴角,该两个嘴角的坐标分别为E’(x3,y3)和F’(x4,y4),则该第二对对称的人脸特征点的第二对称中点为点E’(x3,y3)和F’(x4,y4)构成的线段E’F’的中点,如图2B所示,将该中点记为B’,则B’(x7,y7)的坐标位置通过中点计算公式得到,具体计算如下式(5)和(6):
x7=(x3+x4)/2        (5)
y7=(y3+y4)/2        (6)
例如:点E’(x3,y3)和F’(x4,y4)的具体坐标为E’(1,-2)和F’(2,-1)时,该第二对称中点的坐标通过如下式(7)和(8)计算得到:
x7=(1+2)/2=1.5        (7)
y7=(-2+(-1))/2=-1.5        (8)
所以,该第二对对称的人脸特征点的第二对称中点的第一位置信息为B’(1.5,-1.5)。
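步骤202、203中由中点公式计算对称中点的过程,可用如下示意性Python代码表达(其中函数名 midpoint 与变量命名均为本文为便于说明而假设,并非专利原文内容):

```python
def midpoint(p, q):
    """返回点 p、q 构成线段的中点,对应式(1)、(2)与式(5)、(6)。"""
    return tuple((a + b) / 2 for a, b in zip(p, q))

# 文中示例坐标:2个内眼角 C'、D' 与 2个嘴角 E'、F'
C, D = (0, 0), (2, 2)
E, F = (1, -2), (2, -1)

A = midpoint(C, D)  # 第一对称中点 A'
B = midpoint(E, F)  # 第二对称中点 B'
```

该函数同样适用于三维坐标(如后述步骤302a中的K’、L’),只需传入三元组即可。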
在步骤204中,根据该每对人脸特征点的对称中点的第一位置信息和该第一人脸特征点的第一位置信息,确定该待确定人脸图像的人脸俯仰角。
该待确定人脸图像的人脸转动角度可以为人脸俯仰角和人脸侧转角。下面分别介绍人脸俯仰角和人脸侧转角的确定方法。
根据该每对人脸特征点的对称中点的第一位置信息和该第一人脸特征点的第一位置信息,确定该待确定人脸图像的人脸俯仰角,参见图2C,该方法可包括:
在步骤204a中,根据该第一对称中点的第一位置信息和该第一人脸特 征点的第一位置信息,计算由该第一对称中点和该第一人脸特征点构成的第一线段的长度。
由步骤202可知,该第一对对称的人脸特征点的第一对称中点为A’(x6,y6),该第一人脸特征点的第一位置信息为N’(x5,y5),则由该第一对称中点A’(x6,y6)和该第一人脸特征点N’(x5,y5)构成的第一线段A’N’的长度由两点间的距离公式计算得到,具体计算如下式(9):
A’N’=√((x6-x5)²+(y6-y5)²)        (9)
在步骤204b中,根据该第二对称中点的第一位置信息和该第一人脸特征点的第一位置信息,计算由该第二对称中点和该第一人脸特征点构成的第二线段的长度。
由步骤203可知,该第二对对称的人脸特征点的第二对称中点为B’(x7,y7),该第一人脸特征点的第一位置信息为N’(x5,y5),则由该第二对称中点B’(x7,y7)和该第一人脸特征点N’(x5,y5)构成的第二线段B’N’的长度由两点间的距离公式计算得到,具体计算如下式(10):
B’N’=√((x7-x5)²+(y7-y5)²)        (10)
在步骤204c中,根据该第一线段的长度和该第二线段的长度之间的第一比值,从第一比值与人脸俯仰角的对应关系中获取该待确定人脸图像的人脸俯仰角。
计算该第一线段的长度A’N’和该第二线段的长度B’N’之间的第一比值,根据该第一比值查询事先建立的第一比值与人脸俯仰角的对应关系(该对应关系的建立过程请参见后面步骤302a-302e),从该对应关系中查询与该计算得到的第一比值对应的人脸俯仰角,将该人脸俯仰角确定为该待确定人脸图像的人脸俯仰角。
需要说明的是,如果在该事先建立的第一比值与人脸俯仰角的对应关系中包括的所有第一比值中没有查询到本步骤中计算得到的第一比值时, 则从该对应关系中的所有第一比值中确定与本步骤中计算得到的第一比值最接近的第一比值,然后将该最接近的第一比值对应的人脸俯仰角作为本步骤中计算得到的第一比值对应的人脸俯仰角。
另外,从该对应关系中的所有第一比值中确定与本步骤中计算得到的第一比值最接近的第一比值可通过如下方法完成:
将该事先建立的第一比值与人脸俯仰角的对应关系中包括的每个第一比值与在本步骤计算得到的第一比值做差,得到第一比值差值,然后对每一个第一比值差值做绝对值运算,比较绝对值运算后的每一个值,获取最小的绝对值,然后获取该最小绝对值对应的第一比值(第一比值与人脸俯仰角的对应关系中包括的第一比值),将该第一比值确定为与本步骤计算得到的第一比值最接近的第一比值。
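步骤204a至204c中计算第一比值并按最接近原则查表得到人脸俯仰角的流程,可用如下示意性代码表达(对应关系 table 中的数值为本文假设的示例,并非专利给出的真实标定数据):

```python
import math

def lookup_by_nearest(ratio, ratio_to_pitch):
    """在第一比值与人脸俯仰角的对应关系中查找;若无完全相等的比值,
    则取差值绝对值最小(即最接近)的第一比值所对应的人脸俯仰角。"""
    nearest = min(ratio_to_pitch, key=lambda r: abs(r - ratio))
    return ratio_to_pitch[nearest]

# 假设的对应关系(真实对应关系需按步骤302a-302e事先建立)
table = {0.8: -10, 1.0: 0, 1.2: 10}

A, B, N = (1.0, 1.0), (1.5, -1.5), (1.5, 0.0)
first_ratio = math.dist(A, N) / math.dist(B, N)  # 式(9)、(10)的长度之比
pitch = lookup_by_nearest(first_ratio, table)    # 查表得到人脸俯仰角
```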
在步骤205中,根据该每对人脸特征点的对称中点的第一位置信息和该第一人脸特征点的第一位置信息,确定该待确定人脸图像的人脸侧转角。
根据该每对人脸特征点的对称中点的第一位置信息、该第一人脸特征点的第一位置信息和通过上述过程已确定的人脸俯仰角确定该待确定人脸图像的人脸侧转角度。参见图2D,该方法可包括:
在步骤205a中,根据该第一对称中点的第一位置信息、该第二对称中点的第一位置信息和该第一人脸特征点的第一位置信息,计算该第一人脸特征点到第三线段之间的第一垂直距离和该第三线段的长度。
由步骤202可知,该第一对称中点的第一位置信息为A’(x6,y6)、由步骤203可知,该第二对称中点的第一位置信息为B’(x7,y7),由步骤201可知,该第一人脸特征点的第一位置信息N’(x5,y5),该第三线段是由该第一对称中点A’(x6,y6)与该第二对称中点B’(x7,y7)组成的线段A’B’,则该第一人脸特征点到第三线段之间的第一垂直距离通过如下方法计算得到:首先,将经过点A’和点B’的直线记为直线c,根据点A’和点B’的第一位置信息计 算该直线c的一般式直线方程,然后再根据第一人脸特征点N’的第一位置信息计算该第一人脸特征点N’到该直线c的第一垂直距离。具体如下:
首先,根据两点式直线公式得到经过点A’(x6,y6)和点B’(x7,y7)的两点式直线方程,该两点式直线方程式如下式(11):
(x-x6)/(x7-x6)=(y-y6)/(y7-y6)        (11)
将上述方程进行等价变换,变换为一般式直线方程,该一般式直线方程如下式(12):
(y7-y6)x-(x7-x6)y+x7×y6-x6×y7=0        (12)
当然经过点A’(x6,y6)和点B’(x7,y7)的直线c,还可以通过其它方法计算得到,在此不做赘述。
然后根据点到线的距离公式计算该第一人脸特征点N’(x5,y5)到直线c的距离d,具体计算如下式(13):
d=|(y7-y6)×x5-(x7-x6)×y5+x7×y6-x6×y7|/√((y7-y6)²+(x7-x6)²)        (13)
因为直线c是经过点A’(x6,y6)和点B’(x7,y7)的直线,所以该第一人脸特征点N’(x5,y5)到直线c的距离d也即为该第一人脸特征点N’(x5,y5)到第三线段A’B’的第一垂直距离,所以该第一垂直距离即为该距离d。
第三线段的长度通过两点间的距离公式得到,具体计算如下式(14):
A’B’=√((x6-x7)²+(y6-y7)²)        (14)
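步骤205a中第一垂直距离与第三线段长度的计算可示意如下(函数名为本文假设;式(12)、(13)按二维坐标实现):

```python
import math

def point_line_distance(p, a, b):
    """点 p 到经过 a、b 两点的直线的垂直距离,对应式(12)、(13)。"""
    (x5, y5), (x6, y6), (x7, y7) = p, a, b
    num = abs((y7 - y6) * x5 - (x7 - x6) * y5 + x7 * y6 - x6 * y7)
    den = math.hypot(y7 - y6, x7 - x6)
    return num / den

A, B, N = (1.0, 1.0), (1.5, -1.5), (1.5, 0.0)
d = point_line_distance(N, A, B)  # 第一垂直距离
third = math.dist(A, B)           # 第三线段 A'B' 的长度,式(14)
```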
在步骤205b中,根据该人脸俯仰角,从人脸俯仰角与第三比值的对应关系中获取对应的第三比值。
根据在步骤204中确定的人脸俯仰角,查询事先建立的第三比值与人脸俯仰角的对应关系(该对应关系的建立过程请参见后续步骤303a-303f),从该对应关系中查询人脸俯仰角对应的第三比值,该人脸俯仰角是在步骤204中确定的人脸俯仰角,将该第三比值记为e。
需要说明的是,如果在该事先建立的第三比值与人脸俯仰角的对应关系中包括的所有人脸俯仰角中没有查询到在步骤204中确定的人脸俯仰角时,则从该所有人脸俯仰角中确定与步骤204计算得到的人脸俯仰角最接近的人脸俯仰角,然后将该最接近的人脸俯仰角对应的第三比值作为该人脸俯仰角对应的第三比值。
另外,从该所有人脸俯仰角中确定与步骤204计算得到的人脸俯仰角最接近的人脸俯仰角可通过如下方法完成:
将该事先建立的第三比值与人脸俯仰角的对应关系中包括的每个人脸俯仰角与在步骤204中确定的人脸俯仰角做差,得到人脸俯仰角差值,然后对每一个人脸俯仰角差值做绝对值运算,比较绝对值运算后的每一个值,获取最小的绝对值,然后获取该最小绝对值对应的人脸俯仰角(第三比值与人脸俯仰角的对应关系中包括的人脸俯仰角),将该人脸俯仰角确定为与步骤204计算得到的人脸俯仰角最接近的人脸俯仰角。
由步骤303d、步骤303e和步骤303f可知,该第三比值是如下两个数值的比值,第一个数值为待确定人脸图像中第一对人脸特征点的对称中点和第二对人脸特征点的对称中点构成的线段,也即第三线段A’B’,第二个数值为该第一对人脸特征点在正面人脸图像中的第三对称中点A和该第二对人脸特征点在正面人脸图像中的第四对称中点B构成的第四线段AB。所以e的值为第三线段与第四线段的比值,e的值通过如下公式(15)计算得到:
e=A’B’/AB        (15)
在步骤205c中,根据该第三比值和该第三线段的长度,计算第四线段的长度。
由上述步骤205b可知,该第三比值为第三线段与第四线段的比值。所以,该第四线段的长度为第三线段与第三比值的比值,该第四线段的值可以通过下式(16)计算得到:
AB=A’B’/e        (16)
在步骤205d中,根据该第一垂直距离和该第四线段的长度之间的第二比值,从第二比值与人脸侧转角的对应关系中获取该待确定人脸图像的人脸侧转角。
本步骤可以为:计算该第一垂直距离d和该第四线段AB的长度之间的第二比值,根据该第二比值查询事先建立的第二比值与人脸侧转角的对应关系(该对应关系的建立过程请参见后续步骤304a-304b),从该对应关系中查询与该计算得到的第二比值相同的第二比值对应的人脸侧转角,将该人脸侧转角确定为该待确定人脸图像的人脸侧转角。
需要说明的是,如果在该事先建立的第二比值与人脸侧转角的对应关系中包括的所有第二比值中没有查询到本步骤中计算得到的第二比值时,则从该对应关系中的所有第二比值中确定与本步骤中计算得到的第二比值最接近的第二比值,然后将该最接近的第二比值对应的人脸侧转角作为本步骤中计算得到的第二比值对应的人脸侧转角。
另外,从该对应关系中的所有第二比值中确定与本步骤中计算得到的第二比值最接近的第二比值可通过如下方法完成:
将该事先建立的第二比值与人脸侧转角的对应关系中包括的每个第二比值与在本步骤计算得到的第二比值做差,得到第二比值差值,然后对每一个第二比值差值做绝对值运算,比较绝对值运算后的每一个值,获取最小的绝对值,然后获取该最小绝对值对应的第二比值(第二比值与人脸侧转角的对应关系中包括的第二比值),将该第二比值确定为与本步骤计算得到的第二比值最接近的第二比值。
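步骤205b至205d的整体流程(由人脸俯仰角查第三比值、计算第四线段、再由第二比值查人脸侧转角)可示意如下(两个对应关系中的数值均为本文假设的示例数据,真实数据需按步骤303、304事先建立):

```python
def lookup_nearest(key, table):
    """在对应关系 table 中取与 key 最接近的键所对应的值。"""
    return table[min(table, key=lambda k: abs(k - key))]

# 假设的对应关系数据
pitch_to_third_ratio = {-10: 0.95, 0: 1.0, 10: 1.05}
second_ratio_to_yaw = {0.0: 0, 0.15: 15, 0.3: 30}

pitch, d, third = -10, 0.2942, 2.5495
e = lookup_nearest(pitch, pitch_to_third_ratio)        # 第三比值,式(15)
fourth = third / e                                     # 第四线段 AB,式(16)
yaw = lookup_nearest(d / fourth, second_ratio_to_yaw)  # 由第二比值查人脸侧转角
```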
在步骤206中,根据任一一对对称的人脸特征点中的每个人脸特征点的第一位置信息,确定第五线段,计算该第五线段与水平线之间的夹角, 得到该待确定人脸图像的人脸旋转角度。
其中,第五线段指预设的人脸特征点中任一一对对称的人脸特征点中的两个人脸特征点构成的线段。所以在本实施例中构成第五线段的两个点可以为第一对对称的人脸特征点,也即2个内眼角或第二对对称的人脸特征点,也即2个嘴角。
其中,人脸旋转角度是在人脸正面方向始终向前,而人脸左右旋转得到的角度。
例如,该任一一对对称的人脸特征点为2个内眼角C’(x1,y1)和D’(x2,y2),则该第五线段为点C’(x1,y1)和D’(x2,y2)确定的线段C’D’,当然,第五线段也可以为2个嘴角E’(x3,y3)和F’(x4,y4),则该第五线段为点E’(x3,y3)和F’(x4,y4)确定的线段E’F’。
以第五线段是C’D’为例,那么该第五线段与水平线之间的夹角(用∠α表示)的具体计算过程如下:
首先,计算该∠α的余弦值,参见图2B,则该∠α的余弦值的具体计算方法如下式(17):
cos∠α=(x2-x1)/√((x2-x1)²+(y2-y1)²)        (17)
所以,∠α的值可以通过取上述∠α的余弦值的反余弦得到,具体计算方法如下式(18):
∠α=arccos((x2-x1)/√((x2-x1)²+(y2-y1)²))        (18)
则该∠α即为该待确定人脸图像的人脸旋转角度。
例如,当C’(x1,y1)和D’(x2,y2)的坐标位置分别为:C’(0,0)、D’(2,2)时,∠α的计算过程如下式(19):
∠α=arccos((2-0)/√((2-0)²+(2-0)²))=arccos(√2/2)=45°        (19)
所以,∠α为45°。
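步骤206中式(17)、(18)的人脸旋转角度计算可示意如下(函数名为本文假设):

```python
import math

def roll_angle(p, q):
    """由一对对称特征点构成的第五线段与水平线的夹角,对应式(17)、(18)。"""
    (x1, y1), (x2, y2) = p, q
    cos_a = (x2 - x1) / math.dist(p, q)
    return math.degrees(math.acos(cos_a))

angle = roll_angle((0, 0), (2, 2))  # 文中示例:C'(0,0)、D'(2,2),夹角为45°
```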
需要说明的是,由于确定待确定人脸图像的人脸旋转角度时,只需要根据任一一对对称的人脸特征点中的每个人脸特征点的第一位置信息就可以确定,所以在实际操作中,如果只需要确定人脸图像的人脸旋转角度时,可以在步骤201执行完之后直接执行步骤206。
综上所述,本发明实施例提供的确定人脸转动角度的方法,首先获取预设的多对对称的人脸特征点和一个第一人脸特征点;根据该多对人脸特征点中的每对人脸特征点包括的人脸特征点的第一位置信息,获取该每对人脸特征点的对称中点的第一位置信息;根据该每对人脸特征点的对称中点的第一位置信息和该第一人脸特征点的第一位置信息,计算预设的线段比值,根据该线段比值查询预设的线段比值与人脸转动角度的对应关系,确定该待确定人脸图像的人脸转动角度;解决了不能确定人脸转动角度的问题;由于该预设的线段比值与人脸转动角度的对应关系是一个比较精确的线段比值与角度的对应关系(有关为什么是比较精确的对应关系将在下述内容中论述),所以本发明实施例提供的确定人脸转动角度的方法达到了提高确定人脸转动角度精确性的效果。
在确定待确定人脸图像的人脸俯仰角和人脸侧转角时,都需要查询线段比值与预设人脸转动角度的对应关系。该对应关系是在确定待确定人脸图像的转动角度前已经建立,已使得在确定待确定人脸图像的转动角度时可以直接查询。其中,线段是指由预设的多个人脸特征点中某两个人脸特征点的中点连接构成的线段,或由一个人脸特征点到另外一条直线的垂直距离构成的垂直线段等。
在本实施例中,建立了三组线段比值与预设人脸转动角度的对应关系。第一组对应关系为第一比值与预设人脸俯仰角之间的对应关系,第二组对应关系为第三比值与该预设人脸俯仰角之间的对应关系,第三组对应关系 为第二比值与该预设人脸侧转角之间的对应关系。参见图3A,建立过程如下:
在步骤301中,获取该预设的多个人脸特征点在第一人脸图像中的第二位置信息。
其中,预设的多个人脸特征点与上述预设的多个人脸特征点的含义相同,在此不做赘述。
其中,第一人脸图像是人脸转动预设人脸转动角度后拍摄的人脸图像。预设人脸转动角度包括预设的人脸俯仰角度和人脸侧转角度,该预设的人脸俯仰角度和人脸侧转角度可以为预设的一系列离散的转动角度,且这一系列离散的转动角度的每两个相邻转动角度的差值相同,该差值较小,可设置为1°,2°或其它较小的值,以保证在下述过程中建立全面的线段比值与该预设人脸转动角度的对应关系,使得在查询该对应关系时,可以得到准确的线段比值和准确的人脸转动角度。
其中,第二位置信息是将人脸放置于三维直角坐标系(通过三根坐标轴,分别为x,y,z轴,x,y,z轴为两两相互垂直的三个轴向,表示空间的一种方法)中获取得到的。该三维直角坐标系可以是任意一三维直角坐标系。所以通过该三维直角坐标系获取到的该人脸特征点的第二位置信息的坐标形式为(x,y,z)。
所以,上述预设人脸转动角度可以通过如下方式描述:人脸俯仰角是指正面人脸沿y轴转动,而在x轴和z轴无转动时,得到的人脸转动角度;人脸侧转角是指该正面人脸沿z轴转动,而在x轴和y轴无转动时,得到的人脸转动角度。
作为一种实施方式,该预设人脸转动角度可以通过如下方法得到:最初的人脸转动角度设置为0°,也即该人脸的正面向前,也没有任何转动角度,预先设置两个相邻转动角度的差值,则第一个预设的人脸转动角度就 为该两个相邻转动角度的差值,第二个预设的转动角度为第一个预设的人脸转动角度与该两个相邻转动角度的差值的和,第三个预设的转动角度为第二个预设的人脸转动角度与该两个相邻转动角度的差值的和;按此方法依次得到预设的所有人脸转动角度,预设的人脸转动角度的数量为360与预设的两个相邻转动角度的差值的比值。
以预设的人脸俯仰角为例,首先,将人脸设置为正面人脸,并预先设置两个相邻人脸俯仰角的差值为1°,则第一个预设的人脸转动角度就为该两个相邻转动角度的差值1°,第二个预设的转动角度为第一个预设的人脸转动角度与该两个相邻转动角度的差值的和,也即1°+1°=2°,第三个预设的转动角度为第二个预设的人脸转动角度与该两个相邻转动角度的差值的和,也即2°+1°=3°;按此方法依次得到预设的人脸俯仰角度,预设的人脸转动角度的数量为360°与预设的两个相邻转动角度的差值1°的比值为360个。
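上述按固定差值生成一系列预设转动角度的做法可示意如下(以相邻角度差值 step 为1°为例,角度单位为度):

```python
# 从 step 开始,每次递增 step,直到 360°,共 360/step 个预设角度
step = 1
preset_angles = [i * step for i in range(1, 360 // step + 1)]
```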
作为一种实施方式,该预设的多个人脸特征点在第一人脸图像中的第二位置信息可以通过如下方法得到:首先,将人脸放置为正面人脸,按照每两个相邻的人脸转动角度的预设角度差值转动正面人脸,每转动一个预设角度差值后,拍摄人脸图像,并获取该拍摄的人脸图像的预设的多个人脸特征点的坐标位置,记为第二位置信息。比如,每相邻的两个人脸转动角度的差值为1°,则首先将正面人脸转动1°,拍摄此时人脸,得到人脸图像,并获取该人脸图像的预设的多个人脸特征点的坐标位置;然后再继续将该人脸转动1°,再次拍摄该人脸得到人脸图像,并获取此时人脸图像中预设的多个人脸特征点的坐标位置,重复上述步骤,直到转动完所有的预设角度以及获取到每个预设角度下的人脸特征点的坐标位置。
参见图3B,图3B包括了本实施例示出的第一人脸图像中的多个人脸特征点。如图3B所示:预设的第一对对称的人脸特征点中的2个内眼角, 分别记为G’和H’,第二对对称的人脸特征点中的2个嘴角,分别记为I’和J’,剩下的一个第一人脸特征点为鼻尖,记为O’。该预设的多个人脸特征点在第一人脸图像中的第二位置的坐标分别为:G’(x9,y9,z9),H’(x10,y10,z10),I’(x11,y11,z11),J’(x12,y12,z12),O’(x13,y13,z13)。
在步骤302中,根据该预设的多个人脸特征点的第二位置信息建立第一比值与该预设人脸俯仰角的对应关系。
第一组对应关系为第一比值与预设人脸俯仰角之间的对应关系,参见图3C,图3C为第一比值与预设人脸俯仰角之间的对应关系的建立过程的方法流程图,该方法包括:
在步骤302a中,根据该第一对对称的人脸特征点包括的每个人脸特征点的第二位置信息,获取该第一对对称的人脸特征点的第五对称中点的第二位置信息。
仍以上述例子为例,该第一对对称的人脸特征点为2个内眼角G’(x9,y9,z9)和点H’(x10,y10,z10),仍旧参见图3B,则该第一对对称的人脸特征点的第五对称中点为点G’(x9,y9,z9)和点H’(x10,y10,z10)构成的线段G’H’中点,将线段G’H’的中点记为K’,则K’(x14,y14,z14)的坐标位置通过中点计算公式得到,具体计算过程如下式(20)、(21)和(22):
x14=(x9+x10)/2        (20)
y14=(y9+y10)/2        (21)
z14=(z9+z10)/2        (22)
在步骤302b中,根据该第二对对称的人脸特征点包括的每个人脸特征点的第二位置信息,获取该第二对对称的人脸特征点的第六对称中点的第二位置信息。
仍以上述例子为例,则该第二对对称的人脸特征点为两个嘴角I’(x11,y11, z11)和J’(x12,y12,z12),仍旧参见图3B,则该第二对对称的人脸特征点的第六对称中点为点I’(x11,y11,z11)和J’(x12,y12,z12)构成的线段的中点,将该中点记为L’,则L’(x15,y15,z15)的坐标位置通过中点计算公式得到,具体计算过程如下式(23)、(24)和(25):
x15=(x11+x12)/2        (23)
y15=(y11+y12)/2        (24)
z15=(z11+z12)/2        (25)
在步骤302c中,根据该第五对称中点的第二位置信息和该第一人脸特征点的第二位置信息,计算由该第五对称中点和该第一人脸特征点构成的第六线段的长度。
仍以上述例子为例,该第五对称中点的第二位置信息为K’(x14,y14,z14),该第一人脸特征点为鼻尖O’(x13,y13,z13),则由该第五对称中点和该第一人脸特征点构成的第六线段为K’(x14,y14,z14)和点O’(x13,y13,z13)构成的线段K’O’,该第六线段的长度通过两点间的距离公式计算得到,具体计算如下式(26):
K’O’=√((x14-x13)²+(y14-y13)²+(z14-z13)²)        (26)
在步骤302d中,根据该第六对称中点的第二位置信息和该第一人脸特征点的第二位置信息,计算由该第六对称中点和该第一人脸特征点构成的第七线段的长度。
仍以上述例子为例,该第六对称中点的第二位置信息为L’(x15,y15,z15),该第一人脸特征点为鼻尖O’(x13,y13,z13),则由该第六对称中点和该第一人脸特征点构成的第七线段为L’(x15,y15,z15)和点O’(x13,y13,z13)构成的线段L’O’,该第七线段的长度通过两点间的距离公式计算得到,具体计算如下式(27):
L’O’=√((x15-x13)²+(y15-y13)²+(z15-z13)²)        (27)
在步骤302e中,建立该第六线段和该第七线段之间的第一比值与该预设人脸俯仰角之间的对应关系。
仍以上述例子为例,该第六线段为K’O’,该第七线段为L’O’,则该第一比值为K’O’和L’O’的比值。该第一比值与该预设人脸俯仰角之间的对应关系通过如下方法得到:转动人脸,当人脸转动角度为预设的第一个人脸俯仰角时,停止转动人脸,在该第一个人脸俯仰角的情况下,计算K’O’和L’O’的比值,得到第一个第一比值,存储该第一个第一比值与该预设的第一个人脸俯仰角的对应关系;继续转动人脸,当人脸转动角度为预设的第二个人脸俯仰角时,在该第二个人脸俯仰角的情况下,计算K’O’和L’O’的比值,得到第二个第一比值,存储该第二个第一比值与该预设的第二个人脸俯仰角的对应关系;重复上述步骤,直到存储完所有第一比值与预设的人脸俯仰角的对应关系。
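步骤302e中逐角度建立第一比值与预设人脸俯仰角对应关系的过程可示意如下(samples 中的三维特征点数据为本文假设的示例,实际应来自按预设角度转动人脸后采集的图像):

```python
import math

def build_pitch_table(samples):
    """对每个预设俯仰角,计算 K'O'/L'O' 的第一比值并存入对应关系表。"""
    table = {}
    for pitch, (K, L, O) in samples.items():
        table[math.dist(K, O) / math.dist(L, O)] = pitch
    return table

# 假设的示例数据:{俯仰角: (第五对称中点K', 第六对称中点L', 鼻尖O')}
samples = {
    0: ((0, 1, 0), (0, -1, 0), (0, 0, 1)),
    10: ((0, 1, 0), (0, -1, 0), (0, 0.2, 1)),
}
table = build_pitch_table(samples)
```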
在步骤303中,根据该预设的多个人脸特征点的第二位置信息建立第三比值与该预设人脸俯仰角的对应关系。
第二组对应关系为第三比值与该预设人脸俯仰角之间的对应关系,参见图3D,图3D为第三比值与预设人脸俯仰角之间的对应关系的建立过程的方法流程图,该方法包括:
在步骤303a中,获取该第一对对称的人脸特征点包括的每个人脸特征点在该人脸的正面人脸图像中的第三位置信息,以及该第二对对称的人脸特征点包括的每个人脸特征点在该正面人脸图像中的第三位置信息。
参见图3E,图3E包括了本实施例示出的第一人脸图像为正面人脸图像时该正面人脸图像中的多个人脸特征点。如图3E所示:该第一对对称的人脸特征点为两个内眼角,分别记为G和H,在该人脸的正面人脸图像中的第三位置信息分别为G(x16,y16,z16)和H(x17,y17,z17)。该第二对对称的人脸特征点为两个嘴角,分别记为I和J,在该正面人脸图像中的第三位置信息分别为I(x20,y20,z20)和J(x21,y21,z21)。
在步骤303b中,根据该第一对对称的人脸特征点包括的每个人脸特征点的第三位置信息,获取该第一对对称的人脸特征点的第七对称中点的第三位置信息。
仍以上述例子为例,仍旧参见图3E,则该第一对对称的人脸特征点为两个内眼角G(x16,y16,z16)和H(x17,y17,z17),则该第一对对称的人脸特征点的第七对称中点为点G(x16,y16,z16)和H(x17,y17,z17)构成的线段GH的中点,将该中点记为K,则K(x18,y18,z18)的坐标位置通过中点计算公式得到,具体计算如下式(28)、(29)和(30):
x18=(x16+x17)/2        (28)
y18=(y16+y17)/2        (29)
z18=(z16+z17)/2        (30)
在步骤303c中,根据该第二对对称的人脸特征点包括的每个人脸特征点的第三位置信息,获取该第二对对称的人脸特征点的第八对称中点的第三位置信息。
仍以上述例子为例,仍旧参见图3E,则该第二对对称的人脸特征点为两个嘴角I(x20,y20,z20)和J(x21,y21,z21),则该第二对对称的人脸特征点的第八对称中点为点I(x20,y20,z20)和J(x21,y21,z21)构成的线段IJ的中点,将该中点记为L,则L(x22,y22,z22)的坐标位置通过中点计算公式得到,具体计算如下式(31)、(32)和(33):
x22=(x20+x21)/2        (31)
y22=(y20+y21)/2        (32)
z22=(z20+z21)/2        (33)
在步骤303d中,根据该第五对称中点的第二位置信息和该第六对称中点的第二位置信息,计算由该第五对称中点和该第六对称中点构成的第八线段的长度。
该第五对称中点的第二位置信息为K’(x14,y14,z14),该第六对称中点的第二位置信息为L’(x15,y15,z15),则由该第五对称中点K’(x14,y14,z14)和该第六对称中点L’(x15,y15,z15)构成的第八线段K’L’的长度为点K’(x14,y14,z14)到点L’(x15,y15,z15)的距离,由两点间的距离公式计算得到,具体计算如下式(34):
K’L’=√((x14-x15)²+(y14-y15)²+(z14-z15)²)        (34)
在步骤303e中,根据该第七对称中点的第三位置信息和该第八对称中点的第三位置信息,计算由该第七对称中点和该第八对称中点构成的第九线段的长度。
该第七对称中点的第三位置信息为K(x18,y18,z18)和该第八对称中点的第三位置信息为L(x22,y22,z22),则该第七对称中点K(x18,y18,z18)和该第八对称中点L(x22,y22,z22)构成的第九线段KL的长度为点K(x18,y18,z18)到点L(x22,y22,z22)的距离,由两点间的距离公式计算得到,具体计算如下式(35):
KL=√((x18-x22)²+(y18-y22)²+(z18-z22)²)        (35)
在步骤303f中,建立该第八线段与该第九线段之间的第三比值与该预设人脸俯仰角之间的对应关系。
仍以上述例子为例,该第八线段为K’L’,该第九线段为KL,则该第三比值为K’L’和KL的比值。该第三比值与该预设人脸俯仰角之间的对应关系通过如下方法得到:转动人脸当人脸转动角度为预设的第一个人脸俯仰角时,停止转动人脸,在该第一个人脸俯仰角的情况下,计算K’L’和KL的比值,得到第一个第三比值,存储该第一个第三比值与该预设的第一个 人脸俯仰角的对应关系;继续转动人脸当人脸转动角度为预设的第二个人脸俯仰角时,在该第二个人脸俯仰角的情况下,计算K’L’和KL的比值,得到第二个第三比值,存储该第二个第三比值与该预设的第二个人脸俯仰角的对应关系,重复上述步骤,直到存储完所有第三比值与预设的人脸俯仰角的对应关系。
在步骤304中,根据该预设的多个人脸特征点的第二位置信息建立第二比值与该预设人脸侧转角的对应关系。
第三组对应关系为第二比值与该预设人脸侧转角之间的对应关系,参见图3F,图3F为第二比值与预设人脸侧转角之间的对应关系的建立过程的方法流程图,该方法包括:
在步骤304a中,根据该第五对称中点的第二位置信息、第六对称中点的第二位置信息和该第一人脸特征点的第二位置信息,计算该第一人脸特征点到第八线段的第二垂直距离。
由步骤302a可知,该第五对称中点的第二位置信息为K’(x14,y14,z14),由步骤302b可知,该第六对称中点的第二位置信息为L’(x15,y15,z15),由步骤205可知,该第一人脸特征点的第二位置信息为O’(x13,y13,z13),则该第一人脸特征点O’(x13,y13,z13)到第八线段K’L’的第二垂直距离通过如下过程计算得到:
首先,根据该第五对称中点的第二位置信息K’(x14,y14,z14)和该第六对称中点的第二位置信息L’(x15,y15,z15),计算得到经过点K’(x14,y14,z14)和点L’(x15,y15,z15)的直线a,具体计算如下:
首先,根据两点式直线公式得到经过点K’(x14,y14,z14)和点L’(x15,y15,z15)的两点式直线方程,如下式(36):
(x-x14)/(x15-x14)=(y-y14)/(y15-y14)        (36)
将上述方程进行等价变换,变换为一般式直线方程,如下式(37):
(y15-y14)x-(x15-x14)y+x15×y14-x14×y15=0      (37)
当然经过点K’(x14,y14,z14)和点L’(x15,y15,z15)的直线a,还可以通过其它方法计算得到,在此不做赘述。
然后根据点到线的距离公式计算该第一人脸特征点O’(x13,y13,z13)到直线a的距离b,具体计算如下式(38):
b=|(y15-y14)×x13-(x15-x14)×y13+x15×y14-x14×y15|/√((y15-y14)²+(x15-x14)²)        (38)
因为直线a为经过点K’(x14,y14,z14)和点L’(x15,y15,z15)的直线,所以该第一人脸特征点O’(x13,y13,z13)到直线a的距离b也即为该第一人脸特征点O’(x13,y13,z13)到第八线段K’L’的第二垂直距离,所以该第二垂直距离即为该距离b。
在步骤304b中,建立该第二垂直距离与该第九线段之间的第二比值与该预设人脸侧转角之间的对应关系。
仍以上述例子为例,该第二垂直距离为b,该第九线段为KL,则该第二比值为b和KL的比值。该第二比值与该预设人脸侧转角之间的对应关系通过如下方法得到:转动人脸当人脸转动角度为预设的第一个人脸侧转角时,停止转动人脸,在该第一个人脸侧转角的情况下,计算b和KL的比值,得到第一个第二比值,存储该第一个第二比值与该预设的第一个人脸侧转角的对应关系;继续转动人脸当人脸转动角度为预设的第二个人脸侧转角时,在该第二个人脸侧转角的情况下,计算b和KL的比值,得到第二个第二比值,存储该第二个第二比值与该预设的第二个人脸侧转角的对应关系,重复上述步骤,直到存储完所有第二比值与预设的人脸侧转角的对应关系。
综上所述,本发明实施例提供的建立线段比值与预设人脸转动角度的对应关系的方法,在三维直角坐标系中按照预设角度旋转正面人脸3D模 型,每转动一个预设角度获取人脸特征点的坐标信息,根据获取到的坐标信息建立线段比值与预设人脸转动角度的对应关系;由于该预设角度较小,从而对应关系中的线段比值或人脸转动角度也较精确;而且由于事先建立该对应关系,使得在确定人脸转动角度的过程中可以直接从该对应关系中获取该线段比值或是该人脸转动角度,减少确定人脸转动角度所需的时间,提高确定人脸转动角度的效率。
下述为本发明装置实施例,可以用于执行本发明方法实施例。对于本发明装置实施例中未披露的细节,请参照本发明方法实施例。
请参考图4A,其示出了本发明一个实施例提供的确定人脸转动角度装置的结构方框图,如图4A所示,该确定人脸转动角度装置包括但不限于:第一获取模块401、第二获取模块402、第一确定模块403。
所述第一获取模块401,配置为获取预设的多个人脸特征点在待确定人脸图像中的第一位置信息,该多个人脸特征点的数目为奇数,包括多对对称的人脸特征点和一个第一人脸特征点,该多个人脸特征点不共面;
所述第二获取模块402,配置为根据该第一获取模块401获取的该多对人脸特征点中的每对人脸特征点包括的人脸特征点的第一位置信息,获取该每对人脸特征点的对称中点的第一位置信息;
所述第一确定模块403,配置为根据该第二获取模块402获取的该每对人脸特征点的对称中点的第一位置信息和该第一获取模块401获取的该第一人脸特征点的第一位置信息,确定该待确定人脸图像的人脸转动角度。
综上所述,本实施例提供的确定人脸转动角度装置,首先获取预设的多对对称的人脸特征点和一个第一人脸特征点;根据该多对人脸特征点中的每对人脸特征点包括的人脸特征点的第一位置信息,获取该每对人脸特征点的对称中点的第一位置信息;根据该每对人脸特征点的对称中点的第一位置信息和该第一人脸特征点的第一位置信息,计算预设的线段比值, 根据该线段比值查询预设的线段比值与人脸转动角度的对应关系,确定该待确定人脸图像的人脸转动角度;解决了不能确定人脸转动角度的问题;由于该预设的线段比值与人脸转动角度的对应关系是一个比较精确的线段比值与角度的对应关系,所以本发明实施例提供的确定人脸转动角度的方法达到了提高确定人脸转动角度精确性的效果。
作为一种实施方式,该多个人脸特征点包括五个,该五个人脸特征点包括第一对对称的人脸特征点、第二对对称的人脸特征点和剩下的一个第一人脸特征点。
请参考图4B,其示出了本发明一个实施例提供的第二获取模块402的结构方框图,如图4B所示,该第二获取模块402包括但不限于:第一获取子模块4021、第二获取子模块4022。
所述第一获取子模块4021,配置为根据所述第一获取模块401获取的该第一对对称的人脸特征点包括的每个人脸特征点的第一位置信息,获取该第一对对称的人脸特征点的第一对称中点的第一位置信息;
所述第二获取子模块4022,配置为根据所述第一获取模块401获取的该第二对对称的人脸特征点包括的每个人脸特征点的第一位置信息,获取该第二对对称的人脸特征点的第二对称中点的第一位置信息。
请参考图4C,其示出了本发明一个实施例提供的第一确定模块403的结构方框图,如图4C所示,该第一确定模块403包括但不限于:第一计算子模块4031、第二计算子模块4032、第三获取子模块4033。
所述第一计算子模块4031,配置为根据该第一获取子模块4021获取的该第一对称中点的第一位置信息和该第一获取模块401获取的该第一人脸特征点的第一位置信息,计算由该第一对称中点和该第一人脸特征点构成的第一线段的长度;
所述第二计算子模块4032,配置为根据所述第二获取子模块4022获取 的该第二对称中点的第一位置信息和该第一获取模块401获取的该第一人脸特征点的第一位置信息,计算由该第二对称中点和该第一人脸特征点构成的第二线段的长度;
所述第三获取子模块4033,配置为根据所述第一计算子模块4031计算的该第一线段的长度和该第二计算子模块4032计算的该第二线段的长度之间的第一比值,从第一比值与人脸俯仰角的对应关系中获取该待确定人脸图像的人脸俯仰角。
请参考图4D,其示出了本发明另一个实施例提供的第一确定模块403的结构方框图,如图4D所示,该第一确定模块403包括但不限于:第三计算子模块4034、第四计算子模块4035、第四获取子模块4036。
所述第三计算子模块4034,配置为根据所述第一获取子模块4021获取的该第一对称中点的第一位置信息、所述第二获取子模块4022获取的该第二对称中点的第一位置信息和所述第一获取模块401获取的该第一人脸特征点的第一位置信息,计算该第一人脸特征点到第三线段之间的第一垂直距离和第三线段的长度,该第三线段是由该第一对称中点与该第二对称中点组成的线段。
所述第四计算子模块4035,配置为根据所述第三计算子模块4034计算得到的该第三线段的长度和所述第三获取子模块4033获取的该待确定人脸图像的人脸俯仰角,计算第四线段的长度,该第四线段为第三对称中点与第四对称中点之间的线段,该第三对称中点是该第一对人脸特征点在正面人脸图像中的对称中点,该第四对称中点是该第二对人脸特征点在正面人脸图像中的对称中点。
所述第四获取子模块4036,配置为根据所述第三计算子模块4034计算得到的该第一垂直距离和该第四计算子模块4035计算得到的该第四线段的长度之间的第二比值,从第二比值与人脸侧转角的对应关系中获取该待确 定人脸图像的人脸侧转角。
请参考图4E,其示出了本发明另一个实施例提供的计算第四线段的结构方框图,该计算由第四计算子模块4035完成,如图4E所示,该第四计算子模块4035包括但不限于:
所述获取单元4035a,配置为根据所述第三获取子模块4033获取得到的该人脸俯仰角,从人脸俯仰角与第三比值的对应关系中获取对应的第三比值。
所述计算单元4035b,配置为根据所述获取单元4035a获取得到的该第三比值和所述第三计算子模块4034计算的该第三线段的长度,计算第四线段的长度。
请参考图4F,其示出了本发明另一个实施例提供的确定人脸转动角度的装置的结构方框图,如图4F所示,该装置还包括但不限于:
确定计算模块404,配置为根据任一一对对称的人脸特征点中的每个人脸特征点的第一位置信息,确定第五线段,计算该第五线段与水平线之间的夹角,得到该待确定人脸图像的人脸旋转角度。
仍旧请参考图4F,该装置还包括:
第三获取模块405,配置为获取该预设的多个人脸特征点在第一人脸图像中的第二位置信息,该第一人脸图像是人脸转动预设人脸转动角度后拍摄的人脸图像。
建立模块406,配置为根据所述第三获取模块405获取得到的该预设的多个人脸特征点的第二位置信息建立线段比值与该预设人脸转动角度的对应关系。
作为一种实施方式,该预设人脸转动角度包括预设人脸俯仰角。
请参考图4G,其示出了本发明另一个实施例提供的建立第一比值与该预设人脸俯仰角的对应关系的结构方框图,如图4G所示,该建立模块406 包括但不限于:第五获取子模块4061、第六获取子模块4062、第五计算子模块4063、第六计算子模块4064、第一建立子模块4065。
所述第五获取子模块4061,配置为根据所述第三获取模块405获取得到的该第一对对称的人脸特征点包括的每个人脸特征点的第二位置信息,获取该第一对对称的人脸特征点的第五对称中点的第二位置信息。
所述第六获取子模块4062,配置为根据所述第三获取模块405获取得到的该第二对对称的人脸特征点包括的每个人脸特征点的第二位置信息,获取该第二对对称的人脸特征点的第六对称中点的第二位置信息。
所述第五计算子模块4063,配置为根据所述第五获取子模块4061获取得到的该第五对称中点的第二位置信息和所述第三获取模块405获取得到的该第一人脸特征点的第二位置信息,计算由该第五对称中点和该第一人脸特征点构成的第六线段的长度。
所述第六计算子模块4064,配置为根据所述第六获取子模块4062获取得到的该第六对称中点的第二位置信息和所述第三获取模块405获取得到的该第一人脸特征点的第二位置信息,计算由该第六对称中点和该第一人脸特征点构成的第七线段的长度。
所述第一建立子模块4065,配置为建立所述第五计算子模块4063计算得到的该第六线段和所述第六计算子模块4064计算得到的该第七线段之间的第一比值与该预设人脸俯仰角之间的对应关系。
请参考图4H,其示出了本发明再一个实施例提供的建立第三比值与该预设人脸俯仰角的对应关系的结构方框图,如图4H所示,该建立模块406包括但不限于:第七获取子模块406a、第八获取子模块406b、第九获取子模块406c、第七计算子模块406d、第八计算子模块406e、第二建立子模块406f。
所述第七获取子模块406a,配置为获取该第一对对称的人脸特征点包括的每个人脸特征点在该人脸的正面人脸图像中的第三位置信息,以及该第二对对称的人脸特征点包括的每个人脸特征点在该正面人脸图像中的第三位置信息。
所述第八获取子模块406b,配置为根据所述第七获取子模块406a获取得到的该第一对对称的人脸特征点包括的每个人脸特征点的第三位置信息,获取该第一对对称的人脸特征点的第七对称中点的第三位置信息。
所述第九获取子模块406c,配置为根据所述第七获取子模块406a获取得到的该第二对对称的人脸特征点包括的每个人脸特征点的第三位置信息,获取该第二对对称的人脸特征点的第八对称中点的第三位置信息。
所述第七计算子模块406d,配置为根据所述第五获取子模块4061获取得到的该第五对称中点的第二位置信息和所述第六获取子模块4062获取得到的该第六对称中点的第二位置信息,计算由该第五对称中点和该第六对称中点构成的第八线段的长度。
所述第八计算子模块406e,配置为根据所述第八获取子模块406b获取得到的该第七对称中点的第三位置信息和所述第九获取子模块406c获取得到的该第八对称中点的第三位置信息,计算由该第七对称中点和该第八对称中点构成的第九线段的长度。
所述第二建立子模块406f,配置为建立所述第七计算子模块406d计算得到的该第八线段与所述第八计算子模块406e计算得到的该第九线段之间的第三比值与该预设人脸俯仰角之间的对应关系。
作为一种实施方式,该预设人脸转动角度包括预设人脸侧转角。
请参考图4I,其示出了本发明再一个实施例提供的建立第二比值与该预设人脸侧转角的对应关系的结构方框图,如图4I所示,该建立模块406还包括但不限于:第九计算子模块4066、第三建立子模块4067。
所述第九计算子模块4066,配置为根据所述第五获取子模块4061获取 得到的该第五对称中点的第二位置信息、所述第六获取子模块4062获取得到的该第六对称中点的第二位置信息和所述第三获取模块405获取得到的该第一人脸特征点的第二位置信息,计算该第一人脸特征点到第八线段的第二垂直距离;
所述第三建立子模块4067,配置为建立所述第九计算子模块4066计算得到的该第二垂直距离与所述第八计算子模块406e计算得到的该第九线段之间的第二比值与该预设人脸侧转角之间的对应关系。
本发明实施例中,所述确认人脸转动角度的装置可通过电子设备实现。所述装置中的第一获取模块401、第二获取模块402、第一确定模块403、确定计算模块404、第三获取模块405和建立模块406,以及各模块所包括的子模块,在实际应用中均可由所述装置中的中央处理器(CPU,Central Processing Unit)、数字信号处理器(DSP,Digital Signal Processor)、微控制单元(MCU,Microcontroller Unit)或可编程门阵列(FPGA,Field-Programmable Gate Array)实现。
综上所述,本实施例提供的确定人脸转动角度装置,首先获取预设的多对对称的人脸特征点和一个第一人脸特征点;根据该多对人脸特征点中的每对人脸特征点包括的人脸特征点的第一位置信息,获取该每对人脸特征点的对称中点的第一位置信息;根据该每对人脸特征点的对称中点的第一位置信息和该第一人脸特征点的第一位置信息,计算预设的线段比值,根据该线段比值查询预设的线段比值与人脸转动角度的对应关系,确定该待确定人脸图像的人脸转动角度;解决了不能确定人脸转动角度的问题;由于该预设的线段比值与人脸转动角度的对应关系是一个比较精确的线段比值与角度的对应关系,所以本发明实施例提供的确定人脸转动角度的方法达到了提高确定人脸转动角度精确性的效果。
综上所述,本发明实施例提供的建立线段比值与预设人脸转动角度的 对应关系的装置,在三维直角坐标系中按照预设角度旋转正面人脸3D模型,每转动一个预设角度获取人脸特征点的坐标信息,根据获取到的坐标信息建立线段比值与预设人脸转动角度的对应关系;由于该预设角度较小,从而对应关系中的线段比值或人脸转动角度也较精确;而且由于事先建立该对应关系,使得在确定人脸转动角度的过程中可以直接从该对应关系中获取该线段比值或是该人脸转动角度,减少确定人脸转动角度所需的时间,提高确定人脸转动角度的效率。
需要说明的是:上述实施例中提供的确定人脸转动角度装置在确定人脸转动角度时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将电子设备的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的确定人脸转动角度装置与确定人脸转动角度方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
请参见图5所示,其示出了本发明部分实施例中提供的电子设备的结构方框图。该电子设备500用于实施上述实施例提供的业务处理方法。本发明中的电子设备500可以包括一个或多个如下组成部分:用于执行计算机程序指令以完成各种流程和方法的处理器,用于信息和存储程序指令随机接入存储器(RAM)和只读存储器(ROM),用于存储数据和信息的存储器,I/O设备,界面,天线等。具体来讲:
电子设备500可以包括RF(Radio Frequency,射频)电路510、存储器520、输入单元530、显示单元540、传感器550、音频电路560、Wi-Fi(Wireless-Fidelity,无线保真)模块570、处理器580、电源582、摄像头590等部件。本领域技术人员可以理解,图5中示出的电子设备结构并不构成对终端的限定,可以包括比图示更多或更少的部件,或者组合某些 部件,或者不同的部件布置。
下面结合图5对电子设备500的各个构成部件进行具体的介绍:
RF电路510可用于收发信息或通话过程中,信号的接收和发送,特别地,将基站的下行信息接收后,给处理器580处理;另外,将设计上行的数据发送给基站。通常,RF电路包括但不限于天线、至少一个放大器、收发信机、耦合器、LNA(Low Noise Amplifier,低噪声放大器)、双工器等。此外,RF电路510还可以通过无线通信与网络和其他设备通信。所述无线通信可以使用任一通信标准或协议,包括但不限于GSM(Global System of Mobile communication,全球移动通讯***)、GPRS(General Packet Radio Service,通用分组无线服务)、CDMA(Code Division Multiple Access,码分多址)、WCDMA(Wideband Code Division Multiple Access,宽带码分多址)、LTE(Long Term Evolution,长期演进)、电子邮件、SMS(Short Messaging Service,短消息服务)等。
存储器520可用于存储软件程序以及模块,处理器580通过运行存储在存储器520的软件程序以及模块,从而执行电子设备500的各种功能应用以及数据处理。存储器520可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作***、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据电子设备500的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器520可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。
输入单元530可用于接收输入的数字或字符信息,以及产生与电子设备500的用户设置以及功能控制有关的键信号输入。具体地,输入单元530可包括触控面板531以及其他输入设备532。触控面板531,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适 合的物体或附件在触控面板531上或在触控面板531附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板531可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器580,并能接收处理器580发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板531。除了触控面板531,输入单元530还可以包括其他输入设备532。具体地,其他输入设备532可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。
显示单元540可用于显示由用户输入的信息或提供给用户的信息以及电子设备500的各种菜单。显示单元540可包括显示面板541,可选的,可以采用LCD(Liquid Crystal Display,液晶显示器)、OLED(Organic Light-Emitting Diode,有机发光二极管)等形式来配置显示面板541。进一步的,触控面板531可覆盖显示面板541,当触控面板531检测到在其上或附近的触摸操作后,传送给处理器580以确定触摸事件的类型,随后处理器580根据触摸事件的类型在显示面板541上提供相应的视觉输出。虽然在图5中,触控面板531与显示面板541是作为两个独立的部件来实现电子设备500的输入和输入功能,但是在某些实施例中,可以将触控面板531与显示面板541集成而实现电子设备500的输入和输出功能。
电子设备500还可包括至少一种传感器550,比如陀螺仪传感器、磁感应传感器、光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板541的亮度,接近传感器可在电子设备500移动到耳边时,关闭显示面板541和/或背光。作为运动传感器的一种,加速度传感 器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别电子设备姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于电子设备500还可配置的气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路560、扬声器561,传声器562可提供用户与电子设备500之间的音频接口。音频电路560可将接收到的音频数据转换后的电信号,传输到扬声器561,由扬声器561转换为声音信号输出;另一方面,传声器562将收集的声音信号转换为电信号,由音频电路560接收后转换为音频数据,再将音频数据输出处理器580处理后,经RF电路510以发送给比如另一终端,或者将音频数据输出至存储器520以便进一步处理。
Wi-Fi属于短距离无线传输技术,电子设备500通过Wi-Fi模块570可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图5示出了Wi-Fi模块570,但是可以理解的是,其并不属于电子设备500的必须构成,完全可以根据需要在不改变公开的本质的范围内而省略。
处理器580是电子设备500的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或执行存储在存储器520内的软件程序和/或模块,以及调用存储在存储器520内的数据,执行电子设备500的各种功能和处理数据,从而对电子设备进行整体监控。可选的,处理器580可包括一个或多个处理单元;优选的,处理器580可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作***、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器580中。
电子设备500还包括给各个部件供电的电源582(比如电池),优选的, 电源可以通过电源管理***与处理器582逻辑相连,从而通过电源管理***实现管理充电、放电、以及功耗管理等功能。
摄像头590一般由镜头、图像传感器、接口、数字信号处理器、CPU、显示屏幕等组成。其中,镜头固定在图像传感器的上方,可以通过手动调节镜头来改变聚焦;图像传感器相当于传统相机的“胶卷”,是摄像头采集图像的心脏;接口用于把摄像头利用排线、板对板连接器、弹簧式连接方式与电子设备主板连接,将采集的图像发送给所述存储器520;数字信号处理器通过数学运算对采集的图像进行处理,将采集的模拟图像转换为数字图像并通过接口发送给存储器520。
尽管未示出,电子设备500还可以包括蓝牙模块等,在此不再赘述。
电子设备500除了包括一个或者多个处理器580,还包括有存储器,以及一个或者多个程序,其中一个或者多个程序存储于存储器中,并被配置成由一个或者多个处理器执行。上述一个或者一个以上程序包含用于进行以下操作的指令:
获取预设的多个人脸特征点在待确定人脸图像中的第一位置信息,所述多个人脸特征点的数目为奇数,包括多对对称的人脸特征点和剩下的一个第一人脸特征点,所述多个人脸特征点不共面;
根据所述多对人脸特征点中的每对人脸特征点包括的人脸特征点的第一位置信息,获取所述每对人脸特征点的对称中点的第一位置信息;
根据所述每对人脸特征点的对称中点的第一位置信息和所述第一人脸特征点的第一位置信息,确定所述待确定人脸图像的人脸转动角度。
假设上述为第一种可能的实施方式,则在第一种可能的实施方式作为基础而提供的第二种可能的实施方式中,所述电子设备500的存储器中,还包含用于执行以下操作的指令:
所述多个人脸特征点包括五个,所述五个人脸特征点包括第一对对称 的人脸特征点、第二对对称的人脸特征点和剩下的一个第一人脸特征点。
在第二种可能的实施方式作为基础而提供的第三种可能的实施方式中,所述电子设备500的存储器中,还包含用于执行以下操作的指令:
所述根据所述多对人脸特征点中的每对人脸特征点包括的人脸特征点的第一位置信息,获取所述每对人脸特征点的对称中点的第一位置信息,包括:
根据所述第一对对称的人脸特征点包括的每个人脸特征点的第一位置信息,获取所述第一对对称的人脸特征点的第一对称中点的第一位置信息;
根据所述第二对对称的人脸特征点包括的每个人脸特征点的第一位置信息,获取所述第二对对称的人脸特征点的第二对称中点的第一位置信息。
在第三种可能的实施方式作为基础而提供的第四种可能的实施方式中,所述电子设备500的存储器中,还包含用于执行以下操作的指令:
所述根据所述每对人脸特征点的对称中点的第一位置信息和所述第一人脸特征点的第一位置信息,确定所述待确定人脸图像的人脸转动角度,包括:
根据所述第一对称中点的第一位置信息和所述第一人脸特征点的第一位置信息,计算由所述第一对称中点和所述第一人脸特征点构成的第一线段的长度;
根据所述第二对称中点的第一位置信息和所述第一人脸特征点的第一位置信息,计算由所述第二对称中点和所述第一人脸特征点构成的第二线段的长度;
根据所述第一线段的长度和所述第二线段的长度之间的第一比值,从第一比值与人脸俯仰角的对应关系中获取所述待确定人脸图像的人脸俯仰角。
在第三种可能的实施方式作为基础而提供的第五种可能的实施方式中,所述电子设备500的存储器中,还包含用于执行以下操作的指令:
所述根据所述每对人脸特征点的对称中点的第一位置信息和所述第一人脸特征点的第一位置信息,确定所述待确定人脸图像的人脸转动角度,包括:
根据所述第一对称中点的第一位置信息、所述第二对称中点的第一位置信息和所述第一人脸特征点的第一位置信息,计算所述第一人脸特征点到第三线段之间的第一垂直距离和第三线段的长度,所述第三线段是由所述第一对称中点与所述第二对称中点组成的线段;
根据所述第三线段的长度和所述待确定人脸图像的人脸俯仰角,计算第四线段的长度,所述第四线段为第三对称中点与第四对称中点之间的线段,所述第三对称中点是所述第一对人脸特征点在正面人脸图像中的对称中点,所述第四对称中点是所述第二对人脸特征点在正面人脸图像中的对称中点;
根据所述第一垂直距离和所述第四线段的长度之间的第二比值,从第二比值与人脸侧转角的对应关系中获取所述待确定人脸图像的人脸侧转角。
在第五种可能的实施方式作为基础而提供的第六种可能的实施方式中,所述电子设备500的存储器中,还包含用于执行以下操作的指令:
所述根据所述第三线段的长度和所述待确定人脸图像的人脸俯仰角,计算第四线段的长度,包括:
根据所述人脸俯仰角,从人脸俯仰角与第三比值的对应关系中获取对 应的第三比值;
根据所述第三比值和所述第三线段的长度,计算第四线段的长度。
在第一至第六任意一种可能的实施方式作为基础而提供的第七种可能的实施方式中,所述电子设备500的存储器中,还包含用于执行以下操作的指令:
所述获取预设的多个人脸特征点在待确定人脸图像中的第一位置信息之后,还包括:
根据任一一对对称的人脸特征点中的每个人脸特征点的第一位置信息,确定第五线段,计算所述第五线段与水平线之间的夹角,得到所述待确定人脸图像的人脸旋转角度。
在第二至第六任意一种可能的实施方式作为基础而提供的第八种可能的实施方式中,所述电子设备500的存储器中,还包含用于执行以下操作的指令:
所述确定所述待确定人脸图像的人脸转动角度之前,还包括:
获取所述预设的多个人脸特征点在第一人脸图像中的第二位置信息,所述第一人脸图像是人脸转动预设人脸转动角度后拍摄的人脸图像;
根据所述预设的多个人脸特征点的第二位置信息建立线段比值与所述预设人脸转动角度的对应关系。
在第八种可能的实施方式作为基础而提供的第九种可能的实施方式中,所述电子设备500的存储器中,还包含用于执行以下操作的指令:
所述预设人脸转动角度包括预设人脸俯仰角;
所述根据所述预设的多个人脸特征点的第二位置信息建立线段比值与所述预设人脸转动角度的对应关系,包括:
根据所述第一对对称的人脸特征点包括的每个人脸特征点的第二位置信息,获取所述第一对对称的人脸特征点的第五对称中点的第二位置信息;
根据所述第二对对称的人脸特征点包括的每个人脸特征点的第二位置信息,获取所述第二对对称的人脸特征点的第六对称中点的第二位置信息;
根据所述第五对称中点的第二位置信息和所述第一人脸特征点的第二位置信息,计算由所述第五对称中点和所述第一人脸特征点构成的第六线段的长度;
根据所述第六对称中点的第二位置信息和所述第一人脸特征点的第二位置信息,计算由所述第六对称中点和所述第一人脸特征点构成的第七线段的长度;
建立所述第六线段和所述第七线段之间的第一比值与所述预设人脸俯仰角之间的对应关系。
在第九种可能的实施方式作为基础而提供的第十种可能的实施方式中,所述电子设备500的存储器中,还包含用于执行以下操作的指令:
所述根据所述预设的多个人脸特征点的第二位置信息建立线段比值与所述预设人脸转动角度的对应关系,还包括:
获取所述第一对对称的人脸特征点包括的每个人脸特征点在所述人脸的正面人脸图像中的第三位置信息,以及所述第二对对称的人脸特征点包括的每个人脸特征点在所述正面人脸图像中的第三位置信息;
根据所述第一对对称的人脸特征点包括的每个人脸特征点的第三位置信息,获取所述第一对对称的人脸特征点的第七对称中点的第三位置信息;
根据所述第二对对称的人脸特征点包括的每个人脸特征点的第三位置信息,获取所述第二对对称的人脸特征点的第八对称中点的第三位置信息;
根据所述第五对称中点的第二位置信息和所述第六对称中点的第二位置信息,计算由所述第五对称中点和所述第六对称中点构成的第八线段的 长度;
根据所述第七对称中点的第三位置信息和所述第八对称中点的第三位置信息,计算由所述第七对称中点和所述第八对称中点构成的第九线段的长度;
建立所述第八线段与所述第九线段之间的第三比值与所述预设人脸俯仰角之间的对应关系。
在第十种可能的实施方式作为基础而提供的第十一种可能的实施方式中,所述电子设备500的存储器中,还包含用于执行以下操作的指令:
所述预设人脸转动角度包括预设人脸侧转角;
所述根据所述预设的多个人脸特征点的第二位置信息建立线段比值与所述预设人脸转动角度的对应关系,还包括:
根据所述第五对称中点的第二位置信息、第六对称中点的第二位置信息和所述第一人脸特征点的第二位置信息,计算所述第一人脸特征点到第八线段的第二垂直距离;
建立所述第二垂直距离与所述第九线段之间的第二比值与所述预设人脸侧转角之间的对应关系。
上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。
在本申请所提供的几个实施例中,应该理解到,所揭露的设备和方法,可以通过其它的方式实现。以上所描述的设备实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,如:多个单元或组件可以结合,或可以集成到另一个***,或一些特征可以忽略,或不执行。另外,所显示或讨论的各组成部分相互之间的耦合、或直接耦合、或通信连接可以是通过一些接口,设备或单元的间接耦合或通信连接,可以是电性的、机械的或其它形式的。
上述作为分离部件说明的单元可以是、或也可以不是物理上分开的, 作为单元显示的部件可以是、或也可以不是物理单元,即可以位于一个地方,也可以分布到多个网络单元上;可以根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本发明各实施例中的各功能单元可以全部集成在一个处理单元中,也可以是各单元分别单独作为一个单元,也可以两个或两个以上单元集成在一个单元中;上述集成的单元既可以采用硬件的形式实现,也可以采用硬件加软件功能单元的形式实现。
本领域普通技术人员可以理解:实现上述方法实施例的全部或部分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于一计算机可读取存储介质中,该程序在执行时,执行包括上述方法实施例的步骤;而前述的存储介质包括:移动存储设备、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
或者,本发明上述集成的单元如果以软件功能模块的形式实现并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明实施例的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机、服务器、或者网络设备等)执行本发明各个实施例所述方法的全部或部分。而前述的存储介质包括:移动存储设备、ROM、RAM、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本发明的具体实施方式,但本发明的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本发明揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本发明的保护范围之内。因此,本发明的保护范围应以所述权利要求的保护范围为准。
工业实用性
本发明实施例的技术方案解决了不能确定人脸转动角度的问题;由于该预设的线段比值与人脸转动角度的对应关系是一个比较精确的线段比值与角度的对应关系,所以本发明实施例提供的确定人脸转动角度的方法大大提高了确定人脸转动角度的精确性。

Claims (23)

  1. 一种确定人脸转动角度的方法,所述方法包括:
    获取预设的多个人脸特征点在待确定人脸图像中的第一位置信息,所述多个人脸特征点的数目为奇数,包括多对对称的人脸特征点和一个第一人脸特征点,所述多个人脸特征点不共面;
    根据所述多对人脸特征点中的每对人脸特征点包括的人脸特征点的第一位置信息,获取所述每对人脸特征点的对称中点的第一位置信息;
    根据所述每对人脸特征点的对称中点的第一位置信息和所述第一人脸特征点的第一位置信息,确定所述待确定人脸图像的人脸转动角度。
  2. 如权利要求1所述的方法,其中,所述多个人脸特征点包括五个,所述五个人脸特征点包括第一对对称的人脸特征点、第二对对称的人脸特征点和一个第一人脸特征点。
  3. 如权利要求2所述的方法,其中,所述根据所述多对人脸特征点中的每对人脸特征点包括的人脸特征点的第一位置信息,获取所述每对人脸特征点的对称中点的第一位置信息,包括:
    根据所述第一对对称的人脸特征点包括的每个人脸特征点的第一位置信息,获取所述第一对对称的人脸特征点的第一对称中点的第一位置信息;
    根据所述第二对对称的人脸特征点包括的每个人脸特征点的第一位置信息,获取所述第二对对称的人脸特征点的第二对称中点的第一位置信息。
  4. 如权利要求3所述的方法,其中,所述根据所述每对人脸特征点的对称中点的第一位置信息和所述第一人脸特征点的第一位置信息,确定所述待确定人脸图像的人脸转动角度,包括:
    根据所述第一对称中点的第一位置信息和所述第一人脸特征点的第一位置信息,计算由所述第一对称中点和所述第一人脸特征点构成的第一线段的长度;
    根据所述第二对称中点的第一位置信息和所述第一人脸特征点的第一位置信息,计算由所述第二对称中点和所述第一人脸特征点构成的第二线段的长度;
    根据所述第一线段的长度和所述第二线段的长度之间的第一比值,从第一比值与人脸俯仰角的对应关系中获取所述待确定人脸图像的人脸俯仰角。
  5. The method according to claim 3, wherein the determining the face rotation angle of the face image according to the first position information of the symmetric midpoint of each pair of facial feature points and the first position information of the first facial feature point comprises:
    calculating, according to the first position information of the first symmetric midpoint, the first position information of the second symmetric midpoint, and the first position information of the first facial feature point, a first perpendicular distance from the first facial feature point to a third line segment and a length of the third line segment, the third line segment being formed by the first symmetric midpoint and the second symmetric midpoint;
    calculating a length of a fourth line segment according to the length of the third line segment and the face pitch angle of the face image, the fourth line segment being the line segment between a third symmetric midpoint and a fourth symmetric midpoint, the third symmetric midpoint being the symmetric midpoint of the first pair of facial feature points in a frontal face image, and the fourth symmetric midpoint being the symmetric midpoint of the second pair of facial feature points in the frontal face image; and
    obtaining a face side rotation angle of the face image from a correspondence between second ratios and face side rotation angles according to a second ratio of the first perpendicular distance to the length of the fourth line segment.
  6. The method according to claim 5, wherein the calculating a length of a fourth line segment according to the length of the third line segment and the face pitch angle of the face image comprises:
    obtaining a corresponding third ratio from a correspondence between face pitch angles and third ratios according to the face pitch angle; and
    calculating the length of the fourth line segment according to the third ratio and the length of the third line segment.
  7. The method according to any one of claims 1 to 6, wherein after the getting first position information of a preset plurality of facial feature points in the face image, the method further comprises:
    determining a fifth line segment according to the first position information of each facial feature point in any pair of symmetric facial feature points, and calculating an included angle between the fifth line segment and a horizontal line to obtain a face roll angle of the face image.
  8. The method according to any one of claims 2 to 6, wherein before the determining the face rotation angle of the face image, the method further comprises:
    getting second position information of the preset plurality of facial feature points in a first face image, the first face image being a face image captured after the face has been rotated by a preset face rotation angle; and
    establishing a correspondence between a line-segment ratio and the preset face rotation angle according to the second position information of the preset plurality of facial feature points.
  9. The method according to claim 8, wherein the preset face rotation angle comprises a preset face pitch angle, and
    the establishing a correspondence between a line-segment ratio and the preset face rotation angle according to the second position information of the preset plurality of facial feature points comprises:
    obtaining second position information of a fifth symmetric midpoint of the first pair of symmetric facial feature points according to the second position information of each facial feature point comprised in the first pair;
    obtaining second position information of a sixth symmetric midpoint of the second pair of symmetric facial feature points according to the second position information of each facial feature point comprised in the second pair;
    calculating a length of a sixth line segment formed by the fifth symmetric midpoint and the first facial feature point according to the second position information of the fifth symmetric midpoint and the second position information of the first facial feature point;
    calculating a length of a seventh line segment formed by the sixth symmetric midpoint and the first facial feature point according to the second position information of the sixth symmetric midpoint and the second position information of the first facial feature point; and
    establishing a correspondence between the preset face pitch angle and a first ratio of the sixth line segment to the seventh line segment.
  10. The method according to claim 9, wherein the establishing a correspondence between a line-segment ratio and the preset face rotation angle according to the second position information of the preset plurality of facial feature points further comprises:
    getting third position information, in a frontal face image of the face, of each facial feature point comprised in the first pair of symmetric facial feature points and of each facial feature point comprised in the second pair of symmetric facial feature points;
    obtaining third position information of a seventh symmetric midpoint of the first pair of symmetric facial feature points according to the third position information of each facial feature point comprised in the first pair;
    obtaining third position information of an eighth symmetric midpoint of the second pair of symmetric facial feature points according to the third position information of each facial feature point comprised in the second pair;
    calculating a length of an eighth line segment formed by the fifth symmetric midpoint and the sixth symmetric midpoint according to the second position information of the fifth symmetric midpoint and the second position information of the sixth symmetric midpoint;
    calculating a length of a ninth line segment formed by the seventh symmetric midpoint and the eighth symmetric midpoint according to the third position information of the seventh symmetric midpoint and the third position information of the eighth symmetric midpoint; and
    establishing a correspondence between the preset face pitch angle and a third ratio of the eighth line segment to the ninth line segment.
  11. The method according to claim 10, wherein the preset face rotation angle comprises a preset face side rotation angle, and
    the establishing a correspondence between a line-segment ratio and the preset face rotation angle according to the second position information of the preset plurality of facial feature points further comprises:
    calculating a second perpendicular distance from the first facial feature point to the eighth line segment according to the second position information of the fifth symmetric midpoint, the second position information of the sixth symmetric midpoint, and the second position information of the first facial feature point; and
    establishing a correspondence between the preset face side rotation angle and a second ratio of the second perpendicular distance to the ninth line segment.
  12. An apparatus for determining a face rotation angle, the apparatus comprising:
    a first acquisition module configured to get first position information of a preset plurality of facial feature points in a face image whose rotation angle is to be determined, wherein the number of the plurality of facial feature points is odd, the plurality of facial feature points comprises a plurality of pairs of symmetric facial feature points and one first facial feature point, and the plurality of facial feature points are not coplanar;
    a second acquisition module configured to obtain, for each pair of the plurality of pairs of facial feature points, first position information of a symmetric midpoint of the pair according to the first position information, acquired by the first acquisition module, of the facial feature points comprised in the pair; and
    a first determination module configured to determine the face rotation angle of the face image according to the first position information of the symmetric midpoint of each pair of facial feature points obtained by the second acquisition module and the first position information of the first facial feature point acquired by the first acquisition module.
  13. The apparatus according to claim 12, wherein the plurality of facial feature points comprises five facial feature points: a first pair of symmetric facial feature points, a second pair of symmetric facial feature points, and one first facial feature point.
  14. The apparatus according to claim 13, wherein the second acquisition module comprises:
    a first acquisition submodule configured to obtain first position information of a first symmetric midpoint of the first pair of symmetric facial feature points according to the first position information, acquired by the first acquisition module, of each facial feature point comprised in the first pair; and
    a second acquisition submodule configured to obtain first position information of a second symmetric midpoint of the second pair of symmetric facial feature points according to the first position information, acquired by the first acquisition module, of each facial feature point comprised in the second pair.
  15. The apparatus according to claim 14, wherein the first determination module comprises:
    a first calculation submodule configured to calculate a length of a first line segment formed by the first symmetric midpoint and the first facial feature point according to the first position information of the first symmetric midpoint obtained by the first acquisition submodule and the first position information of the first facial feature point acquired by the first acquisition module;
    a second calculation submodule configured to calculate a length of a second line segment formed by the second symmetric midpoint and the first facial feature point according to the first position information of the second symmetric midpoint obtained by the second acquisition submodule and the first position information of the first facial feature point acquired by the first acquisition module; and
    a third acquisition submodule configured to obtain a face pitch angle of the face image from a correspondence between first ratios and face pitch angles according to a first ratio of the length of the first line segment calculated by the first calculation submodule to the length of the second line segment calculated by the second calculation submodule.
  16. The apparatus according to claim 14, wherein the first determination module comprises:
    a third calculation submodule configured to calculate, according to the first position information of the first symmetric midpoint obtained by the first acquisition submodule, the first position information of the second symmetric midpoint obtained by the second acquisition submodule, and the first position information of the first facial feature point acquired by the first acquisition module, a first perpendicular distance from the first facial feature point to a third line segment and a length of the third line segment, the third line segment being formed by the first symmetric midpoint and the second symmetric midpoint;
    a fourth calculation submodule configured to calculate a length of a fourth line segment according to the length of the third line segment calculated by the third calculation submodule and the face pitch angle of the face image obtained by the third acquisition submodule, the fourth line segment being the line segment between a third symmetric midpoint and a fourth symmetric midpoint, the third symmetric midpoint being the symmetric midpoint of the first pair of facial feature points in a frontal face image, and the fourth symmetric midpoint being the symmetric midpoint of the second pair of facial feature points in the frontal face image; and
    a fourth acquisition submodule configured to obtain a face side rotation angle of the face image from a correspondence between second ratios and face side rotation angles according to a second ratio of the first perpendicular distance calculated by the third calculation submodule to the length of the fourth line segment calculated by the fourth calculation submodule.
  17. The apparatus according to claim 16, wherein the fourth calculation submodule comprises:
    an acquisition unit configured to obtain a corresponding third ratio from a correspondence between face pitch angles and third ratios according to the face pitch angle obtained by the third acquisition submodule; and
    a calculation unit configured to calculate the length of the fourth line segment according to the third ratio obtained by the acquisition unit and the length of the third line segment calculated by the third calculation submodule.
  18. The apparatus according to any one of claims 12 to 17, further comprising:
    a determination-and-calculation module configured to determine a fifth line segment according to the first position information of each facial feature point in any pair of symmetric facial feature points, and to calculate an included angle between the fifth line segment and a horizontal line to obtain a face roll angle of the face image.
  19. The apparatus according to any one of claims 13 to 17, further comprising:
    a third acquisition module configured to get second position information of the preset plurality of facial feature points in a first face image, the first face image being a face image captured after the face has been rotated by a preset face rotation angle; and
    an establishment module configured to establish a correspondence between a line-segment ratio and the preset face rotation angle according to the second position information of the preset plurality of facial feature points acquired by the third acquisition module.
  20. The apparatus according to claim 19, wherein the preset face rotation angle comprises a preset face pitch angle, and
    the establishment module comprises:
    a fifth acquisition submodule configured to obtain second position information of a fifth symmetric midpoint of the first pair of symmetric facial feature points according to the second position information, acquired by the third acquisition module, of each facial feature point comprised in the first pair;
    a sixth acquisition submodule configured to obtain second position information of a sixth symmetric midpoint of the second pair of symmetric facial feature points according to the second position information, acquired by the third acquisition module, of each facial feature point comprised in the second pair;
    a fifth calculation submodule configured to calculate a length of a sixth line segment formed by the fifth symmetric midpoint and the first facial feature point according to the second position information of the fifth symmetric midpoint obtained by the fifth acquisition submodule and the second position information of the first facial feature point acquired by the third acquisition module;
    a sixth calculation submodule configured to calculate a length of a seventh line segment formed by the sixth symmetric midpoint and the first facial feature point according to the second position information of the sixth symmetric midpoint obtained by the sixth acquisition submodule and the second position information of the first facial feature point acquired by the third acquisition module; and
    a first establishment submodule configured to establish a correspondence between the preset face pitch angle and a first ratio of the sixth line segment calculated by the fifth calculation submodule to the seventh line segment calculated by the sixth calculation submodule.
  21. The apparatus according to claim 20, wherein the establishment module further comprises:
    a seventh acquisition submodule configured to get third position information, in a frontal face image of the face, of each facial feature point comprised in the first pair of symmetric facial feature points and of each facial feature point comprised in the second pair of symmetric facial feature points;
    an eighth acquisition submodule configured to obtain third position information of a seventh symmetric midpoint of the first pair of symmetric facial feature points according to the third position information, acquired by the seventh acquisition submodule, of each facial feature point comprised in the first pair;
    a ninth acquisition submodule configured to obtain third position information of an eighth symmetric midpoint of the second pair of symmetric facial feature points according to the third position information, acquired by the seventh acquisition submodule, of each facial feature point comprised in the second pair;
    a seventh calculation submodule configured to calculate a length of an eighth line segment formed by the fifth symmetric midpoint and the sixth symmetric midpoint according to the second position information of the fifth symmetric midpoint obtained by the fifth acquisition submodule and the second position information of the sixth symmetric midpoint obtained by the sixth acquisition submodule;
    an eighth calculation submodule configured to calculate a length of a ninth line segment formed by the seventh symmetric midpoint and the eighth symmetric midpoint according to the third position information of the seventh symmetric midpoint obtained by the eighth acquisition submodule and the third position information of the eighth symmetric midpoint obtained by the ninth acquisition submodule; and
    a second establishment submodule configured to establish a correspondence between the preset face pitch angle and a third ratio of the eighth line segment calculated by the seventh calculation submodule to the ninth line segment calculated by the eighth calculation submodule.
  22. The apparatus according to claim 21, wherein the preset face rotation angle comprises a preset face side rotation angle, and
    the establishment module further comprises:
    a ninth calculation submodule configured to calculate a second perpendicular distance from the first facial feature point to the eighth line segment according to the second position information of the fifth symmetric midpoint obtained by the fifth acquisition submodule, the second position information of the sixth symmetric midpoint obtained by the sixth acquisition submodule, and the second position information of the first facial feature point acquired by the third acquisition module; and
    a third establishment submodule configured to establish a correspondence between the preset face side rotation angle and a second ratio of the second perpendicular distance calculated by the ninth calculation submodule to the ninth line segment calculated by the eighth calculation submodule.
  23. A computer storage medium storing computer-executable instructions for performing the method for determining a face rotation angle according to any one of claims 1 to 11.
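As an illustrative sketch only (not part of the claims), the roll computation of claim 7 and the perpendicular-distance step used by claims 5 and 11 reduce to elementary plane geometry; the sample points below are invented for demonstration:

```python
import math

def roll_angle(left_pt, right_pt):
    """Face roll: included angle, in degrees, between the segment joining a
    symmetric feature-point pair (e.g. the two pupils) and the horizontal line."""
    dx, dy = right_pt[0] - left_pt[0], right_pt[1] - left_pt[1]
    return math.degrees(math.atan2(dy, dx))

def perpendicular_distance(pt, a, b):
    """Perpendicular distance from pt to the line through a and b, e.g. from
    the first facial feature point to the segment joining the two symmetric
    midpoints."""
    (x0, y0), (x1, y1), (x2, y2) = pt, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return num / math.hypot(x2 - x1, y2 - y1)

# A level pupil pair gives zero roll; a pair tilted by 45 degrees gives 45.
r0 = roll_angle((0, 0), (10, 0))
r45 = roll_angle((0, 0), (10, 10))
# Nose tip offset 5 pixels from a vertical midpoint segment at x = 0.
d = perpendicular_distance((5, 30), (0, 0), (0, 60))
```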
PCT/CN2017/070607 2016-01-21 2017-01-09 Method and apparatus for determining face rotation angle, and computer storage medium WO2017124929A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP17740960.4A EP3407245B1 (en) 2016-01-21 2017-01-09 Method and device for determining rotation angle of human face, and computer-readable storage medium
KR1020187015392A KR102144489B1 (ko) 2016-01-21 2017-01-09 인간 얼굴의 회전 각도를 결정하기 위한 방법 및 디바이스, 및 컴퓨터 저장 매체
JP2018527759A JP6668475B2 (ja) 2016-01-21 2017-01-09 人の顔の回転角を決定するための方法およびデバイス、ならびにコンピュータ記憶媒体
US15/944,656 US10713812B2 (en) 2016-01-21 2018-04-03 Method and apparatus for determining facial pose angle, and computer storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610041938.7A CN106991367B (zh) 2016-01-21 2016-01-21 确定人脸转动角度的方法和装置
CN201610041938.7 2016-01-21

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/944,656 Continuation-In-Part US10713812B2 (en) 2016-01-21 2018-04-03 Method and apparatus for determining facial pose angle, and computer storage medium

Publications (1)

Publication Number Publication Date
WO2017124929A1 true WO2017124929A1 (zh) 2017-07-27

Family

ID=59361342

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/070607 WO2017124929A1 (zh) 2016-01-21 2017-01-09 确定人脸转动角度的方法、装置及计算机存储介质

Country Status (6)

Country Link
US (1) US10713812B2 (zh)
EP (1) EP3407245B1 (zh)
JP (1) JP6668475B2 (zh)
KR (1) KR102144489B1 (zh)
CN (1) CN106991367B (zh)
WO (1) WO2017124929A1 (zh)


Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6304999B2 (ja) * 2013-10-09 2018-04-04 アイシン精機株式会社 顔検出装置、方法およびプログラム
WO2018033137A1 (zh) * 2016-08-19 2018-02-22 北京市商汤科技开发有限公司 在视频图像中展示业务对象的方法、装置和电子设备
CN109389018B (zh) * 2017-08-14 2020-12-25 杭州海康威视数字技术股份有限公司 一种人脸角度识别方法、装置及设备
CN107679446B (zh) * 2017-08-17 2019-03-15 平安科技(深圳)有限公司 人脸姿态检测方法、装置及存储介质
CN109086727B (zh) * 2018-08-10 2021-04-30 北京奇艺世纪科技有限公司 一种确定人体头部的运动角度的方法、装置及电子设备
CN109165606B (zh) * 2018-08-29 2019-12-17 腾讯科技(深圳)有限公司 一种车辆信息的获取方法、装置以及存储介质
CN110909568A (zh) * 2018-09-17 2020-03-24 北京京东尚科信息技术有限公司 用于面部识别的图像检测方法、装置、电子设备及介质
CN111460870A (zh) * 2019-01-18 2020-07-28 北京市商汤科技开发有限公司 目标的朝向确定方法及装置、电子设备及存储介质
CN109949237A (zh) * 2019-03-06 2019-06-28 北京市商汤科技开发有限公司 图像处理方法及装置、图像设备及存储介质
CN110032941B (zh) * 2019-03-15 2022-06-17 深圳英飞拓科技股份有限公司 人脸图像检测方法、人脸图像检测装置及终端设备
US11120569B2 (en) * 2019-06-24 2021-09-14 Synaptics Incorporated Head pose estimation
CN110879986A (zh) * 2019-11-21 2020-03-13 上海眼控科技股份有限公司 人脸识别的方法、设备和计算机可读存储介质
CN113011230A (zh) * 2019-12-20 2021-06-22 杭州萤石软件有限公司 一种人脸偏转角度的确定方法及装置
CN111219940B (zh) * 2019-12-26 2023-09-22 青岛海尔智能技术研发有限公司 冰箱内灯光控制的方法及装置、冰箱
CN111444775B (zh) * 2020-03-03 2023-10-27 平安科技(深圳)有限公司 人脸关键点修正方法、装置以及计算机设备
DE102020206350A1 (de) * 2020-05-20 2022-01-27 Robert Bosch Gesellschaft mit beschränkter Haftung Verfahren zur Detektion von Vergleichspersonen zu einer Suchperson, Überwachungsanordnung, insbesondere zur Umsetzung des Verfahrens, sowie Computerprogramm und computerlesbares Medium
CN113705280B (zh) * 2020-05-21 2024-05-10 北京聚匠艺传媒有限公司 一种基于面部特征的人机交互方法及装置
CN113703564A (zh) * 2020-05-21 2021-11-26 北京聚匠艺传媒有限公司 一种基于面部特征的人机交互设备及***
CN111914783A (zh) * 2020-08-10 2020-11-10 深圳市视美泰技术股份有限公司 人脸偏转角度的确定方法、装置、计算机设备及介质
CN112069954B (zh) * 2020-08-26 2023-12-19 武汉普利商用机器有限公司 一种活体微表情检测方法及***
CN112183421A (zh) * 2020-10-09 2021-01-05 江苏提米智能科技有限公司 一种人脸图像评估方法、装置、电子设备及存储介质
CN112932407B (zh) * 2021-01-29 2022-11-15 上海市内分泌代谢病研究所 一种面部正面校准方法及***
JP7481398B2 (ja) 2022-07-04 2024-05-10 ソフトバンク株式会社 判定装置、プログラム、及び判定方法
CN117876494B (zh) * 2024-03-13 2024-05-10 东莞莱姆森科技建材有限公司 一种基于镜面角度动态调节的目标区域观察方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090141947A1 (en) * 2007-11-29 2009-06-04 Volodymyr Kyyko Method and system of person identification by facial image
CN102156537A (zh) * 2010-02-11 2011-08-17 三星电子株式会社 一种头部姿态检测设备及方法
CN103558910A (zh) * 2013-10-17 2014-02-05 北京理工大学 一种自动跟踪头部姿态的智能显示器***
CN103605965A (zh) * 2013-11-25 2014-02-26 苏州大学 一种多姿态人脸识别方法和装置

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040070565A1 (en) * 2001-12-05 2004-04-15 Nayar Shree K Method and apparatus for displaying images
US7050607B2 (en) * 2001-12-08 2006-05-23 Microsoft Corp. System and method for multi-view face detection
JP4708948B2 (ja) * 2005-10-03 2011-06-22 富士フイルム株式会社 顔向き特定方法、顔判別方法および装置並びにプログラム
JP2007241478A (ja) * 2006-03-06 2007-09-20 Fuji Xerox Co Ltd 画像処理装置、画像処理装置の制御方法及びプログラム
JP4341696B2 (ja) * 2007-05-18 2009-10-07 カシオ計算機株式会社 撮像装置、顔領域検出プログラム、及び、顔領域検出方法
JP2009245338A (ja) * 2008-03-31 2009-10-22 Secom Co Ltd 顔画像照合装置
KR101694820B1 (ko) * 2010-05-07 2017-01-23 삼성전자주식회사 사용자 위치 인식 방법 및 장치
US9619105B1 (en) * 2014-01-30 2017-04-11 Aquifi, Inc. Systems and methods for gesture based interaction with viewpoint dependent user interfaces
US9928372B2 (en) * 2015-10-23 2018-03-27 Paypal, Inc. Selective screen privacy


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107679497A (zh) * 2017-10-11 2018-02-09 齐鲁工业大学 视频面部贴图特效处理方法及生成***
CN107679497B (zh) * 2017-10-11 2023-06-27 山东新睿信息科技有限公司 视频面部贴图特效处理方法及生成***
EP3477542A1 (en) * 2017-10-31 2019-05-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd Method and apparatus for image processing, and computer-readable storage medium
US10929646B2 (en) 2017-10-31 2021-02-23 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for image processing, and computer-readable storage medium
CN110059530A (zh) * 2017-12-11 2019-07-26 欧姆龙株式会社 面部位置检测装置
CN110059530B (zh) * 2017-12-11 2023-07-04 欧姆龙株式会社 面部位置检测装置
WO2019205009A1 (en) * 2018-04-25 2019-10-31 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for identifying a body motion
US10997722B2 (en) 2018-04-25 2021-05-04 Beijing Didi Infinity Technology And Development Co., Ltd. Systems and methods for identifying a body motion
CN112395949A (zh) * 2020-10-21 2021-02-23 天津中科智能识别产业技术研究院有限公司 多目标人群的虹膜图像获取装置与方法
CN112395949B (zh) * 2020-10-21 2023-06-20 天津中科智能识别产业技术研究院有限公司 多目标人群的虹膜图像获取装置与方法

Also Published As

Publication number Publication date
JP6668475B2 (ja) 2020-03-18
EP3407245B1 (en) 2023-09-20
JP2018537781A (ja) 2018-12-20
EP3407245A1 (en) 2018-11-28
CN106991367A (zh) 2017-07-28
KR20180079399A (ko) 2018-07-10
US20180225842A1 (en) 2018-08-09
CN106991367B (zh) 2019-03-19
KR102144489B1 (ko) 2020-08-14
EP3407245A4 (en) 2019-01-16
US10713812B2 (en) 2020-07-14

Similar Documents

Publication Publication Date Title
WO2017124929A1 (zh) Method and apparatus for determining face rotation angle, and computer storage medium
EP3965003A1 (en) Image processing method and device
WO2016188318A1 (zh) 一种3d人脸重建方法、装置及服务器
US11715224B2 (en) Three-dimensional object reconstruction method and apparatus
CN108985220B (zh) 一种人脸图像处理方法、装置及存储介质
TWI476633B (zh) 傳輸觸覺資訊的系統和方法
CN109947886B (zh) 图像处理方法、装置、电子设备及存储介质
KR101524575B1 (ko) 웨어러블 디바이스
US20170039761A1 (en) Image Processing Method And Apparatus
CN109685915B (zh) 一种图像处理方法、装置及移动终端
WO2016184276A1 (zh) 一种人脸关键点位定位结果的评估方法,及评估装置
CN107833178A (zh) 一种图像处理方法、装置及移动终端
WO2019233216A1 (zh) 一种手势动作的识别方法、装置以及设备
CN107255813A (zh) 基于3d技术的测距方法、移动终端、及存储介质
WO2022222658A1 (zh) 一种凹槽深度测量方法、装置、***及激光测量设备
WO2020108041A1 (zh) 耳部关键点检测方法、装置及存储介质
CN107562288A (zh) 基于红外触控装置的响应方法、红外触控装置和介质
AU2020263183A1 (en) Parameter Obtaining Method and Terminal Device
CN107248137A (zh) 一种实现图像处理的方法及移动终端
CN111091519B (zh) 一种图像处理方法及装置
CN108881544A (zh) 一种拍照的方法及移动终端
WO2023016372A1 (zh) 控制方法、装置、电子设备和存储介质
CN111158478B (zh) 响应方法及电子设备
CN110717964B (zh) 场景建模方法、终端及可读存储介质
CN110152293A (zh) 操控对象的定位方法及装置、游戏对象的定位方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17740960

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2018527759

Country of ref document: JP

ENP Entry into the national phase

Ref document number: 20187015392

Country of ref document: KR

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE