CN110472459B - Method and device for extracting feature points


Info

Publication number
CN110472459B
Authority
CN
China
Prior art keywords
feature point
point set
face
feature
user
Prior art date
Legal status
Active
Application number
CN201810451370.5A
Other languages
Chinese (zh)
Other versions
CN110472459A (en)
Inventor
丁欣
李国良
郜文美
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201810451370.5A
Publication of CN110472459A
Application granted
Publication of CN110472459B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

A method and a device for extracting feature points are provided. The application provides a method, an apparatus and a mobile terminal for user identification, including: determining a first judgment model according to a first feature point set corresponding to the face of a first user, where the first feature point set includes part of the feature points in a second feature point set and the second feature point set is the set of all extractable feature points of the face of the first user; and determining, according to first positions of the feature points in the first feature point set on the face of the first user, a third feature point set corresponding to a recognition object, where second positions of the feature points in the third feature point set on the face of the recognition object correspond to the first positions.

Description

Method and device for extracting feature points
Technical Field
The present application relates to the field of terminals, and in particular, to a method and an apparatus for extracting feature points in the field of terminals, and a mobile terminal.
Background
Face feature point technology detects the organ positions and contour information of a face based on a person's facial features and extracts the features contained in the face. Specifically, it is a technology for acquiring the positions of important feature points such as the eyes, nose, mouth corners and eyebrows, as well as contour points of each part of the face. The technology is very widely used and can be applied to operations such as face recognition, face alignment, expression transformation and beautification. Face recognition compares detected feature point information with existing face feature point information and judges whether it corresponds to a given face, thereby recognizing an identity. Face alignment is a modeling transformation from the face variation of one person to the face variation of another person. Expression transformation performs digital transformation according to the facial feature points observed under different expressions. Beautification partitions the face according to the facial feature points and then performs further beautifying operations on the resulting regions. In addition, current skin health monitoring also needs a 'face feature point' scheme to divide the face into regions for subsequent skin detection.
Different application scenarios place different requirements on the detection precision and detection speed of face feature point technology. In general, the more feature points a detection scheme includes, the more detailed the face contour model that can be established and the more accurate the result, but the more complex the algorithm and the slower the recognition. Conversely, the fewer the feature points, the simpler the algorithm and the faster the recognition, but the lower the recognition rate. The number of feature points therefore has a great influence on the final detection performance, and in scenes with high real-time requirements the contradiction between the accuracy and the speed of face feature point detection is especially prominent.
Disclosure of Invention
The application provides a method and an apparatus for extracting feature points, and a mobile terminal, which can obtain results more quickly while ensuring detection precision.
In a first aspect, a method for extracting feature points is provided, comprising: determining a first judgment model according to a first feature point set corresponding to the face of a first user, where the first feature point set includes part of the feature points in a second feature point set and the second feature point set is the set of all extractable feature points of the face of the first user; and determining a third feature point set corresponding to a recognition object according to first positions of the feature points in the first feature point set on the face of the first user, where the third feature point set includes part of the feature points in a fourth feature point set, the fourth feature point set is the set of all extractable feature points of the face of the recognition object, and second positions of the feature points in the third feature point set on the face of the recognition object correspond to the first positions.
Through this technical solution, compared with the conventional 'average face' general model, the trained model is specific to the user's personal feature points, and negative factors that affect model training can be effectively avoided. Compared with existing schemes for rapidly extracting feature points, comparing the low-match model against the standard model allows a result to be obtained more quickly for different application scenarios while detection precision is ensured. In addition, in some detection scenarios, when certain target regions cannot be properly detected because of abnormal conditions, a result can still be obtained by restoring the standard model from the detected feature points, so that the division of the user's face into regions remains reasonable and effective and results that harm the user experience do not occur.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: judging whether the recognition object is the first user according to the degree of matching between the third feature point set and the first judgment model.
With reference to the first aspect and the foregoing implementations, in some implementations of the first aspect, the method further includes: determining a second judgment model according to the second feature point set corresponding to the face of the first user;
when the degree of matching between the third feature point set and the first judgment model is higher than a preset first threshold, determining a fifth feature point set corresponding to the recognition object according to the feature points in the second feature point set other than those in the first feature point set;
and judging whether the recognition object is the first user according to the degree of matching between the fifth feature point set and the second judgment model.
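The two-stage check described above can be illustrated with a short sketch. This is only an illustration under assumed conditions: the feature point sets are taken to be arrays of (x, y) coordinates, the matching degree is a simple placeholder metric, and the function names and threshold values are not taken from this application.

```python
import numpy as np

def matching_degree(points, model_points):
    # Illustrative matching degree: 1 / (1 + mean point-to-point distance).
    # The application does not fix a particular metric; this is a placeholder.
    d = np.linalg.norm(points - model_points, axis=1).mean()
    return 1.0 / (1.0 + d)

def identify_user(third_set, fifth_set, first_model, second_model,
                  first_threshold=0.8, second_threshold=0.9):
    """Two-stage check: the fast low-match (first) model screens first, and only
    when its matching degree exceeds the first threshold are the remaining
    points compared against the user's standard (second) model."""
    # Stage 1: few feature points, fast detection, cheap rejection
    if matching_degree(third_set, first_model) <= first_threshold:
        return False
    # Stage 2: the remaining feature points against the standard model
    return matching_degree(fifth_set, second_model) > second_threshold
```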
Specifically, when the first user initializes the mobile phone, the first user is prompted to take pictures under specific conditions, for example good brightness and a moderate distance, to generate a user-specific face feature point model, similar to how a matching model is obtained for face unlocking or fingerprint unlocking. This model is based on the highest-precision feature point model; its algorithm is time-consuming, but it obtains as much feature point information as possible, and it is referred to as the 'standard model'.
Optionally, one specific way to generate the 'standard model' is to align the feature points of several pictures of the user and filter out the influence of position offset, size and rotation, thereby obtaining the user's standard model. It should be understood that the way the user model is obtained is still the same in nature as obtaining a model with a general algorithm, i.e. an average model is obtained from samples; the difference is that the samples selected in the scheme provided in this embodiment of the application use data from the mobile phone's user (i.e., the first user) only, so the trained model can be regarded as 'overfitted' in the usual sense and is suitable only for detecting this user (i.e., the first user).
After the "standard model" (i.e. the second judgment model) of the first user is generated, some feature points are selected from the "standard template" according to the requirements of different scenes on the accuracy and the detection real-time performance of the feature points of the face region, and the "low-match model" (i.e. the first judgment model) of the corresponding feature points is trained respectively. As shown in fig. 4, the set of facial feature points is shown in fig. 4, and 68 facial feature points are shown in fig. 4, then the set including all 68 feature points is the "second feature point set" defined above, and the set including any feature points smaller than 68 is the "first feature point set" defined above. There are many possibilities for the first feature point set, including, for example, 30 feature points, 46 feature points, and so on.
The criterion for selecting the feature points of the low-match model is determined by the detection real-time performance and by the feasibility of restoration from those feature points: the average time for detection with the 'low-match model' is calculated, and the number of points is made small enough that the average detection time meets the real-time requirement while the standard model can still be restored from the detected points.
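As a rough sketch of this selection criterion, the candidate subsets can be timed and the largest subset that still meets the real-time budget retained. The detector interface and data structures below are assumptions made only for illustration.

```python
import time

def average_detection_time(detect, image_samples):
    # Average wall-clock time of one detector over a few sample frames
    start = time.perf_counter()
    for img in image_samples:
        detect(img)
    return (time.perf_counter() - start) / len(image_samples)

def choose_point_count(candidate_detectors, image_samples, time_budget_s):
    """Pick the largest feature point subset (e.g. 46 points, then 30, ...)
    whose average detection time still meets the scene's real-time budget.

    candidate_detectors: list of (num_points, detect_fn) pairs sorted from
    the most points to the fewest (an assumed structure).
    """
    for num_points, detect in candidate_detectors:
        if average_detection_time(detect, image_samples) <= time_budget_s:
            return num_points, detect
    raise RuntimeError("no candidate subset meets the real-time requirement")
```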
With reference to the first aspect and the foregoing implementation manners, in some possible implementation manners, when the degree of matching between the third feature point set and the first determination model is higher than a preset first threshold, the method further includes: determining a third feature point set of the recognition object according to the first judgment model; determining a fifth feature point set of the recognition object according to the third position of the feature point in the third feature point set on the face of the recognition object; and restoring a fourth feature point set of the identification object according to the third feature point set and the fifth feature point set.
With reference to the first aspect and the foregoing implementation manners, in some possible implementation manners, the method further includes: determining a third feature point set of the recognition object according to the first judgment model; dividing the face region of the recognition object according to the feature points included in the third feature point set, and determining a first region in the face region, wherein the first region is a region of the face of the recognition object to be subjected to image processing; and performing image processing operation on the first area.
With reference to the first aspect and the foregoing implementation manners, in some possible implementation manners, before determining the first determination model according to the first feature point set corresponding to the face of the first user, the method further includes: a first feature point set corresponding to the face of the first user is determined.
With reference to the first aspect and the foregoing implementation manners, in some possible implementation manners, the determining a first feature point set corresponding to the face of the first user includes:
and determining the first feature point set according to the average time of extracting the feature points in the first feature point set corresponding to the face of the first user.
For example, several numbers of feature points are selected from the 68-point standard model shown in fig. 4, such as a model with 30 feature points and a model with 46 feature points. The detection times for extracting 30 feature points and for extracting 46 feature points are recorded, and whether these detection times meet the real-time requirement is judged according to the current detection scene. If the detection time for 46 feature points is longer than the detection time required by the current scene, fewer than 46 feature points are selected, and the numbers of feature points that meet the current real-time requirement are checked in turn.
After the number of feature points is determined, the 'low-match model' is trained using the selected feature points (e.g., 30 feature points), and the constraint relationship between the 'low-match model' and the 'standard model' is recorded. During subsequent fast extraction, the corresponding points of the 'low-match model' and the 'standard model' are aligned, and the other feature points in the 'standard model' are then restored according to the positional relationship and the recorded constraint relationship (an illustrative restoration sketch is given after these alternatives).
Alternatively, the first feature point set is determined according to a second region of the face of the first user, where the second region is a region of the face of the recognition object to be subjected to image processing.
Optionally, the terminal device determines a third feature point set of the recognition object according to the first determination model; dividing the face area of the recognition object according to the feature points included in the third feature point set, and determining a first area in the face area, wherein the first area is an area of the face of the recognition object to be subjected to image processing; and performing image processing operation on the first area.
Or extracting partial feature points at intervals from the feature points included in the second feature point set corresponding to the face of the first user, and determining the extracted partial feature points as the first feature point set.
For example, a 34-point overall contour model is obtained by selecting feature points at equal intervals from the 68-point standard model, such as taking one point out of every two, for example feature point 1, feature point 3, feature point 5, and so on. The low-match model is trained using the selected feature points, and the constraint relationship between the low-match model and the standard model is recorded. During subsequent fast extraction, the corresponding points of the low-match model and the standard model are aligned, and the other feature points in the standard model are then restored according to the equal-interval positional relationship and the recorded constraint relationship.
Alternatively, contour feature points of the facial features of the first user are extracted from the feature points included in the second feature point set corresponding to the face of the first user, and the extracted feature points are determined as the first feature point set.
For example, if a certain stage of an application needs only part of the face information, such as information on the facial features (eyes, nose, mouth, and so on), a 41-point facial-feature contour model composed of those feature points can be selected. The low-match model is trained using the selected feature points, and the constraint relationship between the low-match model and the standard model is recorded. If the face contour information is needed later, a complete detection is not required: the corresponding points of the low-match model and the standard model are aligned, and the other feature points in the standard model are then restored according to the positional relationship of the facial features and the recorded constraint relationship. This process is illustrated in fig. 5.
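The restoration step mentioned in the alternatives above can be sketched as follows. The sketch assumes the 'constraint relationship' is stored as the standard-model coordinates plus the indices covered by the low-match model, and that alignment is a least-squares similarity transform; these are illustrative assumptions rather than the prescribed implementation.

```python
import numpy as np

def similarity_fit(src, dst):
    """Least-squares similarity transform (scale + rotation + translation)
    mapping src points onto dst points, written with complex numbers.
    An assumed alignment method, used only for illustration."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    a = (src[:, 0] - src_c[0]) + 1j * (src[:, 1] - src_c[1])
    b = (dst[:, 0] - dst_c[0]) + 1j * (dst[:, 1] - dst_c[1])
    m = (np.conj(a) @ b) / (np.conj(a) @ a)   # complex scale-and-rotation factor

    def apply(points):
        p = (points[:, 0] - src_c[0]) + 1j * (points[:, 1] - src_c[1])
        q = p * m
        return np.stack([q.real + dst_c[0], q.imag + dst_c[1]], axis=1)

    return apply

# Recorded at training time (the "constraint relationship"): the user's standard
# model and the indices of the points kept in the low-match model. The standard
# model below is only a non-degenerate placeholder shape.
_theta = np.linspace(0.0, 2.0 * np.pi, 68, endpoint=False)
standard_model = np.stack([np.cos(_theta), np.sin(_theta)], axis=1)
low_match_idx = np.arange(1, 68, 2)           # e.g. points 1, 3, 5, ... (34 points)

def restore_standard_points(detected_low_match):
    """Align the stored standard model onto a fast low-match detection and
    restore the feature points that were not detected."""
    to_image = similarity_fit(standard_model[low_match_idx], detected_low_match)
    restored = to_image(standard_model)            # map every stored point
    restored[low_match_idx] = detected_low_match   # keep the measured points
    return restored
```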
With reference to the first aspect and the foregoing implementation manners, in some possible implementation manners, the feature points in the first feature point set corresponding to the face of the first user include feature information at different offset angles.
Because the face pose varies considerably in practical applications, pitch deflection, horizontal deflection and the like make feature point detection more difficult; for example, some feature points may be hidden and the geometric relationships between feature points change. Using only a single standard model is therefore limited. Optionally, the feature points of the face of the first user may be calibrated at multiple angles, that is, the feature points in the first feature point set corresponding to the face of the first user include feature information at different offset angles.
Considering that the user may have different usage habits and several habitual postures, the user's face poses can be divided into several classes, and a corresponding standard model is trained for each habitual posture. A similarity measure for model detection is defined; during detection, the similarity between each model and the detected feature points is calculated, and the model with the highest similarity is selected as the detection result.
In a second aspect, an apparatus for extracting feature points is provided, including:
a determining unit, configured to determine a first decision model according to a first feature point set corresponding to a face of a first user, where the first feature point set includes part of feature points in a second feature point set, and the second feature point set is a set of all extractable feature points of the face of the first user;
the determining unit is further configured to determine a third feature point set corresponding to the recognition object according to a first position of a feature point in the first feature point set on the face of the first user, where the third feature point set includes a part of feature points in a fourth feature point set, the fourth feature point set is a set of all extractable feature points of the face of the recognition object, and a second position of the feature point in the third feature point set on the face of the recognition object corresponds to the first position.
With reference to the second aspect, in some possible implementation manners, the apparatus further includes a judging unit, configured to judge whether the recognition object is the first user according to the degree of matching between the third feature point set and the first determination model.
With reference to the second aspect and the foregoing implementation manners, in some possible implementation manners, the determining unit is further configured to determine a second determination model according to the second feature point set corresponding to the face of the first user; and
when the judging unit judges that the matching degree of the third feature point set and the first judging model is higher than a preset first threshold value, the determining unit determines a fifth feature point set corresponding to the recognition object according to the feature points in the second feature point set except the first feature point set;
the determination unit determines whether the recognition object is the first user according to the matching degree between the fifth feature point set and the second determination model.
With reference to the second aspect and the foregoing implementation manners, in some possible implementation manners, when the determining unit determines that the degree of matching between the third feature point set and the first determination model is higher than a preset first threshold, the determining unit is further configured to:
determining a third feature point set of the recognition object according to the first judgment model;
determining a fifth feature point set of the recognition object according to the third position of the feature point in the third feature point set on the face of the recognition object;
and restoring a fourth feature point set of the identification object according to the third feature point set and the fifth feature point set.
With reference to the second aspect and the foregoing implementations, in some possible implementations, the determining unit is further configured to:
determining a third feature point set of the recognition object according to the first judgment model;
dividing the face area of the recognition object according to the feature points included in the third feature point set, and determining a first area in the face area, wherein the first area is an area to be subjected to image processing on the face of the recognition object;
and performing image processing operation on the first area.
With reference to the second aspect and the foregoing implementation manners, in some possible implementation manners, before the determining unit determines the first determination model according to the first feature point set corresponding to the face of the first user, the determining unit is further configured to:
a first feature point set corresponding to the face of the first user is determined.
With reference to the second aspect and the foregoing implementation manners, in some possible implementation manners, the determining unit determines the first feature point set corresponding to the face of the first user, and specifically includes:
the determining unit determines a first feature point set according to the average time of extracting feature points in the first feature point set corresponding to the face of the first user; or
The determination unit determines the first feature point set according to a second region of the face of the first user, the second region being a region of the face of the recognition object to be subjected to image processing; or
Extracting partial feature points at intervals from among feature points included in a second feature point set corresponding to the face of the first user, the determining unit determining the extracted partial feature points as the first feature point set; or
Extracting facial feature points of the facial features of the first user from feature points included in a second feature point set corresponding to the facial feature of the first user, and determining the facial feature points of the facial features as the first feature point set by the determining unit.
With reference to the second aspect and the foregoing implementation manners, in some possible implementation manners, the feature points in the first feature point set corresponding to the face of the first user include feature information at different offset angles.
In a third aspect, an apparatus is provided, comprising: a processor, coupled to the memory, executes the instructions in the memory to implement the method as in the first aspect and any one of the possible implementations of the first aspect. Optionally, the apparatus further comprises a memory for storing program instructions and data.
In a fourth aspect, there is provided a computer program product comprising: computer program code for causing a computer to perform the first aspect as well as any one of the possible methods of the first aspect described above, when the computer program code runs on a computer.
In a fifth aspect, a computer-readable medium is provided, which stores program code, which, when run on a computer, causes the computer to perform the first aspect and any one of the possible methods of the first aspect.
A sixth aspect provides a chip system, which comprises a processor for enabling a terminal device to implement the first aspect and any one of the possible aspects of the first aspect, for example, to obtain, determine, or process data and/or information involved in the method. In one possible design, the system-on-chip further includes a memory for storing program instructions and data necessary for the terminal device. The chip system may be formed by a chip, and may also include a chip and other discrete devices.
Drawings
Fig. 1 is a schematic diagram of an example of a terminal device to which the method of extracting feature points according to the present application is applied.
Fig. 2 is a schematic flowchart of an example of a method for extracting feature points according to an embodiment of the present application.
Fig. 3 is a schematic diagram of an example of establishing a decision model according to an embodiment of the present application.
Fig. 4 is a schematic diagram of feature points of a human face in the user identification method according to the present application.
Fig. 5 is a schematic diagram of another example of establishing a decision model according to the embodiment of the present application.
Fig. 6 is a schematic diagram of another example of a process for extracting feature points according to an embodiment of the present application.
Fig. 7 is a schematic diagram of another example of a process for extracting feature points according to an embodiment of the present application.
Fig. 8 is a schematic block diagram of an example of the apparatus for extracting feature points according to the present application.
Detailed Description
The technical solution in the present application will be described below with reference to the accompanying drawings.
The user identification method of the present application can be applied to identifying the user of a terminal device. A terminal device may also be called user equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a remote terminal, a mobile device, a user terminal, a wireless communication device, a user agent, or a user apparatus. The terminal device may be a station (ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA) device, a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, a vehicle-mounted device, an Internet-of-Vehicles terminal, a computer, a laptop computer, a handheld communication device, a handheld computing device, a satellite radio device, a wireless modem card, a set top box (STB), customer premises equipment (CPE), and/or another device for communicating over a wireless system, as well as a terminal device in a next generation communication system, for example a terminal device in a 5G network or in a future evolved public land mobile network (PLMN).
By way of example and not limitation, in the embodiments of the present application the terminal device may also be a wearable device. A wearable device, also called a wearable smart device, is the general term for devices designed with wearable technology for daily wear, such as glasses, gloves, watches, clothing and shoes. A wearable device may be worn directly on the body or may be a portable device integrated into the user's clothing or an accessory. A wearable device is not only a hardware device; it also provides powerful functions through software support, data exchange and cloud interaction. In a broad sense, wearable smart devices include full-featured, large-size devices that can provide complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, and devices that focus on only one type of application function and need to be used together with other devices such as a smartphone, for example various smart bands and smart jewelry for monitoring physical signs.
In addition, in the embodiments of the present application, the terminal device may also be a terminal device in an Internet of Things (IoT) system. The IoT is an important component of future information technology development, and its main technical feature is connecting things to a network through communication technology, thereby implementing an intelligent network of interconnected people and machines and interconnected things.
Fig. 1 shows a schematic diagram of an example of the terminal device, and as shown in fig. 1, the terminal device 100 may include the following components.
A. RF circuit 110
The RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, downlink information received from a base station is delivered to the processor 180 for processing, and uplink data is transmitted to the base station. Typically, the RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 110 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to wireless local area network (WLAN), Global System for Mobile Communications (GSM), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), General Packet Radio Service (GPRS), Long Term Evolution (LTE), LTE frequency division duplex (FDD), LTE time division duplex (TDD), Universal Mobile Telecommunications System (UMTS), Worldwide Interoperability for Microwave Access (WiMAX), New Radio (NR), and the like.
B. Memory 120
The memory 120 may be used to store software programs and modules, and the processor 180 executes various functional applications and data processing of the terminal device 100 by running the software programs and modules stored in the memory 120. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal device 100, and the like. Further, the memory 120 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid state storage device.
C. Other input devices 130
The other input device 130 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device 100. In particular, other input devices 130 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, a light mouse (a light mouse is a touch-sensitive surface that does not display visual output, or is an extension of a touch-sensitive surface formed by a touch screen), and the like. The other input devices 130 are connected to other input device controllers 171 of the I/O subsystem 170 and are in signal communication with the processor 180 under the control of the other input device controllers 171.
D. Display screen 140
The display screen 140 may be used to display information input by or provided to the user and various menus of the terminal device 100, and may also accept user input. The display screen 140 may include a display panel 141 and a touch panel 142. The Display panel 141 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. The touch panel 142, also referred to as a touch screen, a touch sensitive screen, etc., may collect contact or non-contact operations (for example, operations performed by a user on or near the touch panel 142 using any suitable object or accessory such as a finger or a stylus, and may also include body sensing operations; the operations include single-point control operations, multi-point control operations, etc.) and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 142 may include two parts, i.e., a touch detection device and a touch controller. The touch detection device detects the touch direction and gesture of a user, detects signals brought by touch operation and transmits the signals to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into information that can be processed by the processor, sends the information to the processor 180, and receives and executes commands sent by the processor 180. In addition, the touch panel 142 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, a surface acoustic wave, and the like, and the touch panel 142 may also be implemented by any technology developed in the future. Further, the touch panel 142 may cover the display panel 141, a user may operate on or near the touch panel 142 covered on the display panel 141 according to the content displayed on the display panel 141 (the display content includes, but is not limited to, a soft keyboard, a virtual mouse, virtual keys, icons, etc.), the touch panel 142 detects the operation on or near the touch panel 142, and transmits the operation to the processor 180 through the I/O subsystem 170 to determine a user input, and then the processor 180 provides a corresponding visual output on the display panel 141 through the I/O subsystem 170 according to the user input. Although in fig. 4, the touch panel 142 and the display panel 141 are two separate components to implement the input and output functions of the terminal device 100, in some embodiments, the touch panel 142 and the display panel 141 may be integrated to implement the input and output functions of the terminal device 100.
E. Sensor 150
There may be one or more sensors 150, which may include, for example, a light sensor, a motion sensor, and other sensors.
Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 141 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 141 and/or the backlight when the terminal device 100 is moved to the ear.
As one of the motion sensors, the acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used for applications (such as horizontal and vertical screen switching, related games, magnetometer attitude calibration), vibration recognition related functions (such as pedometer and tapping), and the like, for recognizing the attitude of the terminal device.
In addition, the terminal device 100 may further configure other sensors such as a gravity sensor (also referred to as a gravity sensor), a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described in detail herein.
F. Audio circuit 160, speaker 161, microphone 162
The audio circuit 160, the speaker 161 and the microphone 162 may provide an audio interface between the user and the terminal device 100. The audio circuit 160 may transmit a signal converted from received audio data to the speaker 161, which converts it into a sound signal for output; conversely, the microphone 162 converts collected sound signals into electrical signals, which are received by the audio circuit 160 and converted into audio data, and the audio data are then output to the RF circuit 110 for transmission to, for example, another terminal device, or output to the memory 120 for further processing.
G.I/O subsystem 170
The I/O subsystem 170 controls input and output of external devices, which may include other devices, an input controller 171, a sensor controller 172, and a display controller 173. Optionally, one or more other input control device controllers 171 receive signals from and/or transmit signals to other input devices 130, and other input devices 130 may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels, a light mouse (a light mouse is a touch-sensitive surface that does not display visual output, or is an extension of a touch-sensitive surface formed by a touch screen). It is noted that other input control device controllers 171 may be connected to any one or more of the above-described devices. The display controller 173 in the I/O subsystem 170 receives signals from the display screen 140 and/or sends signals to the display screen 140. After the display screen 140 detects the user input, the display controller 173 converts the detected user input into an interaction with the user interface object displayed on the display screen 140, i.e., implements a human-computer interaction. The sensor controller 172 may receive signals from one or more sensors 150 and/or transmit signals to one or more sensors 150.
H. Processor 180
The processor 180 is a control center of the terminal device 100, connects various parts of the entire terminal device using various interfaces and lines, and performs various functions of the terminal device 100 and processes data by running or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby performing overall monitoring of the terminal device. Alternatively, processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 180.
Terminal device 100 also includes a power supply 190 (e.g., a battery) for powering the various components, which may preferably be logically coupled to processor 180 via a power management system to manage charging, discharging, and power consumption functions via the power management system.
In addition, although not shown, the terminal device 100 may further include a camera, a bluetooth module, and the like, which are not described herein again.
Fig. 2 shows an exemplary illustration of an example of the method 200 for extracting feature points of the present application, and for example, the method 200 may be applied to the terminal device 100 described above.
As shown in fig. 2, in S210, the terminal device determines a first determination model according to a first feature point set corresponding to the face of the first user, where the first feature point set includes part of feature points in a second feature point set, and the second feature point set is a set of all extractable feature points of the face of the first user.
Here, the first user may be the private user of the terminal device. With rapid economic development, various terminal devices have reached most people and become indispensable products in daily life; among them, smartphones have replaced feature phones as the mainstream of the market. Because a personal terminal device is private, generating a standard model of the user's own feature points can effectively improve the extraction speed while ensuring the detection precision of the feature point algorithm. This application uses a smartphone as the user equipment in the detailed description.
First, an application scenario of the face feature point technique according to the present application will be described. The present application is mainly enumerated as follows:
A. Face recognition. Specifically, feature point information obtained by detection is compared with existing face feature point information to determine whether it corresponds to a given face, thereby identifying an identity. Face recognition detects, analyses and compares faces in images or video, and includes independent modules such as face detection and positioning, face attribute recognition and face comparison; it can provide high-performance online application programming interface (API) services for developers and enterprises, and is applied in scenarios such as face augmented reality (AR), face recognition and authentication, large-scale face retrieval, and photo management.
B. Face alignment. Specifically, this is the modeling transformation from the face variation of one person to the face variation of another person. Face alignment finds the positions of the feature points of the face, such as the feature points on the left side of the nose, below the nostrils, at the pupils, and below the upper lip. Face alignment may be understood as face keypoint localization or facial feature localization. It is mainly applied to face-changing special effects, gender identification, age identification, intelligent mapping, and the like.
C. Expression transformation. Specifically, digital transformation is performed according to the facial feature points observed when the face shows different expressions. Expression transformation techniques can process specific expression states from a still image or a video sequence to determine the psychological mood of the identified object. For example, large public databases for facial expression recognition currently contain seven types of samples: anger, disgust, fear, happiness, neutral, sadness and surprise.
D. Beautification. Specifically, the face is partitioned according to some of the facial feature points, and corresponding beautification and other operations are performed for the different regions after partitioning. In addition, current skin health monitoring also needs a 'face feature point' scheme to divide the face into regions for subsequent operations such as skin detection.
In addition to the basic face recognition scenario described above, there are a number of other application scenarios based on face recognition.
E. Screen unlocking. Specifically, to prevent misoperation and improve the security of the terminal device, a user may lock the screen, or the terminal device may lock the screen by itself when no user operation is detected within a predetermined time; when the user then needs to use the device, a correct unlocking operation must be performed. Examples include slide unlocking, password unlocking, pattern unlocking, image recognition, and other unlocking operations.
The user identification in this application identifies the object to be detected mainly through face feature point technology: the organ positions and contour information of the face are detected based on the facial features, and the features contained in the face are extracted, so that whether the object recognized by the terminal device is the user stored in the terminal device is judged by comparing feature point information.
F. Application unlocking. Specifically, to improve the security of the terminal device, when a user needs to open a certain application (e.g., a chat application or a payment application), the terminal device or the application may present an unlocking interface, and the application can be started normally only after the terminal device performs correct identification. Alternatively, when the user needs to use a certain function of the application (e.g., a transfer function or a query function), the terminal device or the application may also present an unlocking interface, so that the function can be started normally after correct identification.
G. Operation of a specified application. By way of example and not limitation, the specified application may be an application set by the user, such as a chat application or a payment application; alternatively, it may be an application set by a manufacturer, an operator, or the like.
It should be understood that the specific contents included in the application scenarios related to the human face feature point technology listed above are only exemplary, and the present application is not limited thereto.
As introduced in the background, the most significant contradiction in face feature point technology is the contradiction between precision and speed. The more feature points included in the detection process, the more detailed the contour model that can be established and the more accurate the result, but the more complex the algorithm and the slower the recognition. Conversely, the fewer the feature points, the simpler the algorithm and the faster the recognition, but the lower the recognition rate. The number of feature points has a great influence on the final performance, and in scenes with high real-time requirements the contradiction between precision and speed is especially prominent.
In the embodiments of this application, taking advantage of the private nature of a personal mobile phone, an initialization process asks the user to cooperate so that as much face information as possible is obtained and a user-exclusive feature point 'standard model' is generated; at the same time, some of the feature points in the 'standard model' are selected to train 'low-match models' for later use.
Fig. 3 is a schematic diagram of an example of establishing the model according to an embodiment of the present application. Specifically, as shown in fig. 3, the first user starts an initialization process and the model is built according to the flow shown in fig. 3. A special picture of the first user is detected under specific conditions to acquire as much face information as possible. Optionally, the 'special picture' may be a picture containing as many feature points as possible, taken with the first user's face neither turned away nor occluded and without interference from external conditions; the 'specific conditions' may be uniform natural light with no shadow or other obstruction. In short, when the judgment model is established, as much feature information about the first user's facial feature points as possible is acquired. It should be understood that the initialization process may be started in various situations, for example when the first user uses the mobile phone for the first time or when the judgment model of the mobile phone is updated; the present application includes but is not limited to these.
Specifically, when the first user initializes the mobile phone, the first user is prompted to take pictures under specific conditions, such as good brightness and a moderate distance, to generate a user-specific face feature point model, similar to how a matching model is obtained for face unlocking or fingerprint unlocking. This model is based on the highest-precision feature point model; its algorithm is time-consuming, but it obtains as much feature point information as possible, and it is referred to as the 'standard model'.
Optionally, one specific way to generate the 'standard model' is to align the feature points of several pictures of the user and filter out the influence of position offset, size and rotation, thereby obtaining the user's standard model. It should be understood that the way the user model is obtained is still the same in nature as obtaining a model with a general algorithm, i.e. an average model is obtained from samples; the difference is that the samples selected in the scheme provided in this embodiment of the application use data from the mobile phone's user (i.e., the first user) only, so the trained model can be regarded as 'overfitted' in the usual sense and is suitable only for detecting this user (i.e., the first user).
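A minimal sketch of this averaging step is given below, assuming that each enrollment picture yields a full detection stored as an (N, 2) coordinate array and that a Procrustes-style alignment removes position offset, size and rotation; the function names are illustrative.

```python
import numpy as np

def align_to(reference, shape):
    """Procrustes-style alignment of one detection onto a reference shape:
    translation, scale and rotation are removed (an assumed method)."""
    ref = reference - reference.mean(axis=0)
    sh = shape - shape.mean(axis=0)
    ref = ref / np.linalg.norm(ref)
    sh = sh / np.linalg.norm(sh)
    u, _, vt = np.linalg.svd(sh.T @ ref)
    return sh @ (u @ vt)                      # apply the optimal rotation

def build_standard_model(detections):
    """Average several full-precision detections of the owner's face into a
    user-specific 'standard model' (the mean shape after alignment)."""
    reference = detections[0]
    aligned = np.stack([align_to(reference, d) for d in detections])
    return aligned.mean(axis=0)
```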
This application refers to the set of all extractable feature points of the first user's face as the 'second feature point set' and to a set of some of those feature points as a 'first feature point set'. The standard model generated from the second feature point set is referred to as the 'second judgment model', and a judgment model generated from a first feature point set is referred to as a 'first judgment model'. The second judgment model contains the most facial feature points of the first user; at the same time, there are many possible first feature point sets, and each possible first feature point set corresponds to one first judgment model. In other words, the second judgment model is the unique, exclusive 'standard model' of the first user, and a first judgment model is a 'low-match model' of the first user.
After the "standard model" (i.e. the second judgment model) of the first user is generated, some feature points are selected from the "standard template" according to the requirements of different scenes on the accuracy and the detection real-time performance of the feature points of the face region, and the "low matching model" (i.e. the first judgment model) corresponding to the feature points is respectively trained. As shown in fig. 4, the set of facial feature points is shown in fig. 4, and 68 facial feature points are shown in fig. 4, then the set including all 68 feature points is the "second feature point set" defined above, and the set including any feature points smaller than 68 is the "first feature point set" defined above. There are many possibilities for the first feature point set, including, for example, 30 feature points, 46 feature points, and so on.
The criterion for selecting the feature points of the low-match model is determined by the detection real-time performance and by the feasibility of restoration from those feature points: the average time for detection with the 'low-match model' is calculated, and the number of points is made small enough that the average detection time meets the real-time requirement while the standard model can still be restored from the detected points.
Specifically, for example, the following cases are possible for selecting the feature points:
(1) Determining the first feature point set according to the average time of extracting the feature points in the first feature point set corresponding to the face of the first user.
For example, several numbers of feature points are selected from the 68-point standard model shown in fig. 4, such as a model with 30 feature points and a model with 46 feature points. The detection times for extracting 30 feature points and for extracting 46 feature points are recorded, and whether these detection times meet the real-time requirement is judged according to the current detection scene. If the detection time for 46 feature points is longer than the detection time required by the current scene, fewer than 46 feature points are selected, and the numbers of feature points that meet the current real-time requirement are checked in turn.
After the number of feature points is determined, the 'low-match model' is trained using the selected feature points (e.g., 30 feature points), and the constraint relationship between the 'low-match model' and the 'standard model' is recorded. During subsequent fast extraction, the corresponding points of the 'low-match model' and the 'standard model' are aligned, and the other feature points in the 'standard model' are then restored according to the positional relationship and the recorded constraint relationship.
(2) Determining the first feature point set according to a second region of the face of the first user, where the second region is a region of the face of the recognition object to be subjected to image processing.
Optionally, the terminal device determines a third feature point set of the recognition object according to the first judgment model, divides the face region of the recognition object according to the feature points in the third feature point set, and determines a first region in the face region, where the first region is a region of the face of the recognition object to be subjected to image processing; an image processing operation is then performed on the first region.
For example, feature points 0-5, feature points 12-17, feature points 6-10 and feature points 18-22 are selected from the 68-point standard model in fig. 4, and the region in which these feature points are located is delimited as the eye region. During image processing such as beautification, functions such as removing dark circles and brightening the eyes are then mainly performed within the eye region (see the region sketch after this list).
(3) Extracting partial feature points at intervals from the feature points included in the second feature point set corresponding to the face of the first user, and determining the extracted partial feature points as the first feature point set.
For example, a 34-point overall contour model is obtained by selecting feature points at equal intervals from the 68-point standard model, such as taking one point out of every two, for example feature point 1, feature point 3, feature point 5, and so on. The low-match model is trained using the selected feature points, and the constraint relationship between the low-match model and the standard model is recorded. During subsequent fast extraction, the corresponding points of the low-match model and the standard model are aligned, and the other feature points in the standard model are then restored according to the equal-interval positional relationship and the recorded constraint relationship.
(4) Extracting, from the feature points included in the second feature point set corresponding to the face of the first user, the contour feature points of the facial features of the first user, and determining these facial-feature contour feature points as the first feature point set.
For example, if at a certain stage an application needs only part of the face information, such as the facial features (eyes, nose, mouth, and so on), a 41-point facial-feature contour model composed of those feature points may be selected. The low-match model is trained using the selected feature points, and the constraint relation between the low-match model and the standard model is recorded. If the full face contour information is needed later, complete detection is not required: the points shared by the "low-match model" and the "standard model" are aligned, and the other feature points of the "standard model" are then restored according to the positional relation of the facial features and the recorded constraint relation. This process is illustrated in fig. 5.
The above lists four possible ways of selecting feature points when establishing the low-match model. It should be understood that the present application may use any one of them alone or several of them in combination, and is not limited to these cases.
The above describes in detail how the "standard model" and the low-match model of the first user are established. In practical applications, however, the face pose varies considerably: pitch deflection, horizontal deflection and the like all make feature point detection harder, some feature points may become hidden, and the geometric relation between feature points changes, so using only a single standard template has limitations. Optionally, the feature points of the face of the first user may therefore be corrected at multiple angles, so that the feature points in the first feature point set corresponding to the face of the first user include feature information at different offset angles.
Considering that a user may have different usage habits and several habitual postures, the face poses of the user can be divided into several classes, and a corresponding standard model is trained for each habitual posture. A similarity measure for model detection is defined; during detection, the similarity between each model and the detected feature points is computed, and the model with the highest similarity is selected as the detection result.
During use, a sample is retained from each detection; each sample can be compared with the single-pose standard model in terms of deflection angle, and models for different poses can be trained from these samples. For example, a template for −5 to 5 degrees can be established from the 0-degree template, a 5-to-10-degree template can be obtained by slightly deflecting the 0-to-5-degree template, and so on, yielding models for the user's different poses. The specific process is shown schematically in fig. 5 and comprises the following steps:
(1) Firstly, an average face model is utilized to obtain a 0-degree non-offset standard template exclusive to a user. In this step, reference may be made to the above-mentioned specific process of establishing the standard model (i.e., the second determination model) of the first user, and details are not described here for simplicity.
(2) The standard model is used to simulate the feature points at different deviation angles, for example the overlapping or disappearance of some feature points, so as to form a multi-angle reference model.
(3) The multi-angle reference model is corrected using offset-angle face data matched to the user, or offset face data of the user's habitual postures.
(4) A user-specific multi-pose standard model is generated.
Through this process the first user's own multi-pose standard model is established. In practical applications, the multi-pose model is generated on the basis of the single-pose model (i.e., the second determination model) according to the user's different pose variations and habitual postures, which enhances robustness.
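A brief sketch of the "select the model with the highest similarity" step, assuming each pose-specific standard model is stored as an (N, 2) point array and using the negative mean point distance after centering as the similarity measure; the patent does not fix a particular similarity definition, so this measure is only an assumption:

```python
import numpy as np

def best_pose_template(detected_pts, pose_templates):
    """Pick the pose-specific standard model most similar to the currently
    detected feature points.

    pose_templates maps a pose label (e.g. a yaw bucket such as "0deg" or
    "5-10deg") to an (N, 2) template array with the same point ordering and
    count as detected_pts.
    """
    d = detected_pts - detected_pts.mean(axis=0)        # remove translation
    best_label, best_score = None, -np.inf
    for label, template in pose_templates.items():
        t = template - template.mean(axis=0)
        score = -np.linalg.norm(d - t, axis=1).mean()   # higher = more similar
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score
```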
At S220, the terminal device determines a third feature point set corresponding to the recognition object according to a first position of a feature point in the first feature point set on the face of the first user, where the third feature point set includes part of feature points in a fourth feature point set, the fourth feature point set is a set of all extractable feature points of the face of the recognition object, and the feature point in the third feature point set corresponds to the first position at a second position of the face of the recognition object.
In S210, the standard model (i.e., the second determination model) and the low-match model (i.e., the first determination model) of the first user have been established, and user identification is performed according to these determination models, that is, whether the current user is the first user is identified. Taking the 68-point standard model and the 34-point overall contour model as examples, assume the current recognition object is # A. According to the positions that the 34 feature points of the 34-point overall contour model occupy on the face of the original first user, 34 feature points are extracted from the same positions on the face of the recognition object # A and their feature information is obtained; the set of these 34 feature points extracted from the face of the recognition object # A is called the "third feature point set".
Similarly, for the recognition object # A, the set of 68 feature points corresponding to the 68-point standard model is referred to as the "fourth feature point set", and the set of those feature points among the 68 that are not included in the third feature point set is referred to as the "fifth feature point set".
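For illustration only, the bookkeeping for these three sets can be written as plain index arithmetic; the concrete index choice below is the every-other-point example used above, not something mandated by the patent:

```python
ALL_IDX = set(range(68))                # fourth feature point set: all 68 points
THIRD_IDX = set(range(1, 68, 2))        # third set: the 34 low-match positions
FIFTH_IDX = ALL_IDX - THIRD_IDX         # fifth set: the remaining 34 points

assert len(THIRD_IDX) == 34 and len(FIFTH_IDX) == 34
```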
Optionally, the terminal device determines whether the identification object is the first user according to the matching degree between the third feature point set and the first determination model.
Specifically, for the recognition object # A, after the third feature point set of the recognition object # A has been acquired as described above, the matching degree between the feature point information in the third feature point set and the feature point information in the first determination model is compared. Thresholds can be set according to the application scene, with higher thresholds for scenes involving property safety such as payment. For example, when the terminal determines that the feature information of at least 32 of the 34 feature points matches the first determination model, it may determine that the recognition object # A is the first user and may perform the related payment operation. For a scene with a lower accuracy requirement, such as unlocking a mobile phone, a lower threshold may be set: for example, when the terminal determines that the feature information of 25 or more of the 34 feature points matches the first determination model, it determines that the identified user # A is the first user and may perform the relevant operation.
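A sketch of this per-scene threshold check; count_matching_points is a hypothetical helper (not defined in the patent) that returns how many of the 34 low-match feature points match the first determination model, and the threshold values simply restate the examples above:

```python
# Thresholds taken from the examples in the text: payment is stricter than unlock.
SCENARIO_THRESHOLDS = {"payment": 32, "unlock": 25}

def is_first_user(detected_pts, low_match_model, scenario, count_matching_points):
    """Decide whether the recognition object is the first user for the given scene.

    count_matching_points(detected_pts, model) is a hypothetical helper that
    returns the number of feature points whose information matches the model.
    """
    matched = count_matching_points(detected_pts, low_match_model)
    return matched >= SCENARIO_THRESHOLDS[scenario]
```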
Optionally, when the matching degree between the third feature point set and the first determination model is higher than a preset first threshold, determining a fifth feature point set corresponding to the identification object according to the feature points in the second feature point set except for the first feature point set; and judging whether the recognition object is the first user or not according to the matching degree of the fifth feature point set and the second judgment model.
For application scenarios with high precision requirements, to ensure operation safety, the terminal may first judge according to the low-match model whether the feature information of the recognition object's feature points matches, and, when it does, further judge according to the standard model of the first user whether the recognition object # A is the first user. For example, the matching degree between the feature information of the feature points in the fifth feature point set of the identified user # A and the feature information of the corresponding feature points in the original standard model may be determined; when this matching degree is higher than a second threshold, the identified user # A is judged to be the first user.
Alternatively, in another possible application scenario, some target areas of the user may fail to be detected properly due to abnormal conditions, which may degrade the user experience. In the present application, when the matching degree between the third feature point set and the first determination model is higher than a preset first threshold, the terminal device determines the third feature point set of the recognition object according to the first determination model, then determines a fifth feature point set of the recognition object according to a third position of a feature point in the third feature point set on the face of the recognition object, and restores a fourth feature point set of the recognition object according to the third feature point set and the fifth feature point set.
As shown in fig. 6, the terminal device may first judge according to the low-match model whether the feature information of the feature points of the current recognition object matches. When it judges according to the low-match model that the feature information matches, it determines that the currently identified user is the first user, and then restores the standard model according to the first user's standard-model restoration algorithm: the feature points other than those included in the low-match model, namely the fifth feature point set, are taken directly from the original standard model, without extracting them from the currently identified user. The third feature point set and the fifth feature point set are then combined to obtain the standard model. In this restoration process, the feature information of all feature points of the currently identified user is obtained while only a small amount of feature point information is actually extracted, which simplifies the operation and allows high-precision feature point information to be restored quickly.
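A minimal sketch of this restoration branch of fig. 6, under the assumption that both point sets are stored as (68, 2)-shaped arrays indexed as above; the function and variable names are introduced here for illustration only:

```python
import numpy as np

def restore_standard_model(detected_third, stored_standard, third_idx):
    """Rebuild the 68-point set for the current user without re-detecting
    the fifth feature point set.

    detected_third holds the 34 points actually extracted from the recognition
    object, ordered by ascending index; stored_standard is the first user's
    68-point standard model, from which the remaining (fifth set) points are
    taken directly.
    """
    full = np.array(stored_standard, dtype=float)
    full[sorted(third_idx)] = detected_third   # keep what was actually measured
    # All other entries stay as stored, i.e. restored from the standard model.
    return full
```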
In addition, since users may have different usage habits and several habitual postures, the face poses of a user can be divided into several classes, and standard templates trained for the user's different habitual postures, namely the user-specific multi-pose standard model, have been introduced above. The multi-pose feature point detection and restoration process is shown in fig. 7 and comprises the following steps:
(1) The currently recognized user # A is detected using the existing multi-pose standard templates, with the user's habitual poses tried first during detection;
(2) The template detection result with the highest similarity is selected; this result includes the offset angle and the relation among part of the feature points;
(3) The 0-degree standard template is restored according to the correspondence between the feature points of the multi-pose template for the detected offset angle and those of the 0-degree standard template.
Multi-pose feature point detection takes into account how the face pose changes in practical applications and the habitual poses of different users, and generates the multi-pose templates on the basis of the single-pose template, so the existing information is used more effectively and robustness is enhanced.
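As an illustration of step (3) above, one could map points detected under a deflected pose back toward the 0-degree template. The pure in-plane rotation used here is only a stand-in for the recorded correspondence between each pose template and the 0-degree template, since a real face would require a 3-D pose correction:

```python
import numpy as np

def to_zero_degree(detected_pts, offset_deg):
    """Map feature points detected under a deflected pose back toward the
    0-degree standard template (illustrative in-plane approximation only)."""
    theta = np.deg2rad(-offset_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    center = detected_pts.mean(axis=0)
    return (detected_pts - center) @ rot.T + center
```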
Through the above technical solution, compared with the traditional "average face" general model, the trained face model is more specific to the user's personal feature points, and negative factors that would affect model training can be effectively avoided. Compared with existing schemes for rapid feature point extraction, comparing the low-match model with the standard template allows results to be obtained more quickly for different application scenarios while detection precision is maintained. In addition, in detection scenarios where some target regions cannot be reasonably detected due to abnormal conditions, a result can still be obtained by restoring the standard model from the feature points, so the division of the user's face into regions remains reasonable and effective and the user experience is not degraded.
It should be understood that the method provided in the embodiments of the present application may be applied not only to single-user scenarios but also to multi-user scenarios, that is, standard templates of multiple users may be established in the same way for matching. Compared with the single-user scenario, the only difference lies in the selection criterion used when the low-match model restores the standard template.
Fig. 8 is a schematic block diagram of an apparatus 800 for extracting feature points according to an embodiment of the present application.
As shown in fig. 8, the apparatus 800 includes:
a determining unit 810, configured to determine a first decision model according to a first feature point set corresponding to the face of the first user, where the first feature point set includes part of feature points in a second feature point set, and the second feature point set is a set of all extractable feature points of the face of the first user.
The determining unit 810 is further configured to determine a third feature point set corresponding to the recognition object according to a first position of a feature point in the first feature point set on the face of the first user, where the third feature point set includes a part of feature points in a fourth feature point set, the fourth feature point set is a set of all extractable feature points of the face of the recognition object, and a second position of the feature point in the third feature point set on the face of the recognition object corresponds to the first position.
Optionally, the apparatus further includes a determining unit 820, configured to determine whether the recognition object is the first user according to a matching degree between the third feature point set and the first determination model.
Optionally, the determining unit 810 is further configured to determine a second determination model according to the second feature point set corresponding to the face of the first user. When the determining unit 820 determines that the matching degree between the third feature point set and the first determination model is higher than a preset first threshold, the determining unit 810 determines a fifth feature point set corresponding to the recognition object according to the feature points in the second feature point set except for the first feature point set; the determining unit 820 then determines whether the recognition object is the first user according to the matching degree between the fifth feature point set and the second determination model.
Optionally, when the determining unit 820 determines that the matching degree between the third feature point set and the first determination model is higher than a preset first threshold, the determining unit 810 is further configured to determine the third feature point set of the recognition object according to the first determination model; determining a fifth feature point set of the recognition object according to the third position of the feature point in the third feature point set on the face of the recognition object; and restoring a fourth feature point set of the identification object according to the third feature point set and the fifth feature point set.
Optionally, the determining unit 810 is further configured to determine a third feature point set of the identified object according to the first decision model; dividing the face area of the recognition object according to the feature points included in the third feature point set, and determining a first area in the face area, wherein the first area is an area to be subjected to image processing on the face of the recognition object; and performing image processing operation on the first area.
Optionally, before the determining unit 810 determines the first decision model according to the first feature point set corresponding to the face of the first user, it is further configured to determine the first feature point set corresponding to the face of the first user.
Optionally, the determining unit 810 determines the first feature point set corresponding to the face of the first user, which may include the following cases:
(1) The determining unit 810 determines a first feature point set according to an average time of extracting feature points in the first feature point set corresponding to the face of the first user; or
(2) The determining unit 810 determines the first feature point set according to a second region of the face of the first user, the second region being a region of the face of the recognition object to be subjected to image processing; or
(3) Extracting partial feature points at intervals from among feature points included in a second feature point set corresponding to the face of the first user, the determining unit determining the extracted partial feature points as the first feature point set; or
(4) Extracting facial feature contour feature points of the facial features of the first user from feature points included in a second feature point set corresponding to the facial features of the first user, and determining the facial feature contour feature points as the first feature point set by the determining unit.
In addition, considering that a user may have different usage habits and several habitual postures, the face poses of the user can be divided into several classes, and a corresponding standard model is trained for each habitual posture. A similarity measure for model detection is defined; during detection, the similarity between each model and the detected feature points is computed, and the model with the highest similarity is selected as the detection result. During use, a sample is retained from each detection; each sample can be compared with the single-pose standard model in terms of deflection angle, and models for different poses can be trained. The feature points in the first feature point set corresponding to the face of the first user thus include feature information at different offset angles.
Fig. 8 shows a schematic block diagram of the apparatus 800 provided in an embodiment of the present application. The apparatus 800 may perform the method 200 described above, and each module or unit in the apparatus 800 is configured to perform the corresponding action and processing procedure in the method 200; detailed descriptions are omitted here to avoid redundancy.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A method of extracting feature points, comprising:
determining a first judgment model according to a first feature point set corresponding to the face of a first user, wherein the first feature point set comprises part of feature points in a second feature point set, and the second feature point set is a set of all extractable feature points of the face of the first user;
determining a third feature point set corresponding to a recognition object according to a first position of a feature point in the first feature point set on the face of the first user, wherein the third feature point set comprises partial feature points in a fourth feature point set, the fourth feature point set is a set of all extractable feature points of the face of the recognition object, and the feature point in the third feature point set corresponds to the first position at a second position of the face of the recognition object;
when the matching degree of the third feature point set and the first judgment model is higher than a preset first threshold value, determining a fifth feature point set of the recognition object according to the feature points in the second feature point set except the first feature point set;
and restoring a fourth feature point set of the identification object according to the third feature point set and the fifth feature point set.
2. The method of claim 1, further comprising:
and judging whether the identification object is the first user or not according to the matching degree of the third feature point set and the first judgment model.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
dividing the face area of the recognition object according to the feature points included in the third feature point set, and determining a first area in the face area, wherein the first area is an area to be subjected to image processing on the face of the recognition object;
and carrying out image processing operation on the first area.
4. The method of claim 1 or 2, wherein prior to determining the first decision model from the first set of feature points corresponding to the face of the first user, the method further comprises:
determining a first feature point set corresponding to the face of the first user.
5. The method of claim 4, wherein determining the first set of feature points corresponding to the face of the first user comprises:
determining a first feature point set according to the average time of extracting feature points in the first feature point set corresponding to the face of the first user; or
Determining the first feature point set according to a second region of the face of the first user, wherein the second region is a region of the face of the identification object to be subjected to image processing; or
Extracting partial feature points at intervals from feature points included in a second feature point set corresponding to the face of the first user, and determining the extracted partial feature points as the first feature point set; or
Extracting facial feature contour feature points of the facial features of the first user from feature points included in a second feature point set corresponding to the facial features of the first user, and determining the facial feature contour feature points as the first feature point set.
6. The method of claim 1 or 2, wherein the feature points in the first set of feature points corresponding to the face of the first user comprise feature information at different offset angles.
7. An apparatus for extracting feature points, comprising:
a determining unit, configured to determine a first determination model according to a first feature point set corresponding to a face of a first user, where the first feature point set includes part of feature points in a second feature point set, and the second feature point set is a set of all extractable feature points of the face of the first user;
the determining unit is further configured to determine a third feature point set corresponding to the recognition object according to a first position of a feature point in the first feature point set on the face of the first user, wherein the third feature point set comprises part of feature points in a fourth feature point set, the fourth feature point set is a set of all extractable feature points of the face of the recognition object, and a second position of the feature point in the third feature point set on the face of the recognition object corresponds to the first position;
when the determination unit determines that the degree of matching between the third feature point set and the first determination model is higher than a preset first threshold, the determination unit is further configured to:
determining a fifth feature point set of the identification object according to the feature points in the second feature point set except the first feature point set;
and restoring a fourth feature point set of the identification object according to the third feature point set and the fifth feature point set.
8. The apparatus according to claim 7, wherein the determining unit is further configured to determine whether the recognition object is the first user according to a matching degree between the third feature point set and the first determination model.
9. The apparatus according to claim 7 or 8, wherein the determining unit is further configured to:
dividing the face region of the recognition object according to the feature points included in the third feature point set, and determining a first region in the face region, wherein the first region is a region of the face of the recognition object to be subjected to image processing;
and carrying out image processing operation on the first area.
10. The apparatus according to claim 7 or 8, wherein before the determining unit determines the first decision model according to the first feature point set corresponding to the face of the first user, the determining unit is further configured to:
determining a first feature point set corresponding to the face of the first user.
11. The apparatus according to claim 10, wherein the determining unit determines a first feature point set corresponding to the face of the first user, and specifically includes:
the determining unit determines a first feature point set according to the average time of extracting feature points in the first feature point set corresponding to the face of the first user; or alternatively
The determination unit determines the first feature point set according to a second region of the face of the first user, wherein the second region is a region of the face of the recognition object to be subjected to image processing; or alternatively
Extracting partial feature points at intervals from among feature points included in a second feature point set corresponding to the face of the first user, the determining unit determining the extracted partial feature points as the first feature point set; or
Extracting facial feature contour feature points of the facial features of the first user from feature points included in a second feature point set corresponding to the facial features of the first user, and determining the facial feature contour feature points as the first feature point set by the determining unit.
12. The apparatus according to claim 7 or 8, wherein the feature points in the first feature point set corresponding to the face of the first user comprise feature information at different offset angles.
13. An apparatus for extracting feature points, comprising:
a processor, coupled to the memory, to execute instructions in the memory to implement the method of any of claims 1 to 6.
14. The apparatus of claim 13, further comprising:
the memory is used for storing program instructions and data.
CN201810451370.5A 2018-05-11 2018-05-11 Method and device for extracting feature points Active CN110472459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810451370.5A CN110472459B (en) 2018-05-11 2018-05-11 Method and device for extracting feature points

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810451370.5A CN110472459B (en) 2018-05-11 2018-05-11 Method and device for extracting feature points

Publications (2)

Publication Number Publication Date
CN110472459A CN110472459A (en) 2019-11-19
CN110472459B true CN110472459B (en) 2022-12-27

Family

ID=68504753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810451370.5A Active CN110472459B (en) 2018-05-11 2018-05-11 Method and device for extracting feature points

Country Status (1)

Country Link
CN (1) CN110472459B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242020A (en) * 2020-01-10 2020-06-05 广州康行信息技术有限公司 Face recognition method and device

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100343874C (en) * 2005-07-11 2007-10-17 北京中星微电子有限公司 Voice-based colored human face synthesizing method and system, coloring method and apparatus
JP2007094906A (en) * 2005-09-29 2007-04-12 Toshiba Corp Characteristic point detection device and method
JP4745207B2 (en) * 2006-12-08 2011-08-10 株式会社東芝 Facial feature point detection apparatus and method
US8610710B2 (en) * 2009-12-18 2013-12-17 Electronics And Telecommunications Research Institute Method for automatic rigging and shape surface transfer of 3D standard mesh model based on muscle and nurbs by using parametric control
CN102654903A (en) * 2011-03-04 2012-09-05 井维兰 Face comparison method
CN104751112B (en) * 2013-12-31 2018-05-04 石丰 A kind of fingerprint template and fingerprint identification method based on fuzzy characteristics point information
CN104050642B (en) * 2014-06-18 2017-01-18 上海理工大学 Color image restoration method
CN105205779B (en) * 2015-09-15 2018-10-19 厦门美图之家科技有限公司 A kind of eyes image processing method, system and camera terminal based on anamorphose
CN106326867B (en) * 2016-08-26 2019-06-07 维沃移动通信有限公司 A kind of method and mobile terminal of recognition of face
CN106875329A (en) * 2016-12-20 2017-06-20 北京光年无限科技有限公司 A kind of face replacement method and device
CN107451453B (en) * 2017-07-28 2020-05-26 Oppo广东移动通信有限公司 Unlocking control method and related product
CN107563304B (en) * 2017-08-09 2020-10-16 Oppo广东移动通信有限公司 Terminal equipment unlocking method and device and terminal equipment
CN107633204B (en) * 2017-08-17 2019-01-29 平安科技(深圳)有限公司 Face occlusion detection method, apparatus and storage medium
CN107609514B (en) * 2017-09-12 2021-08-06 Oppo广东移动通信有限公司 Face recognition method and related product

Also Published As

Publication number Publication date
CN110472459A (en) 2019-11-19


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant