CN112836545A - 3D face information processing method and device and terminal

Info

Publication number
CN112836545A
Authority
CN
China
Prior art keywords
face
model
aesthetic
image
displaying
Prior art date
Legal status
Pending
Application number
CN201911153141.6A
Other languages
Chinese (zh)
Inventor
傅艳
娄心怡
宋莹莹
朱嘉
Current Assignee
Soyoung Technology Beijing Co Ltd
Original Assignee
Soyoung Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Soyoung Technology Beijing Co Ltd
Priority to CN201911153141.6A
Publication of CN112836545A
Legal status: Pending

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/64 Scenes; scene-specific elements; three-dimensional objects
    • G06V40/161 Human faces: detection; localisation; normalisation
    • G06V40/168 Human faces: feature extraction; face representation
    • G06V40/172 Human faces: classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to a 3D face information processing method, apparatus and terminal. The method comprises the following steps: acquiring a face image of a user; generating a 3D face model according to the acquired face image; generating and displaying a face point cloud image according to the key feature points in the 3D face model; and comparing the feature values of the key feature points in the 3D face model with preset aesthetic feature values for aesthetic analysis to obtain an aesthetic analysis result. The scheme provided by the disclosure can realize 3D face aesthetic analysis and improve the user experience.

Description

3D face information processing method and device and terminal
Technical Field
The disclosure relates to the technical field of face analysis, in particular to a 3D face information processing method, a device and a terminal.
Background
With the development of computer technology, the face recognition and analysis technology has developed very rapidly and has been gradually applied to various industries.
With the advent of the 'face-value era', the pursuit of beauty and the visibility of cosmetic results have made the demand for medical aesthetics grow steadily, and medical aesthetics has become a topic of widespread discussion. People hope to have a beautiful face, but facial beauty currently has no clear, universal standard of judgment, and in daily life people often evaluate a face according to personal or popular aesthetics. In the related art, a face picture is obtained through an application (APP) and facial features are then analyzed, so that 2D facial features are displayed for the user and a facial aesthetic analysis is performed.
In the related art, a scheme for 3D face aesthetic analysis has not yet appeared.
Disclosure of Invention
In view of this, the present disclosure aims to provide a method, an apparatus and a terminal for processing 3D face information, which can implement aesthetic analysis of a 3D face.
One aspect of the present disclosure provides a 3D face information processing method, including: acquiring a face image of a user; generating a 3D face model according to the acquired face image; generating and displaying a face point cloud image according to the key feature points in the 3D face model; and comparing the feature values of the key feature points in the 3D face model with preset aesthetic feature values for aesthetic analysis to obtain an aesthetic analysis result.
In one embodiment, a waiting interface is loaded and displayed during the process of generating the 3D face model, before the face point cloud image is displayed.
In one embodiment, after obtaining the result of the aesthetic analysis, the method further comprises: displaying the result of the aesthetic analysis, and/or generating a report containing the result of the aesthetic analysis.
In one embodiment, the generating a 3D model of a human face from the acquired face image includes: when the face image is a face image shot by a 3D camera, generating a first 3D model of a human face according to the obtained face image; or when the face image is a face image shot by a 2D camera, generating a second 3D model of the face according to the obtained face image.
In one embodiment, the displaying the results of the aesthetic analysis comprises: displaying the result of the aesthetic analysis on different interfaces.
In one embodiment, the displaying the results of the aesthetic analysis on the different interfaces comprises: displaying an aesthetic analysis result on the face point cloud image and/or the first 3D model of the face; or displaying an aesthetic analysis result on the face point cloud image and/or the second 3D model of the face; or displaying an aesthetic analysis result on the face point cloud image and/or a third 3D model of the face, wherein the third 3D model of the face is obtained by superimposing the face image shot by the 2D camera and the face point cloud image.
In one embodiment, the method further comprises: when a tab menu for a part of the face is clicked, switching the first 3D model, the second 3D model or the third 3D model of the face to display the corresponding part.
In one embodiment, the tab menus for the different parts of the face include one or more of the following: a face-shape tab, an eyebrow-shape tab, an eye-shape tab, a nose-shape tab and a lip-shape tab.
In one embodiment, the generating and displaying a face point cloud image according to key feature points in the face 3D model includes: obtaining the key feature points in the face 3D model; and performing connection processing on the key feature points to generate and display the face point cloud image.
In one embodiment, the generating and displaying a face point cloud image according to key feature points in the face 3D model includes: when the face image is a face image shot by a 2D camera, obtaining a converted 3D model from the second 3D model of the face, determining a mapping matrix between the key feature point coordinates of the face image and the key feature point coordinates in the converted 3D model according to a set algorithm, and generating and displaying the face point cloud image according to the mapping matrix; or, when the face image is a face image shot by a 2D camera, determining a mapping matrix between the key feature point coordinates of the face image and the key feature point coordinates in the second 3D model of the face according to a set algorithm, and generating and displaying the face point cloud image according to the mapping matrix.
Another aspect of the present disclosure provides a 3D face information processing apparatus, including: an acquisition module for acquiring a face image of a user; a face 3D model generation module for generating a face 3D model according to the face image acquired by the acquisition module; a face cloud image generation module for generating and displaying a face point cloud image according to the key feature points in the face 3D model generated by the face 3D model generation module; and an aesthetic analysis processing module for comparing the feature values of the key feature points in the face 3D model generated by the face 3D model generation module with preset aesthetic feature values to obtain an aesthetic analysis result.
In one embodiment, the apparatus further comprises: the aesthetic result display module is used for displaying the aesthetic analysis result analyzed and compared by the aesthetic analysis processing module; and/or a report generation module for generating a report containing the results of the aesthetic analysis.
In one embodiment, the face 3D model generation module comprises: the first generation module is used for generating a first 3D model of the face according to the face image acquired by the acquisition module when the face image is a face image shot by a 3D camera; or the second generation module is used for generating a second 3D model of the human face according to the face image acquired by the acquisition module when the face image is a face image shot by a 2D camera.
In one embodiment, the aesthetic result display module comprises: the first display module is used for displaying the aesthetic analysis result analyzed and compared by the aesthetic analysis processing module on the face point cloud picture and/or the face first 3D model; or, the second display module is used for displaying an aesthetic analysis result on the face cloud point picture and/or the second 3D model of the face; or, the third display module is configured to display an aesthetic analysis result analyzed and compared by the aesthetic analysis processing module on the face cloud image and/or a third 3D model of the face, where the third 3D model of the face is obtained by superimposing the face image shot by the 2D camera and the face cloud image.
In one embodiment, the apparatus further comprises: an interaction module for switching the first 3D model, the second 3D model or the third 3D model of the face to display the corresponding facial part when the tab menu for that part is clicked.
Another aspect of the present disclosure provides a terminal device, including: a processor; and a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the above-described method.
Another aspect of the present disclosure provides a non-transitory machine-readable storage medium having stored thereon executable code, which, when executed by a processor of an electronic device, causes the processor to perform the above-described method.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the scheme provided by the embodiment of the disclosure generates a 3D face model according to the acquired face image; generates and displays a face point cloud image according to the key feature points in the 3D face model; and compares the feature values of the key feature points in the 3D face model with preset aesthetic feature values to obtain an aesthetic analysis result. With this processing, the 3D face model is generated first, its key feature points are then used to generate the face point cloud image, and the feature values of those key feature points are compared with preset aesthetic feature values to obtain an aesthetic analysis result. 3D face aesthetic analysis is thereby realized, and the result can subsequently be displayed in 3D form, so that the analysis has stereoscopic depth, is more vivid and accurate, and improves the user experience.
Further, the embodiment of the disclosure may load and display a waiting interface during generation of the 3D face model, before the face point cloud image is displayed, so that the user does not find the wait tedious and a sense of technology is added.
Further, the embodiment of the disclosure may generate a first 3D model of the human face from the acquired face image when it was shot by a 3D camera, or a second 3D model when it was shot by a 2D camera, so that the method suits application scenarios of user terminal devices equipped with different types of cameras.
Furthermore, according to the embodiment of the present disclosure, when the tab menu for a part of the face is clicked, the first, second or third 3D face model can be switched to display the corresponding part, which enables interactive operation and improves the user experience.
Furthermore, a report containing the aesthetic analysis result can be generated, so that the user can obtain the report content simply and quickly and view the aesthetic analysis result more conveniently and intuitively. This also meets the user's needs more comprehensively; for example, the aesthetic analysis result can be shared to a social platform, further improving the user experience.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in greater detail exemplary embodiments thereof with reference to the attached drawings, in which like reference numerals generally represent like parts throughout.
Fig. 1 is a schematic flow diagram of a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 2 is another schematic flow diagram of a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 3 is another schematic flow diagram of a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 4 is a schematic diagram of an interface for guiding a user to photograph a face image in a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 5 is a schematic diagram of a waiting interface in a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 6 is a schematic diagram of a first interface for generating a point cloud image in a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 7 is a schematic diagram of a second interface for generating a point cloud image in a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 8 is a schematic diagram of a full-face analysis interface in a generated point cloud image in a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 9 is a schematic diagram of an eye analysis interface in a generated point cloud image in a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 10 is a schematic diagram of a nose analysis interface in a generated point cloud image in a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 11 is a schematic diagram of a mouth analysis interface in a generated point cloud image in a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 12 is a schematic diagram of an interface displaying a true 3D model containing aesthetic analysis results in a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 13 is a schematic diagram of an interface displaying a pseudo 3D model containing aesthetic analysis results in a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 14 is a schematic diagram of an interface displaying a full-screen report containing aesthetic analysis results in a 3D face information processing method according to an embodiment of the present disclosure;
Fig. 15 is a schematic block diagram of a 3D face information processing apparatus according to an embodiment of the present disclosure;
Fig. 16 is another schematic block diagram of a 3D face information processing apparatus according to an embodiment of the present disclosure;
Fig. 17 is a schematic diagram of the structure of a terminal device according to an exemplary embodiment.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context.
The embodiment of the disclosure provides a 3D face information processing method, which can realize 3D face aesthetic analysis.
Technical solutions of embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a 3D face information processing method according to an embodiment of the present disclosure.
Referring to fig. 1, a method provided by an embodiment of the present disclosure includes:
in step 101, an image of the face of the user is acquired.
In this step, a face image shot by the user through a camera or uploaded directly may be acquired. The face image generally includes the complete facial features of the front face, the cheek regions of the side face, the ears, and the like.
In step 102, a 3D model of the human face is generated from the acquired face image.
In this step, after the face image of the user is acquired, a 3D face model can be generated by using a relevant 3D face recognition algorithm according to the acquired face image. The related 3D face recognition algorithm may be, for example, an algorithm based on image features, an algorithm based on model variable parameters, or an algorithm based on deep learning, and the disclosure is not limited thereto.
In the step, when the face image is a face image shot by a 3D camera, a first 3D model of the face is generated according to the obtained face image; or when the face image is the face image shot by the 2D camera, generating a second 3D model of the human face according to the obtained face image.
In step 103, a face point cloud picture is generated and displayed according to the key feature points in the face 3D model.
In this step, the key feature points in the face 3D model can be obtained and connected to generate and display the face point cloud image. For example, the connecting lines can form triangular or quadrilateral meshes following a preset rule, thereby generating the face point cloud image. The key feature points in the face 3D model may include, but are not limited to, points such as the eye corners, pupils, nose tip, mouth corners, ears and eyebrows, as well as the contour points of each part of the face.
Point cloud data (a point cloud) refers to scanned data recorded in the form of points; each point includes three-dimensional coordinates, and some points may also include color information (RGB) or reflection intensity information.
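The patent does not fix a concrete connection algorithm; as one illustrative sketch in Python, a Delaunay triangulation over the projected key feature points produces exactly this kind of regular triangular mesh. The function names and the input format here are assumptions, not part of the patent:

```python
import cv2
import numpy as np

def build_point_cloud_mesh(keypoints_2d, image_size):
    """Connect projected key feature points into a triangular mesh.

    keypoints_2d: (N, 2) array of key feature points already projected
    into the display window (hypothetical input format).
    """
    w, h = image_size
    subdiv = cv2.Subdiv2D((0, 0, w, h))      # Delaunay triangulation helper
    for x, y in keypoints_2d:
        subdiv.insert((float(x), float(y)))  # add each key feature point
    triangles = subdiv.getTriangleList().reshape(-1, 3, 2)
    # Keep only triangles whose vertices all fall inside the window.
    inside = np.all((triangles >= 0) & (triangles < [w, h]), axis=(1, 2))
    return triangles[inside]

def draw_point_cloud(canvas, triangles, color=(0, 255, 200)):
    """Render the mesh as the displayed face point cloud image."""
    for tri in triangles.astype(np.int32):
        cv2.polylines(canvas, [tri], isClosed=True, color=color, thickness=1)
    return canvas
```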
In this step, the face point cloud image can be switched among different set parts of the face; for example, views of the whole face, the eyes, the mouth, the nose and other parts can be displayed in turn, and the switching order can be set flexibly as needed.
In step 104, the feature values of the key feature points in the face 3D model are compared with preset aesthetic feature values for aesthetic analysis to obtain an aesthetic analysis result.
In this step, the feature values of the key feature points in the face 3D model can be determined and compared with preset aesthetic feature values to obtain an aesthetic analysis result, which can then be displayed in different forms.
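As a minimal sketch of this comparison, with hypothetical feature names and ideal ranges (the patent specifies neither):

```python
# Hypothetical preset aesthetic feature values: ideal ranges per feature
# (the patent does not specify concrete features or numbers).
PRESET_AESTHETIC_VALUES = {
    "eye_length_ratio": (0.18, 0.22),  # eye length / face width
    "nose_width_ratio": (0.23, 0.27),  # nose width / face width
    "lip_line_ratio":   (1.3, 1.7),    # lower-lip / upper-lip height
}

def aesthetic_analysis(measured):
    """Compare measured key feature values against the preset ideal ranges."""
    result = {}
    for name, (low, high) in PRESET_AESTHETIC_VALUES.items():
        value = measured.get(name)
        if value is None:
            continue  # feature not measurable on this face image
        if low <= value <= high:
            result[name] = (value, "ideal")
        else:
            result[name] = (value, "below ideal" if value < low else "above ideal")
    return result
```

For example, `aesthetic_analysis({"eye_length_ratio": 0.20})` would report that feature as "ideal"; the labeled results map directly onto the display step that follows.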
It can be seen that the scheme provided by the embodiment of the present disclosure generates a 3D face model according to the acquired face image, generates and displays a face point cloud image according to the key feature points in the 3D face model, and compares the feature values of those key feature points with preset aesthetic feature values to obtain an aesthetic analysis result. With this processing, 3D face aesthetic analysis is realized, and the result can subsequently be displayed in 3D form, so that the analysis has stereoscopic depth, is more vivid and accurate, and improves the user experience.
Fig. 2 is another schematic flow diagram of a 3D face information processing method according to an embodiment of the present disclosure. Fig. 2 adds the steps of waiting for an interface and displaying the results of the aesthetic analysis to fig. 1.
Referring to fig. 2, a method provided by an embodiment of the present disclosure includes:
in step 201, an image of the user's face is acquired.
This step may be as described with reference to step 101.
In step 202, a display waiting interface is loaded.
In this step, a waiting interface may be loaded and displayed during generation of the face 3D model, before the face point cloud image is displayed; for example, a preset starry-sky 3D model may be loaded and displayed as the waiting interface. Displaying a waiting interface keeps the wait from feeling tedious and adds a sense of technology. It should be noted that the waiting interface may also take the form of various animations, music, and the like; the embodiment of the disclosure is not limited in this respect.
In step 203, a 3D model of the human face is generated from the acquired face image.
This step may be as described with reference to step 102.
In step 204, a face point cloud image is generated and displayed according to the key feature points in the face 3D model.
This step may be as described with reference to step 103.
In step 205, the feature values of the key feature points in the face 3D model are compared with the preset aesthetic feature values for aesthetic analysis, and an aesthetic analysis result is obtained.
This step may be as described with reference to step 104.
In step 206, the results of the aesthetic analysis are displayed.
According to the scheme of the embodiment of the disclosure, the aesthetic analysis result can be displayed on different interfaces. For example, the aesthetic analysis result is displayed on the face point cloud image and/or the first 3D model of the face; or on the face point cloud image and/or the second 3D model of the face; or on the face point cloud image and/or a third 3D model of the face, where the third 3D model of the face is obtained by superimposing the face image shot by the 2D camera and the face point cloud image.
That is to say, in a 3D camera scenario, the first aesthetic analysis result may be displayed on the face point cloud image and the second aesthetic analysis result then displayed on the subsequently displayed first 3D model of the face; alternatively, only the second aesthetic analysis result may be displayed on the first 3D model, or only the first aesthetic analysis result on the face point cloud image. This can be set flexibly as needed.
In a 2D camera scenario, the same options apply to the second 3D model of the face: the first aesthetic analysis result may be displayed on the face point cloud image and the second aesthetic analysis result on the subsequently displayed second 3D model, or only one of the two may be displayed, set flexibly as needed.
Likewise in a 2D camera scenario, the first aesthetic analysis result may be displayed on the face point cloud image and the second aesthetic analysis result on the subsequently displayed third 3D model of the face, or only one of the two may be displayed, set flexibly as needed.
It should be noted that the second aesthetic analysis result may also be displayed on other interfaces, for example, on the face image of the user or other selected images.
It is further noted that embodiments of the present disclosure may also generate reports containing results of aesthetic analysis. The generated report can be a full screen report page, a long report page or a pictorial report page, and the aesthetic analysis results in the report can comprise the first aesthetic analysis result and the second aesthetic analysis result or only comprise the second aesthetic analysis result. The generated report may further include the first 3D model of the face, the second 3D model of the face, or the third 3D model of the face.
In addition, the scheme of the embodiment of the present disclosure may both display the aesthetic analysis result and generate a report containing it, or only display the result without generating a report, or only generate a report without displaying the result; this can be configured flexibly as required.
It can be seen that, according to the scheme provided by the embodiment of the disclosure, a waiting interface can be loaded and displayed during generation of the face 3D model, before the face point cloud image is displayed; for example, the preset starry-sky 3D model can be loaded and displayed as the waiting interface, so that the user does not find the wait tedious and a sense of technology is added. In addition, the aesthetic analysis result can be displayed on different interfaces, suiting a wider variety of scenarios.
Fig. 3 is another schematic flow chart of a 3D face information processing method according to an embodiment of the present disclosure. Fig. 3 describes aspects of an embodiment of the present disclosure in more detail with respect to fig. 1 and 2.
Referring to fig. 3, a method provided by an embodiment of the present disclosure includes:
in step 301, an image of the user's face is acquired.
In this step, a face image shot by the user through a camera or uploaded directly may be acquired. The face image generally includes the complete facial features of the front face, the cheek regions of the side face, the ears, and the like.
Shooting through the camera may mean shooting a video or taking photos. Referring to fig. 4, taking video shooting as an example, the embodiment of the present disclosure may acquire the user's face image by shooting a video. During shooting, guide information prompting the user to turn his or her head can be generated so that the user completes the video according to the guidance; both the front and the sides of the face generally need to be captured. After shooting is finished, one frontal picture and two side pictures can be taken from the video; the side pictures may be taken at set angles, for example 55 or 45 degrees, on the left and right sides of the face.
The camera in this step may be a 3D camera or a 2D camera; a user photographed with a 3D camera is referred to as a 3D user, and a user photographed with a 2D camera as a 2D user. A 3D camera can collect the three-dimensional spatial coordinates of each point in its field of view, from which three-dimensional imaging can be restored by an algorithm, while a 2D camera typically acquires only the two-dimensional coordinates, i.e. (x, y), of each point in the image.
When taking a facial image, the user may generally be instructed to meet the following requirements as far as possible: keep the face centered for the frontal shot, keep the side-face shooting angle generally between 40 and 80 degrees, keep the picture clear, avoid wearing glasses, avoid occluding the face, and preferably tie long hair back so that it does not cover the face.
The method of the embodiment of the disclosure can be applied to a mobile terminal device or other camera devices, for example, a face image is shot through a camera on the mobile terminal device.
In step 302, a display waiting interface is loaded.
During generation of the face 3D model, before the face point cloud image is displayed, a waiting interface may be loaded and displayed, for example the preset starry-sky 3D model. It should be noted that the waiting interface may also take the form of various animations, music, and the like; the embodiment of the disclosure is not limited in this respect. Referring to fig. 5, taking (but not limited to) the preset starry-sky 3D model as an example, the facial features of the head model may be gradually highlighted until a face 3D model with a clear facial outline is formed, and simple aesthetic evaluations of its features may also appear, such as 'big eyes', 'high nose bridge', 'short chin', 'balanced three facial sections' and 'golden triangle 45°'. Because generating the face 3D model and computing the key feature points take time, displaying a waiting interface such as the preset starry-sky 3D model keeps the wait from feeling tedious and adds a sense of technology.
While the starry-sky waiting interface is displayed, the progress state may also be shown with a loading progress bar; for example, the dots of the progress bar are highlighted and advance slowly, and the bar pauses at a set position, for example 80%, until the model is ready and an image is returned.
In step 303, a 3D model of the face is generated from the acquired face image.
In this step, after the user's face image is acquired, a 3D face model can be generated from it using a relevant 3D face recognition algorithm, which may be, for example, an algorithm based on image features, on model variable parameters, or on deep learning; the disclosure is not limited in this respect.
It should be noted that whether the acquired face image comes from a 2D user or a 3D user, the face 3D model may be generated from it using a relevant face recognition algorithm.
In the step, when the face image is a face image shot by a 3D camera, a first 3D model of the face is generated according to the obtained face image; or when the face image is the face image shot by the 2D camera, generating a second 3D model of the human face according to the obtained face image.
In step 304, a face point cloud image is generated and displayed according to the key feature points in the face 3D model.
In this step, the key feature points in the face 3D model can be obtained and connected to generate and display the face point cloud image. For example, the connecting lines can form triangular or quadrilateral meshes following a preset rule, thereby generating the face point cloud image. The key feature points in the 3D model of the human face may include, but are not limited to, points of the eyes, pupils, nose, mouth, ears and eyebrows, as well as the contour points of each part of the face.
A point cloud refers to scanned data recorded in the form of points; each point includes three-dimensional coordinates, and some points may also include color information (RGB) or reflection intensity information.
The point cloud images of the embodiment of the disclosure are generated from the face 3D models. Each face 3D model carries many model feature points; the key feature points can be extracted and connected into regular triangular or quadrilateral meshes, so that each face 3D model yields its own point cloud image. That is, different face 3D models generate different point cloud images; figs. 6 and 7 show interface diagrams of point cloud generation.
For example, the key feature points of the eyes, eyebrows, mouth and the like can be densified and connected, so that the point cloud image looks closer to the actual face. Compared with 2D face analysis, the 3D point cloud image allows a more detailed analysis of the user's facial features and a more intuitive, three-dimensional display of facial detail; the graphics are more attractive and convey a strong sense of technology, making the result easier for the user to accept.
When the face image is the face image shot through the 2D camera, the converted 3D model is obtained according to the second 3D model of the face, the mapping matrix between the key feature point coordinates of the face image and the key feature point coordinates in the converted 3D model is determined according to the set algorithm, and the face point cloud image is generated and displayed according to the mapping matrix.
That is, if the user is a 2D user, the 2D face image saved during shooting in the preceding steps is used together with the 3D face model generated from it, which carries the model point locations of the key feature points. A design model similar to the 3D face model can be determined and taken as the target, the key feature points of the user's 3D face model are retained on it, and a unified, visually pleasing converted 3D model is obtained. Then a PnP (Perspective-n-Point) algorithm is used to find the mapping matrix between the 2D key feature point coordinates on the 2D face image and the corresponding 3D key feature point coordinates in the converted 3D model; points and lines are rendered according to this mapping matrix in a window of the 2D image's size, with line drawing applied to the eyebrows, eyes, mouth and other parts, to generate the 2D user's point cloud image.
It should be noted that the converted 3D model is optional: when the face image is shot by a 2D camera, the mapping matrix between the key feature point coordinates of the face image and those in the second 3D model of the face can be determined directly according to a set algorithm, and the face point cloud image generated and displayed from it. That is, if the user is a 2D user, the PnP algorithm can be used to find the mapping matrix between the 2D key feature point coordinates on the saved 2D face image and the key feature points in the generated 3D face model; points and lines are then rendered according to this mapping matrix in a window of the 2D image's size, with line drawing applied to the eyebrows, eyes, mouth and other parts, to generate the 2D user's point cloud image.
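A sketch of this mapping step using OpenCV's Perspective-n-Point solver; the pinhole intrinsics below are a common approximation and an assumption, not part of the patent:

```python
import cv2
import numpy as np

def map_model_points_to_image(pts_3d, pts_2d, image_size):
    """Estimate the 2D-3D mapping with PnP, then project the model points.

    pts_3d: (N, 3) key feature point coordinates from the face 3D model.
    pts_2d: (N, 2) matching key feature point coordinates on the 2D image.
    Requires N >= 4 point correspondences.
    """
    obj = np.asarray(pts_3d, dtype=np.float64)
    img = np.asarray(pts_2d, dtype=np.float64)
    w, h = image_size
    # Pinhole-camera approximation (assumed intrinsics, not from the patent):
    # focal length ~ image width, principal point at the image center.
    camera_matrix = np.array([[w, 0, w / 2],
                              [0, w, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    dist_coeffs = np.zeros(4)  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(obj, img, camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP failed to find a pose")
    # Re-project all model key feature points into the 2D image window; the
    # projected points are then connected and rendered as the point cloud.
    projected, _ = cv2.projectPoints(obj, rvec, tvec, camera_matrix, dist_coeffs)
    return projected.reshape(-1, 2)
```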
In step 305, the feature values of the key feature points in the face 3D model are compared with preset aesthetic feature values for aesthetic analysis, and the first aesthetic analysis result is displayed on the face point cloud image.
It should be noted that this step takes displaying the first aesthetic analysis result on the face point cloud image as an example, without limitation; the first aesthetic analysis result need not be displayed on the face point cloud image.
In this step, the feature values of the key feature points in the face 3D model can be determined and compared with preset aesthetic feature values to obtain the first aesthetic analysis result.
In this step, the face point cloud image may be switched among different set parts of the face, with the first aesthetic analysis results of those parts displayed correspondingly on it; for example, the front full face, the eyes, the mouth, the nose, the side face and the like are displayed in turn, and the switching order can be set flexibly as needed.
The aesthetic analysis of the face in the disclosed embodiments may include full-face and partial analysis. For example, the front view of the face point cloud image can present the full-face, eye, nose and mouth analysis, while the side view can present the lateral golden-triangle line, lines indicating the fullness of the forehead in profile, and the like.
Marked lines (reference lines) and the feature values of the key feature points can be displayed on the point cloud image, and the key analysis areas of the whole face or of a given part can be circled with marked lines.
The feature values of the key feature points can be calculated through a set algorithm; different acquired face images yield different feature values. A feature value may be a size, proportion, depth or angle of a facial key feature. The embodiments of the present disclosure may preset aesthetic feature values for different types of analysis; for example, about 50 to 70 facial features may be defined, each divided into several grades, for example three, with a score per grade, the sum of all feature scores giving the final score. The display may show only the features where the user has a clear advantage, but is not limited thereto. Each feature value can be computed by a corresponding algorithm, finally yielding conclusions such as the size of the eyes or the height of the nose bridge.
For example, the feature value may be a composite beauty score, which may be the sum of a frontal-face score and a side-face score. A feature value can also be the extent of the upper or middle facial section, the proportion of the lip lines, the nose width, the inter-ocular distance and the eye length, or values such as eyebrow length, inter-eyebrow distance and lip-line direction.
For example, referring to fig. 8, in a point cloud image for full-face analysis, the first aesthetic analysis result may show: 'Three facial sections: the upper section is within the ideal range at 5 cm, the middle section is within the ideal range at 1.2 cm, the lip-line proportion is ideal at 1.5', and the like. In addition, descriptions such as 'intelligence 21, age impression 56, full lips, heart-shaped face' can be displayed, and values such as the nose width, inter-ocular distance and eye length are marked with lines in the point cloud image. Detailed analyses of the eye, nose and mouth can be seen in the interface diagrams of figs. 9 to 11, respectively.
In an embodiment of the present disclosure, the 3D aesthetic analysis includes a full-face 3D aesthetic analysis and a partial 3D aesthetic analysis. The full-face analysis may include a beauty-score analysis, a temperament analysis, a personality analysis, and an analysis of the dominant and improvable parts of the face; the partial analysis may include face-shape, eyebrow-shape, eye-shape and lip-shape analyses. The scope of the full-face and partial aesthetic analyses is not limited to these categories.
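As a sketch of the multi-grade scoring described above (roughly 50 to 70 features, several grades each, scores summed), where the grade bounds and per-grade scores are invented placeholders:

```python
# Hypothetical grade boundaries and scores for a few of the ~50-70 features.
# "bounds" = (upper bound of grade 1, upper bound of grade 2); values above
# the second bound fall into grade 3. The scores per grade are assumptions.
FEATURE_GRADES = {
    "eye_size":           {"bounds": (0.9, 1.1), "scores": (1, 2, 3)},
    "nose_bridge_height": {"bounds": (0.8, 1.2), "scores": (1, 3, 2)},
}

def grade_feature(name, value):
    """Return the score of the grade that the measured value falls into."""
    low, high = FEATURE_GRADES[name]["bounds"]
    scores = FEATURE_GRADES[name]["scores"]
    if value < low:
        return scores[0]
    return scores[1] if value <= high else scores[2]

def final_score(measured):
    """Sum of all per-feature grade scores, as described above."""
    return sum(grade_feature(name, value)
               for name, value in measured.items() if name in FEATURE_GRADES)
```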
In step 306, the second aesthetic analysis result is displayed on a different interface.
The step can display a first 3D model of the face, and display a second aesthetic analysis result on the first 3D model of the face; or displaying a second 3D model of the face, and displaying a second aesthetic analysis result on the second 3D model of the face; or displaying a third 3D model of the face, and displaying the second aesthetic analysis result on the third 3D model of the face.
The first 3D model of the human face is a true 3D model generated from the face image when it was shot by a 3D camera; the second 3D model of the human face is a 3D model generated from the face image when it was shot by a 2D camera; and the third 3D model of the human face is a pseudo 3D model obtained by superimposing the face image shot by the 2D camera and the face point cloud image.
In this step, the generated first 3D model of the face can be displayed, with lines marked and aesthetic analysis data shown at the set parts of the model and the text description of the aesthetic analysis displayed in the area outside it; or the generated second 3D model of the face can be displayed in the same way; or a third 3D model of the face, obtained by superimposing the generated face point cloud image on the user's face image, can be displayed, with lines marked and aesthetic analysis data shown at the set parts of the point cloud and the text description displayed in the area outside it.
The first 3D model of the human face may be a true 3D model generated from the user's face image, as shown in fig. 12, while the third 3D model may be a pseudo 3D model, as shown in fig. 13. The pseudo 3D model is displayed by superimposing the point cloud image on the 2D face image, with the marked lines and feature values shown on the superimposed result. That is to say, for a 2D user, the point cloud image generated in the preceding steps is superimposed on the user's 2D face image, and the thickness and color of the point cloud's lines can be adjusted to produce the pseudo 3D model. Meanwhile, for the sake of appearance and to make the dotted lines more visible, a mask layer can be added between the superimposed point cloud image and the 2D face image.
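A sketch of the superimposition that yields the pseudo 3D model, assuming the point cloud has already been rendered into a single-channel layer; the blending weight for the mask layer is illustrative:

```python
import cv2
import numpy as np

def make_pseudo_3d(face_img, cloud_layer, mask_alpha=0.35, line_color=(0, 255, 200)):
    """Overlay the rendered point cloud layer on the 2D face image.

    face_img:    H x W x 3 face photo taken with the 2D camera.
    cloud_layer: H x W single-channel mask of the rendered point/line cloud
                 (non-zero wherever a point or line was drawn).
    """
    # Darkening mask layer between the photo and the dot-line cloud so the
    # lines stand out (the mask layer mentioned above); weight is assumed.
    dimmed = cv2.addWeighted(face_img, 1.0 - mask_alpha,
                             np.zeros_like(face_img), mask_alpha, 0)
    out = dimmed.copy()
    out[cloud_layer > 0] = line_color  # paint the point cloud lines on top
    return out
```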
In step 307, when the tab menu for a part of the face is clicked, the displayed 3D face model is switched to show the corresponding part.
In this step, the first 3D model, the second 3D model or the third 3D model of the face may be switched to display the corresponding part when the tab menu for that part of the face is clicked.
For example, when the tab menu for a part of the face is clicked on the first 3D model (the true 3D model), the second 3D model or the third 3D model (the pseudo 3D model), the display can jump to the corresponding area to show the analysis of that part. For example: when the face-shape tab is clicked, the display switches to the face-shape analysis; when the eyebrow-shape tab is clicked, the 3D face model switches to the eyebrow analysis.
For example, the true 3D model can be displayed interactively, that is, the user can switch between different parts and view the model more intuitively; the pseudo 3D model may likewise be presented interactively.
In step 308, a report is generated containing the results of the aesthetic analysis.
The report generated by this step may be a full-screen report page, a long report page, or a pictorial report page, and the tab menu in the report page may include: aesthetic analysis, face shape, eyebrow shape, eye shape, lip shape, skin detection. Other tabs may be added according to the analysis results. The aesthetic analysis results in the generated report may include both the first and second aesthetic analysis results described above, or only the second. The generated report may further include the first, second or third 3D model of the face. An example of a full-screen report page is shown in fig. 14.
In this step, the aesthetic analysis result is displayed by region in a long report page, which shows tabs that jump to the corresponding regions; clicking a tab displays the analysis of that part. For example: clicking the face-shape tab in the long report page switches the displayed 3D face model to the face-shape analysis and jumps the report area to the face-shape region; clicking the eyebrow-shape tab switches the displayed 3D face model to the eyebrow analysis and jumps the report area to the eyebrow region.
The aesthetic analysis tab area can show the full-face aesthetic analysis results, including: beauty-score data, a temperament analysis result, a personality analysis result, and the dominant and improvable parts of the face.
In the long report page, the result of the aesthetic analysis can be interpreted and displayed through data and text description.
A long report page is displayed on the page showing the 3D face model, presenting both the full-face and the partial analysis results. For example, the presentation of the full-face aesthetic analysis results may read: 'Composite beauty score 124. The overall style of the face is gentle: the facial features are mature in structure but soft in line, and the resting expression carries no aggressiveness, so the face looks mild. Temperament: the impression given is young and sensitive, though occasionally aloof. Personality: your character appears perceptive and romantic, and is sometimes guarded. Dominant parts: a water-drop nose, an oval (melon-seed) face, standard eyebrows. Parts to improve: the height of the chin and of the nose bridge.'
In the long report page, the interpretation or description of the corresponding aesthetic analysis result is shown in each of the face-shape, eyebrow-shape, eye-shape, lip-shape and skin-detection areas. For example, the eyebrow-shape analysis may read: 'Your eyebrows are standard eyebrows; straight eyebrows would suit you better; straight eyebrows are the most suitable eyebrow shape for you.'
In the long report page, smart beauty tool tabs are also displayed in the corresponding analysis areas. For example, a smart eyebrow-drawing tool tab is shown in the eyebrow-shape analysis area; clicking it takes the user to the smart eyebrow-drawing tool page.
The long report page also includes information on and links to stars with a similar face shape, which can be clicked to learn, for example, how those stars improve their look; clicking the style recommendation button jumps to the style recommendation page of the aesthetic diagnosis. In addition, the analysis report and shared videos can be saved and shared to a social platform. The long report page may have a video area for recording or playback; the video is, for example, a screen recording of the 3D face analysis process and can be shared directly to social software such as Douyin (TikTok).
It should be noted that displaying the aesthetic analysis result first and then generating the report containing it is given as an example, not a limitation; the embodiment may also only display the result without generating a report, or only generate a report without displaying the result, configured flexibly as required.
The technical scheme provided by the embodiment of the disclosure can realize 3D face aesthetic analysis and display the result three-dimensionally in 3D form, so that the analysis has stereoscopic depth, is more vivid and accurate, and improves the user experience. Interactive operation is supported, and a report page can be generated so that the user obtains the report content simply and quickly, views the aesthetic analysis result more conveniently and intuitively, and has his or her needs met more comprehensively.
The 3D face information processing method according to the present disclosure is described in detail above, and a 3D face information processing apparatus and a terminal corresponding to the present disclosure are described below.
Fig. 15 is a schematic block diagram of a 3D face information processing apparatus according to an embodiment of the present disclosure.
The apparatus may be located in a terminal device, such as a mobile terminal device or a computer device. Referring to fig. 15, a 3D face information processing apparatus includes: the system comprises an acquisition module 151, a face 3D model generation module 152, a face cloud image generation module 153 and an aesthetic analysis processing module 154.
An obtaining module 151, configured to obtain a face image of the user. The obtaining module 151 may obtain a face image captured by a camera or uploaded directly by the user; the face image generally includes the complete facial features of the front face, the cheek regions of the side face, the ears, and the like.
A face 3D model generating module 152, configured to generate a face 3D model according to the face image acquired by the acquiring module 151. The face 3D model generation module 152 may generate a face 3D model by using a relevant 3D face recognition algorithm according to the acquired face image. The related 3D face recognition algorithm may be, for example, an algorithm based on image features, an algorithm based on model variable parameters, or an algorithm based on deep learning, and the disclosure is not limited thereto.
The face cloud image generation module 153 is configured to generate and display a face point cloud image according to the key feature points in the face 3D model generated by the face 3D model generation module 152. The face cloud image generation module 153 may obtain the key feature points in the face 3D model and connect them to generate the face point cloud image.
And the aesthetic analysis processing module 154 is configured to compare the feature values of the key feature points in the face 3D model generated by the face 3D model generation module 152 with preset aesthetic feature values to obtain an aesthetic analysis result. The aesthetic analysis processing module 154 may determine the feature values of the key feature points in the face 3D model and compare them with the preset aesthetic feature values to obtain the aesthetic analysis result.
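As a structural sketch only, the four modules might be wired together as follows; every class and method name is hypothetical, chosen to mirror the module list above:

```python
class FaceInfoProcessor:
    """Hypothetical wiring of the four modules described above."""

    def __init__(self, acquirer, model_builder, cloud_renderer, analyzer):
        self.acquirer = acquirer              # obtaining module 151
        self.model_builder = model_builder    # face 3D model generation module 152
        self.cloud_renderer = cloud_renderer  # face cloud image generation module 153
        self.analyzer = analyzer              # aesthetic analysis processing module 154

    def run(self, source):
        face_image = self.acquirer.get_face_image(source)
        model = self.model_builder.build(face_image)         # 3D face model
        cloud = self.cloud_renderer.render(model.keypoints)  # point cloud image
        result = self.analyzer.compare(model.keypoints)      # aesthetic result
        return cloud, result
```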
It can be seen that the solution provided by the embodiments of the present disclosure realizes 3D face aesthetic analysis, and the analysis result can subsequently be displayed three-dimensionally in 3D form, giving it a stronger stereoscopic effect, making it more vivid and accurate, and improving the user experience.
Fig. 16 is another schematic block diagram of a 3D face information processing apparatus according to an embodiment of the present disclosure.
The apparatus may be located in a terminal device, such as a mobile terminal device or a computer device. Referring to fig. 16, the 3D face information processing apparatus includes: an acquisition module 151, a face 3D model generation module 152, a face point cloud image generation module 153, an aesthetic analysis processing module 154, an aesthetic result display module 155, an interaction module 156, and a report generation module 157.
The functions of the acquisition module 151, the face 3D model generation module 152, the face point cloud image generation module 153, and the aesthetic analysis processing module 154 are as described with reference to fig. 15 and are not repeated here.
The aesthetic result display module 155 is configured to display the aesthetic analysis result obtained by the aesthetic analysis processing module 154.
The face 3D model generation module 152 may further include: a first generation module 1521 or a second generation module 1522.
The first generation module 1521 is configured to generate a first 3D model of the face from the face image acquired by the acquisition module 151 when the face image was captured by a 3D camera.
The second generation module 1522 is configured to generate a second 3D model of the face from the face image acquired by the acquisition module 151 when the face image was captured by a 2D camera.
The aesthetic result display module 155 may further include: a first display module 1551, a second display module 1552, and a third display module 1553, which display the aesthetic analysis result on different interfaces.
The first display module 1551 is configured to display the aesthetic analysis result obtained by the aesthetic analysis processing module 154 on the face point cloud image and/or the first 3D model of the face.
The second display module 1552 is configured to display the aesthetic analysis result obtained by the aesthetic analysis processing module 154 on the face point cloud image and/or the second 3D model of the face.
The third display module 1553 is configured to display the aesthetic analysis result obtained by the aesthetic analysis processing module 154 on the face point cloud image and/or a third 3D model of the face, where the third 3D model of the face is obtained by superimposing the face image captured by the 2D camera onto the face point cloud image.
The interaction module 156 is configured to switch the display to the corresponding part of the first, second, or third 3D model of the face when the tag menu of a facial part is clicked.
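Such tag-driven switching can be dispatched in a few lines, as in the sketch below; the tag names and the render_part callback are illustrative assumptions.

# A minimal sketch of module 156: dispatching a tag-menu click to the
# renderer. PART_TAGS and render_part are illustrative assumptions.
PART_TAGS = ("face", "eyebrow", "eye", "nose", "lip")

def on_tag_clicked(tag: str, model_id: str, render_part) -> None:
    """Switch the displayed view to the chosen part of the chosen model."""
    if tag not in PART_TAGS:
        raise ValueError(f"unknown facial-part tag: {tag}")
    # model_id selects the first, second, or third 3D model of the face.
    render_part(model=model_id, part=tag)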
The report generation module 157 is configured to generate a report containing the aesthetic analysis result. The generated report may be a full-screen report page, a long report page, or a pictorial report page, and the tags in the report page may include: aesthetic analysis, face shape, eyebrows, eyes, lips, and skin detection. Other tags may be added according to the analysis result.
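Assembling such a report can amount to grouping the analysis result by tag, as in this sketch; the Report structure and the layout names are illustrative assumptions.

# A minimal sketch of module 157. The layout names and section tags mirror
# the description above; everything else is an illustrative assumption.
from dataclasses import dataclass, field

REPORT_TAGS = ("aesthetic analysis", "face", "eyebrow", "eye", "lip",
               "skin detection")

@dataclass
class Report:
    layout: str  # "full-screen", "long-page", or "pictorial"
    sections: dict[str, str] = field(default_factory=dict)

def generate_report(analysis_result: dict[str, str],
                    layout: str = "long-page") -> Report:
    """Build one report section per known tag present in the result."""
    report = Report(layout=layout)
    for tag in REPORT_TAGS:
        if tag in analysis_result:
            report.sections[tag] = analysis_result[tag]
    return report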
For the functions of the modules in the 3D face information processing apparatus, reference may likewise be made to the description of the method above; they are not repeated here.
Fig. 17 is a schematic structural diagram of a terminal device according to an exemplary embodiment; the terminal device may be used to implement the method described above. The terminal device may be a mobile terminal device or a computer device, and the mobile terminal device may be, for example, a mobile phone or a tablet such as an iPad.
Referring to fig. 17, terminal device 1700 includes memory 1710 and processor 1720.
Processor 1720 may be a single multi-core processor or may include multiple processors. In some embodiments, processor 1720 may include a general-purpose host processor and one or more special-purpose coprocessors, such as a graphics processing unit (GPU) or a digital signal processor (DSP). In some embodiments, processor 1720 may be implemented using custom circuits, such as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA).
Memory 1710 may include various types of storage units, such as system memory, read-only memory (ROM), and permanent storage. The ROM may store static data or instructions needed by processor 1720 or other modules of the computer. The permanent storage may be a read-write storage device, and may be a non-volatile device that retains stored instructions and data even after the computer is powered off. In some embodiments, a mass storage device (e.g., a magnetic or optical disk, or flash memory) is employed as the permanent storage; in other embodiments, the permanent storage may be a removable storage device (e.g., a floppy disk or an optical drive). The system memory may be a read-write memory device or a volatile read-write memory device, such as dynamic random-access memory, and may store instructions and data that some or all of the processors require at runtime. In addition, memory 1710 may include any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash memory, programmable read-only memory), magnetic disks, and/or optical disks. In some embodiments, memory 1710 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-density optical disc, a flash memory card (e.g., an SD card, a mini SD card, a Micro-SD card), or a magnetic floppy disk. The computer-readable storage media do not include carrier waves or transitory electronic signals transmitted wirelessly or over wires.
Memory 1710 has stored thereon executable code that, when processed by processor 1720, can cause processor 1720 to perform the above-described methods.
The method according to the present disclosure has been described in detail above with reference to the accompanying drawings.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Furthermore, the method according to the present disclosure may also be implemented as a computer program or computer program product comprising computer program code instructions for performing the steps defined in the above-described method of the present disclosure.
Alternatively, the present disclosure may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) that, when executed by a processor of an electronic device (or computing device, server, or the like), causes the processor to perform the various steps of the above-described method according to the present disclosure.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, their practical application, or improvements over technologies in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (17)

1. A 3D face information processing method, characterized by comprising the following steps:
acquiring a face image of a user;
generating a face 3D model according to the acquired face image;
generating and displaying a face point cloud image according to key feature points in the face 3D model;
and comparing feature values of the key feature points in the face 3D model with preset aesthetic feature values to obtain an aesthetic analysis result.
2. The method of claim 1, wherein:
and loading and displaying a waiting interface during the generation of the face 3D model and before the face point cloud image is displayed.
3. The method of claim 1, wherein, after the aesthetic analysis result is obtained, the method further comprises:
displaying the aesthetic analysis result; and/or,
generating a report containing the aesthetic analysis result.
4. The method according to any one of claims 1 to 3, wherein the generating a face 3D model according to the acquired face image comprises:
when the face image is a face image shot by a 3D camera, generating a first 3D model of the face according to the acquired face image; or,
when the face image is a face image shot by a 2D camera, generating a second 3D model of the face according to the acquired face image.
5. The method of claim 4, wherein the displaying of the aesthetic analysis result comprises:
and displaying the result of the aesthetic analysis on different interfaces.
6. The method of claim 5, wherein the displaying of the aesthetic analysis result on different interfaces comprises:
displaying the aesthetic analysis result on the face point cloud image and/or the first 3D model of the face; or,
displaying the aesthetic analysis result on the face point cloud image and/or the second 3D model of the face; or,
displaying the aesthetic analysis result on the face point cloud image and/or a third 3D model of the face, wherein the third 3D model of the face is obtained by superimposing the face image shot by the 2D camera onto the face point cloud image.
7. The method of claim 6, further comprising:
switching the display to the corresponding part of the first 3D model, the second 3D model, or the third 3D model of the face when a tag menu of a facial part is clicked.
8. The method of claim 7, wherein the tag menus of the facial parts comprise one or more of:
a face shape tag, an eyebrow shape tag, an eye shape tag, a nose shape tag, and a lip shape tag.
9. The method according to any one of claims 1 to 3, wherein the generating and displaying of a face point cloud image according to key feature points in the face 3D model comprises:
obtaining key feature points in the face 3D model;
and connecting the key feature points with lines to generate and display a face point cloud image.
10. The method of claim 4, wherein the generating and displaying of a face point cloud image according to key feature points in the face 3D model comprises:
when the face image is a face image shot by a 2D camera, obtaining a converted 3D model from the second 3D model of the face, determining, according to a set algorithm, a mapping matrix between the key feature point coordinates of the face image and the key feature point coordinates in the converted 3D model, and generating and displaying a face point cloud image according to the mapping matrix; or,
when the face image is a face image shot by a 2D camera, determining, according to a set algorithm, a mapping matrix between the key feature point coordinates of the face image and the key feature point coordinates in the second 3D model of the face, and generating and displaying a face point cloud image according to the mapping matrix.
11. A 3D face information processing apparatus characterized by comprising:
the acquisition module is used for acquiring a face image of a user;
the face 3D model generation module is used for generating a face 3D model according to the face image acquired by the acquisition module;
the face point cloud image generation module is used for generating and displaying a face point cloud image according to key feature points in the face 3D model generated by the face 3D model generation module;
and the aesthetic analysis processing module is used for comparing feature values of the key feature points in the face 3D model generated by the face 3D model generation module with preset aesthetic feature values to obtain an aesthetic analysis result.
12. The apparatus of claim 11, further comprising:
an aesthetic result display module for displaying the aesthetic analysis result obtained by the aesthetic analysis processing module; and/or,
a report generation module for generating a report containing the results of the aesthetic analysis.
13. The apparatus according to claim 11 or 12, wherein the face 3D model generation module comprises:
the first generation module is used for generating a first 3D model of the face according to the face image acquired by the acquisition module when the face image is a face image shot by a 3D camera; or,
the second generation module is used for generating a second 3D model of the face according to the face image acquired by the acquisition module when the face image is a face image shot by a 2D camera.
14. The apparatus of claim 12, wherein the aesthetic result display module comprises:
the first display module is used for displaying the aesthetic analysis result obtained by the aesthetic analysis processing module on the face point cloud image and/or the first 3D model of the face; or,
the second display module is used for displaying the aesthetic analysis result on the face point cloud image and/or the second 3D model of the face; or,
the third display module is used for displaying the aesthetic analysis result obtained by the aesthetic analysis processing module on the face point cloud image and/or a third 3D model of the face, wherein the third 3D model of the face is obtained by superimposing the face image shot by the 2D camera onto the face point cloud image.
15. The apparatus of claim 14, further comprising:
the interaction module is used for switching the display to the corresponding part of the first 3D model, the second 3D model, or the third 3D model of the face when a tag menu of a facial part is clicked.
16. A terminal device, comprising:
a processor; and
a memory having executable code stored thereon, which when executed by the processor, causes the processor to perform the method of any one of claims 1-10.
17. A non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to perform the method of any one of claims 1-10.
CN201911153141.6A 2019-11-22 2019-11-22 3D face information processing method and device and terminal Pending CN112836545A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911153141.6A CN112836545A (en) 2019-11-22 2019-11-22 3D face information processing method and device and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911153141.6A CN112836545A (en) 2019-11-22 2019-11-22 3D face information processing method and device and terminal

Publications (1)

Publication Number Publication Date
CN112836545A 2021-05-25

Family

ID=75921671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911153141.6A Pending CN112836545A (en) 2019-11-22 2019-11-22 3D face information processing method and device and terminal

Country Status (1)

Country Link
CN (1) CN112836545A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793611A (en) * 2014-02-18 2014-05-14 中国科学院上海技术物理研究所 Medical information visualization method and device
CN105427385A (en) * 2015-12-07 2016-03-23 华中科技大学 High-fidelity face three-dimensional reconstruction method based on multilevel deformation model
CN106909875A (en) * 2016-09-12 2017-06-30 湖南拓视觉信息技术有限公司 Face shape of face sorting technique and system
US20180184062A1 (en) * 2016-12-22 2018-06-28 Aestatix LLC Image Processing to Determine Center of Balance in a Digital Image
CN109255328A (en) * 2018-09-07 2019-01-22 北京相貌空间科技有限公司 User's makings determines method and device
CN109859305A (en) * 2018-12-13 2019-06-07 中科天网(广东)科技有限公司 Three-dimensional face modeling, recognition methods and device based on multi-angle two-dimension human face

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LIU Bowen: "Research on 3D Face Point Cloud Registration Technology Based on Two-Dimensional Feature Points", China Master's Theses Full-text Database, Information Science and Technology, vol. 2018, no. 12, pages 138-1103 *
SANG Gaoli: "Research on Recognition Technology Based on Real Measured 3D Faces", Xidian University Press, 31 July 2019, pages 16-17 *
QI Xiangdong, QIN Jianzeng, ZHAO Weidong, FAN Jihong, LI Jianyi, ZHANG Meichao, HUANG Wenhua, ZHONG Shizhen: "Three-dimensional digital image analysis of the nasal-orbital fossa by soft-tissue laser holographic scanning", Chinese Journal of Plastic Surgery, no. 04, pages 8-11 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination