WO2022111001A1 - Face image processing method and apparatus, and electronic device and storage medium - Google Patents

Face image processing method and apparatus, and electronic device and storage medium Download PDF

Info

Publication number
WO2022111001A1
WO2022111001A1 · PCT/CN2021/119080 · CN2021119080W
Authority
WO
WIPO (PCT)
Prior art keywords
face
dense point cloud
target
Prior art date
Application number
PCT/CN2021/119080
Other languages
French (fr)
Chinese (zh)
Inventor
陈祖凯
徐胜伟
朴镜潭
王权
钱晨
Original Assignee
上海商汤智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海商汤智能科技有限公司
Publication of WO2022111001A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/18 Image warping, e.g. rearranging pixels individually
    • G06T17/00 Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20 Indexing scheme for editing of 3D models
    • G06T2219/2021 Shape modification

Definitions

  • the dense point cloud in the dense point cloud data can be directly adjusted based on the deformation coefficient.
  • In this way, the adjustment is precise down to each point of the dense point cloud that constitutes the virtual face image, improving adjustment accuracy while also improving adjustment efficiency.
  • Adjusting the deformation coefficient in response to an adjustment operation on the initial virtual face image to obtain a target deformation coefficient includes: in response to the adjustment operation on the initial virtual face image, determining a target adjustment position on the initial virtual face image and an adjustment magnitude for the target adjustment position; and adjusting the deformation coefficient associated with the target adjustment position according to the adjustment magnitude to obtain the target deformation coefficient.
  • Generating the target virtual face image based on the target dense point cloud data includes: determining a virtual face model corresponding to the target dense point cloud data; and generating the target virtual face image based on preselected face attribute features and the virtual face model.
  • An embodiment of the present disclosure provides a face image processing apparatus, including: an acquisition module configured to acquire initial dense point cloud data of a target face and generate an initial virtual face image of the target face based on the initial dense point cloud data; a determination module configured to determine a deformation coefficient of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image; an adjustment module configured to adjust the deformation coefficient in response to an adjustment operation on the initial virtual face image to obtain a target deformation coefficient; and a generation module configured to generate a target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data.
  • FIG. 8 shows a flowchart of a method for adjusting a deformation coefficient provided by an embodiment of the present disclosure
  • The acquired initial dense point cloud data of the target face is the dense point cloud data corresponding to the target face in a preset style. For example, when the target face corresponds to dense point cloud data in a classical style, the initial virtual face image of the target face displayed from that initial dense point cloud data is also a face image in the classical style. How to obtain the dense point cloud data corresponding to the target face in a preset style is described later.
  • The dense point cloud data corresponding to each second face image in various styles can be acquired and saved in advance, such as the dense point cloud data corresponding to a classical style, a modern style, a Western style, and a Chinese style, which facilitates subsequently determining the virtual face model corresponding to the first face image in different styles.
  • An association relationship between the first face image and the multiple second face images can be found; for example, linear fitting coefficients between the multiple second face images and the first face image can be determined by linear fitting. Then, according to the linear fitting coefficients and the dense point cloud data corresponding to the multiple second face images of the preset style, the dense point cloud data of the target face in the preset style can be determined.
  • S3022: Determine the dense point cloud data of the target face in the preset style according to the dense point cloud data respectively corresponding to the multiple second face images of the preset style and the linear fitting coefficients.
  • A second loss value between the face parameter values of the first face image extracted by the neural network and the predicted face parameter values of the first face image can be determined based on the gap between them.
  • The current linear fitting coefficients are adjusted so that the predicted current face parameter values of the first face image move closer to the face parameter values of the first face image extracted by the neural network; then, based on the adjusted current linear fitting coefficients, the process returns to S30212 until the adjustment of the current linear fitting coefficients meets the second adjustment cut-off condition: the linear fitting coefficients are obtained once the second loss value is smaller than the second preset threshold and/or the number of adjustments of the current linear fitting coefficients reaches a preset number.
  • The coordinate value, in the three-dimensional coordinate system, of each point in the average dense point cloud data corresponding to the multiple second face images can be obtained: the mean of the coordinates of mutually corresponding points across the dense point cloud data of the multiple second face images constitutes the coordinate value of the corresponding point in the average dense point cloud data.
  • The linear fitting coefficients may represent the relationship between the face parameter values corresponding to the first face image and the face parameter values respectively corresponding to the multiple second face images; likewise, the linear fitting coefficients may also represent the relationship between the dense point cloud data corresponding to the first face image and the dense point cloud data respectively corresponding to the multiple second face images.
  • The above formula (3) or formula (4) describes the adjustment of one point in the standard dense point cloud data; the other points in the standard dense point cloud data can be adjusted in turn in the same way, completing one adjustment of the standard dense point cloud data based on the current deformation coefficient.
  • S502 Determine a first loss value based on the adjusted dense point cloud data and the initial dense point cloud data of the target face.
  • Step S904: replace the initial bone deformation coefficient with the adjusted bone deformation coefficient and the initial blend shape coefficient with the adjusted blend shape coefficient, and return to step S902 to continue adjusting the bone coefficient and blend shape coefficient until the difference between the initial dense point cloud data V_input of the target face and the adjusted dense point cloud data V_output is smaller than the first preset threshold, or the number of iterations exceeds the preset number.
  • When the generation module 1004 is configured to generate the target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data, it is configured to: adjust the standard dense point cloud data based on the target deformation coefficient to obtain target dense point cloud data; and generate the target virtual face image based on the target dense point cloud data.
  • The dense point cloud data includes coordinate values of each point in the dense point cloud. When the acquisition module is configured to determine the dense point cloud data of the target face in the preset style from the dense point cloud data and the linear fitting coefficients, it is configured to: determine the coordinate values of the corresponding points in the average dense point cloud data based on the coordinate values of each point in the dense point clouds respectively corresponding to the multiple second face images of the preset style; determine the coordinate difference values respectively corresponding to the multiple second face images of the preset style based on the coordinate values of each point in those dense point clouds and the coordinate values of the corresponding points in the average dense point cloud data; determine the coordinate difference values corresponding to the first face image based on the coordinate difference values and linear fitting coefficients respectively corresponding to the multiple second face images of the preset style; and determine the dense point cloud data of the target face based on the coordinate difference values corresponding to the first face image and the coordinate values of the corresponding points in the average dense point cloud data.
  • The processing apparatus further includes a training module 1005 configured to pre-train the neural network as follows: acquire a sample image set including multiple sample images and their corresponding labeled face parameter values; input the multiple sample images into the neural network to obtain predicted face parameter values corresponding to each sample image; and adjust the network parameter values of the neural network based on the predicted and labeled face parameter values corresponding to each sample image to obtain the trained neural network.
  • Embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the face image processing method described in the foregoing method embodiments are executed.
  • The storage medium may be a volatile or non-volatile computer-readable storage medium.
  • Embodiments of the present disclosure further provide a computer program product carrying program code; the instructions included in the program code can be used to execute the steps of the face image processing method described in the above method embodiments. For details, refer to the above method embodiments; they are not repeated here.
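The training module described above (pre-training a neural network on sample images and labeled face parameter values) can be illustrated with a minimal sketch. A single linear layer stands in for the neural network here; the function name, array shapes, and plain gradient-descent update are assumptions for illustration, not the disclosure's actual architecture.

```python
import numpy as np

def train_face_parameter_network(images, labeled_params, lr=0.1, epochs=2000):
    """Sketch of the training module: a single linear layer mapping flattened
    sample images to predicted face parameter values, adjusted to reduce the
    gap to the labeled face parameter values."""
    n, d = images.shape
    p = labeled_params.shape[1]
    w = np.zeros((d, p))                              # network parameter values
    for _ in range(epochs):
        predicted = images @ w                        # predicted face parameter values
        grad = images.T @ (predicted - labeled_params) / n
        w -= lr * grad                                # adjust network parameters
    return w
```

In practice the network would be a deep model trained with a framework's optimizer; the loop above only shows the predict / compare-to-labels / adjust-parameters cycle named in the text.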

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Geometry (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)
  • Ultra Sonic Diagnosis Equipment (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A face image processing method and apparatus, an electronic device, and a storage medium. The processing method comprises: obtaining initial dense point cloud data of a target face, and generating an initial virtual face image of the target face on the basis of the initial dense point cloud data (S101); determining a deformation coefficient of the initial dense point cloud data with respect to standard dense point cloud data corresponding to a standard virtual face image (S102); in response to an adjustment operation on the initial virtual face image, adjusting the deformation coefficient to obtain a target deformation coefficient (S103); and generating, on the basis of the target deformation coefficient and the standard dense point cloud data, a target virtual face image corresponding to the target face (S104).

Description

Face image processing method and apparatus, electronic device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This disclosure claims priority to Chinese patent application No. 202011339586.6, filed on November 25, 2020 and entitled "Face image processing method and apparatus, electronic device and storage medium", which is incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the technical field of face reconstruction, and in particular to a face image processing method and apparatus, an electronic device, and a storage medium.
BACKGROUND
In the three-dimensional world, the shape of an object can be represented by a 3D point cloud; for example, a face shape can be represented by a dense face point cloud. However, a dense point cloud representing a face consists of tens of thousands of points, so when the face shape needs to be adjusted, the points must be adjusted one by one, which is cumbersome and inefficient.
SUMMARY
Embodiments of the present disclosure provide at least one solution for processing a face image.
In a first aspect, an embodiment of the present disclosure provides a face image processing method, including: acquiring initial dense point cloud data of a target face, and generating an initial virtual face image of the target face based on the initial dense point cloud data; determining a deformation coefficient of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image; in response to an adjustment operation on the initial virtual face image, adjusting the deformation coefficient to obtain a target deformation coefficient; and generating a target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data.
In the embodiments of the present disclosure, the deformation coefficient used to adjust the virtual face image of the target face is determined from dense point cloud data, which establishes a correspondence between the dense point cloud data and the deformation coefficient. The virtual face image can therefore be adjusted directly via the deformation coefficient; compared with adjusting the points in the dense point cloud data one by one, this improves adjustment efficiency and quickly produces the adjusted target virtual face image.
On the other hand, since the deformation coefficient is determined from the dense point cloud data, adjusting the initial virtual face image via the deformation coefficient directly adjusts the dense point cloud itself. The adjustment is thus precise down to each point of the dense point cloud that constitutes the virtual face image, improving adjustment accuracy while also improving adjustment efficiency.
In a possible implementation, the deformation coefficient includes at least one of: at least one bone coefficient and at least one blend shape coefficient. Each bone coefficient is used to adjust the initial pose of the bone formed by the first dense point cloud associated with that bone coefficient; each blend shape coefficient is used to adjust the initial positions of the second dense point cloud associated with that blend shape coefficient.
In the embodiments of the present disclosure, the bone coefficients and/or blend shape coefficients in the deformation coefficient can adjust the positions of different types of dense point clouds separately, enabling precise adjustment of the dense point cloud.
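The two kinds of coefficients can be sketched in NumPy as follows: a bone coefficient interpolates the pose of its associated point subset toward a rotated pose, and a blend shape coefficient adds a weighted per-point displacement to its associated subset. The function name, array shapes, and the interpolation scheme are illustrative assumptions, not taken from the disclosure.

```python
import numpy as np

def apply_deformation(standard_points, bone_weight, bone_rotation, bone_indices,
                      blend_weight, blend_delta, blend_indices):
    """Apply one bone coefficient and one blend shape coefficient to a
    standard dense point cloud of shape (N, 3)."""
    points = standard_points.copy()
    # Bone coefficient: blend the pose of its associated (first) point set
    # toward a rotated pose, weighted by the coefficient.
    original = points[bone_indices]
    rotated = original @ bone_rotation.T
    points[bone_indices] = (1.0 - bone_weight) * original + bone_weight * rotated
    # Blend shape coefficient: offset its associated (second) point set by a
    # weighted per-point displacement.
    points[blend_indices] = points[blend_indices] + blend_weight * blend_delta
    return points
```

A full model would apply many such coefficients, each bound to its own point subset; this sketch only shows why a handful of coefficients can move thousands of points at once.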
In a possible implementation, determining the deformation coefficient of the initial dense point cloud data relative to the standard dense point cloud data corresponding to the standard virtual face image includes: adjusting the standard dense point cloud data based on a current deformation coefficient to obtain adjusted dense point cloud data, the initial current deformation coefficient being preset; determining a first loss value based on the adjusted dense point cloud data and the initial dense point cloud data; adjusting the current deformation coefficient based on the first loss value and a preset constraint range of the deformation coefficient; and, based on the adjusted current deformation coefficient, returning to the step of adjusting the standard dense point cloud data until the adjustment of the current deformation coefficient meets a first adjustment cut-off condition, at which point the deformation coefficient of the initial dense point cloud data relative to the standard dense point cloud data is obtained from the current deformation coefficient.
In the embodiments of the present disclosure, the deformation coefficient is determined by adjusting multiple points in the standard dense point cloud data, so the resulting coefficient represents how the target face's initial dense point cloud differs from the standard dense point cloud. When adjusting the initial virtual face image of the target face, the associated points in the dense point cloud data can therefore be adjusted based on the deformation coefficient, improving adjustment accuracy.
On the other hand, when determining the deformation coefficient, the current deformation coefficient is optimized using a loss value computed, after all dense points have been adjusted, from the adjusted dense point cloud data and the target face's initial dense point cloud data. This fully accounts for the correlation between the deformation coefficient and the overall dense point cloud and improves optimization efficiency. In addition, constraining the adjustment to the preset constraint range of the deformation coefficient effectively prevents the coefficient from becoming distorted into values that cannot represent a normal target face.
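The fitting loop described here (adjust the standard cloud with the current coefficient, compute a first loss against the target face's initial cloud, update within a constraint range, stop at a cut-off condition) might look like the following sketch. For simplicity it assumes a purely linear blend-shape-style model; the gradient update, the [lo, hi] constraint range, and all names are assumptions, not the disclosure's actual optimizer.

```python
import numpy as np

def fit_deformation_coefficients(standard, target, basis, lo=-1.0, hi=1.0,
                                 lr=0.1, threshold=1e-6, max_iters=500):
    """Iteratively fit coefficients c so that standard + sum_k c_k * basis_k
    approximates the target dense point cloud (standard, target: (N, 3);
    basis: (K, N, 3))."""
    c = np.zeros(basis.shape[0])            # current deformation coefficients (preset)
    for _ in range(max_iters):
        adjusted = standard + np.tensordot(c, basis, axes=1)   # adjust standard cloud
        residual = adjusted - target
        loss = np.mean(residual ** 2)       # first loss value
        if loss < threshold:                # first adjustment cut-off condition
            break
        grad = 2 * np.tensordot(basis, residual, axes=([1, 2], [0, 1])) / residual.size
        c = np.clip(c - lr * grad, lo, hi)  # keep coefficients in the constraint range
    return c
```

Note how the loss is computed over the whole adjusted cloud at once, matching the point made above about optimizing against the overall dense point cloud rather than point by point.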
In a possible implementation, adjusting the deformation coefficient in response to an adjustment operation on the initial virtual face image to obtain a target deformation coefficient includes: in response to the adjustment operation on the initial virtual face image, determining a target adjustment position on the initial virtual face image and an adjustment magnitude for the target adjustment position; and adjusting the deformation coefficient associated with the target adjustment position according to the adjustment magnitude to obtain the target deformation coefficient.
In the embodiments of the present disclosure, the target deformation coefficient can be determined from the adjustment operation, making it easy to later determine the adjusted target virtual face image from that coefficient; this allows the deformation coefficient to be adjusted in a personalized way according to user needs.
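A minimal sketch of mapping a target adjustment position to its associated coefficients: a lookup table binds each UI position to coefficient indices, and the user's adjustment magnitude is applied to those entries. The position names and indices are hypothetical, chosen only for illustration.

```python
# Hypothetical mapping from UI adjustment positions to deformation-coefficient
# indices; these names and indices are not from the disclosure.
POSITION_TO_COEFFS = {
    "nose_bridge": [3, 4],
    "jawline": [10, 11, 12],
}

def adjust_coefficients(coeffs, target_position, magnitude):
    """Apply the adjustment magnitude to every deformation coefficient
    associated with the target adjustment position; returns a new list."""
    adjusted = list(coeffs)
    for idx in POSITION_TO_COEFFS[target_position]:
        adjusted[idx] += magnitude
    return adjusted
```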
In a possible implementation, generating the target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data includes: adjusting the standard dense point cloud data based on the target deformation coefficient to obtain target dense point cloud data; and generating the target virtual face image based on the target dense point cloud data.
In the embodiments of the present disclosure, once the target deformation coefficient is determined, the standard dense point cloud data can be adjusted directly according to it to determine the target dense point cloud data, from which the target virtual face image corresponding to the target face is quickly obtained.
In a possible implementation, generating the target virtual face image based on the target dense point cloud data includes: determining a virtual face model corresponding to the target dense point cloud data; and generating the target virtual face image based on preselected face attribute features and the virtual face model.
In the embodiments of the present disclosure, when adjusting the initial virtual face image, face attribute features selected by the user can also be incorporated for personalized adjustment, so that the target virtual face image better matches the user's actual needs.
In a possible implementation, acquiring the initial dense point cloud data of the target face and generating the initial virtual face image of the target face based on the initial dense point cloud data includes: acquiring a first face image corresponding to the target face and the dense point cloud data respectively corresponding to multiple second face images of a preset style; determining the initial dense point cloud data of the target face in the preset style based on the first face image and the dense point cloud data respectively corresponding to the multiple second face images of the preset style; and generating the initial virtual face image of the target face in the preset style based on the initial dense point cloud data of the target face in the preset style.
In the embodiments of the present disclosure, the dense point cloud data of the first face image in the preset style can be determined from the pre-stored dense point cloud data corresponding to multiple base images in that style, so that the virtual face image of the target face in the preset style can be displayed quickly.
In a possible implementation, determining the initial dense point cloud data of the target face in the preset style based on the first face image and the dense point cloud data respectively corresponding to the multiple second face images of the preset style includes: extracting face parameter values of the first face image and face parameter values respectively corresponding to the multiple second face images of the preset style, where the face parameter values include parameter values characterizing face shape and parameter values characterizing facial expression; and determining the initial dense point cloud data of the target face in the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the multiple second face images of the preset style.
In the embodiments of the present disclosure, the dense point cloud data of the first face image in the preset style is determined by combining the face parameter values of the first face image with those of the multiple second face images of the preset style. Because a face can be represented with relatively few parameter values, the dense point cloud data of the target face in the preset style can be determined more quickly.
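One way to hold the two groups of face parameter values named here (shape and expression) is a small container type; the class name, field names, and dimensions below are illustrative assumptions, not the disclosure's data format.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FaceParameters:
    """Compact face parameter values in the 3DMM spirit: one vector
    characterizing face shape, one characterizing facial expression."""
    shape: np.ndarray       # parameter values characterizing face shape
    expression: np.ndarray  # parameter values characterizing facial expression

    def as_vector(self) -> np.ndarray:
        # Concatenate both groups, e.g. for linear fitting against the
        # parameter values of the second (base) face images.
        return np.concatenate([self.shape, self.expression])
```

The point of the compact representation is the speed claim above: fitting is done over this short vector rather than over tens of thousands of point coordinates.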
In one implementation, determining the initial dense point cloud data of the target face in the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the multiple second face images of the preset style includes: determining linear fitting coefficients between the first face image and the multiple second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style; and determining the initial dense point cloud data of the target face in the preset style according to the dense point cloud data respectively corresponding to the multiple second face images of the preset style and the linear fitting coefficients.
In the embodiments of the present disclosure, linear fitting coefficients representing the relationship between the first face image and the multiple second face images can be obtained quickly from a small number of face parameter values; the dense point cloud data of the multiple second face images of the preset style can then be adjusted according to these coefficients to quickly obtain the dense point cloud data of the target face in the preset style.
In a possible implementation, determining the linear fitting coefficients between the first face image and the plurality of second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the plurality of second face images of the preset style includes: obtaining current linear fitting coefficients, where the initial current linear fitting coefficients are preset; predicting current face parameter values of the first face image based on the current linear fitting coefficients and the face parameter values respectively corresponding to the plurality of second face images of the preset style; determining a second loss value based on the predicted current face parameter values and the face parameter values of the first face image; adjusting the current linear fitting coefficients based on the second loss value and a preset constraint range corresponding to the linear fitting coefficients; and returning, based on the adjusted current linear fitting coefficients, to the step of predicting the current face parameter values until the adjustment of the current linear fitting coefficients meets a second adjustment cut-off condition, and then obtaining the linear fitting coefficients between the first face image and the plurality of second face images of the preset style based on the current linear fitting coefficients.
In the embodiments of the present disclosure, in the process of adjusting the linear fitting coefficients between the first face image and the plurality of second face images of the preset style, the linear fitting coefficients can be adjusted multiple times according to the second loss value and/or the number of adjustments, which improves the accuracy of the linear fitting coefficients; on the other hand, the adjustment is constrained by the preset constraint range of the linear fitting coefficients, so that the resulting linear fitting coefficients can determine the dense point cloud data corresponding to the target face more reasonably.
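The iterative scheme above can be sketched as projected gradient descent on the fitting coefficients. Everything below is an illustrative assumption rather than a value fixed by the disclosure: the tiny parameter matrices, the squared-error form of the "second loss value", the [0, 1] constraint range, and the optimizer settings.

```python
import numpy as np

# Toy setup: 3 second face images, each described by 4 face parameter values.
ref_params = np.array([[1.0, 0.2, 0.0, 0.1],
                       [0.0, 1.0, 0.3, 0.0],
                       [0.2, 0.0, 1.0, 0.2]])
true_coeffs = np.array([0.3, 0.5, 0.2])
target_params = ref_params.T @ true_coeffs      # face parameter values of the first face image

coeffs = np.full(3, 1.0 / 3.0)                  # initial (preset) current linear fitting coefficients
lo, hi = 0.0, 1.0                               # assumed constraint range of the coefficients
lr, max_steps, tol = 0.2, 500, 1e-12            # assumed optimizer settings

for _ in range(max_steps):
    pred = ref_params.T @ coeffs                # predict current face parameter values
    residual = pred - target_params
    loss = float(residual @ residual)           # "second loss value" (squared error here)
    if loss < tol:                              # second adjustment cut-off condition
        break
    grad = 2.0 * ref_params @ residual
    coeffs = np.clip(coeffs - lr * grad, lo, hi)  # adjust within the constraint range
```

The clipping step is one simple way to honor the preset constraint range; the disclosure does not commit to a particular constrained optimizer.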
In a possible implementation, the dense point cloud data includes coordinate values of each point in the dense point cloud, and determining the initial dense point cloud data of the target face in the preset style according to the dense point cloud data respectively corresponding to the plurality of second face images of the preset style and the linear fitting coefficients includes: determining coordinate values of corresponding points in average dense point cloud data based on the coordinate values of each point in the dense point clouds respectively corresponding to the plurality of second face images of the preset style; determining coordinate difference values respectively corresponding to the plurality of second face images of the preset style based on the coordinate values of each point in the dense point cloud data respectively corresponding to the plurality of second face images of the preset style and the coordinate values of the corresponding points in the average dense point cloud data; determining a coordinate difference value corresponding to the first face image based on the coordinate difference values respectively corresponding to the plurality of second face images of the preset style and the linear fitting coefficients; and determining the initial dense point cloud data of the target face in the preset style based on the coordinate difference value corresponding to the first face image and the coordinate values of the corresponding points in the average dense point cloud data.
In the embodiments of the present disclosure, even when the number of second face images is small, the dense point cloud data of different target faces in the preset style can be represented accurately by the dense point cloud data of a diverse set of second face images.
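The mean-plus-offsets computation described in this implementation can be sketched directly with NumPy; the three tiny reference clouds and the coefficient values below are made-up illustrative data.

```python
import numpy as np

# Three reference dense point clouds (second face images), 2 points each, as (x, y, z).
ref_clouds = np.array([[[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
                       [[0.2, 0.0, 0.0], [1.0, 0.2, 0.0]],
                       [[0.0, 0.2, 0.0], [1.0, 0.0, 0.2]]])
coeffs = np.array([0.5, 0.3, 0.2])              # linear fitting coefficients (sum to 1 here)

mean_cloud = ref_clouds.mean(axis=0)            # average dense point cloud data
deltas = ref_clouds - mean_cloud                # coordinate differences per second face image
target_delta = np.tensordot(coeffs, deltas, axes=1)  # coordinate difference of the first face image
initial_cloud = mean_cloud + target_delta       # initial dense point cloud of the target face
```

Because the coefficients here happen to sum to one, the result equals the direct linear combination of the reference clouds; for other coefficient values the mean-plus-offsets form and the direct combination differ.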
In a possible implementation, the face parameter values are extracted by a pre-trained neural network, and the neural network is trained on sample images annotated with face parameter values in advance.
In the embodiments of the present disclosure, extracting the face parameter values of a face image with a pre-trained neural network can improve both the accuracy and the efficiency of the extraction.
In a possible implementation, the neural network is pre-trained in the following manner: obtaining a sample image set, where the sample image set includes a plurality of sample images and an annotated face parameter value corresponding to each sample image; inputting the plurality of sample images into the neural network to obtain a predicted face parameter value corresponding to each sample image; and adjusting network parameter values of the neural network based on the predicted face parameter value and the annotated face parameter value corresponding to each sample image, to obtain a trained neural network.
In the embodiments of the present disclosure, during training of the neural network used to extract face parameter values, the network parameter values are adjusted continuously according to the annotated face parameter values of each sample image, so that a neural network with high accuracy can be obtained.
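The training loop above can be sketched with a minimal stand-in regressor. The disclosure does not specify the network architecture, so a linear model over made-up feature vectors stands in for it here; the data sizes, learning rate, and iteration count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 6))        # 64 sample "images", each reduced to 6 features
W_true = rng.normal(size=(6, 3))
Y = X @ W_true                      # annotated face parameter values (3 per sample)

W = np.zeros((6, 3))                # network parameter values to be adjusted
lr = 0.5                            # assumed learning rate
for _ in range(300):
    pred = X @ W                    # predicted face parameter values per sample
    grad = X.T @ (pred - Y) / len(X)
    W -= lr * grad                  # adjust parameters from prediction-vs-annotation error

mse = float(np.mean((X @ W - Y) ** 2))
```

A real system would replace the linear map with a convolutional network over images and the closed-form gradient with backpropagation, but the annotate-predict-adjust cycle is the same.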
In a second aspect, an embodiment of the present disclosure provides an apparatus for processing a face image, including: an acquisition module configured to acquire initial dense point cloud data of a target face and generate an initial virtual face image of the target face based on the initial dense point cloud data; a determination module configured to determine a deformation coefficient of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image; an adjustment module configured to adjust the deformation coefficient in response to an adjustment operation on the initial virtual face image, to obtain a target deformation coefficient; and a generation module configured to generate a target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus, and when the machine-readable instructions are executed by the processor, the steps of the processing method according to the first aspect are performed.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the processing method according to the first aspect are executed.
To make the above objects, features, and advantages of the present disclosure more apparent and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Description of the Drawings
To describe the technical solutions of the embodiments of the present disclosure more clearly, the accompanying drawings required in the embodiments are briefly introduced below. These drawings illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings show only some embodiments of the present disclosure and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a flowchart of a method for processing a face image provided by an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a three-dimensional model of a face represented by dense point cloud data provided by an embodiment of the present disclosure;
FIG. 3 shows a flowchart of a method for generating an initial virtual face image provided by an embodiment of the present disclosure;
FIG. 4 shows a flowchart of a method for determining dense point cloud data of a target face in a preset style provided by an embodiment of the present disclosure;
FIG. 5 shows a flowchart of a method for training a neural network provided by an embodiment of the present disclosure;
FIG. 6 shows a flowchart of a specific method for determining dense point cloud data of a target face in a preset style provided by an embodiment of the present disclosure;
FIG. 7 shows a flowchart of a method for determining a deformation coefficient provided by an embodiment of the present disclosure;
FIG. 8 shows a flowchart of a method for adjusting a deformation coefficient provided by an embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of an adjustment interface for a virtual face image provided by an embodiment of the present disclosure;
FIG. 10 shows a flowchart of a method for generating a target virtual face image of a target face provided by an embodiment of the present disclosure;
FIG. 11 shows a schematic structural diagram of an apparatus for processing a face image provided by an embodiment of the present disclosure;
FIG. 12 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated in the drawings herein, may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure provided in the accompanying drawings is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings.
The term "and/or" herein merely describes an association relationship and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: A exists alone, A and B exist at the same time, or B exists alone. In addition, the term "at least one" herein indicates any one of a plurality of items or any combination of at least two of a plurality of items; for example, "including at least one of A, B, and C" may indicate including any one or more elements selected from the set consisting of A, B, and C.
In the field of three-dimensional modeling, a face can be represented by a dense point cloud collected for that face. A dense point cloud representing a face generally contains tens of thousands of points; when the shape of a virtual face image of the face needs to be adjusted, the positions of those tens of thousands of points must be adjusted one by one, which is a cumbersome and inefficient process.
Based on the above research, the present disclosure provides a face image processing method. After the original dense point cloud data of a target face is acquired, a deformation coefficient of the initial dense point cloud data of the target face relative to standard dense point cloud data corresponding to a standard face image can be determined. In this way, a correspondence between the dense point cloud data and the deformation coefficient is established, so that when an adjustment operation on the initial virtual face image is detected, the deformation coefficient can be adjusted directly, thereby completing the adjustment of the initial virtual face image. This approach does not require adjusting the points in the dense point cloud data one by one, which improves adjustment efficiency; in addition, since the deformation coefficient is determined from the dense point cloud data, the adjustment of the initial virtual face image is more precise.
To facilitate understanding of this embodiment, a method for processing a face image disclosed by an embodiment of the present disclosure is first introduced in detail. The execution subject of the processing method provided by the embodiments of the present disclosure is generally a computer device with certain computing capability; the computer device includes, for example, a terminal device, a server, or another processing device, and the terminal device may be a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a handheld device, a computing device, a wearable device, or the like. In some possible implementations, the processing method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Referring to FIG. 1, an embodiment of the present disclosure provides a method for processing a face image, and the processing method includes the following steps S101 to S104.
S101: acquire initial dense point cloud data of a target face, and generate an initial virtual face image of the target face based on the initial dense point cloud data.
For example, dense point cloud data can represent a three-dimensional model of a face. Specifically, the dense point cloud data can contain the coordinate values, in a pre-built three-dimensional coordinate system, of multiple points on the face surface; the three-dimensional mesh (3D mesh) formed by connecting these points, together with their coordinate values, can be used to represent the three-dimensional model of the face. FIG. 2 is a schematic diagram of three-dimensional models of faces represented by different dense point cloud data: the more points a dense point cloud contains, the finer the three-dimensional model of the face it represents.
For example, the initial virtual face image may be a three-dimensional face image or a two-dimensional face image, depending on the specific application scenario. Correspondingly, when the initial virtual face image is a three-dimensional face image, the face images mentioned below are also three-dimensional face images; when the initial virtual face image is a two-dimensional face image, the face images mentioned below are also two-dimensional face images. The embodiments of the present disclosure are described by taking a three-dimensional virtual face image as an example.
For example, when the acquired initial dense point cloud data of the target face is the dense point cloud data corresponding to the target face in a preset style, such as a classical style, the initial virtual face image of the target face displayed based on that initial dense point cloud data is also a face image in the classical style. How to obtain the dense point cloud data corresponding to the target face in the preset style is described later.
S102: determine a deformation coefficient of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image.
For example, the standard dense point cloud data corresponding to the standard virtual face image here may be dense point cloud data corresponding to a preset virtual face image with a preset face shape and preset facial features; taking this standard virtual face image as a reference, the deformation coefficient of the initial dense point cloud data of the target face relative to the standard dense point cloud data can be determined.
For example, the deformation coefficient is associated with the dense point cloud data and can represent the deformation of the dense point cloud data relative to the standard dense point cloud data. Thus, the deformation coefficient corresponding to the target face can represent the deformation of the target face relative to the standard face, for example, a higher nose bridge, larger eyes, raised mouth corners, or smaller cheeks.
Specifically, the deformation coefficient includes at least one bone coefficient and/or at least one blend shape coefficient;
each bone coefficient is used to adjust the initial pose of a bone formed by a first dense point cloud associated with that bone coefficient, and each blend shape coefficient is used to adjust the initial position corresponding to a second dense point cloud associated with that blend shape coefficient.
For example, there may be multiple bone coefficients, which can be used to adjust the bones of the face. In a specific adjustment, the initial pose of a bone in a pre-built three-dimensional coordinate system (which may be a world coordinate system constructed in advance with one of the points of the face as the coordinate origin, described later) can be adjusted. Taking a bone coefficient corresponding to the nose bridge of the face as an example, adjusting this bone coefficient adjusts the initial pose of the first dense point cloud constituting the nose bridge, thereby completing the adjustment of the initial pose of the nose bridge, for example, making the nose bridge more upright.
There may also be multiple blend shape coefficients, each used to adjust the initial position, in the pre-built three-dimensional coordinate system, of the second dense point cloud associated with it, so as to adjust the face contour and the size, shape, and the like of the facial features. Taking a blend shape coefficient corresponding to the face contour as an example, adjusting this blend shape coefficient adjusts the initial positions of the second dense point cloud constituting the face contour, thereby adjusting the size and/or shape of the face contour, for example, reducing the size of a large round face or reshaping it into an oval face.
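A minimal sketch of how blend shape coefficients move their associated points: each coefficient scales a set of per-point offsets applied to the base cloud. The points, offsets, and coefficient names below are made-up illustrative values.

```python
import numpy as np

standard = np.array([[0.0, 0.0, 0.0],   # nose-tip point (illustrative)
                     [1.0, 0.0, 0.0],   # left-cheek point
                     [0.0, 1.0, 0.0]])  # right-cheek point

# Per-coefficient offsets of the second dense point cloud each coefficient drives.
deltas = {
    "nose_height": np.array([[0.0, 0.0, 0.2],
                             [0.0, 0.0, 0.0],
                             [0.0, 0.0, 0.0]]),
    "cheek_width": np.array([[0.0, 0.0, 0.0],
                             [-0.1, 0.0, 0.0],
                             [0.0, -0.1, 0.0]]),
}
coeffs = {"nose_height": 0.5, "cheek_width": 1.0}   # blend shape coefficients

adjusted = standard + sum(w * deltas[name] for name, w in coeffs.items())
```

Raising `nose_height` from 0.5 toward 1.0 would lift only the nose-tip point further, which is the sense in which one coefficient adjusts one localized group of points.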
For example, in response to different adjustment requirements, the first dense point cloud associated with a bone coefficient and the second dense point cloud associated with a blend shape coefficient may overlap in at least some points. Taking a bone coefficient used to adjust the pose of the nose tip as an example, adjusting the first dense point cloud associated with that bone coefficient achieves the adjustment of the pose of the nose tip; when the size of the nose tip needs to be adjusted, the second dense point cloud associated with the blend shape coefficient corresponding to the nose tip may be the same as the first dense point cloud associated with the bone coefficient used to adjust the pose of the nose tip. Of course, the first dense point cloud associated with a bone coefficient and the second dense point cloud associated with a blend shape coefficient may also be different dense point clouds, for example, the first dense point cloud associated with the bone coefficient used to adjust the pose of the nose tip and the second dense point cloud associated with a blend shape coefficient used to adjust the cheek size.
For example, to represent the deformation coefficient of the initial dense point cloud data of the target face relative to the standard dense point cloud data, a world coordinate system may be constructed in advance by taking one of the points in the dense point cloud of the target face as the origin and selecting three mutually perpendicular directions as the three coordinate axes. In this world coordinate system, the deformation coefficient of the initial dense point cloud data of the target face relative to the standard dense point cloud data can be determined; the specific determination process may be based on a machine learning algorithm and is described in detail later.
The embodiments of the present disclosure propose that the deformation coefficient includes bone coefficients used to adjust the initial poses of bones and blend shape coefficients used to adjust the initial positions of dense point clouds, so that the target face can be adjusted comprehensively based on the deformation coefficient.
S103: in response to an adjustment operation on the initial virtual face image, adjust the deformation coefficient to obtain a target deformation coefficient.
For example, when the initial virtual face image of the target face is displayed, operation buttons for adjusting the initial virtual face image can also be displayed, allowing the user to adjust the appearance of the displayed initial virtual face image through the operation buttons. To allow the user to adjust the initial virtual face image intuitively, correspondences between various positions to be adjusted and the deformation coefficients can be established in advance, for example, correspondences between the mouth, eyes, nose wings, eyebrows, and face shape and their respective deformation coefficients. This makes it convenient for the user to adjust the positions directly based on the displayed initial virtual face image, thereby achieving the purpose of adjusting the deformation coefficients.
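The pre-established correspondence between adjustable positions and deformation coefficients could be as simple as a lookup table; every identifier below is hypothetical and not a name used by the disclosure.

```python
# Hypothetical mapping from on-screen adjustable positions to the deformation
# coefficients (bone and/or blend shape) that each adjustment drives.
POSITION_TO_COEFFICIENTS = {
    "mouth": ["bs_mouth_width", "bs_mouth_corner"],
    "eyes": ["bs_eye_scale"],
    "nose_wing": ["bone_nose_yaw", "bs_nose_wing_width"],
    "eyebrows": ["bs_brow_height"],
    "face_shape": ["bs_jaw_width", "bs_cheek_width"],
}

def coefficients_for(position: str) -> list:
    """Return the deformation coefficients affected by adjusting `position`."""
    return POSITION_TO_COEFFICIENTS.get(position, [])
```

A UI slider for a position would then scale exactly the coefficients returned here, leaving the rest of the face untouched.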
S104: generate a target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data.
After the target deformation coefficient is obtained, the standard dense point cloud data can be adjusted based on the target deformation coefficient to obtain target dense point cloud data corresponding to the target face, and the target virtual face image corresponding to the target face can then be generated according to the target dense point cloud data.
The embodiments of the present disclosure propose determining, from dense point cloud data, the deformation coefficient used to adjust the virtual face image of the target face. This establishes a correspondence between the dense point cloud data and the deformation coefficient, so that the virtual face image can be adjusted directly based on the deformation coefficient, which improves adjustment efficiency compared with adjusting the points in the dense point cloud data one by one.
On the other hand, since the deformation coefficient is determined from the dense point cloud data, the points in the dense point cloud data can be adjusted directly based on the deformation coefficient when adjusting the initial virtual face image; the adjustment is thus precise down to the individual points constituting the virtual face image, improving adjustment accuracy while also improving adjustment efficiency.
Steps S101 to S104 above are described in detail below with reference to specific embodiments.
For S101 above, acquiring the initial dense point cloud data of the target face and displaying the initial virtual face image of the target face based on the initial dense point cloud data may include, as shown in FIG. 3, the following steps S201 to S203.
S201: acquire a first face image corresponding to the target face, and dense point cloud data respectively corresponding to a plurality of second face images of a preset style.
For example, the first face image corresponding to the target face may be a color face image of the target face collected by an image acquisition device, or a grayscale face image of the target face, which is not specifically limited here.
For example, the plurality of second face images are pre-selected images with certain features through which different first face images can be represented; for example, if n second face images are selected, each first face image can be represented by these n second face images and linear fitting coefficients. To allow the plurality of second face images to fit and represent most first face images, images of faces with prominent features relative to the average face can be selected as the second face images, for example, a face image of a face with a smaller face size than the average face, a face image of a face with a larger mouth than the average face, or a face image of a face with larger eyes than the average face. By selecting face images of faces with specific features as the second face images, the first face image can be represented by adjusting the linear fitting coefficients.
For example, the dense point cloud data respectively corresponding to each second face image in multiple styles can be acquired and saved in advance, such as the dense point cloud data corresponding to a classical style, a modern style, a Western style, and a Chinese style, which facilitates subsequently determining the virtual face models corresponding to the first face image in different styles.
示例性地,预先可以针对每张第二人脸图像,可以提取该张第二人脸图像对应的稠密点云数据、以及该张第二人脸图像的人脸参数值,比如可以提取第二张人脸图像的三维可变形模型(3D Morphable Face Model,3DMM)参数值,然后根据人脸参数值对稠密点云数据中多个点的坐标值进行调整,得到每张第二人脸图像在多种风格下分别对应的稠密点云数据,比如可以得到每张第二人脸图像在古典风格下对应的稠密点云数据、在卡通风格下对应的稠密点云数据,然后对每张第二人脸图像在不同风格下的稠密点云数据进行保存。Exemplarily, for each second face image, the dense point cloud data corresponding to the second face image and the face parameter value of the second face image can be extracted in advance, for example, the second face image can be extracted. The 3D Morphable Face Model (3DMM) parameter value of a face image, and then adjust the coordinate values of multiple points in the dense point cloud data according to the face parameter value to obtain each second face image in The dense point cloud data corresponding to various styles, for example, the dense point cloud data corresponding to each second face image in the classical style and the dense point cloud data corresponding to the cartoon style can be obtained, and then the corresponding dense point cloud data in each second face image can be obtained. Face images are stored in dense point cloud data in different styles.
示例性地,人脸参数值包括表示脸部形状的参数值,以及,表示面部表情的参数值,比如人脸参数值中可以包含K维度的用于表示面部形状的参数值,包含M维度的用于表示面部表情的参数值,其中,K维度的用于表示面部形状的参数值共同体现出该第二人脸图像的面部形状,M维度的用于表示面部表情的参数值共同体现出该第二人脸图像的面部表情。Exemplarily, the face parameter value includes a parameter value representing the shape of the face, and a parameter value representing the facial expression, for example, the face parameter value may include K-dimensional parameter values for representing the face shape, including M-dimensional parameter values. The parameter value used to represent the facial expression, wherein the parameter value used to represent the facial shape in the K dimension collectively reflects the facial shape of the second face image, and the parameter value used to represent the facial expression in the M dimension collectively represents the facial expression. The facial expression of the second face image.
Exemplarily, K generally ranges from 150 to 400: the smaller K is, the simpler the face shapes that can be represented, and the larger K is, the more complex the face shapes that can be represented. M generally ranges from 10 to 40: the fewer dimensions M has, the simpler the facial expressions that can be represented, and the more dimensions M has, the more complex the facial expressions that can be represented. It can be seen that the embodiments of the present disclosure propose representing a face with a relatively small number of face parameter values, which facilitates the subsequent determination of the initial virtual face model corresponding to the target face.
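As an illustrative sketch of this split (the concrete dimensions K = 200 and M = 30 and all variable names are assumptions chosen within the stated ranges, not taken from the disclosure), a face parameter vector can be partitioned into its shape and expression components:

```python
import numpy as np

K, M = 200, 30  # assumed dimensions within the 150-400 and 10-40 ranges

# a face parameter vector: K shape values followed by M expression values
face_params = np.zeros(K + M)
shape_params = face_params[:K]        # jointly encode the face shape
expression_params = face_params[K:]   # jointly encode the facial expression
```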
Exemplarily, in view of the meaning of the face parameter values, the above-mentioned adjustment of the coordinate values of multiple points in the dense point cloud according to the face parameter values to obtain the dense point cloud data corresponding to each second face image in the multiple styles can be understood as adjusting the coordinate values of the points of the dense point cloud in a pre-established three-dimensional coordinate system according to the face parameter values and the characteristic attributes respectively corresponding to the multiple styles (such as the characteristic attributes of a cartoon style or of a classical style), thereby obtaining the dense point cloud data corresponding to the second face image in each of the multiple styles.
S202: Determine the dense point cloud data of the target face in a preset style based on the first face image and the dense point cloud data respectively corresponding to multiple second face images of the preset style.
Exemplarily, an association between the first face image and the multiple second face images may be found, for example, by determining linear fitting coefficients between the multiple second face images and the first face image through linear fitting, and the dense point cloud data of the target face in the preset style may then be determined according to the linear fitting coefficients and the dense point cloud data respectively corresponding to the multiple second face images of the preset style.
S203: Generate and display an initial virtual face image of the target face in the preset style based on the dense point cloud data of the target face in the preset style.
After the dense point cloud data of the target face in the preset style is obtained, the initial virtual face image of the target face in the preset style can be generated and displayed according to the dense point cloud data corresponding to the target face. For example, the initial virtual face image of the target face can be displayed based on a default style or a style set by the user.
In the embodiments of the present disclosure, the dense point cloud data of the first face image in the preset style can be determined according to the dense point cloud data corresponding to each base image in a pre-stored base image library under the preset style, so that the virtual face image of the target face in the preset style can be displayed quickly.
For the above S202, the dense point cloud data includes the coordinate values of multiple points in the dense point cloud. When determining the dense point cloud data of the target face in the preset style based on the first face image and the dense point cloud data respectively corresponding to the multiple second face images of the preset style, as shown in FIG. 4, the following steps S301 to S302 may be included.
S301: Extract the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style, where the face parameter values include parameter values representing the face shape and parameter values representing the facial expression.
Exemplarily, a pre-trained neural network may be used here to extract the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style. For example, the first face image and each second face image may be respectively input into the pre-trained neural network to obtain the corresponding face parameter values.
S302: Determine the dense point cloud data of the target face in the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the multiple second face images of the preset style.
Considering that face parameter values and dense point cloud data correspond to each other when characterizing the same face, the association between the first face image and the multiple second face images of the preset style can be determined through their respective face parameter values, and the dense point cloud data of the target face in the preset style can then be determined according to this association and the dense point cloud data respectively corresponding to the multiple second face images of the preset style.
In the embodiments of the present disclosure, it is proposed that the dense point cloud data of the target face image in the preset style can be determined by combining the face parameter values of the first face image and of the multiple second face images. Because a relatively small number of parameter values are used when representing a face through face parameter values, the dense point cloud data of the target face in the preset style can be determined more quickly.
Exemplarily, the above-mentioned face parameter values are extracted by a pre-trained neural network, and the neural network is trained based on sample images pre-labeled with face parameter values.
In the embodiments of the present disclosure, extracting the face parameter values of a face image through a pre-trained neural network is proposed, which can improve the extraction efficiency of the face parameter values.
Specifically, the neural network may be pre-trained in the following manner. As shown in FIG. 5, the following steps S401 to S403 may be included.
S401: Acquire a sample image set, where the sample image set includes multiple sample images and the labeled face parameter values corresponding to each sample image.
S402: Input the multiple sample images into the neural network to obtain the predicted face parameter values corresponding to each sample image.
S403: Adjust the network parameter values of the neural network based on the predicted face parameter values and the labeled face parameter values corresponding to each sample image, to obtain a trained neural network.
Exemplarily, a large number of face images and the labeled face parameter values corresponding to each face image can be collected as the sample image set here. Inputting each sample image into the neural network yields the predicted face parameter values output by the neural network for that sample image. A third loss value corresponding to the neural network can then be determined based on the labeled face parameter values and the predicted face parameter values corresponding to the sample image, and the network parameter values of the neural network are adjusted according to the third loss value until the number of adjustments reaches a preset number and/or the third loss value is smaller than a third preset threshold, thereby obtaining the trained neural network.
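The adjustment loop of steps S401 to S403 can be sketched as follows. This is a minimal illustration, not the disclosed implementation: a linear model stands in for the neural network, and the learning rate, step limit, and loss threshold are assumed values playing the roles of the preset number of adjustments and the third preset threshold.

```python
import numpy as np

def train_parameter_extractor(samples, labels, lr=0.05,
                              max_steps=500, loss_threshold=1e-4):
    """Sketch of S401 to S403: fit an extractor to labeled samples.

    A linear model stands in for the neural network; max_steps and
    loss_threshold stand in for the preset number of adjustments and
    the third preset threshold.
    """
    n_features = samples.shape[1]
    n_params = labels.shape[1]
    W = np.zeros((n_features, n_params))        # network parameter values
    loss = np.mean(labels ** 2)                 # loss before any adjustment
    for _ in range(max_steps):                  # S402/S403 adjustment loop
        pred = samples @ W                      # predicted face parameter values
        loss = np.mean((pred - labels) ** 2)    # "third loss value"
        if loss < loss_threshold:               # adjustment cut-off condition
            break
        grad = 2 * samples.T @ (pred - labels) / len(samples)
        W -= lr * grad                          # adjust network parameter values
    return W, loss

# toy data standing in for sample images and labeled face parameter values
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))       # 100 "sample images", 8 features each
true_W = rng.normal(size=(8, 4))    # 4 face parameter dimensions
y = X @ true_W                      # labeled face parameter values
W, final_loss = train_parameter_extractor(X, y)
```

The loop stops as soon as either cut-off condition holds, mirroring the "preset number of adjustments and/or third preset threshold" wording above.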
In the embodiments of the present disclosure, during the training of the neural network used to extract face parameter values, it is proposed to continuously adjust the network parameter values of the neural network through the labeled face parameter values of each sample image, so that a neural network with high accuracy can be obtained.
Specifically, for the above S302, when determining the dense point cloud data of the target face in the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the multiple second face images of the preset style, as shown in FIG. 6, the following steps S3021 to S3022 may be included.
S3021: Determine linear fitting coefficients between the first face image and the multiple second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style.
S3022: Determine the dense point cloud data of the target face in the preset style according to the dense point cloud data and the linear fitting coefficients respectively corresponding to the multiple second face images of the preset style.
Exemplarily, take 3DMM parameter values as the face parameter values. Considering that the 3DMM parameter values of the first face image can characterize the face shape and expression corresponding to the first face image, and likewise the 3DMM parameter values corresponding to each second face image can characterize the face shape and expression corresponding to that second face image, the association between the first face image and the multiple second face images can be determined through the 3DMM parameter values. Specifically, assuming that the multiple second face images include L second face images, the linear fitting coefficients between the first face image and the multiple second face images likewise include L linear fitting coefficient values, and the association between the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images can be expressed by the following formula (1):
IN_3DMM = Σ_{x=1}^{L} α_x · BASE_3DMM(x)    (1)
where IN_3DMM denotes the 3DMM parameter values corresponding to the first face image; α_x denotes the linear fitting coefficient value between the first face image and the x-th second face image; BASE_3DMM(x) denotes the face parameter values corresponding to the x-th second face image; L denotes the number of second face images used when determining the face parameter values corresponding to the first face image; and x indicates the x-th second face image, where x ∈ [1, L].
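Formula (1) is a coefficient-weighted sum of the parameter vectors of the second face images. A minimal numpy sketch (the array shapes, sizes, and names are illustrative assumptions):

```python
import numpy as np

def fit_3dmm_params(alpha, base_params):
    """Formula (1): IN_3DMM = sum over x of alpha[x] * BASE_3DMM(x).

    alpha:       (L,) linear fitting coefficient values
    base_params: (L, D) 3DMM parameter values of the L second face images
    returns:     (D,) fitted 3DMM parameter values of the first face image
    """
    return base_params.T @ alpha

# toy example: L = 3 second face images, D = 5 parameter dimensions
alpha = np.array([0.2, -0.1, 0.4])
base = np.ones((3, 5))
fitted = fit_3dmm_params(alpha, base)  # each entry is 0.2 - 0.1 + 0.4 = 0.5
```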
In the embodiments of the present disclosure, the linear fitting coefficients representing the association between the first face image and the multiple second face images can be obtained quickly through a relatively small number of face parameter values, and the dense point cloud data of the multiple second face images of the preset style can then be adjusted according to the linear fitting coefficients, so that the dense point cloud data of the target face in the preset style can be obtained quickly.
Specifically, for the above S3021, determining the linear fitting coefficients between the first face image and the multiple second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the multiple second face images of the preset style includes the following steps S30211 to S30214.
S30211: Acquire the current linear fitting coefficients, where the initial current linear fitting coefficients are preset.
The current linear fitting coefficients may be linear fitting coefficients that have been adjusted at least once according to the following steps S30212 to S30214, or may be the initial linear fitting coefficients. In the case where the current linear fitting coefficients are the initial linear fitting coefficients, the initial linear fitting coefficients may be set empirically in advance.
S30212: Predict the current face parameter values of the first face image based on the current linear fitting coefficients and the face parameter values respectively corresponding to the multiple second face images.
Exemplarily, the face parameter values respectively corresponding to the multiple second face images can be extracted by the above-mentioned pre-trained neural network, and the current linear fitting coefficients and the face parameter values respectively corresponding to the multiple second face images can then be substituted into the above formula (1) to predict the current face parameter values of the first face image.
S30213: Determine a second loss value based on the predicted current face parameter values and the face parameter values of the first face image.
During the adjustment of the linear fitting coefficients, there is a certain gap between the predicted current face parameter values of the first face image and the face parameter values of the first face image extracted by the above-mentioned pre-trained neural network. Based on this gap, the second loss value between the extracted face parameter values of the first face image and the predicted face parameter values of the first face image can be determined.
S30214: Adjust the current linear fitting coefficients based on the second loss value and a constraint range corresponding to the preset linear fitting coefficients, and return to the step of predicting the current face parameter values based on the adjusted current linear fitting coefficients, until the adjustment operation on the current linear fitting coefficients meets a second adjustment cut-off condition, at which point the linear fitting coefficients between the first face image and the multiple second face images of the preset style are obtained based on the current linear fitting coefficients.
Exemplarily, considering that the face parameter values are used to represent the shape and size of the face, in order to prevent the dense point cloud data of the first face image later determined through the linear fitting coefficients from being distorted when characterizing the face, it is proposed here that in the process of adjusting the current linear fitting coefficients based on the second loss value, the adjustment should also be constrained by the preset constraint range of the linear fitting coefficients. For example, the constraint range corresponding to the preset linear fitting coefficients can be determined through statistics over a large amount of data and set to between -0.5 and 0.5, so that in the process of adjusting the current linear fitting coefficients based on the second loss value, each adjusted linear fitting coefficient is kept between -0.5 and 0.5.
Exemplarily, the current linear fitting coefficients are adjusted based on the second loss value and the constraint range corresponding to the preset linear fitting coefficients, so that the predicted current face parameter values of the first face image become closer to the face parameter values of the first face image extracted by the neural network. Based on the adjusted current linear fitting coefficients, the process then returns to S30212 until the adjustment operation on the current linear fitting coefficients meets the second adjustment cut-off condition, for example, after the second loss value is smaller than a second preset threshold and/or the number of adjustments to the current linear fitting coefficients reaches a preset number, at which point the linear fitting coefficients are obtained.
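The loop of S30211 to S30214 can be sketched as a constrained iterative fit. This is an illustrative assumption, not the disclosed implementation: gradient descent on a squared-error loss stands in for the unspecified adjustment rule, and the learning rate, step limit, and threshold are invented values, with each coefficient clamped to the [-0.5, 0.5] constraint range mentioned above.

```python
import numpy as np

def fit_coefficients(base_params, target_params, lr=0.05,
                     max_steps=1000, loss_threshold=1e-8, bound=0.5):
    """Sketch of S30211 to S30214: iteratively adjust the linear fitting
    coefficients so that formula (1) reproduces the target's parameters,
    clamping each coefficient to the preset constraint range.
    """
    L = base_params.shape[0]
    alpha = np.zeros(L)  # initial current linear fitting coefficients (preset)
    for _ in range(max_steps):
        pred = base_params.T @ alpha              # S30212: predict via formula (1)
        residual = pred - target_params
        loss = np.mean(residual ** 2)             # S30213: second loss value
        if loss < loss_threshold:                 # second adjustment cut-off
            break
        grad = 2 * base_params @ residual / len(target_params)
        alpha = np.clip(alpha - lr * grad, -bound, bound)  # S30214: constrained step
    return alpha

# toy example: 3 second face images, 6 parameter dimensions; the target is
# representable with coefficients inside the constraint range
base = np.eye(3, 6)
target = base.T @ np.array([0.3, -0.2, 0.1])
alpha = fit_coefficients(base, target)
```

The clamp inside the update is what keeps every adjusted coefficient within the preset constraint range throughout the iteration, rather than projecting only once at the end.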
In the embodiments of the present disclosure, in the process of adjusting the linear fitting coefficients between the first face image and the multiple second face images of the preset style, adjusting the linear fitting coefficients multiple times according to the second loss value and/or the number of adjustments can improve the accuracy of the linear fitting coefficients. On the other hand, constraining the adjustment through the preset constraint range of the linear fitting coefficients during the adjustment process yields linear fitting coefficients that allow the dense point cloud data corresponding to the target face to be determined more reasonably.
Specifically, the dense point cloud data includes the coordinate values of each point in the dense point cloud. For the above S3022, determining the dense point cloud data of the target face in the preset style according to the dense point cloud data and the linear fitting coefficients respectively corresponding to the multiple second face images of the preset style includes the following steps S30221 to S30224.
S30221: Determine the coordinate values of the corresponding points in the average dense point cloud data based on the coordinate values of each point in the dense point clouds respectively corresponding to the multiple second face images of the preset style.
Exemplarily, the coordinate values of each point in the average dense point cloud data corresponding to the multiple second face images of the preset style can be determined based on the coordinate values of each corresponding point in the multiple second face images and the number of second face images. For example, suppose there are 20 second face images and the dense point cloud data corresponding to each second face image includes the three-dimensional coordinate values of 100 points. For the first point, the three-dimensional coordinate values corresponding to the first point in the 20 second face images can be summed, and the value obtained by dividing the summation result by 20 is taken as the coordinate value of the corresponding first point in the average dense point cloud data. In the same way, the coordinate value of each point of the average dense point cloud data corresponding to the multiple second face images in the three-dimensional coordinate system can be obtained. In other words, the coordinate means of the mutually corresponding points in the dense point cloud data of the multiple second face images constitute the coordinate values of the corresponding points in the average dense point cloud data.
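The averaging in S30221 is a per-point mean across clouds. A short numpy sketch using the counts from the example above (the point coordinates themselves are random stand-ins):

```python
import numpy as np

# 20 second face images, each a dense point cloud of 100 points in 3D
rng = np.random.default_rng(0)
clouds = rng.normal(size=(20, 100, 3))

# S30221: the per-point mean across the 20 clouds gives the average dense
# point cloud data, one 3D coordinate per mutually corresponding point
mean_cloud = clouds.mean(axis=0)               # shape (100, 3)

# equivalent to the sum-then-divide description for the first point
first_point = clouds[:, 0, :].sum(axis=0) / 20
```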
S30222: Determine the coordinate difference values respectively corresponding to the multiple second face images of the preset style based on the coordinate values of each point in the dense point clouds respectively corresponding to the multiple second face images of the preset style and the coordinate values of the corresponding points in the average dense point cloud data.
Exemplarily, the coordinate values of the points in the average dense point cloud data can represent the average virtual face model corresponding to the multiple second face images. For example, the facial feature sizes represented by the coordinate values of the points in the average dense point cloud data can be the average facial feature sizes corresponding to the multiple second face images, the face size represented by those coordinate values can be the average face size corresponding to the multiple second face images, and so on.
Exemplarily, by taking the difference between the coordinate values of the dense point clouds respectively corresponding to the multiple second face images and the coordinate values of the corresponding points in the average dense point cloud data, the coordinate difference values of the coordinate values of the points in the dense point cloud of each second face image relative to the coordinate values of the corresponding points in the average dense point cloud data can be obtained (also referred to herein simply as the "coordinate difference values corresponding to the second face image"), thereby characterizing the difference between the virtual face model corresponding to that second face image and the above-mentioned average face model.
S30223: Determine the coordinate difference values corresponding to the first face image based on the coordinate difference values and the linear fitting coefficients respectively corresponding to the multiple second face images of the preset style.
Exemplarily, the linear fitting coefficients can represent the association between the face parameter values corresponding to the first face image and the face parameter values respectively corresponding to the multiple second face images, and since the face parameter values of a face image correspond to the dense point cloud data of that face image, the linear fitting coefficients can also represent the association between the dense point cloud data corresponding to the first face image and the dense point cloud data respectively corresponding to the multiple second face images.
In the case of corresponding to the same average dense point cloud data, the linear fitting coefficients can also represent the association between the coordinate difference values corresponding to the first face image and the coordinate difference values respectively corresponding to the multiple second face images. Therefore, the coordinate difference values of the dense point cloud data corresponding to the first face image relative to the average dense point cloud data can be determined here based on the coordinate difference values and the linear fitting coefficients respectively corresponding to the multiple second face images.
S30224: Determine the dense point cloud data of the target face in the preset style based on the coordinate difference values corresponding to the first face image and the coordinate values of the corresponding points in the average dense point cloud data.
By summing the coordinate difference values corresponding to the first face image and the coordinate values of the corresponding points in the average dense point cloud data, the dense point cloud data corresponding to the first face image can be obtained, and the virtual face model corresponding to the first face image can be represented based on this dense point cloud data.
Specifically, when determining the dense point cloud data corresponding to the target face here, considering the relationship between the dense point cloud data and the 3DMM, the dense point cloud data corresponding to the target face (the first face image) can be denoted by OUT_3dmesh and can be determined according to the following formula (2):
OUT_3dmesh = BASE_mean_3dmesh + Σ_{x=1}^{L} α_x · (BASE_3dmesh(x) - BASE_mean_3dmesh)    (2)

where BASE_3dmesh(x) denotes the coordinate values of the dense point cloud corresponding to the x-th second face image; BASE_mean_3dmesh denotes the coordinate values of the corresponding points in the average dense point cloud data determined from the multiple second face images; and the summation term Σ_{x=1}^{L} α_x · (BASE_3dmesh(x) - BASE_mean_3dmesh) represents the coordinate difference values of the coordinate values of the points corresponding to the first face image relative to the coordinate values of the corresponding points in the average dense point cloud data.
Here, the dense point cloud data of the first face image is determined in the manner of steps S30221 to S30224, that is, in the manner of the above formula (2). Compared with determining the dense point cloud data corresponding to the target face directly from the dense point cloud data and the linear fitting coefficients respectively corresponding to the multiple second face images, this manner has the following benefits.
In the embodiments of the present disclosure, considering that the linear fitting coefficients are used to linearly fit the coordinate difference values respectively corresponding to the multiple second face images, what is obtained in this way is the coordinate difference values of the coordinate values of the points corresponding to the first face image relative to the coordinate values of the corresponding points in the average dense point cloud data (also referred to herein simply as the "coordinate difference values corresponding to the first face image"). Therefore, there is no need to require that the sum of these linear fitting coefficients equal 1: after the coordinate difference values corresponding to the first face image are added to the coordinate values of the corresponding points in the average dense point cloud data, the resulting dense point cloud data can still represent a normal face image.
In addition, in the case where there are few second face images, the manner provided by the embodiments of the present disclosure can, by reasonably adjusting the linear fitting coefficients, achieve the goal of determining the dense point cloud data corresponding to the target face in the preset style using a smaller number of second face images. For example, if the eyes of the first face image are small, the above manner does not require any restriction on the eye sizes of the multiple second face images; instead, the coordinate difference values can be adjusted through the linear fitting coefficients so that, after the adjusted coordinate difference values are superimposed on the coordinate values of the corresponding points in the average dense point cloud data, dense point cloud data representing small eyes can be obtained. Specifically, even when the multiple second face images all have large eyes, so that the eyes represented by the corresponding average dense point cloud data are also large, the linear fitting coefficients can still be adjusted so that summing the adjusted coordinate difference values with the coordinate values of the corresponding points in the average dense point cloud data yields dense point cloud data representing small eyes.
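The pipeline of S30221 to S30224, i.e. formula (2), can be sketched in a few lines of numpy (array shapes and names are illustrative assumptions):

```python
import numpy as np

def target_dense_cloud(alpha, base_clouds):
    """Formula (2): OUT_3dmesh = mean + sum over x of alpha[x] * (BASE(x) - mean).

    alpha:       (L,) linear fitting coefficients
    base_clouds: (L, N, 3) dense point clouds of the L second face images
    returns:     (N, 3) dense point cloud data of the target face
    """
    mean_cloud = base_clouds.mean(axis=0)                 # S30221
    diffs = base_clouds - mean_cloud                      # S30222
    target_diff = np.tensordot(alpha, diffs, axes=1)      # S30223
    return mean_cloud + target_diff                       # S30224

# toy example: two base clouds of 4 points, all-zeros and all-ones
base_clouds = np.stack([np.zeros((4, 3)), np.ones((4, 3))])
out = target_dense_cloud(np.array([-0.5, 0.5]), base_clouds)
```

Note that the coefficients in the toy example sum to 0, not 1; because they weight differences from the average cloud rather than the clouds themselves, no sum-to-one constraint is needed, which is the point made above.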
It can be seen that, for different first face images, the embodiments of the present disclosure do not need to select a second face image whose facial features are similar to those of the first face image in order to determine the dense point cloud data corresponding to the first face image. In the case where there are few second face images, this manner can accurately represent the dense point cloud data of different target faces in the preset style through the dense point cloud data of a diverse set of second face images.
按照上述方式,可以得到目标人脸在预设风格下的稠密点云数据,比如得到目标人脸在古典风格下的稠密点云数据,进一步基于该稠密点云数据展示目标人脸在古典风格下的初始虚拟人脸图像。According to the above method, the dense point cloud data of the target face in the preset style can be obtained, for example, the dense point cloud data of the target face in the classical style can be obtained, and further based on the dense point cloud data, the target face in the classical style can be displayed the initial virtual face image.
For the above S102, determining the deformation coefficients of the initial dense point cloud data relative to the standard dense point cloud data corresponding to the standard virtual face image includes, as shown in FIG. 7, the following steps S501 to S504.
S501: Adjust the standard dense point cloud data based on the current deformation coefficients to obtain the adjusted dense point cloud data, where the initial values of the current deformation coefficients are preset.
For example, when the deformation coefficients include bone coefficients, the transformation matrix used to adjust the first dense point cloud associated with a bone coefficient can be jointly determined from the current bone coefficient and the initial bone transformation matrix; when the deformation coefficients include blend shape coefficients, the displacement used to adjust the second dense point cloud associated with a blend shape coefficient can be jointly determined from the current blend shape coefficient and the unit blend shape displacement. Details are given below.
For example, to explain the process of adjusting the standard dense point cloud data, a bone coordinate system and the transformation relationship between the bone coordinate system and the world coordinate system may be introduced. The bone coordinate system is a three-dimensional coordinate system established for each bone, i.e., the local coordinate system of that bone, while the world coordinate system is a three-dimensional coordinate system established for the entire face. There is a transformation relationship between each bone's local coordinate system and the world coordinate system, according to which the position of a dense point cloud point in a bone coordinate system can be converted to its position in the world coordinate system.
In particular, the process of adjusting the standard dense point cloud data based on the current deformation coefficients can be divided into two cases. In the first case, adjusting the corresponding points in the standard dense point cloud data based on the blend shape coefficients is affected by the bone coefficients, as described below in connection with formula (3). In the second case, adjusting the corresponding points in the standard dense point cloud data based on the blend shape coefficients is not affected by the bone coefficients, as described below in connection with formula (4).
Specifically, in the first case, the adjusted dense point cloud data can be determined according to the following formula (3):
\( V_{output(m)} = \sum_{i=1}^{n} boneweight_{(i)} \, M_{boneworld(i)} \, M_{bindpose(i)}^{-1} \left( V_{local(mi)} + bsweights(i) \cdot blendshape(mi) \right) \quad (3) \)
where V_output(m) is the coordinate value, in the world coordinate system established in advance for the face, of the m-th point of the standard dense point cloud data during the adjustment; M_boneworld(i) denotes the transformation matrix from the bone coordinate system of the i-th bone to the world coordinate system; M_bindpose(i) denotes the preset initial bone transformation matrix of the i-th bone in that bone's coordinate system; boneweight_(i) denotes the value of the i-th bone in the bone coordinate system of the i-th bone; V_local(mi) denotes the initial coordinate value of the m-th point of the standard dense point cloud data in the bone coordinate system of the i-th bone (when the m-th point does not belong to the i-th bone, this initial coordinate value is 0); blendshape(mi) denotes the preset unit displacement, in the bone coordinate system of the i-th bone, of the blend shape coefficient associated with the m-th point; bsweights(i) denotes the coordinate value, in the bone coordinate system of the i-th bone, of the blend shape coefficient associated with the m-th point; i indexes the i-th bone, i ∈ (1, n); n denotes the number of bones corresponding to the standard virtual face image; and m denotes the m-th point of the dense point cloud data.
It can be seen that, in the first case, after the coordinate values of the points of the standard dense point cloud data in the bone coordinate systems are adjusted based on the blend shape coefficients, the bone deformation coefficients must still be applied before the coordinate values of the points in the world coordinate system can finally be determined. That is, as mentioned above, adjusting the dense point cloud of the standard dense point cloud data based on the blend shape coefficients is affected by the bone coefficients.
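For illustration, the first-case adjustment of formula (3) resembles linear blend skinning with the blend-shape offset applied in each bone's local frame before skinning. The sketch below is a minimal, hedged implementation for a single point under that reading; all function and parameter names are illustrative and not taken from the disclosure:

```python
import numpy as np

def adjust_point_case1(M_boneworld, M_bindpose, bone_weights,
                       V_local, blendshape, bs_weights):
    """Sketch of formula (3): displace the point in each bone's local
    coordinate system, then skin it, so the blend-shape result is
    affected by the bone coefficients.

    M_boneworld : (n, 4, 4) bone-to-world transforms
    M_bindpose  : (n, 4, 4) initial (bind-pose) bone transforms
    bone_weights: (n,) skinning weight of the point for each bone
    V_local     : (n, 3) position of the point in each bone's frame
                  (zeros where the point is not bound to that bone)
    blendshape  : (n, 3) unit blend-shape displacement per bone frame
    bs_weights  : (n,) blend-shape coefficient per bone
    """
    v_world = np.zeros(3)
    for i in range(len(bone_weights)):
        # displace the point in bone i's local frame, then skin it
        p_local = V_local[i] + bs_weights[i] * blendshape[i]
        p_h = np.append(p_local, 1.0)                      # homogeneous
        skin = M_boneworld[i] @ np.linalg.inv(M_bindpose[i])
        v_world += bone_weights[i] * (skin @ p_h)[:3]
    return v_world
```

With identity transforms and a single bone of weight 1, the output is simply the locally displaced point, which matches the intuition that the skinning step only rotates/translates the already blend-shaped position.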
In the second case, the adjusted dense point cloud data can be determined according to the following formula (4):
\( V_{output(m)} = \sum_{i=1}^{n} boneweight'_{(i)} \, M'_{boneworld(i)} \, M'^{-1}_{bindpose(i)} \, V'_{local(mi)} + bsweights'(m) \cdot blendshape'(m) \quad (4) \)
where V_output(m) is the coordinate value, in the world coordinate system, of the m-th point of the standard dense point cloud data during the adjustment; M'_boneworld(i) denotes the transformation matrix from the bone coordinate system of the i-th bone to the world coordinate system; M'_bindpose(i) denotes the preset initial bone transformation matrix of the i-th bone in that bone's coordinate system; boneweight'_(i) denotes the value of the i-th bone in the bone coordinate system of the i-th bone; V'_local(mi) denotes the initial position of the m-th point of the standard dense point cloud data in the bone coordinate system of the i-th bone (when the m-th point does not belong to the i-th bone, this initial position is 0); blendshape'(m) denotes the preset unit displacement, in the world coordinate system, of the blend shape coefficient associated with the m-th point; bsweights'(m) denotes the value, in the world coordinate system, of the blend shape coefficient associated with the m-th point; i indexes the i-th bone, i ∈ (1, n); and n denotes the number of bones to be adjusted.
It can be seen that, in the second case, the coordinate values of the points of the standard dense point cloud data in the world coordinate system can be adjusted directly based on the blend shape coefficients; that is, as mentioned above, adjusting the points of the standard dense point cloud data based on the blend shape coefficients is not affected by the bone coefficients.
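The second case differs only in where the blend-shape term enters: the point is skinned first and the offset is added afterwards in world coordinates. A hedged single-point sketch of formula (4), with illustrative names as before:

```python
import numpy as np

def adjust_point_case2(M_boneworld, M_bindpose, bone_weights,
                       V_local, blendshape_world, bs_weight):
    """Sketch of formula (4): skin the point first, then add the
    blend-shape offset directly in world coordinates, so the offset is
    unaffected by the bone coefficients."""
    v_world = np.zeros(3)
    for i in range(len(bone_weights)):
        p_h = np.append(V_local[i], 1.0)                   # homogeneous
        skin = M_boneworld[i] @ np.linalg.inv(M_bindpose[i])
        v_world += bone_weights[i] * (skin @ p_h)[:3]
    # world-space blend-shape term, applied outside the bone sum
    return v_world + bs_weight * blendshape_world
```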
Both formula (3) and formula (4) describe the adjustment of a single point of the standard dense point cloud data. The other points of the standard dense point cloud data can be adjusted in turn in the same manner, completing one adjustment of the standard dense point cloud data based on the current deformation coefficients.
S502: Determine a first loss value based on the adjusted dense point cloud data and the initial dense point cloud data of the target face.
For example, the first loss value may be represented by the difference between the initial dense point cloud data of the target face and the adjusted dense point cloud data.
Specifically, the first loss value can be expressed by the following formula (5):
\( V_{diff} = \sum_{m=1}^{M} \left\| V_{input(m)} - V_{output(m)} \right\| \quad (5) \)
where V_diff denotes the first loss value of the adjusted dense point cloud data relative to the initial dense point cloud data of the target face; V_input(m) denotes the coordinate value, in the world coordinate system, of the m-th point of the initial dense point cloud data of the target face; V_output(m) denotes the coordinate value, in the world coordinate system, of the m-th point of the adjusted dense point cloud data; m denotes the m-th point of the dense point cloud data; and M denotes the number of points in the dense point cloud data.
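The first loss value aggregates per-point differences over the whole cloud. A minimal sketch, assuming a summed Euclidean distance (the disclosure specifies only a difference between the two clouds, not the exact norm):

```python
import numpy as np

def first_loss(V_input, V_output):
    """Formula (5) sketch: sum of per-point Euclidean distances between
    the target face's initial dense point cloud (M x 3) and the adjusted
    dense point cloud (M x 3)."""
    return float(np.linalg.norm(V_input - V_output, axis=1).sum())
```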
S503: Adjust the current deformation coefficients based on the first loss value and the preset constraint range of the deformation coefficients; based on the adjusted current deformation coefficients, return to the step of adjusting the standard dense point cloud data until the adjustment of the current deformation coefficients satisfies a first adjustment cut-off condition, thereby obtaining the deformation coefficients of the initial dense point cloud data relative to the standard dense point cloud data.
For example, considering that the current deformation coefficients are the deformation coefficients of the target face relative to the standard face, i.e., that they are meant to represent a normal facial appearance, and in order to prevent the current deformation coefficients from being adjusted so drastically that the facial appearance they represent becomes distorted, it is proposed here that, when adjusting the current deformation coefficients based on the loss function value, the adjustment should be constrained by the preset constraint range of the deformation coefficients. Specifically, the deformation coefficients constrained here are the blend shape coefficients; for example, the blend shape coefficients may be constrained to values between 0 and 1.
For example, the current deformation coefficients are adjusted based on the first loss value and the constraint range corresponding to the preset deformation coefficients, so that the initial dense point cloud data of the target face and the adjusted dense point cloud data become closer; the process then returns to S501 based on the adjusted current deformation coefficients, until the adjustment of the current deformation coefficients satisfies the first adjustment cut-off condition, for example when the first loss value is smaller than a first preset threshold and/or the number of adjustments of the current deformation coefficients reaches a preset number, at which point the deformation coefficients corresponding to the target face are obtained.
In the embodiments of the present disclosure, the deformation coefficients are determined by adjusting multiple points of the standard dense point cloud data, so the resulting deformation coefficients can represent the precise change of the initial dense point cloud of the target face relative to the standard dense point cloud. Therefore, when the initial virtual face image of the target face needs to be adjusted, the associated points of the dense point cloud data can be adjusted based on the deformation coefficients, improving the adjustment accuracy.
On the other hand, in the process of determining the deformation coefficients, the loss value is determined from the adjusted dense point cloud data and the initial dense point cloud data of the target face only after all dense point cloud points have been adjusted; the optimization of the current deformation coefficients thus fully accounts for the correlation between the deformation coefficients and the overall dense point cloud, improving optimization efficiency. In addition, constraining the adjustment through the preset constraint range of the deformation coefficients prevents the deformation coefficients from degenerating into values that cannot represent a normal target face.
For the above S103, adjusting the deformation coefficients in response to an adjustment operation on the initial virtual face image to obtain the target deformation coefficients may include, as shown in FIG. 8, the following steps S601 to S602.
S601: In response to the adjustment operation on the initial virtual face image, determine a target adjustment position of the initial virtual face image and an adjustment amplitude for the target adjustment position.
S602: Adjust the deformation coefficients associated with the target adjustment position according to the adjustment amplitude to obtain the target deformation coefficients.
For example, when adjusting the initial virtual face image, considering that the initial virtual face image contains many adjustable positions, these positions may be grouped into categories before being presented to the user, for example by facial region, such as the chin region, the eyebrow region, and the eye region. Correspondingly, adjustment operation buttons corresponding to the chin region, the eyebrow region, the eye region, and so on can be displayed, and the user can select a target adjustment region through the buttons for the different regions. Alternatively, a set number of adjustment positions and their adjustment buttons can be shown to the user at a time, together with an indicator button for switching to other adjustment positions. For example, as shown in FIG. 9, the left part of FIG. 9 shows an adjustment interface with six adjustment positions, namely amplitude bars for nose wing up/down, nose bridge height, nose tip size, nose tip orientation, mouth size, and the upper and lower parts of the mouth. The user can drag an amplitude bar to adjust the corresponding position, or, after selecting an adjustment position, adjust it through buttons located above it, such as a "minus one" button and a "plus one" button. The lower right corner of the adjustment interface also shows an arrow button for switching adjustment positions; the user can trigger this button to switch to the six adjustment positions shown in the right part of FIG. 9.
Specifically, for each adjustment position, the adjustment amplitude can be determined from the amplitude bar corresponding to that position. When the user adjusts the amplitude bar of one of the adjustment positions, that position is taken as the target adjustment position, the adjustment amplitude for it is determined from the change data of the amplitude bar, and the deformation coefficients associated with the target adjustment position are then adjusted according to that amplitude and the preset association between each adjustment position and the deformation coefficients, yielding the target deformation coefficients.
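The mapping from an amplitude-bar change to coefficient updates can be sketched as follows. The linear gain per coefficient and the clamping to a constraint range are assumptions for illustration; the disclosure states only that the coefficients associated with the target adjustment position are adjusted according to the amplitude:

```python
def apply_slider(coeffs, slider_map, target_slot, amplitude, lo=0.0, hi=1.0):
    """Illustrative mapping from a UI amplitude change to the deformation
    coefficients associated with a target adjustment position.

    coeffs     : dict of coefficient name -> current value
    slider_map : dict of adjustment position -> [(coefficient, gain), ...]
    amplitude  : normalized amplitude-bar change, e.g. in [-1, 1]
    """
    adjusted = dict(coeffs)  # do not mutate the caller's coefficients
    for name, gain in slider_map.get(target_slot, []):
        # linear update, clamped to the coefficient's constraint range
        adjusted[name] = min(hi, max(lo, adjusted[name] + gain * amplitude))
    return adjusted
```

For instance, dragging a hypothetical "nose_bridge" bar fully up would raise each linked coefficient by its gain, clamped so the coefficients stay within their constraint range.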
In the embodiments of the present disclosure, the target deformation coefficients can be determined according to the adjustment operation, so that the target virtual face image can later be determined based on the target deformation coefficients; in this way, the deformation coefficients can be adjusted according to individual user needs.
For the above S104, generating the target virtual face image corresponding to the target face based on the target deformation coefficients and the standard dense point cloud data may include, as shown in FIG. 10, the following steps S801 to S802.
S801: Adjust the standard dense point cloud data based on the target deformation coefficients to obtain target dense point cloud data.
S802: Generate the target virtual face image based on the target dense point cloud data.
For example, the target deformation coefficients may include both the changed deformation coefficients associated with the target adjustment position and the unchanged deformation coefficients that were not adjusted. Considering that the deformation coefficients are determined from the initial dense point cloud data of the target face relative to the standard dense point cloud data, when adjusting the initial virtual face image based on the target deformation coefficients, the target dense point cloud data corresponding to the target face can be obtained from the target deformation coefficients and the standard dense point cloud data, and the target virtual face image can then be generated based on that target dense point cloud data.
For example, as shown in FIG. 9 above, if the user clicks to adjust the nose bridge height and raises it, the nose bridge of the target virtual face image becomes higher than that of the initial virtual face image.
In the embodiments of the present disclosure, after the target deformation coefficients are determined, the standard dense point cloud data can be adjusted directly according to the target deformation coefficients to determine the target dense point cloud data, so that the target virtual face image corresponding to the target face can be obtained quickly from the target dense point cloud data.
Specifically, generating the target virtual face image based on the target dense point cloud data includes the following steps S8021 to S8022.
S8021: Determine a virtual face model corresponding to the target dense point cloud data.
S8022: Generate the target virtual face image based on preselected face attribute features and the virtual face model.
For example, the virtual face model may be a three-dimensional face model or a two-dimensional face model, depending on the specific application scenario, which is not limited here.
For example, the face attribute features may include skin color, hairstyle, and the like, and may be determined according to the user's selection; for instance, the user may choose a fair skin tone and brown curly hair.
After the target dense point cloud data is obtained, a target virtual face model, which may include the shape and expression features of the target face, can be generated based on the target dense point cloud data; combined with the face attribute features, a target virtual face image that meets the user's individual needs can then be generated.
In the embodiments of the present disclosure, when adjusting the initial virtual face image, the adjustment can also be personalized by incorporating the face attribute features selected by the user, so that the target virtual face image better matches the user's actual needs.
The processing of a face image will now be described with a specific embodiment, which includes the following steps S901 to S904.
S901: For the input target face, a computer reads the initial dense point cloud data V_input of the input target face (where V_input denotes the coordinate values of the M points of the dense point cloud), and then obtains the standard dense point cloud data corresponding to the standard virtual face image and the preset initial deformation coefficients (including the initial bone deformation coefficients and the initial blend shape coefficients).
S902: Adjust the standard dense point cloud data according to the initial bone deformation coefficients and the initial blend shape coefficients to obtain the adjusted dense point cloud data V_output (where V_output denotes the adjusted coordinate values of the M points of the dense point cloud); specifically, the adjustment can be performed through the above formula (3) or formula (4).
S903: Compute the difference V_diff = V_input − V_output between the initial dense point cloud data V_input of the target face and the adjusted dense point cloud data V_output, and adjust the initial bone deformation coefficients and the initial blend shape coefficients through this difference and the constraint term on the initial blend shape coefficients.
S904: Replace the initial bone deformation coefficients with the adjusted bone deformation coefficients and the initial blend shape coefficients with the adjusted blend shape coefficients, and return to step S902 to continue adjusting the bone coefficients and blend shape coefficients until the difference between the initial dense point cloud data V_input of the target face and the adjusted dense point cloud data V_output is smaller than the first preset threshold, or the number of iterations exceeds the preset number.
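The S901 to S904 loop can be sketched as an iterative fitting routine. This is a minimal sketch under stated assumptions: the disclosure does not specify the optimizer, so a numerical-gradient descent step stands in for it, and the cut-off conditions and the clamping of the coefficients to a constraint range follow the description above; `adjust_fn` abstracts the formula (3)/(4) adjustment:

```python
import numpy as np

def fit_deformation(V_input, adjust_fn, coeffs0, lr=0.01,
                    eps=1e-3, max_iters=200, lo=0.0, hi=1.0):
    """Iteratively adjust the standard dense point cloud with the current
    coefficients, measure the loss against the target cloud, and update
    the coefficients until the loss drops below a threshold or the
    iteration budget is spent.

    adjust_fn(coeffs) -> adjusted (M x 3) point cloud.
    """
    coeffs = np.array(coeffs0, dtype=float)
    for _ in range(max_iters):
        V_output = adjust_fn(coeffs)                 # S902
        loss = np.linalg.norm(V_input - V_output)    # S903 difference
        if loss < eps:                               # first cut-off
            break
        # numerical gradient of the loss w.r.t. each coefficient
        grad = np.zeros_like(coeffs)
        h = 1e-5
        for k in range(len(coeffs)):
            c2 = coeffs.copy()
            c2[k] += h
            grad[k] = (np.linalg.norm(V_input - adjust_fn(c2)) - loss) / h
        # S904: replace coefficients, clamped to the constraint range
        coeffs = np.clip(coeffs - lr * grad, lo, hi)
    return coeffs
```

As a toy usage, fitting a single scale coefficient so that a unit cloud matches a half-size target drives the coefficient toward 0.5 while keeping it inside [0, 1].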
Those skilled in the art will understand that, in the above method of the specific implementation, the order in which the steps are written does not imply a strict execution order or impose any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
Based on the same technical concept, the embodiments of the present disclosure also provide a processing apparatus corresponding to the face image processing method. Since the principle by which the apparatus in the embodiments of the present disclosure solves the problem is similar to the above processing method of the embodiments of the present disclosure, the implementation of the apparatus can refer to the implementation of the method, and repeated details are omitted.
Referring to FIG. 11, an embodiment of the present disclosure provides a face image processing apparatus 1000. The processing apparatus includes: an acquisition module 1001, configured to acquire the initial dense point cloud data of a target face and generate an initial virtual face image of the target face based on the initial dense point cloud data; a determination module 1002, configured to determine the deformation coefficients of the initial dense point cloud data relative to the standard dense point cloud data corresponding to a standard virtual face image; an adjustment module 1003, configured to adjust the deformation coefficients in response to an adjustment operation on the initial virtual face image to obtain target deformation coefficients; and a generation module 1004, configured to generate a target virtual face image corresponding to the target face based on the target deformation coefficients and the standard dense point cloud data.
In a possible implementation, the deformation coefficients include at least one of at least one bone coefficient and at least one blend shape coefficient, where each bone coefficient is used to adjust the initial pose of the bone formed by the first dense point cloud associated with that bone coefficient, and each blend shape coefficient is used to adjust the initial position corresponding to the second dense point cloud associated with that blend shape coefficient.
In a possible implementation, when determining the deformation coefficients of the initial dense point cloud data relative to the standard dense point cloud data, the determination module 1002 is configured to: adjust the standard dense point cloud data based on current deformation coefficients to obtain adjusted dense point cloud data, where the initial values of the current deformation coefficients are preset; determine a first loss value based on the adjusted dense point cloud data and the initial dense point cloud data; adjust the current deformation coefficients based on the first loss value and the preset constraint range of the deformation coefficients; and, based on the adjusted current deformation coefficients, return to the step of adjusting the standard dense point cloud data until the adjustment of the current deformation coefficients satisfies the first adjustment cut-off condition, obtaining, from the current deformation coefficients, the deformation coefficients of the initial dense point cloud data relative to the standard dense point cloud data.
In a possible implementation, when adjusting the deformation coefficients in response to an adjustment operation on the initial virtual face image to obtain the target deformation coefficients, the adjustment module 1003 is configured to: in response to the adjustment operation on the initial virtual face image, determine a target adjustment position of the initial virtual face image and an adjustment amplitude for the target adjustment position; and adjust the deformation coefficients associated with the target adjustment position according to the adjustment amplitude to obtain the target deformation coefficients.
In a possible implementation, when generating the target virtual face image corresponding to the target face based on the target deformation coefficients and the standard dense point cloud data, the generation module 1004 is configured to: adjust the standard dense point cloud data based on the target deformation coefficients to obtain target dense point cloud data; and generate the target virtual face image based on the target dense point cloud data.
在一种可能的实施方式中,生成模块1004在用于基于目标稠密点云数据,生成目标虚拟人脸图像时,包括:确定与目标稠密点云数据对应的虚拟人脸模型;基于预选的人脸属性特征和虚拟人脸模型,生成目标虚拟人脸图像。In a possible implementation, when the generating module 1004 is used to generate the target virtual face image based on the target dense point cloud data, the method includes: determining a virtual face model corresponding to the target dense point cloud data; Face attribute features and virtual face model to generate target virtual face image.
在一种可能的实施方式中,获取模块1001在用于获取目标人脸的初始稠密点云数据,并基于初始稠密点云数据展示目标人脸的初始虚拟人脸图像时,包括:获取目标人脸对应的第一人脸图像,以及预设风格的多张第二人脸图像分别对应的稠密点云数据;基于第一人脸图像和预设风格的多张第二人脸图像分别对应的稠密点云数据,确定目标人脸在预设风格下的稠密点云数据;基于目标人脸在预设风格下的稠密点云数据,生成并展示目标人脸在预设风格下的初始虚拟人脸图像。In a possible implementation manner, when the acquisition module 1001 is used to acquire initial dense point cloud data of the target face, and display the initial virtual face image of the target face based on the initial dense point cloud data, it includes: acquiring the target person The first face image corresponding to the face, and the dense point cloud data corresponding to the multiple second face images of the preset style respectively; based on the first face image and the multiple second face images of the preset style respectively corresponding to Dense point cloud data, determine the dense point cloud data of the target face in the preset style; generate and display the initial virtual human of the target face in the preset style based on the dense point cloud data of the target face in the preset style face image.
In a possible implementation, when determining the dense point cloud data of the target face in the preset style based on the first face image and the dense point cloud data respectively corresponding to the plurality of second face images of the preset style, the acquiring module 1001 is configured to: extract face parameter values of the first face image and face parameter values respectively corresponding to the plurality of second face images of the preset style, where the face parameter values include parameter values characterizing a face shape and parameter values characterizing a facial expression; and determine the dense point cloud data of the target face in the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the plurality of second face images of the preset style.
In a possible implementation, when determining the dense point cloud data of the target face in the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the plurality of second face images of the preset style, the acquiring module 1001 is configured to: determine linear fitting coefficients between the first face image and the plurality of second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the plurality of second face images of the preset style; and determine the dense point cloud data of the target face in the preset style according to the dense point cloud data respectively corresponding to the plurality of second face images of the preset style and the linear fitting coefficients.
In a possible implementation, when determining the linear fitting coefficients between the first face image and the plurality of second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the plurality of second face images of the preset style, the acquiring module 1001 is configured to: acquire current linear fitting coefficients, initial values of the current linear fitting coefficients being preset; predict current face parameter values of the first face image based on the current linear fitting coefficients and the face parameter values respectively corresponding to the plurality of second face images of the preset style; determine a second loss value based on the predicted current face parameter values and the face parameter values of the first face image; adjust the current linear fitting coefficients based on the second loss value and a preset constraint range corresponding to the linear fitting coefficients; and, based on the adjusted current linear fitting coefficients, return to the step of predicting the current face parameter values until the adjustment of the current linear fitting coefficients satisfies a second adjustment cut-off condition, at which point the linear fitting coefficients between the first face image and the plurality of second face images of the preset style are obtained from the current linear fitting coefficients.
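The iterative loop just described can be sketched as a small gradient-style fit in Python/NumPy. All names (`fit_linear_coefficients`, `second_params`, and so on), the mean-squared-error loss, the clipping used to enforce the constraint range, and the loss-threshold cut-off condition are simplifying assumptions for illustration, not details fixed by the disclosure.

```python
import numpy as np

def fit_linear_coefficients(first_params, second_params, lr=0.05, max_iters=5000,
                            coeff_range=(0.0, 1.0), tol=1e-6):
    """Fit coefficients c so that second_params.T @ c approximates first_params.

    first_params:  (P,) face parameter values of the first face image
    second_params: (N, P) face parameter values of N preset-style second images
    coeff_range:   preset constraint range for each coefficient
    """
    n = second_params.shape[0]
    coeffs = np.full(n, 1.0 / n)                   # preset initial coefficients
    for _ in range(max_iters):
        pred = second_params.T @ coeffs            # predicted current face parameter values
        residual = pred - first_params
        loss = np.mean(residual ** 2)              # stand-in for the "second loss value"
        if loss < tol:                             # simplified adjustment cut-off condition
            break
        grad = 2.0 * second_params @ residual / first_params.size
        # Adjust the coefficients, clipping to the preset constraint range.
        coeffs = np.clip(coeffs - lr * grad, *coeff_range)
    return coeffs
```

In practice the cut-off condition could equally be an iteration budget or a loss plateau; the structure of the loop (predict, score, adjust, repeat) is what mirrors the described procedure.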
In a possible implementation, the dense point cloud data includes coordinate values of each point in the dense point cloud; when determining the dense point cloud data of the target face in the preset style according to the dense point cloud data respectively corresponding to the plurality of second face images of the preset style and the linear fitting coefficients, the acquiring module is configured to: determine coordinate values of corresponding points in average dense point cloud data based on the coordinate values of each point in the dense point clouds respectively corresponding to the plurality of second face images of the preset style; determine coordinate difference values respectively corresponding to the plurality of second face images of the preset style based on the coordinate values of each point in those dense point clouds and the coordinate values of the corresponding points in the average dense point cloud data; determine a coordinate difference value corresponding to the first face image based on the coordinate difference values respectively corresponding to the plurality of second face images of the preset style and the linear fitting coefficients; and determine the dense point cloud data of the target face in the preset style based on the coordinate difference value corresponding to the first face image and the coordinate values of the corresponding points in the average dense point cloud data.
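The averaging and weighted-difference computation above reduces to a few array operations; the sketch below, with assumed names (`target_point_cloud`, `second_clouds`, `fit_coeffs`), shows one way to realize it. It assumes the point clouds are in dense correspondence (point i in every cloud is the same semantic point), as the description implies.

```python
import numpy as np

def target_point_cloud(second_clouds, fit_coeffs):
    """Combine preset-style point clouds into the target-face point cloud.

    second_clouds: (N, M, 3) dense point clouds of N preset-style second images
    fit_coeffs:    (N,) linear fitting coefficients for the first face image
    Returns the (M, 3) dense point cloud of the target face in the preset style.
    """
    mean_cloud = second_clouds.mean(axis=0)                  # average dense point cloud data
    deltas = second_clouds - mean_cloud                      # per-image coordinate difference values
    target_delta = np.tensordot(fit_coeffs, deltas, axes=1)  # difference value for the first face image
    return mean_cloud + target_delta                         # mean plus fitted difference
```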
In a possible implementation, the face parameter values are extracted by a pre-trained neural network, and the neural network is trained on sample images annotated with face parameter values in advance.
In a possible implementation, the processing apparatus further includes a training module 1005, configured to pre-train the neural network as follows: acquire a sample image set, the sample image set including a plurality of sample images and an annotated face parameter value corresponding to each sample image; input the plurality of sample images into the neural network to obtain a predicted face parameter value corresponding to each sample image; and adjust network parameter values of the neural network based on the predicted face parameter value and the annotated face parameter value corresponding to each sample image, to obtain a trained neural network.
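A minimal version of this supervised training loop is sketched below in Python/NumPy. Since the disclosure does not specify the network architecture, a single linear layer stands in for the neural network, and the names (`train_parameter_regressor`, `images`, `labels`) are illustrative assumptions; the point is the loop structure (predict, compare with annotations, adjust parameters).

```python
import numpy as np

def train_parameter_regressor(images, labels, lr=0.1, epochs=500):
    """Train a stand-in 'network' to map images to face parameter values.

    images: (B, D) flattened sample images
    labels: (B, P) annotated face parameter values
    Returns the adjusted network parameter values (W, b).
    """
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(images.shape[1], labels.shape[1]))
    b = np.zeros(labels.shape[1])
    for _ in range(epochs):
        pred = images @ W + b                        # predicted face parameter values
        grad = pred - labels                         # d(MSE)/d(pred), up to a constant factor
        W -= lr * images.T @ grad / len(images)      # adjust network parameter values
        b -= lr * grad.mean(axis=0)
    return W, b
```

A real implementation would use a deep feature extractor and mini-batch optimization, but the flow of sample images, predicted values, annotated values, and parameter adjustment is the same.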
For descriptions of the processing flow of each module in the apparatus and the interaction flow between the modules, reference may be made to the relevant descriptions in the foregoing method embodiments, which are not detailed again here.
Corresponding to the face image processing method in FIG. 1, an embodiment of the present disclosure further provides an electronic device 1100. As shown in FIG. 12, a schematic structural diagram of the electronic device 1100 provided by an embodiment of the present disclosure, the electronic device includes a processor 111, a memory 112, and a bus 113. The memory 112 is configured to store execution instructions and includes an internal memory 1121 and an external memory 1122; the internal memory 1121, also called main memory, temporarily stores operation data of the processor 111 and data exchanged with the external memory 1122, such as a hard disk, and the processor 111 exchanges data with the external memory 1122 through the internal memory 1121. When the electronic device 1100 runs, the processor 111 communicates with the memory 112 through the bus 113, causing the processor 111 to execute the following instructions: acquire initial dense point cloud data of a target face, and generate an initial virtual face image of the target face based on the initial dense point cloud data; determine a deformation coefficient of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image; adjust the deformation coefficient in response to an adjustment operation on the initial virtual face image, to obtain a target deformation coefficient; and generate a target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the steps of the face image processing method described in the foregoing method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
An embodiment of the present disclosure further provides a computer program product carrying program code; instructions included in the program code can be used to execute the steps of the face image processing method described in the foregoing method embodiments. For details, refer to the foregoing method embodiments, which are not repeated here.
The computer program product may be implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK).
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the division of the units is only a logical functional division, and there may be other division manners in actual implementation; as another example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solution of the present disclosure in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Finally, it should be noted that the embodiments described above are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person familiar with the technical field may, within the technical scope disclosed herein, still modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features; such modifications, changes, or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (16)

  1. A method for processing a face image, comprising:
    acquiring initial dense point cloud data of a target face, and generating an initial virtual face image of the target face based on the initial dense point cloud data;
    determining a deformation coefficient of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image;
    adjusting the deformation coefficient in response to an adjustment operation on the initial virtual face image, to obtain a target deformation coefficient; and
    generating a target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data.
  2. The processing method according to claim 1, wherein the deformation coefficient comprises at least one of at least one bone coefficient and at least one blend shape coefficient;
    wherein each bone coefficient is used to adjust an initial pose of a bone formed by a first dense point cloud associated with that bone coefficient, and each blend shape coefficient is used to adjust an initial position corresponding to a second dense point cloud associated with that blend shape coefficient.
  3. The processing method according to claim 1 or 2, wherein determining the deformation coefficient of the initial dense point cloud data relative to the standard dense point cloud data comprises:
    adjusting the standard dense point cloud data based on a current deformation coefficient to obtain adjusted dense point cloud data, an initial value of the current deformation coefficient being preset;
    determining a first loss value based on the adjusted dense point cloud data and the initial dense point cloud data;
    adjusting the current deformation coefficient based on the first loss value and a preset constraint range of the deformation coefficient; and
    based on the adjusted current deformation coefficient, returning to the step of adjusting the standard dense point cloud data, until the adjustment of the current deformation coefficient satisfies a first adjustment cut-off condition, and obtaining, from the current deformation coefficient, the deformation coefficient of the initial dense point cloud data relative to the standard dense point cloud data.
  4. The processing method according to any one of claims 1 to 3, wherein adjusting the deformation coefficient in response to the adjustment operation on the initial virtual face image to obtain the target deformation coefficient comprises:
    in response to the adjustment operation on the initial virtual face image, determining a target adjustment position on the initial virtual face image and an adjustment magnitude for the target adjustment position; and
    adjusting, according to the adjustment magnitude, the deformation coefficient associated with the target adjustment position, to obtain the target deformation coefficient.
  5. The processing method according to any one of claims 1 to 4, wherein generating the target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data comprises:
    adjusting the standard dense point cloud data based on the target deformation coefficient to obtain target dense point cloud data; and
    generating the target virtual face image based on the target dense point cloud data.
  6. The processing method according to claim 5, wherein generating the target virtual face image based on the target dense point cloud data comprises:
    determining a virtual face model corresponding to the target dense point cloud data; and
    generating the target virtual face image based on preselected face attribute features and the virtual face model.
  7. The processing method according to any one of claims 1 to 6, wherein acquiring the initial dense point cloud data of the target face and generating the initial virtual face image of the target face based on the initial dense point cloud data comprises:
    acquiring a first face image corresponding to the target face, and dense point cloud data respectively corresponding to a plurality of second face images of a preset style;
    determining initial dense point cloud data of the target face in the preset style based on the first face image and the dense point cloud data respectively corresponding to the plurality of second face images of the preset style; and
    generating an initial virtual face image of the target face in the preset style based on the initial dense point cloud data of the target face in the preset style.
  8. The processing method according to claim 7, wherein determining the initial dense point cloud data of the target face in the preset style based on the first face image and the dense point cloud data respectively corresponding to the plurality of second face images of the preset style comprises:
    extracting face parameter values of the first face image and face parameter values respectively corresponding to the plurality of second face images of the preset style, wherein the face parameter values comprise parameter values characterizing a face shape and parameter values characterizing a facial expression; and
    determining the initial dense point cloud data of the target face in the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the plurality of second face images of the preset style.
  9. The processing method according to claim 8, wherein determining the initial dense point cloud data of the target face in the preset style based on the face parameter values of the first face image and the face parameter values and dense point cloud data respectively corresponding to the plurality of second face images of the preset style comprises:
    determining linear fitting coefficients between the first face image and the plurality of second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the plurality of second face images of the preset style; and
    determining the initial dense point cloud data of the target face in the preset style according to the dense point cloud data respectively corresponding to the plurality of second face images of the preset style and the linear fitting coefficients.
  10. The processing method according to claim 9, wherein determining the linear fitting coefficients between the first face image and the plurality of second face images of the preset style based on the face parameter values of the first face image and the face parameter values respectively corresponding to the plurality of second face images of the preset style comprises:
    acquiring current linear fitting coefficients, initial values of the current linear fitting coefficients being preset;
    predicting current face parameter values of the first face image based on the current linear fitting coefficients and the face parameter values respectively corresponding to the plurality of second face images of the preset style;
    determining a second loss value based on the predicted current face parameter values and the face parameter values of the first face image;
    adjusting the current linear fitting coefficients based on the second loss value and a preset constraint range corresponding to the linear fitting coefficients; and
    based on the adjusted current linear fitting coefficients, returning to the step of predicting the current face parameter values, until the adjustment of the current linear fitting coefficients satisfies a second adjustment cut-off condition, and obtaining, from the current linear fitting coefficients, the linear fitting coefficients between the first face image and the plurality of second face images of the preset style.
  11. The processing method according to claim 9 or 10, wherein the dense point cloud data comprises coordinate values of each point in a dense point cloud, and determining the initial dense point cloud data of the target face in the preset style according to the dense point cloud data respectively corresponding to the plurality of second face images of the preset style and the linear fitting coefficients comprises:
    determining coordinate values of corresponding points in average dense point cloud data based on the coordinate values of each point in the dense point cloud data respectively corresponding to the plurality of second face images of the preset style;
    determining coordinate difference values respectively corresponding to the plurality of second face images of the preset style based on the coordinate values of each point in the dense point cloud data respectively corresponding to the plurality of second face images of the preset style and the coordinate values of the corresponding points in the average dense point cloud data;
    determining a coordinate difference value corresponding to the first face image based on the coordinate difference values respectively corresponding to the plurality of second face images of the preset style and the linear fitting coefficients; and
    determining the initial dense point cloud data of the target face in the preset style based on the coordinate difference value corresponding to the first face image and the coordinate values of the corresponding points in the average dense point cloud data.
  12. The processing method according to any one of claims 8 to 11, wherein the face parameter values are extracted by a pre-trained neural network, and the neural network is trained based on sample images annotated with face parameter values in advance.
  13. The processing method according to claim 12, wherein the neural network is pre-trained in the following manner:
    acquiring a sample image set, the sample image set comprising a plurality of sample images and an annotated face parameter value corresponding to each sample image;
    inputting the plurality of sample images into the neural network to obtain a predicted face parameter value corresponding to each sample image; and
    adjusting network parameter values of the neural network based on the predicted face parameter value and the annotated face parameter value corresponding to each sample image, to obtain a trained neural network.
  14. An apparatus for processing a face image, comprising:
    an acquiring module, configured to acquire initial dense point cloud data of a target face and generate an initial virtual face image of the target face based on the initial dense point cloud data;
    a determining module, configured to determine a deformation coefficient of the initial dense point cloud data relative to standard dense point cloud data corresponding to a standard virtual face image;
    an adjusting module, configured to adjust the deformation coefficient in response to an adjustment operation on the initial virtual face image, to obtain a target deformation coefficient; and
    a generating module, configured to generate a target virtual face image corresponding to the target face based on the target deformation coefficient and the standard dense point cloud data.
  15. An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device runs, the processor communicates with the memory through the bus; and when the machine-readable instructions are executed by the processor, the steps of the processing method according to any one of claims 1 to 13 are performed.
  16. A computer-readable storage medium having a computer program stored thereon, wherein when the computer program is run by a processor, the steps of the processing method according to any one of claims 1 to 13 are performed.
PCT/CN2021/119080 2020-11-25 2021-09-17 Face image processing method and apparatus, and electronic device and storage medium WO2022111001A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011339586.6 2020-11-25
CN202011339586.6A CN112419144B (en) 2020-11-25 2020-11-25 Face image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
WO2022111001A1 true WO2022111001A1 (en) 2022-06-02

Family

ID=74843582

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/119080 WO2022111001A1 (en) 2020-11-25 2021-09-17 Face image processing method and apparatus, and electronic device and storage medium

Country Status (3)

Country Link
CN (1) CN112419144B (en)
TW (1) TWI780919B (en)
WO (1) WO2022111001A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115953821A (en) * 2023-02-28 2023-04-11 北京红棉小冰科技有限公司 Virtual face image generation method and device and electronic equipment

Families Citing this family (3)

Publication number Priority date Publication date Assignee Title
CN112419144B (en) * 2020-11-25 2024-05-24 上海商汤智能科技有限公司 Face image processing method and device, electronic equipment and storage medium
CN113409437B (en) * 2021-06-23 2023-08-08 北京字节跳动网络技术有限公司 Virtual character face pinching method and device, electronic equipment and storage medium
CN113808249B (en) * 2021-08-04 2022-11-25 北京百度网讯科技有限公司 Image processing method, device, equipment and computer storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140160123A1 (en) * 2012-12-12 2014-06-12 Microsoft Corporation Generation of a three-dimensional representation of a user
CN108876893A * 2017-12-14 2018-11-23 北京旷视科技有限公司 Method, apparatus, system and computer storage medium for three-dimensional face reconstruction
CN109376698A * 2018-11-29 2019-02-22 北京市商汤科技开发有限公司 Face model construction method and device, electronic equipment, storage medium, and product
CN110163054A * 2018-08-03 2019-08-23 腾讯科技(深圳)有限公司 Face three-dimensional image generation method and device
CN112419144A (en) * 2020-11-25 2021-02-26 上海商汤智能科技有限公司 Face image processing method and device, electronic equipment and storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851123B (en) * 2014-02-13 2018-02-06 北京师范大学 Three-dimensional face change modeling method
CN104504410A (en) * 2015-01-07 2015-04-08 深圳市唯特视科技有限公司 Three-dimensional face recognition device and method based on three-dimensional point cloud
US11127163B2 (en) * 2015-06-24 2021-09-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Skinned multi-infant linear body model
CN108629294A (en) * 2018-04-17 2018-10-09 华南理工大学 Deformation-graph-based human body and face mesh template fitting method
CN109978989B (en) * 2019-02-26 2023-08-01 腾讯科技(深圳)有限公司 Three-dimensional face model generation method, three-dimensional face model generation device, computer equipment and storage medium
CN111710035B (en) * 2020-07-16 2023-11-07 腾讯科技(深圳)有限公司 Face reconstruction method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
TWI780919B (en) 2022-10-11
CN112419144A (en) 2021-02-26
CN112419144B (en) 2024-05-24
TW202221638A (en) 2022-06-01

Similar Documents

Publication Publication Date Title
WO2022111001A1 (en) Face image processing method and apparatus, and electronic device and storage medium
US11270489B2 (en) Expression animation generation method and apparatus, storage medium, and electronic apparatus
US11682155B2 (en) Skeletal systems for animating virtual avatars
US11074748B2 (en) Matching meshes for virtual avatars
US11430169B2 (en) Animating virtual avatar facial movements
US20210005003A1 (en) Method, apparatus, and system generating 3d avatar from 2d image
WO2017193906A1 (en) Image processing method and processing system
WO2022110851A1 (en) Facial information processing method and apparatus, electronic device, and storage medium
WO2022143645A1 (en) Three-dimensional face reconstruction method and apparatus, device, and storage medium
CN111325846B (en) Expression base determination method, avatar driving method, device and medium
KR20120005587A (en) Method and apparatus for generating face animation in computer system
Yang et al. Example-based caricature generation with exaggeration control
CN112802162B (en) Face adjusting method and device for virtual character, electronic equipment and storage medium
RU2703327C1 (en) Method of processing a two-dimensional image and a user computing device thereof
KR102689515B1 (en) Methods and apparatus, electronic devices and storage media for processing facial information
JP7525813B2 (en) Facial information processing method, apparatus, electronic device, and storage medium
JP7145359B1 (en) Inference model construction method, inference model construction device, program, recording medium, configuration device and configuration method
US20230409110A1 (en) Information processing apparatus, information processing method, computer-readable recording medium, and model generating method
WO2021256319A1 (en) Information processing device, information processing method, and recording medium
Yang et al. Example-based automatic caricature generation
CN114742951A (en) Material generation method, image processing method, device, electronic device and storage medium
KR20240032981A (en) System for creating presentations of eyebrow designs
CN114742939A (en) Human body model reconstruction method and device, computer equipment and storage medium
CN114663628A (en) Image processing method, image processing device, electronic equipment and storage medium
Sugimoto A Method to Visualize Information of Words Expressing Facial Features

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21896505

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN EP: Public notification in the EP bulletin, as the address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 24.10.2023)

122 EP: PCT application non-entry in European phase

Ref document number: 21896505

Country of ref document: EP

Kind code of ref document: A1