TWI778723B - Method, device, computer equipment and storage medium for reconstruction of human face - Google Patents

Method, device, computer equipment and storage medium for reconstruction of human face Download PDF

Info

Publication number
TWI778723B
TWI778723B
Authority
TW
Taiwan
Prior art keywords
face
data
real
target
models
Prior art date
Application number
TW110127359A
Other languages
Chinese (zh)
Other versions
TW202221652A (en)
Inventor
徐勝偉
王權
錢晨
Original Assignee
大陸商北京市商湯科技開發有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大陸商北京市商湯科技開發有限公司
Publication of TW202221652A publication Critical patent/TW202221652A/en
Application granted
Publication of TWI778723B publication Critical patent/TWI778723B/en

Links

Images

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 - Controlling the output signals based on the game progress
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/65 - Generating or modifying game content before or while executing the game program, automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • A63F13/655 - Generating or modifying game content automatically by game devices or servers from real world data, by importing photos, e.g. of the player
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • A - HUMAN NECESSITIES
    • A63 - SPORTS; GAMES; AMUSEMENTS
    • A63F - CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 - Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60 - Methods for processing data by generating or executing the game program
    • A63F2300/69 - Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • A63F2300/695 - Imported photos, e.g. of the player
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • Electric Double-Layer Capacitors Or The Like (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Generation (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a method, device, computer equipment and storage medium for reconstructing a human face. The method includes: generating a first real face model based on a target image; fitting the first real face model with multiple pre-generated second real face models to obtain fitting coefficients respectively corresponding to the multiple second real face models; and generating a target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the multiple second real face models and on virtual face models with a predetermined style respectively corresponding to the multiple second real face models.

Description

Method, Device, Computer Equipment and Storage Medium for Reconstructing a Human Face

The present invention relates to the technical field of image processing, and in particular to a method, device, computer equipment and storage medium for reconstructing a human face.

CROSS-REFERENCE TO RELATED APPLICATIONS

This patent application claims priority to Chinese Patent Application No. 202011342169.7, filed on November 25, 2020 and titled "A Method, Device, Computer Equipment and Storage Medium for Face Reconstruction", which is incorporated herein by reference.

Generally, a three-dimensional virtual face model can be built from a real face or according to one's own preferences so as to reconstruct the face, which has wide application in fields such as games, animation, and virtual social interaction. For example, in a game, a player can use the face reconstruction system provided by the game program to generate a three-dimensional virtual face model from the real face contained in an image provided by the player, and then use the generated model to take part in the game with a stronger sense of immersion.

At present, when reconstructing a face based on the real face in a portrait image, face contour features are usually extracted from the face image, and the extracted face contour features are then matched and fused with a pre-generated virtual three-dimensional model to generate a virtual three-dimensional face model corresponding to the real face. However, because the degree of matching between the face contour features and the pre-generated virtual three-dimensional model is low, the similarity between the generated virtual three-dimensional face model and the real face is also low.

Embodiments of the present disclosure provide at least a method, an apparatus, a computer device, and a storage medium for reconstructing a human face.

In a first aspect, an embodiment of the present disclosure provides a method for reconstructing a human face, including: generating a first real face model based on a target image; fitting the first real face model with a plurality of pre-generated second real face models to obtain fitting coefficients respectively corresponding to the plurality of second real face models; and generating a target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and on virtual face models with a preset style respectively corresponding to the plurality of second real face models.

In this embodiment, the fitting coefficients serve as a medium for establishing an association between the plurality of second real face models and the first real face model. This association characterizes the relationship between the virtual face models built from the second real face models and the target virtual face model built from the first real face model, so that the target virtual face model determined from the fitting coefficients has the preset style while retaining the features of the original face corresponding to the first real face model; the generated target virtual face model therefore has a higher similarity to the original face corresponding to the first real face model.

In an optional implementation, generating the target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and on the virtual face models with the preset style respectively corresponding to the plurality of second real face models includes: determining target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and on bone data respectively corresponding to the plurality of virtual face models; and generating the target virtual face model based on the target bone data.

In an optional implementation, generating the target virtual face model based on the target bone data includes: performing position transformation processing on standard skin data based on the target bone data and on an association between standard bone data and the standard skin data in a standard virtual face model, to generate target skin data; and generating the target virtual face model based on the target bone data and the target skin data.

In this implementation, by means of the target skin data and the association between the standard bone data and the standard skin data in the standard virtual face model, the resulting target virtual face model fits the target skin data to the target bone data more closely, so that the target virtual face model built from the target bone data rarely exhibits abnormal bulges or depressions caused by changes in the bone data.

In an optional implementation, the bone data corresponding to a virtual face model includes at least one of the following: bone rotation data, bone position data, and bone scaling data corresponding to each face bone among the multiple face bones of the virtual face model; the target bone data includes at least one of the following: target bone position data, target bone scaling data, and target bone rotation data.

In this implementation, the bone data can more precisely characterize the data corresponding to each of the multiple face bones, and the target bone data allows the target virtual face model to be determined more accurately.

In an optional implementation, determining the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and on the bone data respectively corresponding to the plurality of virtual face models includes: performing interpolation on the bone position data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone position data.

In an optional implementation, determining the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and on the bone data respectively corresponding to the plurality of virtual face models includes: performing interpolation on the bone scaling data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone scaling data.

In an optional implementation, determining the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and on the bone data respectively corresponding to the plurality of virtual face models includes: converting the bone rotation data respectively corresponding to the plurality of virtual face models into quaternions, and regularizing the quaternions respectively corresponding to the plurality of virtual face models to obtain regularized quaternions; and performing interpolation on the regularized quaternions respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone rotation data.

In this implementation, the target bone data allows the virtual face model to be adjusted more precisely, so that the bone details in the resulting target virtual face model are finer and more similar to those of the original face, giving the target virtual face model a higher similarity to the original face.

In an optional implementation, generating the first real face model based on the target image includes: acquiring a target image including an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.

In this implementation, the first real face model obtained by performing three-dimensional face reconstruction on the original face can represent the facial features of the original face in the target image more accurately and comprehensively.

In an optional implementation, for each second real face model among the plurality of second real face models, the second real face model is generated as follows: acquiring a plurality of reference images each including a reference face; and for each reference image among the plurality of reference images, performing three-dimensional face reconstruction on the reference face included in that reference image to obtain the second real face model corresponding to that reference image.

In this implementation, using multiple reference images makes it possible to cover a relatively wide range of face shape features; accordingly, the second real face models obtained by performing three-dimensional face reconstruction on each of the reference images likewise cover a relatively wide range of face shape features.

In an optional implementation, fitting the first real face model with the plurality of pre-generated second real face models to obtain the fitting coefficients respectively corresponding to the plurality of second real face models includes: performing least-squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients respectively corresponding to the plurality of second real face models.

In this implementation, the fitting coefficients accurately characterize how well the plurality of second real face models fit the first real face model.

In a second aspect, an embodiment of the present disclosure further provides an apparatus for reconstructing a human face, including:

a first generation module, configured to generate a first real face model based on a target image;

a processing module, configured to fit the first real face model with a plurality of pre-generated second real face models to obtain fitting coefficients respectively corresponding to the plurality of second real face models;

a second generation module, configured to generate a target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and on virtual face models with a preset style respectively corresponding to the plurality of second real face models.

In an optional implementation, when generating the target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and on the virtual face models with the preset style respectively corresponding to the plurality of second real face models, the second generation module is configured to: determine target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and on bone data respectively corresponding to the plurality of virtual face models; and generate the target virtual face model based on the target bone data.

In an optional implementation, when generating the target virtual face model based on the target bone data, the second generation module is configured to: perform position transformation processing on standard skin data based on the target bone data and on an association between standard bone data and the standard skin data in a standard virtual face model, to generate target skin data; and generate the target virtual face model based on the target bone data and the target skin data.

In an optional implementation, the bone data corresponding to a virtual face model includes at least one of the following: bone rotation data, bone position data, and bone scaling data corresponding to each face bone among the multiple face bones of the virtual face model; the target bone data includes at least one of the following: target bone position data, target bone scaling data, and target bone rotation data.

In an optional implementation, when determining the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and on the bone data respectively corresponding to the plurality of virtual face models, the second generation module is configured to: perform interpolation on the bone position data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone position data.

In an optional implementation, when determining the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and on the bone data respectively corresponding to the plurality of virtual face models, the second generation module is configured to: perform interpolation on the bone scaling data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone scaling data.

In an optional implementation, when determining the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and on the bone data respectively corresponding to the plurality of virtual face models, the second generation module is configured to: convert the bone rotation data respectively corresponding to the plurality of virtual face models into quaternions, and regularize the quaternions respectively corresponding to the plurality of virtual face models to obtain regularized quaternions; and perform interpolation on the regularized quaternions respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone rotation data.

In an optional implementation, when generating the first real face model based on the target image, the first generation module is configured to: acquire a target image including an original face; and perform three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.

In an optional implementation, for each second real face model among the plurality of second real face models, the processing module generates the second real face model as follows: acquiring a plurality of reference images each including a reference face; and for each reference image among the plurality of reference images, performing three-dimensional face reconstruction on the reference face included in that reference image to obtain the second real face model corresponding to that reference image.

In an optional implementation, when fitting the first real face model with the plurality of pre-generated second real face models to obtain the fitting coefficients respectively corresponding to the plurality of second real face models, the processing module is configured to: perform least-squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients respectively corresponding to the plurality of second real face models.

In a third aspect, an optional implementation of the present disclosure further provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the steps of the first aspect or of any possible implementation of the first aspect.

In a fourth aspect, an optional implementation of the present disclosure further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when run, performs the steps of the first aspect or of any possible implementation of the first aspect.

For a description of the effects of the above apparatus for reconstructing a human face, computer device, and computer-readable storage medium, refer to the description of the above method for reconstructing a human face, which is not repeated here.

In order to make the above objects, features, and advantages of the present disclosure more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.

In order to make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure generally described and illustrated herein may be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure is not intended to limit the scope of the claimed disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the scope of protection of the present disclosure.

It has been found through research that face reconstruction can be used to build a three-dimensional virtual face model from a real face or according to one's own preferences. When reconstructing a face based on the real face in a portrait image, feature extraction is usually first performed on the real face in the portrait image to obtain face contour features; the face contour features are then matched against features in a pre-generated virtual three-dimensional model, and, based on the matching result, the face contour features are fused with the virtual three-dimensional model to obtain a virtual three-dimensional face model corresponding to the real face in the portrait image. Because the accuracy of matching the face contour features against the features of the pre-generated virtual three-dimensional model is low, the matching error between the virtual three-dimensional model and the face contour features is large, which easily leads to a low similarity between the face in the portrait image and the virtual three-dimensional face model obtained by fusing the face contour features with the virtual three-dimensional face model according to the matching result.

In view of the defects of the above solutions, embodiments of the present disclosure provide a method for reconstructing a human face that can generate a target virtual face model with a preset style, where the target virtual face model has a high similarity to the real face.

To facilitate understanding of this embodiment, a method for reconstructing a human face disclosed in the embodiments of the present disclosure is first introduced in detail. The execution subject of the method for reconstructing a human face provided by the embodiments of the present disclosure is generally a computer device with certain computing capability, including, for example, a terminal device, a server, or another processing device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. In some possible implementations, the method for reconstructing a human face may be implemented by a processor calling computer-readable instructions stored in a memory.

The method for reconstructing a human face provided by the embodiments of the present disclosure is described below.

FIG. 1 is a flowchart of a method for reconstructing a human face provided by an embodiment of the present disclosure. As shown in FIG. 1, the method includes steps S101 to S103, in which:

S101: generating a first real face model based on a target image;

S102: fitting the first real face model with a plurality of pre-generated second real face models to obtain fitting coefficients respectively corresponding to the plurality of second real face models;

S103: generating a target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and on virtual face models with a preset style respectively corresponding to the plurality of second real face models.

In the embodiments of the present disclosure, the process of fitting the first real face model with a plurality of pre-generated second real face models yields fitting coefficients respectively corresponding to the second real face models, and the fitting coefficients serve as a medium for establishing an association between the plurality of second real face models and the first real face model. The fitting coefficients and the virtual face models with the preset style respectively corresponding to the plurality of second real face models are then used to generate a target virtual face model corresponding to the target image. In this way, the target virtual face model determined from the fitting coefficients and the virtual face models has the preset style while retaining the features of the original face corresponding to the first real face model; that is, the generated target virtual face model has a high similarity to the original face corresponding to the first real face model.

Steps S101 to S103 above are described in detail below.

For step S101 above, the target image is, for example, an acquired image including a human face, such as an image including a human face captured when photographing an object with a photographing device such as a camera, and any face included in the image can serve as the original face.

When the method for reconstructing a human face provided by the embodiments of the present disclosure is applied in different scenarios, the way the target image is acquired also differs.

For example, when the method for reconstructing a human face is applied in a game, an image containing the game player's face can be acquired through an image acquisition device installed in the game device, or an image containing the game player's face can be selected from the photo album of the game device, and the acquired image containing the game player's face is used as the target image.

For another example, when the method for reconstructing a human face is applied to a terminal device such as a mobile phone, an image including the user's face can be captured by the camera of the terminal device, selected from the photo album of the terminal device, or received from another application installed in the terminal device.

For another example, when the method for reconstructing a human face is applied in a live-streaming scenario, a video frame image containing a human face can be obtained from the multiple video frame images included in the video stream of the live-streaming device, and this video frame image containing the human face is used as the target image. Here, there may be multiple frames of the target image, which may, for example, be obtained by sampling the video stream.

When generating the first real face model based on the target image, for example, the following approach can be used: acquiring a target image containing the original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.

Here, when performing three-dimensional face reconstruction on the original face included in the target image, a 3D Morphable Model (3DMM) can be used, for example, to obtain the first real face model corresponding to the original face. The first real face model includes, for example, the position information, in a preset camera coordinate system, of each key point among the multiple key points of the original face in the target image.
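
To make the form of this data concrete, the following minimal Python sketch treats the first real face model as a set of 3D key-point positions produced by a linear 3DMM (a mean shape plus a weighted identity basis). The array sizes and the random stand-in coefficients are illustrative assumptions only; in practice the coefficients would be estimated by fitting the 3DMM to the target image.

```python
import numpy as np

# Illustrative sizes (assumptions): V key points, K identity basis vectors.
V, K = 68, 40

rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(V, 3))      # mean 3D key-point positions of the 3DMM
id_basis = rng.normal(size=(K, V, 3))     # identity (shape) basis of the 3DMM
id_coeffs = 0.1 * rng.normal(size=K)      # stand-ins for coefficients fitted to the target image

# First real face model: per-key-point 3D positions in the camera coordinate system.
first_real_face = mean_shape + np.tensordot(id_coeffs, id_basis, axes=1)
print(first_real_face.shape)              # (68, 3)
```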

For step S102 above, the second real face models are generated based on reference images that include reference faces. The reference faces in different reference images may be different; for example, multiple people who differ in at least one of gender, age, skin color, body shape, and so on can be selected, a face image is acquired for each of these people, and the acquired face images are used as reference images. In this way, the plurality of second real face models obtained from the plurality of reference images can cover a relatively wide range of face shape features.

The reference faces include, for example, N faces corresponding to N different individual subjects (N is an integer greater than 1). When acquiring multiple reference images including reference faces, for example, the N different individual subjects can be photographed separately to obtain N photos respectively corresponding to the N different individual subjects, each photo corresponding to one reference face, and the N photos are used as reference images; alternatively, N reference images are determined from multiple previously captured images including different faces.

For example, for each second real face model among the plurality of second real face models, the method of generating the second real face model includes: acquiring a plurality of reference images each including a reference face; and for each reference image among the plurality of reference images, performing three-dimensional face reconstruction on the reference face included in that reference image to obtain the second real face model corresponding to that reference image.

The method of performing three-dimensional face reconstruction on a reference face is similar to the above method of performing three-dimensional face reconstruction on the original face, and is not repeated here. The resulting second real face model includes the position information, in the preset camera coordinate system, of each key point among the multiple key points of the reference face in the reference image. Here, the coordinate system of the second real face models and the coordinate system of the first real face model may be the same coordinate system.

When fitting the first real face model with the plurality of pre-generated second real face models to obtain the fitting coefficients respectively corresponding to the plurality of second real face models, for example, the following approach can be used: performing least-squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients respectively corresponding to the plurality of second real face models.

For example, when N second real face models are generated in advance, the model data corresponding to the first real face model can be denoted as $F$, and the model data corresponding to the second real face models can be denoted as $\{F_1, F_2, \ldots, F_N\}$, where $F_i$ denotes the i-th of the N second real face models.

Using $F$ and each of $F_1$ to $F_N$, least-squares processing yields N fitting values, which can be written as $\{\alpha_1, \alpha_2, \ldots, \alpha_N\}$, where $\alpha_i$ characterizes the fitting value corresponding to the i-th second real face model. From the N fitting values, the fitting coefficient Alpha can be determined and represented, for example, as a coefficient matrix, i.e., $\mathrm{Alpha} = [\alpha_1, \alpha_2, \ldots, \alpha_N]$.

Here, in the process of fitting the first real face model with the plurality of second real face models, the data obtained by taking the weighted sum of the plurality of second real face models with the fitting coefficients should be as close as possible to the data of the first real face model.

The fitting coefficients can also be regarded as the expression coefficients of each second real face model when the first real face model is expressed by the plurality of second real face models. That is, by using the fitting values corresponding to the plurality of second real face models in the expression coefficients, the second real face models can be transformed and fitted toward the first real face model.
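
As a rough illustration of this least-squares fit, the numpy sketch below flattens each face model into a vector, stacks the N second real face models as the columns of a design matrix, and solves for coefficients whose weighted sum of second models best approximates the first model. The model count, key-point count, and the plain unconstrained solver are assumptions made for the example, not requirements of the method.

```python
import numpy as np

rng = np.random.default_rng(1)
N, V = 10, 68                                   # assumed: N second models, V key points each

second_models = rng.normal(size=(N, V, 3))      # pre-generated second real face models
first_model = second_models.mean(axis=0)        # stand-in for the first real face model

# Solve  min_alpha || B @ alpha - f ||^2  where each column of B is one second model.
B = second_models.reshape(N, -1).T              # shape (3V, N)
f = first_model.reshape(-1)                     # shape (3V,)
alpha, *_ = np.linalg.lstsq(B, f, rcond=None)   # fitting coefficients, shape (N,)

# The coefficient-weighted sum of the second models approximates the first model.
residual = np.abs((B @ alpha).reshape(V, 3) - first_model).max()
print(alpha.round(3), residual)
```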

For step S103 above, the preset style can be, for example, a cartoon style, an ancient style, an abstract style, or the like, and can be specifically set according to actual needs. For example, when the preset style is a cartoon style, a virtual face model with the preset style can be a virtual face model with a certain cartoon style.

The method of obtaining the plurality of virtual face models with the preset style respectively corresponding to the plurality of second real face models includes, for example, at least one of (a1) and (a2) below.

(a1) Taking obtaining the virtual face model corresponding to one second real face model as an example: based on the reference image, a virtual face image that has the features of the reference face and the preset style can be designed and produced, and three-dimensional modeling is performed on the virtual face in the virtual face image to obtain the bone data and skin data of the virtual face model in the virtual face image.

The bone data includes the bone rotation data, bone scaling data, and bone position data, in a preset coordinate system, of multiple bones preset for the virtual face. Here, the multiple bones can, for example, be divided into multiple levels, for example including a root bone, facial feature bones, and facial feature detail bones, where the facial feature bones may include eyebrow bones, nose bones, cheekbone bones, jaw bones, mouth bones, and so on, and the facial feature detail bones further subdivide the various facial feature bones in detail. This can be specifically set according to the requirements of virtual images of different styles, and is not limited here.

The skin data includes the position information, in the preset coordinate system, of multiple position points on the surface of the virtual face, and information on the association between each position point and at least one of the multiple bones.
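
A small Python sketch of one possible in-memory representation of the bone data and skin data described above; the class and field names, and the use of a per-point weight dictionary for the point-to-bone association, are illustrative assumptions rather than a format defined by this disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Bone:
    """One face bone: rotation, position and scale in the preset coordinate system."""
    name: str
    rotation: List[float]                      # e.g. Euler angles (rx, ry, rz)
    position: List[float]                      # (x, y, z)
    scale: List[float]                         # per-axis scale factors
    children: List["Bone"] = field(default_factory=list)   # lower-level bones

@dataclass
class SkinPoint:
    """One surface position point of the virtual face and its bone associations."""
    position: List[float]                      # (x, y, z) in the preset coordinate system
    bone_weights: Dict[str, float]             # bone name -> influence weight

# A tiny example hierarchy: root bone -> nose bone -> nose-tip detail bone.
nose_tip = Bone("nose_tip", [0, 0, 0], [0.0, 0.1, 0.9], [1, 1, 1])
nose = Bone("nose", [0, 0, 0], [0.0, 0.0, 0.8], [1, 1, 1], children=[nose_tip])
root = Bone("root", [0, 0, 0], [0.0, 0.0, 0.0], [1, 1, 1], children=[nose])

# One skin point associated with two bones.
point = SkinPoint(position=[0.02, 0.05, 0.85],
                  bone_weights={"nose": 0.7, "nose_tip": 0.3})
```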

The virtual model obtained by performing three-dimensional modeling on the virtual face in the virtual face image is used as the virtual face model corresponding to the second real face model.

(a2) A standard virtual face model with the preset style is generated in advance. The standard virtual face model likewise includes standard bone data, standard skin data, and an association between the standard bone data and the standard skin data. Based on the reference face, the standard bone data in the standard virtual face model is modified by design, so that the modified standard virtual face model has the preset style while also incorporating the features of the reference face in the reference image. Then, based on the association between the standard bone data and the standard skin data, the standard skin data is adjusted, and feature information of the reference face can also be added to the standard skin data; based on the modified standard bone data and the modified standard skin data, the virtual face model corresponding to the second real face model is generated.

Here, for the specific data representation of the virtual face model, refer to the description in (a1) above, which is not repeated here.

Referring to FIG. 2, an embodiment of the present disclosure further provides a method for generating the target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and on the bone data of the virtual face models with the preset style respectively corresponding to the plurality of second real face models, including:

S201: determining target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and on the bone data respectively corresponding to the plurality of virtual face models.

The bone data includes at least one of the following: bone rotation data, bone position data, and bone scaling data corresponding to each face bone among the multiple face bones of the virtual face model.

In one possible implementation, interpolation can be performed on the bone data respectively corresponding to the plurality of virtual face models based on the fitting coefficients corresponding to the plurality of second real face models, to obtain the target bone data. The resulting target bone data includes at least one of the following: target bone position data, target bone scaling data, and target bone rotation data.

The target bone position data includes, for example, the three-dimensional coordinate value of the center point of a bone in the model coordinate system; the target bone scaling data includes, for example, the scaling ratio of a target bone relative to the corresponding bone in the standard virtual face model; the target bone rotation data includes, for example, the rotation angle of the axis of a bone in the model coordinate system.

For example, determining the target bone data based on the fitting coefficients respectively corresponding to the plurality of second real face models and on the bone data respectively corresponding to the plurality of virtual face models can be implemented using at least one of (b1) to (b3) below.

(b1) Based on the fitting coefficients respectively corresponding to the plurality of second real face models, interpolation is performed on the bone position data respectively corresponding to the plurality of virtual face models to obtain the target bone position data.

(b2) Based on the fitting coefficients respectively corresponding to the plurality of second real face models, interpolation is performed on the bone scaling data respectively corresponding to the plurality of virtual face models to obtain the target bone scaling data.

(b3) The bone rotation data respectively corresponding to the plurality of virtual face models are converted into quaternions, and the quaternions respectively corresponding to the plurality of virtual face models are regularized to obtain regularized quaternions; based on the fitting coefficients respectively corresponding to the plurality of second real face models, interpolation is performed on the regularized quaternions respectively corresponding to the plurality of virtual face models to obtain the target bone rotation data.

In a specific implementation, for methods (b1) and (b2) above, acquiring the bone position data and the bone scaling data also involves determining the bones of the various levels, and the local coordinate system corresponding to each level of bones, based on the plurality of second real face models. When dividing the face model into bone levels, the bone levels can, for example, be determined directly according to a biological bone hierarchy, or determined according to the requirements of face reconstruction; the specific hierarchy can be determined according to the actual situation and is not described further here.

After the bone levels are determined, a bone coordinate system corresponding to each bone level can be established based on that bone level. For example, the bones of the various levels can be denoted as $b_k$.

In this case, the bone position data may include the three-dimensional coordinate values, in the corresponding bone coordinate systems, of the bones $b_k$ of the various levels in the virtual face model; the bone scaling data may include, for each bone $b_k$ of the various levels in the virtual face model, a percentage that characterizes the degree of scaling of the bone in the corresponding bone coordinate system, for example 80%, 90%, or 100%.

In one possible implementation, the bone position data corresponding to the i-th virtual face model is denoted as $P_i$, and the bone scaling data corresponding to the i-th virtual face model is denoted as $S_i$. Here, the bone position data $P_i$ contains the bone position data respectively corresponding to the bones of the multiple levels, and the bone scaling data $S_i$ contains the bone scaling data respectively corresponding to the bones of the multiple levels.

In this case, the corresponding fitting coefficients are $\mathrm{Alpha} = [\alpha_1, \alpha_2, \ldots, \alpha_M]$. Based on the fitting coefficients corresponding to the M second real face models, interpolation is performed on the bone position data $P_i$ corresponding to the M virtual face models to obtain the target bone position data.

For example, the fitting coefficients can be used as the weights of the respective virtual face models, and a weighted sum of the bone position data $P_i$ of the virtual face models is computed, thereby implementing the interpolation. In this case, the target bone position data $P_{target}$ satisfies the following formula (1):

$P_{target} = \sum_{i=1}^{M} \alpha_i P_i$    (1)

Similarly, based on the fitting coefficients corresponding to the M second real face models, interpolation is performed on the bone scaling data corresponding to the M virtual face models to obtain the target bone scaling data. With the bone scaling data corresponding to the i-th virtual face model denoted as $S_i$, the fitting coefficients respectively corresponding to the M second real face models can be used as the weights of the corresponding virtual face models, and a weighted sum of the bone scaling data respectively corresponding to the M virtual face models is computed, thereby interpolating over the M virtual face models. In this case, the target bone scaling data $S_{target}$ satisfies the following formula (2):

$S_{target} = \sum_{i=1}^{M} \alpha_i S_i$    (2)

For method (b3) above, the bone rotation data may include, for each bone in the virtual face model, a vector value in the corresponding bone coordinate system that characterizes the degree of rotational coordinate transformation of the bone, for example containing a rotation axis and a rotation angle. In one possible implementation, the bone rotation data corresponding to the i-th virtual face model is denoted as $R_i$. Because the rotation angles contained in the bone rotation data suffer from the gimbal lock problem, the bone rotation data are converted into quaternions, and the quaternions are regularized to obtain regularized quaternion data, denoted as $\hat{q}_i$, so as to prevent overfitting when the quaternions are directly weighted and summed.

在基於M個第二真實人臉模型對應的擬合係數,對M個虛擬人臉模型對應的正則化四元數

Figure 02_image035
進行插值處理時,也可以將M個第二真實人臉模型對應的擬合係數作為權重,對M個虛擬人臉模型對應的正則化四元數進行加權求和;在該種情況下,目標骨骼旋轉資料
Figure 02_image037
滿足下述公式(3):
Figure 02_image039
(3)。 Based on the fitting coefficients corresponding to the M second real face models, the regularization quaternion corresponding to the M virtual face models
Figure 02_image035
When performing interpolation processing, the fitting coefficients corresponding to the M second real face models can also be used as weights, and the regularization quaternions corresponding to the M virtual face models can be weighted and summed; in this case, the target Bone rotation data
Figure 02_image037
The following formula (3) is satisfied:
Figure 02_image039
(3).
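A minimal sketch of method (b3) and formula (3), assuming each bone rotation is stored as a rotation axis plus a rotation angle; the array layout and the final renormalization step are illustrative assumptions:

```python
import numpy as np

def axis_angle_to_quat(axis, angle):
    """Convert a (rotation axis, rotation angle) pair into a unit quaternion (w, x, y, z)."""
    axis = np.asarray(axis, dtype=np.float64)
    axis = axis / np.linalg.norm(axis)
    half = 0.5 * angle
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

def interpolate_bone_rotation(fitting_coeffs, bone_rotations):
    """Weighted-sum interpolation of regularized quaternions, formula (3).

    fitting_coeffs: (M,) fitting coefficients alpha_i of the M second real face models
    bone_rotations: (M, num_bones, 4) rotation data of the M virtual face models,
                    stored per bone as (axis_x, axis_y, axis_z, angle)
    """
    quats = np.array([[axis_angle_to_quat(r[:3], r[3]) for r in model]
                      for model in bone_rotations])        # (M, num_bones, 4) unit quaternions
    w = np.asarray(fitting_coeffs, dtype=np.float64)
    q_target = np.tensordot(w, quats, axes=1)              # weighted sum over the M models
    # Renormalizing keeps each blended rotation a valid unit quaternion; quaternion signs are
    # assumed consistent across models (q and -q encode the same rotation).
    q_target /= np.linalg.norm(q_target, axis=-1, keepdims=True)
    return q_target
```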

Based on the target bone position data $T^{*}$ obtained in (b1), the target bone scaling data $S^{*}$ obtained in (b2) and the target bone rotation data $q^{*}$ obtained in (b3), the target bone data, denoted as $B^{*}$, can be determined. Exemplarily, the target bone data may be represented in vector form as $B^{*} = (T^{*}, S^{*}, q^{*})$.

Following the above S201, the method for generating the target virtual face model corresponding to the target image further includes:

S202: Generate the target virtual face model based on the target skeleton data.

Referring to FIG. 3, a specific method provided by an embodiment of the present disclosure for generating, based on the target skeleton data, the target virtual face model corresponding to the first real face model includes:

S301: Based on the target skeleton data and the association relationship between the standard skeleton data and the standard skin data in the standard virtual face model, perform position transformation processing on the standard skin data to generate target skin data.

S302: Generate the target virtual face model based on the target skeleton data and the target skin data.

Here, the association relationship between the standard skeleton data and the standard skin data in the standard virtual face model is, for example, the association relationship between the standard skeleton data corresponding to each level of bones and the standard skin data. Based on this association relationship, the skin can be bound to the bones of the virtual face model.

Using the target skeleton data and the association relationship between the standard skeleton data and the standard skin data in the standard virtual face model, position transformation processing can be performed on the skin data corresponding to the positions of the bones at multiple levels, so that the positions of the corresponding levels of bones in the generated target skin data match the positions in the corresponding target skeleton data.

Here, the association relationship between the skeleton data and the standard skin data in the standard virtual face model includes: the association relationship between the coordinate values, in the model coordinate system, of each position point in the standard skin deformation data and at least one of the bone position data, the bone scaling data and the bone rotation data of the bones.

When the target skeleton data and the association relationship between the standard skeleton data and the standard skin data in the standard virtual face model are used to perform position transformation processing on the skin data corresponding to the positions of the bones at multiple levels, once the target skeleton data is determined, that is, once at least one of the target bone position data, the target bone scaling data and the target bone rotation data of the target bones is determined, the above association relationship can be used to determine the new coordinate values, in the model coordinate system, of each position point of the standard skin data after the bones are transformed from the standard skeleton data to the target skeleton data; the target skin data of the target virtual face model is then obtained based on these new coordinate values.
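The position transformation of the standard skin data driven by the target bone data can be sketched as a simple linear blend skinning step. The per-vertex bone weights standing in for the skeleton-skin association, the 4x4 per-bone transform matrices, and all variable names are assumptions used only for illustration:

```python
import numpy as np

def apply_target_skeleton_to_skin(standard_vertices, bone_weights, bone_transforms):
    """Transform the standard skin vertices by the target bone data (linear blend skinning).

    standard_vertices: (V, 3) positions of the standard skin in the model coordinate system
    bone_weights:      (V, J) skinning weights binding each vertex to the J bones
                       (a stand-in for the skeleton-skin association of the standard model)
    bone_transforms:   (J, 4, 4) per-bone transforms from the standard bone data to the
                       target bone position / scaling / rotation data
    returns:           (V, 3) target skin vertex positions
    """
    num_vertices = standard_vertices.shape[0]
    homogeneous = np.hstack([standard_vertices, np.ones((num_vertices, 1))])  # (V, 4)
    per_bone = np.einsum('jab,vb->jva', bone_transforms, homogeneous)         # (J, V, 4)
    blended = np.einsum('vj,jva->va', bone_weights, per_bone)                 # (V, 4)
    return blended[:, :3]
```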

Using the target skeleton data, the bones at each level used to construct the target virtual face model can be determined; and using the target skin data, the skin that binds the model to the bones can be determined, so that the target virtual face model is formed.

The target virtual face model may be determined by directly building the target virtual face model based on the target skeleton data and the target skin data; alternatively, the target skeleton data corresponding to each level of bones may be used to replace the corresponding skeleton data of each level in the first real face model, and the target skin data may then be used to build the target virtual face model. The specific way of building the target virtual face model can be determined according to the actual situation, and is not repeated here.

An embodiment of the present disclosure further provides a description of the specific process of obtaining, by using the face reconstruction method provided by the embodiments of the present disclosure, the target virtual face model corresponding to the original face A in a target image.

Determining the target virtual face model includes the following steps (c1) to (c5):

(c1) Preparing materials, including: preparing the material of the standard virtual face model, and preparing the material of the virtual pictures.

When preparing the material of the standard virtual face model, taking a cartoon style as the preset style as an example, a cartoon-style standard virtual face model is first set up.

When preparing the material of the virtual pictures, 24 virtual pictures are collected; among the virtual faces in the 24 collected virtual pictures, the numbers of male and female faces are balanced, and the distribution of facial features is as wide as possible.

(c2) Face model reconstruction, including: generating the first real face model from the original face A in the target image; and generating the second real face models from the virtual faces in the virtual pictures.

When generating the first real face model from the original face A, the face in the target image is first aligned and cropped, and the first real face model corresponding to the original face A is then generated by using a pre-trained RGB reconstruction neural network. Similarly, the second real face models respectively corresponding to the virtual faces can be determined by using the pre-trained RGB reconstruction neural network.

After the second real face models are determined, the method further includes: determining, according to the preset style and by means of manual adjustment, the preset-style virtual face models respectively corresponding to the second real face models.

(c3) Fitting processing, including: performing fitting processing on the first real face model by using the plurality of second real face models to obtain the fitting coefficients $\alpha = (\alpha_1, \alpha_2, \dots, \alpha_{24})$ respectively corresponding to the plurality of second real face models.

When the first real face model is fitted by using the plurality of second real face models, the least squares method is selected for the fitting, and the 24-dimensional coefficient vector $\alpha$ is obtained.
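A minimal least-squares fitting sketch; representing each real face model by a flattened vertex array is an assumption, since this example only specifies that the least squares method is used:

```python
import numpy as np

def fit_coefficients(first_model_vertices, second_model_vertices):
    """Least-squares fit of the first real face model by the M second real face models.

    first_model_vertices:  (V, 3) vertices of the first real face model
    second_model_vertices: (M, V, 3) vertices of the M second real face models
    returns: (M,) fitting coefficients alpha, one per second real face model
    """
    M = second_model_vertices.shape[0]
    A = second_model_vertices.reshape(M, -1).T      # (3V, M) design matrix
    b = first_model_vertices.reshape(-1)            # (3V,) target vector
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimizes ||A @ alpha - b||^2
    return alpha
```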

(c4) Determining the target skeleton data, which further includes the following steps (c4-1) and (c4-2).

(c4-1) Reading the bone data, where the bone data includes the bone position data $T_i$, the bone scaling data $S_i$ and the bone rotation data $R_i$ respectively corresponding, for each level of bones, to the preset-style virtual face models.
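One possible, purely illustrative way to organize the bone data read in this step (the storage format of the models is not specified here, so the class and field names are assumptions):

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class BoneData:
    """Bone data of one preset-style virtual face model (one row per bone level)."""
    position: np.ndarray  # (num_bones, 3) bone position data
    scaling: np.ndarray   # (num_bones, 3) bone scaling data
    rotation: np.ndarray  # (num_bones, 4) bone rotation data as (axis_x, axis_y, axis_z, angle)

def stack_bone_data(models: List[BoneData]):
    """Stack the bone data of the M preset-style virtual face models for interpolation."""
    positions = np.stack([m.position for m in models])  # (M, num_bones, 3)
    scalings = np.stack([m.scaling for m in models])    # (M, num_bones, 3)
    rotations = np.stack([m.rotation for m in models])  # (M, num_bones, 4)
    return positions, scalings, rotations
```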

(c4-2) Using the fitting coefficients $\alpha$, performing interpolation processing on the bone data corresponding to the preset-style virtual face models to generate the target bone data $B^{*}$, which includes the target bone position data $T^{*}$, the target bone scaling data $S^{*}$ and the target bone rotation data $q^{*}$.

(c5) Generating the target virtual face model.

The target skeleton data is used to determine the bones at each level for constructing the target virtual face model, and the target skeleton data is substituted into the standard virtual face model; the target skin data is used to determine the skin that binds the model to the bones, and the pre-determined association relationship between the standard skeleton data and the standard skin data in the standard virtual face model is then used to generate the target virtual face model corresponding to the first real face model.

Referring to FIG. 4, an example of the specific materials used in the multiple processes included in the above specific example is provided by an embodiment of the present disclosure. In FIG. 4, part a shows the target image, in which 41 denotes the original face A; part b shows a schematic diagram of the cartoon-style standard virtual face model; and part c shows a schematic diagram of the generated target virtual face model corresponding to the first real face model.

It is worth noting here that the above steps (c1) to (c5) are only a specific example of the face reconstruction method and do not limit the face reconstruction method provided by the embodiments of the present disclosure.

Those skilled in the art can understand that, in the above methods of the specific implementations, the order in which the steps are written does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.

Based on the same inventive concept, an embodiment of the present disclosure further provides an apparatus for reconstructing a human face corresponding to the face reconstruction method. Since the principle by which the apparatus in the embodiments of the present disclosure solves the problem is similar to that of the above face reconstruction method of the embodiments of the present disclosure, the implementation of the apparatus may refer to the implementation of the method, and repeated descriptions are omitted.

Referring to FIG. 5, an embodiment of the present disclosure provides an apparatus for reconstructing a human face, the apparatus including a first generation module 51, a processing module 52 and a second generation module 53.

The first generation module 51 is configured to generate a first real face model based on a target image.

The processing module 52 is configured to perform fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients respectively corresponding to the plurality of second real face models.

The second generation module 53 is configured to generate a target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and a plurality of preset-style virtual face models respectively corresponding to the plurality of second real face models.

In an optional implementation, when generating the target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and the preset-style virtual face models respectively corresponding to the plurality of second real face models, the second generation module 53 is configured to: determine target skeleton data based on the fitting coefficients respectively corresponding to the plurality of second real face models and skeleton data respectively corresponding to the plurality of virtual face models; and generate the target virtual face model based on the target skeleton data.

In an optional implementation, when generating the target virtual face model based on the target skeleton data, the second generation module 53 is configured to: perform position transformation processing on standard skin data based on the target skeleton data and the association relationship between standard skeleton data and the standard skin data in a standard virtual face model, to generate target skin data; and generate the target virtual face model based on the target skeleton data and the target skin data.

In an optional implementation, the skeleton data of the virtual face model includes at least one of the following: bone rotation data, bone position data and bone scaling data corresponding to each of the plurality of face bones of the virtual face model; and the target skeleton data includes at least one of the following: target bone position data, target bone scaling data and target bone rotation data.

In an optional implementation, the target skeleton data includes the target bone position data, and when determining the target skeleton data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skeleton data respectively corresponding to the plurality of virtual face models, the second generation module 53 is configured to: perform interpolation processing on the bone position data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone position data.

In an optional implementation, the target skeleton data includes the target bone scaling data, and when determining the target skeleton data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skeleton data respectively corresponding to the plurality of virtual face models, the second generation module 53 is configured to: perform interpolation processing on the bone scaling data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone scaling data.

In an optional implementation, the target skeleton data includes the target bone rotation data, and when determining the target skeleton data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skeleton data respectively corresponding to the plurality of virtual face models, the second generation module 53 is configured to: convert the bone rotation data respectively corresponding to the plurality of virtual face models into quaternions, and perform regularization processing on the quaternions respectively corresponding to the plurality of virtual face models to obtain regularized quaternions; and perform interpolation processing on the regularized quaternions respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone rotation data.

In an optional implementation, when generating the first real face model based on the target image, the first generation module 51 is configured to: acquire a target image including an original face; and perform three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.

In an optional implementation, for each second real face model of the plurality of second real face models, the processing module 52 generates the second real face model in the following manner: acquiring a plurality of reference images including reference faces; and, for each reference image of the plurality of reference images, performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to the reference image.

In an optional implementation, when performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients respectively corresponding to the plurality of second real face models, the processing module 52 is configured to: perform least squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients respectively corresponding to the plurality of second real face models.

For the description of the processing flow of each module in the apparatus and the interaction flow between the modules, reference may be made to the relevant descriptions in the above method embodiments, and details are not described here.

As shown in FIG. 6, an embodiment of the present disclosure further provides a computer device, including a processor 61 and a memory 62.

The memory 62 stores machine-readable instructions executable by the processor 61, and the processor 61 is configured to execute the machine-readable instructions stored in the memory 62. When the machine-readable instructions are executed by the processor 61, the processor 61 performs the following steps: generating a first real face model based on a target image; performing fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients respectively corresponding to the plurality of second real face models; and generating a target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and preset-style virtual face models respectively corresponding to the plurality of second real face models.

The memory 62 includes an internal memory 621 and an external memory 622. The internal memory 621 is used to temporarily store operation data of the processor 61 and data exchanged with the external memory 622 such as a hard disk; the processor 61 exchanges data with the external memory 622 through the internal memory 621.

For the specific execution process of the above instructions, reference may be made to the face reconstruction method described in the embodiments of the present disclosure, which is not repeated here.

An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run by a processor, the face reconstruction method described in the above method embodiments is executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.

An embodiment of the present disclosure further provides a computer program product carrying program code, where the instructions included in the program code can be used to execute the face reconstruction method described in the above method embodiments; for details, reference may be made to the above method embodiments, which are not repeated here.

The computer program product may be implemented by hardware, software or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).

Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system and apparatus described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for another example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some communication interfaces, apparatuses or units, and may be in electrical, mechanical or other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units, that is, they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on such an understanding, the technical solutions of the present disclosure in essence, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Finally, it should be noted that the above embodiments are only specific implementations of the present disclosure, used to illustrate rather than limit the technical solutions of the present disclosure, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that any person familiar with the technical field can still, within the technical scope disclosed by the present disclosure, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent replacements of some of the technical features; such modifications, changes or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present disclosure, and shall all be covered by the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

51: first generation module
52: processing module
53: second generation module
61: processor
62: memory
621: internal memory
622: external memory
S101: step of generating a first real face model based on a target image
S102: step of performing fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients respectively corresponding to the plurality of second real face models
S103: step of generating a target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and preset-style virtual face models respectively corresponding to the plurality of second real face models
S201: step of determining target skeleton data based on the fitting coefficients respectively corresponding to the plurality of second real face models and skeleton data respectively corresponding to the plurality of virtual face models
S202: step of generating the target virtual face model based on the target skeleton data
S301: step of performing position transformation processing on standard skin data, based on the target skeleton data and the association relationship between standard skeleton data and the standard skin data in the standard virtual face model, to generate target skin data
S302: step of generating the target virtual face model based on the target skeleton data and the target skin data

In order to explain the technical solutions of the embodiments of the present disclosure more clearly, the drawings required in the embodiments are briefly introduced below. The drawings here are incorporated into and constitute a part of the specification; they illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings only show some embodiments of the present disclosure and therefore should not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
FIG. 1 shows a flowchart of a face reconstruction method provided by an embodiment of the present disclosure;
FIG. 2 shows a flowchart of a method for generating a target virtual face model corresponding to a target image provided by an embodiment of the present disclosure;
FIG. 3 shows a flowchart of a specific method for generating a target virtual face model corresponding to a first real face model based on target skeleton data provided by an embodiment of the present disclosure;
FIG. 4 shows an example of multiple faces and face models involved in the face reconstruction method provided by an embodiment of the present disclosure;
FIG. 5 shows a schematic diagram of an apparatus for reconstructing a human face provided by an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of a computer device provided by an embodiment of the present disclosure.


Claims (13)

1. A method for reconstructing a human face, comprising: generating a first real face model based on a target image; performing fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients respectively corresponding to the plurality of second real face models, wherein each second real face model is generated based on a reference image including a reference face, the reference faces include N faces corresponding to N different objects, the fitting coefficients represent the expression coefficient of each second real face model when the plurality of second real face models are used to express the first real face model, and the second real face models can be transformed and fitted to the first real face model by using the fitting values respectively corresponding to the plurality of second real face models in the expression coefficients; and generating a target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and virtual face models with a preset style respectively corresponding to the plurality of second real face models.
2. The method for reconstructing a human face according to claim 1, wherein generating the target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and the virtual face models with the preset style respectively corresponding to the plurality of second real face models comprises: determining target skeleton data based on the fitting coefficients respectively corresponding to the plurality of second real face models and skeleton data respectively corresponding to the plurality of virtual face models; and generating the target virtual face model based on the target skeleton data.
3. The method for reconstructing a human face according to claim 2, wherein generating the target virtual face model based on the target skeleton data comprises: performing position transformation processing on standard skin data based on the target skeleton data and an association relationship between standard skeleton data and the standard skin data in a standard virtual face model, to generate target skin data; and generating the target virtual face model based on the target skeleton data and the target skin data.
4. The method for reconstructing a human face according to claim 2 or 3, wherein the skeleton data corresponding to the virtual face model comprises at least one of the following: bone rotation data, bone position data and bone scaling data corresponding to each of a plurality of face bones of the virtual face model; and the target skeleton data comprises at least one of the following: target bone position data, target bone scaling data and target bone rotation data.
5. The method for reconstructing a human face according to claim 4, wherein determining the target skeleton data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skeleton data respectively corresponding to the plurality of virtual face models comprises: performing interpolation processing on the bone position data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone position data.
6. The method for reconstructing a human face according to claim 4, wherein determining the target skeleton data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skeleton data respectively corresponding to the plurality of virtual face models comprises: performing interpolation processing on the bone scaling data respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone scaling data.
7. The method for reconstructing a human face according to claim 4, wherein determining the target skeleton data based on the fitting coefficients respectively corresponding to the plurality of second real face models and the skeleton data respectively corresponding to the plurality of virtual face models comprises: converting the bone rotation data respectively corresponding to the plurality of virtual face models into quaternions, and performing regularization processing on the quaternions respectively corresponding to the plurality of virtual face models to obtain regularized quaternions; and performing interpolation processing on the regularized quaternions respectively corresponding to the plurality of virtual face models based on the fitting coefficients respectively corresponding to the plurality of second real face models, to obtain the target bone rotation data.
8. The method for reconstructing a human face according to claim 1, wherein generating the first real face model based on the target image comprises: acquiring a target image including an original face; and performing three-dimensional face reconstruction on the original face included in the target image to obtain the first real face model.
9. The method for reconstructing a human face according to claim 1, wherein, for each second real face model of the plurality of second real face models, the second real face model is generated in the following manner: acquiring a plurality of reference images including reference faces; and, for each reference image of the plurality of reference images, performing three-dimensional face reconstruction on the reference face included in the reference image to obtain the second real face model corresponding to the reference image.
10. The method according to claim 1, wherein performing fitting processing on the first real face model by using the plurality of pre-generated second real face models to obtain the fitting coefficients respectively corresponding to the plurality of second real face models comprises: performing least squares processing on the plurality of second real face models and the first real face model to obtain the fitting coefficients respectively corresponding to the plurality of second real face models.
11. An apparatus for reconstructing a human face, comprising: a first generation module configured to generate a first real face model based on a target image; a processing module configured to perform fitting processing on the first real face model by using a plurality of pre-generated second real face models to obtain fitting coefficients respectively corresponding to the plurality of second real face models, wherein each second real face model is generated based on a reference image including a reference face, the reference faces include N faces corresponding to N different objects, the fitting coefficients represent the expression coefficient of each second real face model when the plurality of second real face models are used to express the first real face model, and the second real face models can be transformed and fitted to the first real face model by using the fitting values respectively corresponding to the plurality of second real face models in the expression coefficients; and a second generation module configured to generate a target virtual face model corresponding to the target image based on the fitting coefficients respectively corresponding to the plurality of second real face models and virtual face models with a preset style respectively corresponding to the plurality of second real face models.
12. A computer device, comprising a processor and a memory, wherein the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the processor performs the method for reconstructing a human face according to any one of claims 1 to 10.
13. A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is run by a computer device, the computer device performs the method for reconstructing a human face according to any one of claims 1 to 10.
TW110127359A 2020-11-25 2021-07-26 Method, device, computer equipment and storage medium for reconstruction of human face TWI778723B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011342169.7 2020-11-25
CN202011342169.7A CN112419485B (en) 2020-11-25 2020-11-25 Face reconstruction method, device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
TW202221652A TW202221652A (en) 2022-06-01
TWI778723B true TWI778723B (en) 2022-09-21

Family

ID=74843538

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110127359A TWI778723B (en) 2020-11-25 2021-07-26 Method, device, computer equipment and storage medium for reconstruction of human face

Country Status (5)

Country Link
JP (1) JP2023507862A (en)
KR (1) KR20230110607A (en)
CN (1) CN112419485B (en)
TW (1) TWI778723B (en)
WO (1) WO2022110790A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419454B (en) * 2020-11-25 2023-11-28 北京市商汤科技开发有限公司 Face reconstruction method, device, computer equipment and storage medium
CN112419485B (en) * 2020-11-25 2023-11-24 北京市商汤科技开发有限公司 Face reconstruction method, device, computer equipment and storage medium
CN114078184B (en) * 2021-11-11 2022-10-21 北京百度网讯科技有限公司 Data processing method, device, electronic equipment and medium
CN114529640B (en) * 2022-02-17 2024-01-26 北京字跳网络技术有限公司 Moving picture generation method, moving picture generation device, computer equipment and storage medium
CN115187822B (en) * 2022-07-28 2023-06-30 广州方硅信息技术有限公司 Face image dataset analysis method, live face image processing method and live face image processing device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851123A (en) * 2014-02-13 2015-08-19 北京师范大学 Three-dimensional human face change simulation method
US20200058137A1 (en) * 2015-06-24 2020-02-20 Sergi PUJADES Skinned Multi-Person Linear Model
TW202032503A (en) * 2019-02-26 2020-09-01 大陸商騰訊科技(深圳)有限公司 Method, device, computer equipment, and storage medium for generating 3d face model
CN111710035A (en) * 2020-07-16 2020-09-25 腾讯科技(深圳)有限公司 Face reconstruction method and device, computer equipment and storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101696007B1 (en) * 2013-01-18 2017-01-13 한국전자통신연구원 Method and device for creating 3d montage
JP6207210B2 (en) * 2013-04-17 2017-10-04 キヤノン株式会社 Information processing apparatus and method
CN104157010B (en) * 2014-08-29 2017-04-12 厦门幻世网络科技有限公司 3D human face reconstruction method and device
CN110135226B (en) * 2018-02-09 2023-04-07 腾讯科技(深圳)有限公司 Expression animation data processing method and device, computer equipment and storage medium
CN109395390B (en) * 2018-10-26 2021-12-21 网易(杭州)网络有限公司 Method and device for processing face model of game character, processor and terminal
CN110111417B (en) * 2019-05-15 2021-04-27 浙江商汤科技开发有限公司 Method, device and equipment for generating three-dimensional local human body model
CN110111247B (en) * 2019-05-15 2022-06-24 浙江商汤科技开发有限公司 Face deformation processing method, device and equipment
CN110400369A (en) * 2019-06-21 2019-11-01 苏州狗尾草智能科技有限公司 A kind of method of human face rebuilding, system platform and storage medium
CN110717977B (en) * 2019-10-23 2023-09-26 网易(杭州)网络有限公司 Method, device, computer equipment and storage medium for processing game character face
CN111695471B (en) * 2020-06-02 2023-06-27 北京百度网讯科技有限公司 Avatar generation method, apparatus, device and storage medium
CN112419485B (en) * 2020-11-25 2023-11-24 北京市商汤科技开发有限公司 Face reconstruction method, device, computer equipment and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104851123A (en) * 2014-02-13 2015-08-19 北京师范大学 Three-dimensional human face change simulation method
US20200058137A1 (en) * 2015-06-24 2020-02-20 Sergi PUJADES Skinned Multi-Person Linear Model
TW202032503A (en) * 2019-02-26 2020-09-01 大陸商騰訊科技(深圳)有限公司 Method, device, computer equipment, and storage medium for generating 3d face model
CN111710035A (en) * 2020-07-16 2020-09-25 腾讯科技(深圳)有限公司 Face reconstruction method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
KR20230110607A (en) 2023-07-24
WO2022110790A1 (en) 2022-06-02
CN112419485B (en) 2023-11-24
JP2023507862A (en) 2023-02-28
CN112419485A (en) 2021-02-26
TW202221652A (en) 2022-06-01

Similar Documents

Publication Publication Date Title
TWI778723B (en) Method, device, computer equipment and storage medium for reconstruction of human face
TWI773458B (en) Method, device, computer equipment and storage medium for reconstruction of human face
Kim et al. Deep video portraits
JP2022536441A (en) Animating avatars from the headset camera
JP2022513272A (en) Training A method and system for automatically generating mass training datasets from 3D models of deep learning networks
JP2013524357A (en) Method for real-time cropping of real entities recorded in a video sequence
CN111784821A (en) Three-dimensional model generation method and device, computer equipment and storage medium
WO2023077742A1 (en) Video processing method and apparatus, and neural network training method and apparatus
WO2013078404A1 (en) Perceptual rating of digital image retouching
CN111127309B (en) Portrait style migration model training method, portrait style migration method and device
TWI780919B (en) Method and apparatus for processing face image, electronic device and storage medium
CN111402394B (en) Three-dimensional exaggerated cartoon face generation method and device
CN114333034A (en) Face pose estimation method and device, electronic equipment and readable storage medium
CN113160418A (en) Three-dimensional reconstruction method, device and system, medium and computer equipment
WO2022110855A1 (en) Face reconstruction method and apparatus, computer device, and storage medium
CN108717730B (en) 3D character reconstruction method and terminal
JP2007102478A (en) Image processor, image processing method, and semiconductor integrated circuit
JP7523530B2 (en) Facial reconstruction method, apparatus, computer device, and storage medium
CN115393487A (en) Virtual character model processing method and device, electronic equipment and storage medium
CN114677476A (en) Face processing method and device, computer equipment and storage medium
CN114612614A (en) Human body model reconstruction method and device, computer equipment and storage medium
JP7525814B2 (en) Facial reconstruction method, device, computer device, and storage medium
JP5642583B2 (en) Image generating apparatus and image generating program
KR20200134623A (en) Apparatus and Method for providing facial motion retargeting of 3 dimensional virtual character
WO2023005359A1 (en) Image processing method and device

Legal Events

Date Code Title Description
GD4A Issue of patent certificate for granted invention patent