CN115063516A - Digital human processing method and device - Google Patents

Digital human processing method and device

Info

Publication number
CN115063516A
Authority
CN
China
Prior art keywords
face model
basic
data
basic face
model data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210753409.5A
Other languages
Chinese (zh)
Inventor
尚志广
费元华
郭建君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Weiling Times Technology Co Ltd
Original Assignee
Beijing Weiling Times Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Weiling Times Technology Co Ltd
Priority to CN202210753409.5A
Publication of CN115063516A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/20 - 3D [Three Dimensional] animation
    • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Architecture (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application relates to the field of digital humans, and in particular to a digital human processing method and apparatus. The method comprises the following steps: acquiring a basic face model; determining basic face model data, where the basic face model data is obtained by decomposing the basic face model; acquiring preset user-defined face model data; modifying the basic face model data based on the user-defined face model data to obtain current basic face model data; and combining the current basic face model data into a current basic face model, so that a digital human face can be obtained from the current basic face model. The method and apparatus make it more convenient to construct a digital human face, increase the speed of digital human face processing, and accelerate digital human production.

Description

Digital human processing method and device
Technical Field
The present application relates to the field of digital humans, and in particular to a digital human processing method and apparatus.
Background
A digital human is a detailed three-dimensional human structure synthesized in a computer. All of its data come from a real human body, and it can simulate human metabolism, growth and development, pathophysiological changes, and the like. In the narrow sense, the digital human is a product of the fusion of information science and life science: a virtual simulation, built with the methods of information science, of the form and functions of the human body at different levels. In the broad sense, the term refers to the penetration of digital technology into every level and stage of human anatomy, physics, physiology, and intelligence.
In addition, with the continuous development of the mobile internet, network live-streaming technology has improved rapidly, and digital humans are gradually being applied in the live-streaming industry. To make broadcasts more engaging and interactive, virtual live-streaming has become a very important part of live-streaming, and in recent years it has taken up an ever larger share of live content. During a broadcast, a preset virtual image, such as a virtual human anchor or a cartoon character, can be used in place of the human anchor's real appearance. In a virtual live-streaming room, certain messages need to be broadcast quickly, such as breaking news, highly time-sensitive events, and even messages for interacting with the audience.
However, when producing a digital human for live-streaming, one typically either develops a production pipeline for a specific character or builds the digital human with the MetaHuman face-pinching workflow. The former is often inefficient, because such pipelines depend on fully custom setups, so all data must be created from scratch; the latter currently struggles to produce a digital human that fully matches expectations, owing to its imperfect face-pinching features and its reliance on real scanned models.
Disclosure of Invention
To solve the problems of low realism and low production speed when creating a digital human face with the digital human production methods of the related art, the present application provides a digital human processing method and a digital human processing apparatus.
The digital human processing method and the digital human processing device adopt the following technical scheme:
a digital human processing method is applied to terminal equipment and comprises the following steps:
acquiring a basic face model;
determining basic face model data, wherein the basic face model data is obtained by decomposing a basic face model;
acquiring preset user-defined face model data;
modifying the basic face model data based on the user-defined face model data to obtain current basic face model data;
and combining the data of the current basic face model to obtain a current basic face model so as to obtain a digital face according to the current basic face model.
By adopting the above technical scheme, when a digital human is processed with the digital human processing method, a basic face model can be acquired first and then decomposed to obtain basic face model data embodying the model. The basic face model data is then modified according to the data of the user-defined model to be generated, so that the basic face model approaches the user-defined face model. Because the user-defined model can subsequently be constructed by modifying the basic model, constructing the digital human face becomes more convenient, digital human face processing becomes faster, and digital human production is accelerated.
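As an illustration of the claimed steps, the acquire, decompose, modify, and combine flow can be sketched as follows; all function and key names here are hypothetical illustrations, not taken from the patent:

```python
# Minimal sketch of the claimed pipeline (hypothetical names).
def decompose(model):
    # "Determine basic face model data" by splitting the model into records.
    return dict(model)

def modify(base_data, custom_data):
    # Overwrite the adjustable entries with the user-defined targets,
    # leaving inherited entries (e.g. UV data) untouched.
    merged = dict(base_data)
    merged.update(custom_data)
    return merged

def process_digital_face(base_model, custom_data):
    base_data = decompose(base_model)              # decompose the basic model
    current_data = modify(base_data, custom_data)  # apply user-defined data
    return current_data                            # data ready for recombination

face = process_digital_face({"eye_shape": 1.0, "uv": (0.3, 0.4)},
                            {"eye_shape": 1.2})
```

In this toy run, the adjustable `eye_shape` entry takes the user-defined value while the `uv` entry is carried over unchanged, mirroring the adjustable/inherited split described later in the disclosure.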
Preferably, when the basic face model is decomposed, the basic face model data is divided into adjustable model data and inheritance model data; the adjustable model data includes a subjective adjustment part and a calibration part, where the subjective adjustment part includes the shapes of the facial features and the calibration part includes the facial proportions and the oral cavity position.
By adopting the above technical scheme, the basic face model data is subdivided because, when a digital human face is processed, part of the data can be adjusted through feature points, such as the specific shapes of the eyes, nose, mouth, eyebrows, and ears, and the differences in facial-feature positions caused by different face lengths and widths, while the remaining data stays exactly consistent with the basic face model and is grouped as inheritance model data. Continuing to subdivide the basic face model data in this way increases the realism of the digital human face.
Preferably, modifying the basic face model data based on the user-defined face model includes the following steps:
acquiring feature points of the subjective adjustment part, wherein the feature points comprise positioning feature points and fine adjustment feature points;
determining feature points of the subjective adjustment part and feature points of the user-defined face model data;
adjusting the positioning feature points of the subjective adjustment part according to the feature points of the user-defined face model data;
and changing the position of the fine adjustment feature point based on the adjustment result of the positioning feature point.
By adopting the above technical scheme, when the basic face model data is modified, it can be modified by locating feature points, which are divided into positioning feature points and fine-tuning feature points. For example, when the eyes are modified through feature points, positions such as the eye corners and eyelids serve as positioning feature points, which largely determine the overall shape of the eye; the fine-tuning feature points then represent the details of the eye and need only small adjustments once the positioning feature points have been set. This improves both the convenience and the accuracy of modifying the basic face model data.
Preferably, the inheritance model data comprises UV information, and the UV information is used for positioning a texture map of the face model.
By adopting the above technical scheme, the UV information is the relative position information of the texture map, and its main function is to locate the texture positions of the model. Because the UV information is inherited, the facial-feature positions of the basic face model and the user-defined face model remain relatively unified, which improves the accuracy of converting the basic face model into the user-defined face model.
Preferably, the inheritance model data of the basic face model is consistent with the inheritance model data of the user-defined face model.
By adopting the above technical scheme, keeping the inheritance data of the basic face model consistent with that of the user-defined face model improves, on the one hand, the accuracy of converting the basic face model into the user-defined face model and, on the other hand, the production speed of the digital human.
Preferably, the process of modifying the calibration part further includes the following steps:
acquiring data of the calibration part of the basic face model;
comparing the outline proportions of the basic face model and the user-defined face model;
acquiring the inheritance model data of the user-defined model;
and adjusting the basic face model based on the user-defined inheritance model data.
By adopting the above technical scheme, in the process of adjusting the calibration data, the data of the calibration part can be read and the outline proportions of the face changed accordingly. Because the UV information can locate the texture positions of the model, the adjustable model data can be adjusted to match the changed facial outline, which further improves the similarity between the basic face model and the user-defined face model and thus the realism of the digital human.
Preferably, the method for acquiring the basic face model further comprises the following steps:
acquiring UV information of the basic face model;
comparing the UV information of the basic face model with that of the user-defined face model to obtain a UV similarity parameter;
comparing the UV similarity parameter with a preset threshold;
and if the UV similarity parameter is less than or equal to the threshold, selecting the acquired basic face model.
By adopting the above technical scheme, because the UV information is fully inherited, the UV information of each candidate basic face model can be compared with that of the user-defined model when a basic face model is selected, so as to screen out the model most similar to the user-defined face model in UV terms. The proportions of the selected basic face model are then closer to those of the user-defined face model, further improving the similarity between the produced digital human and the intended user-defined digital human.
Preferably, when the UV similarity parameter is compared with a preset threshold, the preset threshold is adjusted according to the user-defined face model.
By adopting the above technical scheme, modifying the threshold allows the proportions of the facial features to be adjusted to the digital human's target environment, so that the digital human can appear not only as a realistic person but also in various cartoons and games, expanding the range of applications of digital human production.
Preferably, a digital human processing apparatus comprises an acquisition unit, a decomposition unit, a modification unit, and a combination unit;
the acquisition unit is used for acquiring a basic face model;
the decomposition unit is used for decomposing the basic face model after the acquisition unit acquires the basic face model to obtain basic face model data;
the modification unit is used for modifying the basic face model data;
the combination unit is used for combining the modified basic human face model data into a basic human face model.
By adopting the above technical scheme, when a digital human is processed with the digital human processing apparatus, the acquisition unit can acquire a basic face model, which the decomposition unit then decomposes to obtain basic face model data embodying the model. The modification unit modifies the basic face model data according to the data of the user-defined model to be generated, so that the basic face model approaches the user-defined face model, and the combination unit can then construct the user-defined model from the modified basic model. This makes constructing the digital human face more convenient, speeds up digital human face processing, and accelerates digital human production.
In summary, the present application includes at least one of the following beneficial technical effects:
1. When the digital human processing method is used to process a digital human, a basic face model can be acquired first and then decomposed to obtain basic face model data embodying the model. The data is then modified according to the data of the user-defined model to be generated, so that the basic face model approaches the user-defined face model; since the user-defined model can subsequently be built by modifying the basic model, constructing the digital human face becomes more convenient, face processing faster, and digital human production quicker.
2. When the basic face model data is modified, modification can be performed by locating feature points, which are divided into positioning feature points and fine-tuning feature points. For example, when the shape of the eyes is modified through feature points, positions such as the eye corners and eyelids serve as positioning feature points, which largely determine the overall shape of the eye; the fine-tuning feature points represent its details and need only small adjustments once the positioning feature points are set, improving the convenience and accuracy of modifying the basic face model data.
3. During adjustment of the calibration data, the data of the calibration part can be read and the outline proportions of the face changed accordingly. Because the UV information can locate the texture positions of the model, the adjustable model data can be adjusted to match the changed facial outline, further improving the similarity between the basic face model and the user-defined face model and thus the realism of the digital human.
Drawings
Fig. 1 is a schematic overall flow diagram provided in the present application.
Fig. 2 is a schematic development flow chart of the step S100 in fig. 1.
Fig. 3 is a schematic development flow chart of step S103 in fig. 1.
Fig. 4 is a schematic development flow chart of step S203 in fig. 3.
Fig. 5 is a schematic diagram of the overall structure of a digital human processing device in the present application.
Fig. 6 is a schematic structural diagram of an electronic device implementing a digital human process flow in the present application.
Description of reference numerals: 1. a determination unit; 2. a modification unit; 3. an acquisition unit; 4. a comparison unit; 5. a combination unit; 6. a decomposition unit; 1000. an electronic device; 1001. a processor; 1002. a communication bus; 1003. a user interface; 1004. a network interface; 1005. a memory.
Detailed Description
To help those skilled in the art better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present application.
In the description of the embodiments of the present application, the words "exemplary," "for example," or "for instance" are used to indicate an instance or illustration. Any embodiment or design described herein as "exemplary," "for example," or "for instance" is not to be construed as preferred or advantageous over other embodiments or designs; rather, these words are intended to present relevant concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "and/or" describes only an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, B exists alone, or A and B exist at the same time. In addition, the term "plurality" means two or more unless otherwise specified; for example, a plurality of systems refers to two or more systems, and a plurality of screen terminals refers to two or more screen terminals. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly indicating the number of technical features; thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Some terms involved in this application are explained below:
Basic face model: the model selected as the basic face; in this embodiment, specifically a model chosen in MetaHuman Creator.
User-defined face model: an ideal model created in advance by hand drawing, animation, or simulated scanning; in the embodiments of the present application it is the target model into which the basic model is to be modified.
Inheritance model data: quantities, such as UV values, that pass from the basic face model to the user-defined face model without adjustment.
UV information: UV coordinates. All image files are two-dimensional planes; with a horizontal direction U and a vertical direction V, any pixel on the image can be located through this planar two-dimensional UV coordinate system.
The present application is described in further detail below with reference to figures 1-6.
The embodiment of the application discloses a digital human processing method. Referring to FIG. 1, it includes steps S100-S104:
S100, acquiring a basic face model;
in the embodiment of the present application, Metahuman Creator is selected as a basis to complete the creation of the digital human, but Metahuman Creator is only one of ways for digital human creation, and other software capable of achieving similar effects to Metahuman Creator may also be applied to the digital human processing method of the present application, and is not described herein again.
In MetaHuman Creator, one of its preset digital human models can be selected as the basic model.
Referring to fig. 2, in one possible implementation, acquiring the basic face model further includes steps S400-S403;
S400, acquiring UV information of the basic face model;
However, because human faces differ markedly and have rich, varied expressions, while a linear face model in three-dimensional space is limited, a UV position map is used: the three-dimensional coordinates of the facial key points are made to correspond to UV coordinates in UV space and stored on a two-dimensional image, realizing the mapping from three dimensions to two. The UV position map can thus be expressed as
P(ui, vi) = (xi, yi, zi)
where (ui, vi) are the UV coordinates of the i-th vertex of the three-dimensional face model, (xi, yi, zi) are the corresponding three-dimensional space coordinates, (xi, yi) is the position of the corresponding pixel in the input two-dimensional image, and zi is the depth of that point. Since (ui, vi) and (xi, yi) correspond to the same point on the face model, this representation preserves alignment information.
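As an illustration, the position-map relation P(ui, vi) = (xi, yi, zi) amounts to storing a 3D coordinate in a 2D grid indexed by UV; the grid size and coordinates below are made-up examples, not values from the patent:

```python
# Illustrative UV position map: an H x W grid storing (xi, yi, zi) at the
# pixel addressed by (ui, vi).
H, W = 4, 4
pos_map = [[None] * W for _ in range(H)]

ui, vi = 0.5, 0.25          # UV coordinate of the i-th vertex, in [0, 1]
xi, yi, zi = 1.0, 2.0, 0.3  # (xi, yi) is the 2D pixel position, zi the depth

# Map the normalized UV coordinate to a pixel index and store (xi, yi, zi),
# realizing the three-dimensional-to-two-dimensional mapping on a 2D image.
col, row = int(ui * (W - 1)), int(vi * (H - 1))
pos_map[row][col] = (xi, yi, zi)
```

Because (ui, vi) and (xi, yi) address the same face point, reading the grid back recovers both the 3D position and its 2D alignment.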
When the UV information of the basic face model is acquired, the positions of the facial features can be selected: the basic face is treated as a UV space, the positions of the facial features on it are located with UV coordinates, and these are recorded as the UV information of the basic face model.
Besides the facial features, facial muscles, bones, and the like can also serve as positioning references; different references are chosen according to convenience and accuracy in actual use, and details are not repeated here.
S401, comparing the UV information of the basic face model with that of the user-defined face model to obtain a UV similarity parameter;
The original UV space is transformed according to the length-to-width ratio of the user-defined face model, so that the current UV space suits the user-defined face model while the UV coordinates remain unchanged; the ratio of the current UV space to the original UV space is recorded as the UV similarity parameter.
S402, comparing the UV similarity parameter with a preset threshold;
The preset threshold can be changed according to the digital human's application scenario: for example, when the digital human must match a real person's portrait as closely as possible, the threshold is lowered, and when the digital human may be an animated character or have exaggerated facial proportions, the threshold is raised.
S403, if the UV similarity parameter is less than or equal to the threshold, selecting the acquired basic face model.
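Steps S401 to S403 can be sketched as follows; the patent does not give an exact similarity formula, so the length-to-width-ratio comparison below is an assumption made purely for illustration:

```python
# Sketch of S401-S403 (assumed formulation based on UV-space aspect ratios).
def uv_similarity(base_aspect, custom_aspect):
    # 0.0 means the two UV spaces have identical length-to-width ratios;
    # larger values mean the spaces differ more (S401).
    return abs(custom_aspect / base_aspect - 1.0)

def select_base_models(candidates, custom_aspect, threshold):
    # S402/S403: keep candidates whose UV similarity parameter is less
    # than or equal to the preset threshold.
    return [name for name, aspect in candidates
            if uv_similarity(aspect, custom_aspect) <= threshold]

candidates = [("model_a", 1.0), ("model_b", 1.5)]
chosen = select_base_models(candidates, custom_aspect=1.05, threshold=0.1)
```

Raising `threshold` admits more exaggerated proportions (cartoon or game characters), while lowering it restricts selection to models close to a realistic portrait, matching the threshold discussion above.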
S101, decomposing a basic face model to obtain basic face model data, wherein the basic face model data are divided into adjustable model data and inherited model data, and the adjustable model data comprise a subjective adjustment part and a calibration part.
And S102, acquiring preset user-defined face model data.
When the user-defined face model data is acquired, it can correspond to the basic face model data; that is, the user-defined face model data is likewise divided into adjustable model data and inheritance model data, where the adjustable model data is the modification target for the basic face model data, the inheritance model data consists of UV coordinates, and the inheritance model data of the user-defined and basic face models is kept consistent.
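One possible layout for this split into adjustable and inheritance data is shown below; the patent prescribes no concrete data structure, so the field names are assumptions:

```python
from dataclasses import dataclass, field

# Illustrative layout of the adjustable/inheritance split (assumed names).
@dataclass
class FaceModelData:
    subjective: dict = field(default_factory=dict)   # feature shapes (adjustable)
    calibration: dict = field(default_factory=dict)  # face proportion, oral cavity position
    inherited: dict = field(default_factory=dict)    # UV coordinates, carried over unchanged

base = FaceModelData(
    subjective={"eye_shape": (0.0, 0.0)},
    calibration={"face_ratio": 1.0},
    inherited={"left_eye_uv": (0.3, 0.4)},
)
# The user-defined data targets the adjustable parts but keeps the
# inheritance model data identical to the basic face model's.
custom = FaceModelData(subjective={"eye_shape": (0.1, -0.05)},
                       inherited=dict(base.inherited))
```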
S103, modifying the basic face data based on the user-defined face model data to obtain the current basic face data;
referring to fig. 3, in one possible embodiment, when modifying the subjective adjustment part of the adjustable model data, steps S200-S203 are further included;
and S200, acquiring characteristic points of the subjective adjustment part, wherein the characteristic points comprise positioning characteristic points and fine adjustment characteristic points.
In the basic face model, the adjustable model data is facial-feature information and the like, and can be further divided into tooth information, eyeball information, cornea information, eyelash information, and facial skeleton information.
S201, determining feature points of the subjective adjustment part and feature points of the user-defined face model data.
And S202, adjusting the positioning characteristic points of the subjective adjustment part according to the characteristic points of the user-defined face model data.
Here, the facial features of the basic face model and the user-defined face model can be parametrically decomposed at the same time using Houdini, with each node on the basic face model placed in one-to-one correspondence with a node on the user-defined face model, so that the adjustable model data on the basic face model can be changed quickly.
And S203, changing the position of the fine adjustment characteristic point based on the adjustment result of the positioning characteristic point.
When Houdini decomposes the model into too many feature points, only the points that determine the shape of a facial feature are adjusted. Specifically, when the eyes are adjusted, only the positions of the eye corners and eyelids may be changed; the feature points locating the eye corners and eyelids are the positioning feature points, and the remaining points are fine-tuning feature points. After the general shape of the eye has been positioned, the fine-tuning feature points around the eye are adjusted. This reduces the system's operation steps and speeds up digital human production.
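A minimal sketch of this two-stage adjustment, assuming feature points are 2D coordinates and that fine-tuning points first follow the anchors' mean displacement before any small manual corrections (both assumptions, since the patent leaves the mechanics unspecified):

```python
# Hypothetical sketch of S202-S203: move positioning feature points to their
# user-defined targets, then carry fine-tuning points along by the anchors'
# mean displacement so only small corrections remain.
def adjust_feature_points(points, targets, anchors):
    adjusted = dict(points)
    dxs = [targets[k][0] - points[k][0] for k in anchors]
    dys = [targets[k][1] - points[k][1] for k in anchors]
    for k in anchors:                       # S202: positioning feature points
        adjusted[k] = targets[k]
    mdx, mdy = sum(dxs) / len(dxs), sum(dys) / len(dys)
    for k in points:                        # S203: fine-tuning feature points
        if k not in anchors:
            adjusted[k] = (points[k][0] + mdx, points[k][1] + mdy)
    return adjusted

# Eye example: corners anchor the overall shape, one detail point rides along.
eye = {"inner_corner": (0.0, 0.0), "outer_corner": (2.0, 0.0),
       "lid_detail": (1.0, 0.5)}
target = {"inner_corner": (0.2, 0.0), "outer_corner": (2.2, 0.0)}
result = adjust_feature_points(eye, target, ["inner_corner", "outer_corner"])
```

Here the two corner anchors fix the eye's general shape, and `lid_detail` is shifted by the same average amount, leaving only a small residual tweak for the fine-tuning stage.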
Referring to FIG. 4, in one possible embodiment, modifying the calibration part of the adjustable model data includes steps S300-S302;
S300, acquiring data of the calibration part of the basic face model;
The data of the calibration part of the basic face model is its UV coordinates: the coordinates of the facial features of the basic face model are located in the form of UV coordinates.
S301, comparing the outline proportions of the basic face model and the user-defined face model;
That is, the original UV space is transformed according to the length-to-width ratio of the user-defined face model, so that the current UV space suits the user-defined face model while the UV coordinates remain unchanged.
S302, adjusting the basic face model based on the user-defined inheritance model data.
With the facial-feature coordinates kept unchanged, the UV space of the basic face model is modified; that is, the face of the basic face model is stretched while the relative positions of the facial features remain unchanged.
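This stretch-while-preserving-relative-positions behavior follows from storing feature positions as relative UV coordinates: resizing the UV space rescales every absolute position together. A small illustrative sketch (names and numbers are assumptions):

```python
# Feature positions as relative UV coordinates; resizing the UV space
# stretches the face while keeping the relative feature layout unchanged.
def absolute_positions(uv_coords, width, height):
    return {k: (u * width, v * height) for k, (u, v) in uv_coords.items()}

uv = {"left_eye": (0.3, 0.4), "right_eye": (0.7, 0.4)}
base_face = absolute_positions(uv, width=100, height=100)       # original face
stretched_face = absolute_positions(uv, width=100, height=120)  # lengthened face
```

Both eyes move down by the same factor in the stretched face, so their relative arrangement is untouched even though the face outline has changed.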
And S104, combining the data of the current basic face model to obtain the current basic face model.
The digital human's face is recombined from the UV coordinates, the facial-feature information in the modified basic face model, and so on, to obtain a basic face model similar to the user-defined face model; the modified model can then be refined further in MetaHuman Creator to meet the digital human's facial requirements.
Referring to fig. 5, a digital human processing apparatus includes an acquisition unit 3, a decomposition unit 6, a modification unit 2, a combination unit 5, a comparison unit 4, and a determination unit 1;
the acquisition unit 3 is used for acquiring a basic face model;
the decomposition unit 6 is used for decomposing the basic face model to obtain basic face model data after the acquisition unit 3 has acquired the model;
in a possible embodiment, the obtaining unit 3 may be further configured to obtain preset custom face model data;
the modifying unit 2 is used for modifying the basic face model data based on the user-defined face model data to obtain the current basic face model data;
and the combination unit 5 combines the current basic face model data to obtain the current basic face model, so that a digital human face can be obtained from the current basic face model.
The acquiring unit 3 may also be configured to acquire feature points of the subjective adjustment part, where the feature points include positioning feature points and fine-tuning feature points;
the determination unit 1 is used for determining the feature points of the subjective adjustment part and the feature points of the user-defined face model data;
the modification unit 2 can also adjust the positioning feature points of the subjective adjustment part according to the feature points of the user-defined face model data, and change the positions of the fine-tuning feature points based on the adjustment results of the positioning feature points.
In a possible embodiment, the obtaining unit 3 may also be configured to obtain data of a calibration portion of the base face model;
the comparison unit 4 is used for comparing the outline proportions of the basic face model and the user-defined face model;
the obtaining unit 3 can also be used for obtaining the inheritance model data of the user-defined model;
the modification unit 2 adjusts the base face model based on the user-defined inheritance model data.
In a possible implementation, the obtaining unit 3 may further obtain UV information of the base face model;
the comparison unit 4 may be configured to compare the UV information of the basic face model with that of the user-defined face model to obtain a UV similarity parameter, and to compare the UV similarity parameter with a preset threshold;
if the UV similarity parameter is less than or equal to the threshold, the determination unit 1 selects the acquired basic face model.
An embodiment of the present application further provides a schematic structural diagram of an electronic device. As shown in fig. 6, the electronic device 1000 may include: at least one processor 1001, at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002.
The communication bus 1002 is used to enable connection and communication between these components.
The user interface 1003 may include a display screen (Display) and a camera (Camera); optionally, the user interface 1003 may further include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface).
The processor 1001 may include one or more processing cores. The processor 1001 connects the various parts of the electronic device 1000 using various interfaces and lines, and performs the various functions of the electronic device 1000 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1005 and by calling the data stored in the memory 1005. Optionally, the processor 1001 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1001 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is used for rendering and drawing the content to be displayed by the display screen; and the modem is used to handle wireless communication. It can be understood that the modem may alternatively not be integrated into the processor 1001 but be implemented by a separate chip.
The memory 1005 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable medium. The memory 1005 may be used to store an instruction, a program, code, a code set, or an instruction set. The memory 1005 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like; and the data storage area may store the data and the like referred to in the foregoing method embodiments. Optionally, the memory 1005 may also be at least one storage device located remotely from the processor 1001. As shown in fig. 6, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program of the digital human processing method.
It should be noted that: in the above embodiment, when the device implements the functions thereof, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to implement all or part of the functions described above. In addition, the apparatus and method embodiments provided by the above embodiments belong to the same concept, and specific implementation processes thereof are described in detail in the method embodiments, which are not described herein again.
In the electronic device 1000 shown in fig. 6, the user interface 1003 is mainly used to provide an input interface for a user and to acquire the data input by the user, and the processor 1001 may be used to invoke the application program of the digital human processing method stored in the memory 1005, which, when executed by the one or more processors, causes the electronic device to perform the method described in one or more of the foregoing embodiments.
An embodiment of the present application further provides an electronic-device-readable storage medium having instructions stored thereon which, when executed by one or more processors, cause an electronic device to perform the method described in one or more of the foregoing embodiments.
It is clear to a person skilled in the art that the solution of the present application can be implemented by means of software and/or hardware. The "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, a Field-Programmable Gate Array (FPGA), an Integrated Circuit (IC), or the like.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of the units is only one kind of logical function division, and there may be other division manners in actual implementation; for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the shown or discussed mutual couplings or direct couplings or communication connections may be indirect couplings or communication connections through some service interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program, which is stored in a computer-readable memory, and the memory may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The above description is only an exemplary embodiment of the present disclosure, and the scope of the present disclosure should not be limited thereby. That is, all equivalent changes and modifications made in accordance with the teachings of the present disclosure are intended to be included within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
The above embodiments are preferred embodiments of the present application, and the protection scope of the present application is not limited by the above embodiments, so: all equivalent changes made according to the structure, shape and principle of the present application shall be covered by the protection scope of the present application.

Claims (10)

1. A digital human processing method, applied to a terminal device, characterized in that the method comprises the following steps:
acquiring a basic face model;
determining basic face model data, wherein the basic face model data is obtained by decomposing a basic face model;
acquiring preset user-defined face model data;
modifying the basic face model data based on the user-defined face model data to obtain current basic face model data;
and combining the data of the current basic face model to obtain a current basic face model so as to obtain a digital face according to the current basic face model.
2. The digital human processing method according to claim 1, characterized in that:
when the basic face model is decomposed, the basic face model data is divided into adjustable model data and inheritance model data, wherein the adjustable model data comprises a subjective adjustment part and a calibration part, the subjective adjustment part comprises the facial features, and the calibration part comprises the face proportion and the oral cavity position.
3. The digital human processing method according to claim 2, characterized in that, when the basic face model data is modified based on the user-defined face model, the method comprises the following steps:
acquiring feature points of the subjective adjustment part, wherein the feature points comprise positioning feature points and fine adjustment feature points;
determining feature points of the subjective adjustment part and feature points of the user-defined face model data;
adjusting the positioning feature points of the subjective adjustment part according to the feature points of the user-defined face model data;
and changing the position of the fine adjustment feature point based on the adjustment result of the positioning feature point.
4. The digital human processing method according to claim 2, characterized in that: the inheritance model data comprises UV information, and the UV information is used for positioning the texture map of the face model.
5. The digital human processing method according to claim 2, characterized in that: the inheritance data of the basic face model is consistent with the inheritance data of the user-defined face model.
6. The digital human processing method according to claim 2, characterized in that, in the process of modifying the calibration part, the method further comprises the following steps:
acquiring data of a calibration part of the basic face model;
comparing the contour proportion of the basic face model with the contour proportion of the user-defined face model;
acquiring inheritance model data of the user-defined model;
and adjusting the basic face model based on the self-defined inheritance model data.
7. The digital human processing method according to claim 4, characterized in that, in acquiring the basic face model, the method further comprises the following steps:
acquiring UV information of the basic face model;
comparing the UV information of the basic face model and the user-defined face model to obtain a UV similarity parameter;
comparing the UV similarity parameter with a preset threshold;
and if the UV similarity parameter is less than or equal to the threshold, selecting the acquired basic face model.
8. The digital human processing method according to claim 7, characterized in that: in comparing the UV similarity parameter with the preset threshold, the preset threshold is adjusted according to the user-defined face model.
9. A digital human processing device, applied to the method according to any one of claims 1-8, characterized in that the digital human processing device comprises: an acquisition unit (3), a decomposition unit (6), a modification unit (2), and a combination unit (5);
the acquisition unit (3) is used for acquiring a basic face model;
the decomposition unit (6) is used for decomposing the basic face model after the acquisition unit (3) acquires the basic face model to obtain basic face model data;
the modification unit (2) is used for modifying the basic face model data;
the combination unit (5) is used for combining the modified basic face model data into a basic face model.
10. An electronic device (1000) comprising a processor (1001), a memory (1005) and a transceiver, the memory (1005) being configured to store instructions and the transceiver being configured to communicate with other devices, the processor (1001) being configured to execute the instructions stored in the memory (1005) to cause the electronic device (1000) to perform the method of any of claims 1-8.
CN202210753409.5A 2022-06-29 2022-06-29 Digital human processing method and device Pending CN115063516A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210753409.5A CN115063516A (en) 2022-06-29 2022-06-29 Digital human processing method and device

Publications (1)

Publication Number Publication Date
CN115063516A true CN115063516A (en) 2022-09-16

Family

ID=83203561

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210753409.5A Pending CN115063516A (en) 2022-06-29 2022-06-29 Digital human processing method and device

Country Status (1)

Country Link
CN (1) CN115063516A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination