CN114202597B - Image processing method and apparatus, device, medium and product - Google Patents

Image processing method and apparatus, device, medium and product

Info

Publication number
CN114202597B
Authority
CN
China
Prior art keywords
model
key point
head
hair
head model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111494653.6A
Other languages
Chinese (zh)
Other versions
CN114202597A (en)
Inventor
彭昊天 (Peng Haotian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111494653.6A
Publication of CN114202597A
Application granted
Publication of CN114202597B
Legal status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G06T 11/001 - Texturing; Colouring; Generation of texture or colour
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 - Geometric image transformations in the plane of the image
    • G06T 3/04 - Context-preserving transformations, e.g. by using an importance map

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Image Generation (AREA)

Abstract

The present disclosure provides an image processing method and apparatus, device, medium, and product, which relate to the field of artificial intelligence, and in particular to the technical field of computer vision, augmented/virtual reality, and image processing. The specific implementation scheme comprises the following steps: constructing a target head model based on the received head image of the object, the target head model comprising at least one first model keypoint; determining a mapping relation between at least one first model key point and at least one second model key point of a preset reference head model to obtain a key point mapping relation between a target head model and the reference head model; and generating a target hair style matched with the head of the object according to the key point mapping relation and a preset reference hair style model, wherein the reference hair style model is matched with the reference head model.

Description

Image processing method and apparatus, device, medium, and product
Technical Field
The present disclosure relates to the field of artificial intelligence, and more particularly to the field of computer vision, augmented/virtual reality, and image processing techniques, which may be applied in image processing scenarios.
Background
Virtual avatars are widely used in scenarios such as social networking, live streaming, and gaming. An avatar's appearance is strongly influenced by hair style reconstruction, which, done well, can meet a user's individual needs while reducing the cost of constructing the avatar. In some cases, however, creating a virtual hair style remains costly and the results are poor.
Disclosure of Invention
The present disclosure provides an image processing method and apparatus, device, medium, and product.
According to an aspect of the present disclosure, there is provided an image processing method including: constructing a target head model based on the received subject head image, the target head model comprising at least one first model keypoint; determining a mapping relation between the at least one first model key point and at least one second model key point of a preset reference head model to obtain a key point mapping relation between the target head model and the reference head model; and generating a target hair style matched with the head of the object according to the key point mapping relation and a preset reference hair style model, wherein the reference hair style model is matched with the reference head model.
According to another aspect of the present disclosure, there is provided an image processing apparatus including: a first processing module for constructing a target head model based on the received image of the subject head, the target head model comprising at least one first model keypoint; the second processing module is used for determining the mapping relation between the at least one first model key point and at least one second model key point of a preset reference head model to obtain the key point mapping relation between the target head model and the reference head model; and the third processing module is used for generating a target hair style matched with the head of the object according to the key point mapping relation and a preset reference hair style model, and the reference hair style model is matched with the reference head model.
According to another aspect of the present disclosure, there is provided an electronic device including: at least one processor and a memory communicatively coupled to the at least one processor. Wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image processing method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the image processing method described above.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the image processing method described above.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 schematically shows a system architecture of an image processing method and apparatus according to an embodiment of the present disclosure;
FIG. 2 schematically shows a flow diagram of an image processing method according to an embodiment of the present disclosure;
fig. 3 schematically shows a schematic diagram of an image processing method according to another embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of a method of determining a projection mapping relationship according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a process diagram for determining a projection mapping relationship according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a diagram of a reference head model with texture according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates a process diagram for determining projected coordinates of a hair root node according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a process for determining texture coordinates of a proxel according to an embodiment of the present disclosure;
fig. 9 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 10 schematically shows a block diagram of an electronic device for performing image processing according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the present disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs, unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
An embodiment of the present disclosure provides an image processing method. The image processing method comprises the following steps: constructing a target head model based on a received subject head image, wherein the target head model comprises at least one first model key point; determining a mapping relation between the at least one first model key point and at least one second model key point of a preset reference head model to obtain a key point mapping relation between the target head model and the reference head model; and generating a target hair style matched with the head of the subject according to the key point mapping relation and a preset reference hair style model, the reference hair style model being adapted to the reference head model.
Fig. 1 schematically shows a system architecture of an image processing method and apparatus according to an embodiment of the present disclosure. It should be noted that fig. 1 is only an example of a system architecture to which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, and does not mean that the embodiments of the present disclosure may not be applied to other devices, systems, environments or scenarios.
The system architecture 100 according to this embodiment may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used to provide a medium for communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few. The server 105 may be an independent physical server, a server cluster or a distributed system including a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud computing, web services, and middleware services.
The user may use the terminal devices 101, 102, 103 to interact with the server 105 via the network 104 to receive or send messages or the like. The terminal devices 101, 102, 103 may have various communication client applications installed thereon, such as social platform software, entertainment interaction type applications, search type applications, instant messaging tools, game clients and/or tool type applications, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having display screens and supporting data interaction, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server providing various services, such as a background processing server (for example only) providing support for requests submitted by users with the terminal devices 101, 102, 103. The background processing server may analyze and process data such as the received user request, and feed back a processing result (for example, data, information, or a web page obtained or generated according to the user request) to the terminal device.
For example, the server 105 receives an image of the head of the object from the terminal device 101, 102, 103, the server 105 being configured to build a target head model based on the received image of the head of the object, the target head model comprising at least one first model keypoint. The server 105 is further configured to determine a mapping relationship between at least one first model key point and at least one second model key point of the preset reference head model, obtain a key point mapping relationship between the target head model and the reference head model, and generate a target hair style matched with the head of the object according to the key point mapping relationship and the preset reference hair style model, where the reference hair style model is adapted to the reference head model.
It should be noted that the image processing method provided by the embodiment of the present disclosure may be executed by the server 105. Accordingly, the image processing apparatus provided by the embodiment of the present disclosure may be provided in the server 105. The image processing method provided by the embodiment of the present disclosure may also be executed by a server or a server cluster that is different from the server 105 and is capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the image processing apparatus provided in the embodiment of the present disclosure may also be provided in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for an implementation.
An image processing method according to an exemplary embodiment of the present disclosure is described below with reference to fig. 2 to 8 in conjunction with the system architecture of fig. 1. The image processing method of the embodiment of the present disclosure may be performed by the server 105 shown in fig. 1, for example.
Fig. 2 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the image processing method 200 of the embodiment of the present disclosure may include, for example, operations S210 to S230.
In operation S210, a target head model is constructed based on the received head image of the subject, the target head model including at least one first model keypoint.
In operation S220, a mapping relationship between at least one first model key point and at least one second model key point of a preset reference head model is determined, so as to obtain a key point mapping relationship between the target head model and the reference head model.
In operation S230, a target hair style matching the head of the object is generated according to the key point mapping relationship and a preset reference hair style model, wherein the reference hair style model is adapted to the reference head model.
An exemplary flow of each operation of the image processing method of the present embodiment is illustrated below.
Illustratively, the executing body of the image processing method may obtain the subject head image in various public, legally compliant ways, for example from a public data set, or from an authorized user after obtaining the user's authorization associated with the subject head image. The subject head image is not image data of a specific user and does not reflect the personal information of any specific user.
Based on the received subject head image, a virtual head model is generated that resembles the face shape, facial features, and other characteristic details of the image, yielding the target head model. Illustratively, PTA (Photo-to-Avatar, an avatar generation technique) may be used to generate, from a single face image, a three-dimensional head model aligned with the facial pose and appearance in that image: the facial pose in the subject head image is extracted, and the target head model is generated based on it.
The target head model includes at least one first model key point, which may include, for example, a scalp layer key point, a facial feature point, or the like. The first model key points have corresponding coordinate information in the target head model, e.g., corresponding vertex coordinates and texture coordinate information. The vertex coordinates may be three-dimensional coordinates represented by (x, y, z), and the texture coordinates may be two-dimensional coordinates represented by (u, v). A texture may comprise a picture in two-dimensional space, which may be viewed as a two-dimensional array of color values. A single color value is called a texel (texture element) and has a unique coordinate address in the texture.
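For concreteness, the coordinate data carried by a model key point could be represented as follows; this record is an illustrative sketch, and its name and fields are assumptions rather than anything defined by the disclosure:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ModelKeypoint:
    vertex: Tuple[float, float, float]  # (x, y, z) position in the head model
    uv: Tuple[float, float]             # (u, v) address into the 2D texture
```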
The reference head model may be a preset three-dimensional head model in a head model database. The reference head model includes at least one second model keypoint, which may include, for example, a scalp layer keypoint, a facial feature point, or the like. The second model keypoints have corresponding coordinate information in the reference head model, e.g. have corresponding vertex coordinates and texture coordinate information.
And determining the mapping relation between at least one first model key point of the target head model and at least one second model key point of the reference head model to obtain the key point mapping relation between the target head model and the reference head model. The target head model and the reference head model have a preset texture mapping relationship. By way of example, a matching keypoint pair of the at least one first model keypoint and the at least one second model keypoint may be determined from a texture mapping relationship between the target head model and the reference head model. And determining a key point mapping relation between the target head model and the reference head model according to the coordinate information of the matched key point pair.
The model keypoints may include facial feature points. By way of another example, at least one first facial feature point in the target head model may be extracted, and a coordinate transformation relationship between the at least one first facial feature point and a corresponding second facial feature point in the reference head model may be determined as a keypoint mapping relationship between the target head model and the reference head model.
Illustratively, the facial feature points in the target head model may be automatically calibrated using an ASM (Active Shape Model), yielding at least one first facial feature point in the target head model (for example, 73 facial feature points, including 15 facial contour feature points, 12 eyebrow feature points, 16 eye feature points, and the like). Determining the key point mapping relation between the target head model and the reference head model based on facial feature points reduces the difficulty of generating the target hair style and improves the degree of matching between the target hair style and the head of the subject.
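As a stand-in for the ASM calibration described above, an off-the-shelf landmark detector conveys the flavor of this step. The sketch below substitutes dlib's pretrained 68-point predictor for the 73-point scheme in the text, and the model file path is an assumption:

```python
import dlib

detector = dlib.get_frontal_face_detector()
# Assumed local path to dlib's pretrained 68-point landmark model,
# standing in for the 73-point ASM calibration described above.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_facial_feature_points(image):
    """Return 2D facial feature points for the first face found in an
    RGB image array, or an empty list if no face is detected."""
    faces = detector(image)
    if not faces:
        return []
    shape = predictor(image, faces[0])
    return [(p.x, p.y) for p in shape.parts()]
```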
The first and second facial feature points have corresponding coordinate information in the head model, for example, having corresponding vertex coordinate information. A coordinate transformation relationship (which may be, for example, a vertex coordinate transformation relationship) between at least one first facial feature point and a corresponding second facial feature point is determined as a keypoint mapping relationship between the target head model and the reference head model.
And generating a target hair style matched with the head of the subject according to the key point mapping relation between the target head model and the reference head model and a preset reference hair style model. The reference hair style model is adapted to the reference head model, and at least one second model key point of the reference head model has a projection mapping relation with the hair root nodes of the reference hair style model.
The reference hair style model can be a preset virtual hair style model in a hair style model database, the virtual hair style models in the hair style model database are kept consistent on a topological structure, and the scale of the head model adapted by the virtual hair style model is kept normalized. For example, the coordinates of the hair nodes in the reference hair style model may be adjusted according to the mapping relationship of the key points between the target head model and the reference head model, so as to obtain the target hair style matching with the head of the subject.
According to the embodiment of the disclosure, a target head model is constructed based on a received subject head image, the target head model comprising at least one first model key point; a mapping relation between the at least one first model key point and at least one second model key point of a preset reference head model is determined to obtain a key point mapping relation between the target head model and the reference head model; and a target hair style matched with the head of the subject is generated according to the key point mapping relation and a preset reference hair style model, the reference hair style model being adapted to the reference head model.
And registering the reference hair style model matched with the reference head model according to the key point mapping relation between the target head model and the reference head model to obtain the target hair style matched with the head of the object. By calculating the key point mapping relation between different head models, the hair space relation between different head models is effectively established, the hair style can be effectively migrated between different head models, the good virtual hair style generation capability can be realized, and the virtual image construction cost and the construction difficulty can be reduced.
Fig. 3 schematically shows a schematic diagram of an image processing method according to another embodiment of the present disclosure.
As shown in fig. 3, the method 300 may include, for example, operation S210, operation S320 to operation S340.
In operation S210, a target head model is constructed based on the received head image of the subject, the target head model including at least one first model keypoint.
In operation S320, a matching keypoint pair of the at least one first model keypoint and the at least one second model keypoint is determined according to a texture mapping relationship between the target head model and the reference head model.
In operation S330, a key point mapping relationship between the target head model and the reference head model is determined according to the coordinate information of the matching key point pair.
In operation S340, according to the key point mapping relationship and the hair node coordinates in the reference hair style model, the reference hair style model is subjected to hair registration to obtain a target hair style matched with the head of the object.
An exemplary flow of each operation of the image processing method of the present embodiment is exemplified below.
Illustratively, at least one second model key point of the reference head model has a projection mapping relationship with the hair root node of the reference hair style model. The second model key points may include scalp layer key points, that is, there is a projection mapping relationship between the scalp layer key points in the reference head model and the hair root nodes of the reference hair style model.
The target head model and the reference head model have a preset texture mapping relationship therebetween. The texture mapping relationship indicates that matching key point pairs in the target head model and the reference head model have texture coordinates in a preset mapping relationship; for example, it indicates that matching scalp layer key point pairs in the two models have the same texture coordinates. The matching scalp layer key point pairs correspond, respectively, to hair roots with the same node serial numbers in the reference hair style model and in the target hair style.
According to the texture mapping relation, model key points whose corresponding texture coordinates in the target head model and the reference head model satisfy the preset mapping relation are taken as matching key point pairs. Illustratively, model key points whose texture coordinates are identical in the two models are taken as matching key point pairs.
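A minimal sketch of this matching step, reusing the ModelKeypoint record sketched earlier; rounding the texture coordinates before comparison is an added assumption to absorb floating-point noise:

```python
def match_keypoint_pairs(target_keypoints, reference_keypoints, decimals=6):
    """Pair model key points whose (u, v) texture coordinates coincide."""
    # Index reference key points by rounded texture coordinates.
    ref_by_uv = {tuple(round(c, decimals) for c in kp.uv): kp
                 for kp in reference_keypoints}
    pairs = []
    for kp in target_keypoints:
        key = tuple(round(c, decimals) for c in kp.uv)
        if key in ref_by_uv:
            pairs.append((kp, ref_by_uv[key]))
    return pairs
```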
By determining the key point mapping relation between the target head model and the reference head model, the suitability between the generated target hair style and the object head is ensured. According to the consistency of the texture coordinates, the key point mapping relation between different head models is determined, and the construction difficulty and construction cost of the target hairstyle are favorably reduced.
And determining the coordinate transformation relation between the matching key point pairs according to the vertex coordinates of the matching key point pairs in the corresponding head models to be used as the key point mapping relation between the target head model and the reference head model. And determining a coordinate transformation relation of at least one matching key point pair, and estimating a key point transformation matrix of the target head model relative to the reference head model by a least square method, for example, determining a scalp layer key point coordinate transformation matrix of the target head model relative to the reference head model. The coordinate transformation relation of the matching key points in different head models is determined, the pose change and shape change information of different head models can be determined, and the target hair style with high adaptability to the head of the object can be generated.
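The least-squares estimate might be sketched as follows; representing the key point transformation as a 4x3 matrix in homogeneous coordinates (so that translation is captured alongside rotation and scaling) is an implementation assumption, not something the disclosure specifies:

```python
import numpy as np

def estimate_keypoint_transform(reference_pts, target_pts):
    """Least-squares transform mapping matched reference key point
    vertex coordinates onto the target head model's."""
    ref = np.asarray(reference_pts, dtype=float)      # shape (n, 3)
    tgt = np.asarray(target_pts, dtype=float)         # shape (n, 3)
    ref_h = np.hstack([ref, np.ones((len(ref), 1))])  # append 1s: (n, 4)
    X, *_ = np.linalg.lstsq(ref_h, tgt, rcond=None)   # solves ref_h @ X ≈ tgt
    return X  # (4, 3): rotation/scale in rows 0-2, translation in row 3
```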
In another example, a fitting operation for the reference head model may be performed, and in the fitting process, a target head model is obtained by iteratively estimating fitting parameters through least square optimization according to a current fitting effect. In the head model fitting process, translation and rotation parameters for a reference head model are calculated, and scaling parameters are calculated that align the two head model bounding boxes. And determining a key point mapping relation between the target head model and the reference head model according to the translation and rotation parameters aiming at the reference head model and the scaling parameters for aligning the two head model bounding boxes.
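For the fitting variant just described, the scaling parameters aligning the two head models' bounding boxes can be computed as below; treating the scale as per-axis rather than uniform is an assumption:

```python
import numpy as np

def bounding_box_scale(target_vertices, reference_vertices):
    """Per-axis scale factors that align the reference head model's
    bounding box with the target head model's."""
    tgt = np.asarray(target_vertices, dtype=float)    # (n, 3) vertex array
    ref = np.asarray(reference_vertices, dtype=float)
    tgt_extent = tgt.max(axis=0) - tgt.min(axis=0)
    ref_extent = ref.max(axis=0) - ref.min(axis=0)
    return tgt_extent / ref_extent  # apply to the reference model
```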
And carrying out hair registration on the reference hair style model according to the key point mapping relation and the hair node coordinates in the reference hair style model, so as to obtain a target hair style matched with the head of the subject. The hair nodes may comprise hair root nodes and non-hair-root nodes, a hair root node being the first node of a hair strand. Illustratively, according to the key point mapping relation and the hair root node coordinates, the hair root nodes matched with at least one scalp layer key point of the reference head model are registered to obtain registered hair root nodes. For example, the vertex coordinates of the hair root nodes matched with the at least one scalp layer key point are adjusted to obtain hair root nodes matched with the target head model.
And according to the registered hair root nodes, registering the non-hair root nodes in the reference hair style model to obtain a target hair style matched with the head of the object. The target hair style and the reference hair style model are kept consistent in topological structure, and a projection mapping relation exists between a hair root node in the target hair style and a scalp layer key point in the target head model.
Illustratively, there is a projection mapping relationship between the scalp layer key point of the reference head model and the hair root node of the reference hair style model, and there is a projection mapping relationship between the scalp layer key point of the target head model and the hair root node of the target hair style. The projection mapping relation between the matching scalp layer key point pairs in different head models and the corresponding hair root nodes is the same, so that the hair root nodes in different hair style models can have the hair root mapping relation. The coordinate transformation relation between the matching hair root node pairs in the corresponding different hair style models can be determined according to the coordinate transformation relation between the matching scalp layer key point pairs in the different head models.
Assume that the hair root coordinates of the reference hair style model form a coordinate vector A. According to the coordinate transformation relationship of at least one matching scalp layer key point pair between the target head model and the reference head model, the coordinate transformation relationship between corresponding hair root node pairs in the target hair style and the reference hair style model is determined, yielding the hair root coordinate vector B of the target hair style.
The hair root coordinate vector A of the reference hair style model and the hair root coordinate vector B of the target hair style satisfy AX = B, where X denotes a transformation matrix. The transformation matrix may be estimated by the least squares method as X = (AᵀA)⁻¹AᵀB. The hair root coordinate vector of the target hair style under this transformation matrix, B′ = AX, reflects the hair root distribution of the target hair style more accurately.
Let A_full denote the reference hair style model and B_full the target hair style, where the hair root coordinate vector in B_full is B′ = AX. Since the non-hair-root nodes in the different hair style models satisfy the same coordinate mapping relationship, B_full can be expressed as B_full = A_full X. By way of example, a corrected B_full′ may then be computed from B_full so as to perform local coordinate correction on the hair in the target hair style.
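Read together, these formulas amount to solving AX = B by least squares over the hair root coordinates and then applying the same X to the full set of hair nodes. A minimal NumPy sketch under that reading, with the function name and (n, 3) array shapes as assumptions:

```python
import numpy as np

def register_hairstyle(ref_roots, target_roots, ref_all_nodes):
    """Estimate X from AX = B over the hair root coordinates, then
    apply it to every hair node of the reference hair style model."""
    A = np.asarray(ref_roots, dtype=float)      # reference hair root coordinates
    B = np.asarray(target_roots, dtype=float)   # registered target hair roots
    # X = (A^T A)^(-1) A^T B, computed stably via least squares.
    X, *_ = np.linalg.lstsq(A, B, rcond=None)
    A_full = np.asarray(ref_all_nodes, dtype=float)
    return A_full @ X                           # B_full = A_full X
```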
The process of registering the reference hair style model according to the key point mapping relationship can be regarded as a process of performing horizontal scaling or rotation on the reference hair style model. According to the texture mapping relation and the key point mapping relation, the hair space relation between different head models is effectively established, and the hair style can be effectively transferred between different head models. The mapping relation of key points among different head models is determined, and the support of hair style migration on the head models with different sizes, different rotation angles and different affine transformations can be effectively guaranteed. By registering hair nodes of the reference hair style model, the construction cost and the construction difficulty of the target hair style can be effectively reduced, the adaptability between the target hair style and the head of the object is improved, and the better virtual hair style generation capability is favorably realized.
Fig. 4 schematically illustrates a schematic diagram of a method of determining a projection mapping relationship according to an embodiment of the present disclosure.
As shown in fig. 4, the method 400 may include, for example, operations S410 through S430.
In operation S410, in the reference head model, a reference model region matching the hair root node of the reference hair style model is determined, the reference model region including at least one second model sub-keypoint.
In operation S420, a coordinate mapping relationship between at least one second model child key point and a corresponding hair root node is determined.
In operation S430, a corresponding hair root node is projected in the reference model region according to the coordinate mapping relationship and the texture coordinates of the at least one second model sub-key point, so as to obtain a projection mapping relationship between the corresponding hair root node and the at least one second model sub-key point.
An exemplary flow of each operation of the method of determining a projection mapping relationship of the present embodiment is illustrated below.
For example, for a hair root node of which the projection mapping relationship is to be determined, a model region closest to the hair root node may be determined in the reference head model as a reference model region matched with the hair root node. The reference model region comprises at least one second model sub-keypoint, it being understood that the at least one second model sub-keypoint may be at least part of a second model keypoint of the reference head model.
Illustratively, the model patch closest to the hair root node may be determined; the model patch may be, for example, a triangular bounding box constructed from three patch vertices. Let the coordinates of the hair root node be r_i, and let the vertex coordinates of the three patch vertices be v_ja, v_jb and v_jc, each of which may be a three-dimensional coordinate vector. The distance between the hair root node and the model patch can be expressed by equation (1):

d = (v_ja - r_i)² + (v_jb - r_i)² + (v_jc - r_i)²   (1)
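Equation (1) transcribes directly to code once each squared term is read as a squared Euclidean norm, which the plain-text formula leaves implicit; the nearest-patch scan is an assumed usage:

```python
import numpy as np

def patch_distance(r_i, v_ja, v_jb, v_jc):
    """Equation (1): d = (v_ja - r_i)^2 + (v_jb - r_i)^2 + (v_jc - r_i)^2,
    with each term taken as a squared Euclidean norm."""
    r_i = np.asarray(r_i, dtype=float)
    return sum(float(np.sum((np.asarray(v, dtype=float) - r_i) ** 2))
               for v in (v_ja, v_jb, v_jc))

def nearest_patch(r_i, patches):
    """Return the patch (triple of vertices) minimizing equation (1)."""
    return min(patches, key=lambda p: patch_distance(r_i, *p))
```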
When determining the coordinate mapping relation between the at least one second model sub-key point and the corresponding hair root node, the projection coordinates of the hair root node in the matched reference model region are determined first; then, according to the projection coordinates and the vertex coordinates of the at least one second model sub-key point, vertex proportion coefficients between the projection coordinates and the vertex coordinates are determined and used as the coordinate mapping relation.
Given v_ja, v_jb, v_jc and r_i:

vec_α = v_jb - v_ja   (2)
vec_β = v_jc - v_ja   (3)
vec_p = r_i - v_ja   (4)
proj_i = r_i - vec_n * len(vec_p) * dot(normalize(vec_p), vec_n)   (5)

where * represents the vector cross product, dot the vector dot product, normalize vector normalization, and len the modulus of a vector; vec_n denotes the normal vector of the matched patch, and proj_i the projected coordinates of the hair root node within the matched reference model region.
The projection coordinates proj_i of the hair root node in the reference model region can also be expressed by equation (6):

proj_i = v_ja * w_a + v_jb * w_b + v_jc * w_c   (6)

w_a, w_b and w_c denote the weight coefficients corresponding to v_ja, v_jb and v_jc, respectively. With proj_i, v_ja, v_jb and v_jc known, w_a, w_b and w_c can be calculated from equation (6); they indicate the vertex weight coefficients between the projection coordinates and the three vertex coordinates. The calculated vertex proportion coefficients serve as the coordinate mapping relation between the hair root node and the at least one second model sub-key point.
Let the texture coordinates of the three patch vertices of the reference model region be UV_ja, UV_jb and UV_jc, respectively, and let the texture coordinates of the projection point of the hair root node in the reference model region be UVproj_i. UVproj_i can then be calculated using equation (7):

UVproj_i = UV_ja * w_a + UV_jb * w_b + UV_jc * w_c   (7)
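Equations (2) through (7) chain together as sketched below. Two reading assumptions are made explicit in the code: vec_n is computed as the normalized cross product of vec_α and vec_β (the unit normal of the patch, which the text uses but does not define), and the weights of equation (6) are obtained by solving the three vertex coordinates as a linear system:

```python
import numpy as np

def project_root_node(r_i, v_ja, v_jb, v_jc, uv_ja, uv_jb, uv_jc):
    """Project a hair root node onto its matched patch and interpolate
    texture coordinates, following equations (2)-(7)."""
    r_i, v_ja, v_jb, v_jc = (np.asarray(v, dtype=float)
                             for v in (r_i, v_ja, v_jb, v_jc))
    vec_alpha = v_jb - v_ja                 # equation (2)
    vec_beta = v_jc - v_ja                  # equation (3)
    vec_p = r_i - v_ja                      # equation (4); assumed nonzero
    vec_n = np.cross(vec_alpha, vec_beta)
    vec_n /= np.linalg.norm(vec_n)          # assumed unit patch normal
    # Equation (5): subtract the normal component of vec_p from r_i.
    proj = r_i - vec_n * np.linalg.norm(vec_p) * np.dot(
        vec_p / np.linalg.norm(vec_p), vec_n)
    # Equation (6): solve proj = w_a*v_ja + w_b*v_jb + w_c*v_jc for the weights.
    V = np.column_stack([v_ja, v_jb, v_jc])  # 3x3 vertex matrix
    w, *_ = np.linalg.lstsq(V, proj, rcond=None)
    # Equation (7): interpolate texture coordinates with the same weights.
    uvs = np.stack([np.asarray(uv, dtype=float) for uv in (uv_ja, uv_jb, uv_jc)])
    uv_proj = w @ uvs
    return proj, w, uv_proj
```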
And projecting the hair root nodes in the reference model area according to the projection coordinates of the hair root nodes in the reference model area and the texture coordinates of the projection points of the hair root nodes in the reference model area, so as to obtain the projection mapping relation between the corresponding hair root nodes and the second model sub-key points in the reference head model.
By establishing the projection mapping relation between the reference hair style model and the reference head model, the adaptation degree between the reference hair style model and the reference head model can be effectively ensured, and the method is favorable for providing reliable data support for hair style migration based on the reference hair style model.
Fig. 5 schematically shows a process diagram for determining a projection mapping relationship according to an embodiment of the present disclosure.
As shown in fig. 5, to determine the projection mapping relationship between the reference head model 501 and the hair root nodes 502 of the reference hair style model, a reference model region matching a hair root node 502 of the reference hair style model is determined in the reference head model 501, the reference model region including at least one second model sub-key point. A coordinate mapping relation between the at least one second model sub-key point and the corresponding hair root node is determined, and the corresponding hair root node 502 is projected into the reference model region of the reference head model 501 according to the coordinate mapping relation and the texture coordinates of the at least one second model sub-key point, yielding the projected reference head model 503.
FIG. 6 schematically illustrates a diagram of a reference head model with texture according to an embodiment of the present disclosure.
As shown in fig. 6, a hair root node is projected in the reference model region based on the projection coordinates of the hair root node in the reference model region and the texture coordinates of the projection point of the hair root node in the reference model region, and a textured reference head model 601 is obtained.
Fig. 7 schematically shows a process diagram for determining projection coordinates of a hair root node according to an embodiment of the present disclosure.
As shown in FIG. 7, the coordinates of the hair root node are r_i, and the vertex coordinates of the three patch vertices (i.e., second model sub-key points) in the reference model region matched with the hair root node are v_ja, v_jb and v_jc; r_i, v_ja, v_jb and v_jc are each three-dimensional coordinate vectors, with vec_α = v_jb - v_ja, vec_β = v_jc - v_ja and vec_p = r_i - v_ja.
proj_i denotes the projection coordinates of the hair root node in the reference model region and can be expressed as:

proj_i = r_i - vec_n * len(vec_p) * dot(normalize(vec_p), vec_n)
proj_i = v_ja * w_a + v_jb * w_b + v_jc * w_c

w_a, w_b and w_c indicate the vertex weight coefficients between the projection coordinates of the hair root node and the vertex coordinates of the three patch vertices.
Fig. 8 schematically shows a process diagram for determining texture coordinates of a proxel according to an embodiment of the present disclosure.
As shown in fig. 8, the vertex coordinates of the three patch vertices in the reference model region matched with the hair root node are v_ja, v_jb and v_jc, and the texture coordinates of the three patch vertices are UV_ja, UV_jb and UV_jc, respectively. proj_i denotes the projection coordinates of the hair root node in the reference model region, and UVproj_i denotes the texture coordinates of the projection point of the hair root node in the reference model region, which can be expressed as:

UVproj_i = UV_ja * w_a + UV_jb * w_b + UV_jc * w_c
fig. 9 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 9, the image processing apparatus 900 of the embodiment of the present disclosure includes, for example, a first processing module 910, a second processing module 920, and a third processing module 930.
A first processing module 910 configured to construct a target head model based on the received head image of the subject, the target head model including at least one first model keypoint; a second processing module 920, configured to determine a mapping relationship between at least one first model key point and at least one second model key point of a preset reference head model, to obtain a key point mapping relationship between a target head model and the reference head model; and a third processing module 930, configured to generate a target hair style matched with the head of the object according to the key point mapping relationship and a preset reference hair style model, where the reference hair style model is adapted to the reference head model.
According to the embodiment of the disclosure, a target head model is constructed based on a received subject head image, the target head model comprising at least one first model key point; a mapping relation between the at least one first model key point and at least one second model key point of a preset reference head model is determined to obtain a key point mapping relation between the target head model and the reference head model; and a target hair style matched with the head of the subject is generated according to the key point mapping relation and a preset reference hair style model, the reference hair style model being adapted to the reference head model.
And registering the reference hair style model matched with the reference head model according to the key point mapping relation between the target head model and the reference head model to obtain the target hair style matched with the head of the object. By calculating the key point mapping relation between different head models, the hair space relation between different head models is effectively established, the hair style can be effectively migrated between different head models, the good virtual hair style generation capability can be realized, and the virtual image construction cost and the construction difficulty can be reduced.
According to an embodiment of the present disclosure, the second processing module includes: the first processing submodule is used for determining a matching key point pair in at least one first model key point and at least one second model key point according to the texture mapping relation between the target head model and the reference head model; and the second processing submodule is used for determining a key point mapping relation between the target head model and the reference head model according to the coordinate information of the matched key point pair.
According to an embodiment of the present disclosure, the texture mapping relationship indicates texture coordinates of matching key point pairs in the target head model and the reference head model having a preset mapping relationship; the first processing submodule includes: and the first processing unit is used for taking model key points of which corresponding texture coordinates in the target head model and the reference head model have a preset mapping relation as matching key point pairs according to the texture mapping relation.
According to an embodiment of the present disclosure, a first processing unit includes: and the first processing subunit is used for taking the model key points with the same texture coordinates in the target head model and the reference head model as matching key point pairs.
According to an embodiment of the present disclosure, the second processing sub-module includes: and the second processing unit is used for determining the coordinate transformation relation between the matching key point pairs according to the vertex coordinates of the matching key point pairs in the corresponding head models to serve as the key point mapping relation between the target head model and the reference head model.
According to an embodiment of the present disclosure, the third processing module includes: and the third processing submodule is used for carrying out hair registration on the reference hair style model according to the key point mapping relation and the hair node coordinates in the reference hair style model so as to obtain a target hair style matched with the head of the object.
According to an embodiment of the present disclosure, the model key points include scalp layer key points, and the hair nodes include hair root nodes and non-hair root nodes; the third processing submodule includes: the third processing unit is used for registering the hair root nodes matched with at least one scalp layer key point of the reference head model according to the key point mapping relation and the hair root node coordinates to obtain the registered hair root nodes; and the fourth processing unit is used for registering non-hair root nodes in the reference hair style model according to the registered hair root nodes so as to obtain a target hair style matched with the head of the object, and a projection mapping relation is formed between at least one scalp layer key point of the reference head model and the corresponding hair root node.
According to an embodiment of the present disclosure, the model keypoints comprise facial feature points; the second processing module further comprises: the fourth processing submodule is used for extracting at least one first facial feature point in the target head model; and the fifth processing submodule is used for determining a coordinate transformation relation between at least one first face characteristic point and a corresponding second face characteristic point in the reference head model to serve as a key point mapping relation.
According to an embodiment of the present disclosure, there is a projection mapping relationship between at least one second model key point of the reference head model and a hair root node of the reference hair style model; the device also comprises a fourth processing module used for determining the projection mapping relation, and the fourth processing module comprises: the sixth processing submodule is used for determining a reference model area matched with a hair root node of a reference hair style model in the reference head model, and the reference model area comprises at least one second model sub-key point; the seventh processing submodule is used for determining a coordinate mapping relation between at least one second model child key point and the corresponding hair root node; and the eighth processing submodule is used for projecting the corresponding hair root node in the reference model area according to the coordinate mapping relation and the texture coordinate of the at least one second model sub-key point so as to obtain the projection mapping relation between the corresponding hair root node and the at least one second model sub-key point.
According to an embodiment of the present disclosure, the sixth processing sub-module includes: and the fifth processing unit is used for determining a model area closest to the hair root node in the reference head model as a reference model area matched with the hair root node.
According to an embodiment of the disclosure, the seventh processing submodule includes a sixth processing unit to: determining projection coordinates of corresponding hair root nodes in a reference model area; and determining a vertex proportion coefficient between the projection coordinate and at least one vertex coordinate according to the projection coordinate and the vertex coordinate of at least one second model sub-key point to be used as a coordinate mapping relation.
According to an embodiment of the present disclosure, the eighth processing submodule includes: the seventh processing unit is used for determining texture coordinates associated with corresponding hair root nodes according to the vertex proportion coefficient and the texture coordinates of at least one second model sub-key point; and an eighth processing unit for projecting the corresponding hair root node in the reference model region according to the projection coordinates and the texture coordinates associated with the corresponding hair root node.
It should be noted that the technical solutions of the present disclosure, including the processes of collecting, storing, using, processing, transmitting, providing, disclosing and the like, all comply with the regulations of the relevant laws and regulations, and do not violate the customs of the public order.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
Fig. 10 schematically shows a block diagram of an electronic device for performing image processing according to an embodiment of the present disclosure.
FIG. 10 shows a schematic block diagram of an example electronic device 1000 that can be used to implement embodiments of the present disclosure. The electronic device 1000 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 10, the device 1000 includes a computing unit 1001 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 1002 or a computer program loaded from a storage unit 1008 into a Random Access Memory (RAM) 1003. The RAM 1003 can also store various programs and data necessary for the operation of the device 1000. The computing unit 1001, the ROM 1002, and the RAM 1003 are connected to one another by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
A number of components in device 1000 are connected to I/O interface 1005, including: an input unit 1006 such as a keyboard, a mouse, and the like; an output unit 1007 such as various types of displays, speakers, and the like; a storage unit 1008 such as a magnetic disk, an optical disk, or the like; and a communication unit 1009 such as a network card, a modem, a wireless communication transceiver, or the like. The communication unit 1009 allows the device 1000 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
Computing unit 1001 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 1001 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 1001 executes the respective methods and processes described above, such as the image processing method. For example, in some embodiments, the image processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 1008. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1000 via the ROM 1002 and/or the communication unit 1009. When the computer program is loaded into the RAM 1003 and executed by the computing unit 1001, one or more steps of the image processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1001 may be configured to perform the image processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with an object, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to an object; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which objects can provide input to the computer. Other kinds of devices may also be used to provide for interaction with an object; for example, feedback provided to the subject can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the object may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., an object computer having a graphical user interface or a web browser through which the object can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order; no limitation is imposed herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (22)

1. An image processing method comprising:
constructing a target head model based on a received head image of an object, wherein the target head model comprises at least one first model key point;
determining a mapping relationship between the at least one first model key point and at least one second model key point of a preset reference head model, to obtain a key point mapping relationship between the target head model and the reference head model; and
generating a target hair style matched with the head of the object according to the key point mapping relationship and a preset reference hair style model, wherein the reference hair style model is adapted to the reference head model,
wherein the determining a mapping relationship between the at least one first model key point and at least one second model key point of a preset reference head model to obtain a key point mapping relationship between the target head model and the reference head model comprises:
determining a matching key point pair of the at least one first model key point and the at least one second model key point according to a texture mapping relationship between the target head model and the reference head model; and
determining the key point mapping relationship between the target head model and the reference head model according to coordinate information of the matching key point pairs,
and wherein the generating a target hair style matched with the head of the object according to the key point mapping relationship and the preset reference hair style model comprises:
performing hair registration on the reference hair style model according to the key point mapping relationship and hair node coordinates in the reference hair style model, to obtain the target hair style matched with the head of the object.
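(Illustrative note, not part of the claims: the three claimed operations can be pictured with a minimal Python/NumPy sketch. All function names and the toy data below are assumptions of this note rather than definitions from the disclosure, and the key point mapping is reduced to a bare centroid translation for brevity; a fuller coordinate transformation is sketched under claim 4.)

```python
import numpy as np

def match_by_texture(target_uv, ref_uv):
    """Pair key points whose (rounded) texture coordinates coincide."""
    ref_index = {tuple(np.round(uv, 4)): i for i, uv in enumerate(ref_uv)}
    return np.array([(t, ref_index[tuple(np.round(uv, 4))])
                     for t, uv in enumerate(target_uv)
                     if tuple(np.round(uv, 4)) in ref_index])

def keypoint_mapping(target_kpts, ref_kpts, pairs):
    """Toy key point mapping: translation between matched centroids."""
    return target_kpts[pairs[:, 0]].mean(0) - ref_kpts[pairs[:, 1]].mean(0)

def register_hair(ref_hair, offset):
    """Hair registration: apply the mapping to every node of each strand."""
    return [strand + offset for strand in ref_hair]

# Toy data: three key points per model with shared UVs, one 2-node strand.
target_kpts = np.array([[0.5, 0.5, 0.5], [1.5, 0.5, 0.5], [0.5, 1.5, 0.5]])
ref_kpts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
uv = np.array([[0.1, 0.1], [0.9, 0.1], [0.1, 0.9]])

pairs = match_by_texture(uv, uv)                  # matching key point pairs
offset = keypoint_mapping(target_kpts, ref_kpts, pairs)
target_style = register_hair([np.array([[0.0, 1.0, 0.0],
                                        [0.0, 1.2, 0.1]])], offset)
```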
2. The method of claim 1, wherein,
the texture mapping relationship indicates texture coordinates, in the target head model and the reference head model, of matching key point pairs having a preset mapping relationship;
the determining a matching key point pair of the at least one first model key point and the at least one second model key point according to the texture mapping relationship between the target head model and the reference head model comprises:
taking, according to the texture mapping relationship, model key points whose corresponding texture coordinates in the target head model and the reference head model have the preset mapping relationship as the matching key point pairs.
3. The method according to claim 2, wherein the taking, according to the texture mapping relationship, model key points whose corresponding texture coordinates in the target head model and the reference head model have the preset mapping relationship as the matching key point pairs comprises:
taking model key points having the same texture coordinates in the target head model and the reference head model as the matching key point pairs.
4. The method of claim 1, wherein the determining a key point mapping relationship between the target head model and the reference head model according to the coordinate information of the matching key point pairs comprises:
determining a coordinate transformation relationship between the matching key point pairs according to vertex coordinates of the matching key point pairs in their respective head models, as the key point mapping relationship between the target head model and the reference head model.
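(Illustrative note: one standard way to realize such a coordinate transformation relationship from matched vertex coordinates is a least-squares similarity transform (scale, rotation, translation) estimated in the Umeyama style. The sketch below is an assumption of this note, not the disclosure's prescribed formula.)

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares s, R, t with dst ~ s * src @ R.T + t, the classic
    Umeyama construction on matched point sets."""
    mu_src, mu_dst = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_src, dst - mu_dst
    cov = dst_c.T @ src_c / len(src)               # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                             # guard against reflection
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / src_c.var(0).sum()
    t = mu_dst - s * R @ mu_src
    return s, R, t

# Matched pairs from claim 1 would supply src (reference) and dst (target).
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
dst = 2.0 * src + np.array([0.1, 0.2, 0.3])
s, R, t = similarity_transform(src, dst)   # s ~ 2, R ~ I, t ~ (0.1, 0.2, 0.3)
```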
5. The method of claim 1, wherein,
the model key points comprise scalp layer key points, and the hair nodes comprise hair root nodes and non-hair root nodes;
and the performing hair registration on the reference hair style model according to the key point mapping relationship and the hair node coordinates in the reference hair style model to obtain the target hair style matched with the head of the object comprises:
registering, according to the key point mapping relationship and hair root node coordinates, hair root nodes matched with at least one scalp layer key point of the reference head model, to obtain registered hair root nodes; and
registering non-hair root nodes in the reference hair style model according to the registered hair root nodes, to obtain the target hair style matched with the head of the object,
wherein the at least one scalp layer key point of the reference head model and the corresponding hair root nodes have a projection mapping relationship.
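(Illustrative note: claim 5's two-stage registration, roots first and then non-root nodes carried along, can be pictured as follows. The strand layout with row 0 as the root, the helper name register_strands, and the offset-preserving rule for non-root nodes are assumptions of this sketch, not the disclosure's method.)

```python
import numpy as np

def register_strands(strands, map_root):
    """Two-stage registration: move each root node with the key point
    mapping, then register the non-root nodes by preserving each node's
    original offset from its root, so the strand shape is kept."""
    registered = []
    for strand in strands:                   # strand: (k, 3), row 0 = root
        new_root = map_root(strand[0])       # stage 1: register the root
        registered.append(new_root + (strand - strand[0]))   # stage 2
    return registered

# Hypothetical mapping derived from the scalp-layer key point pairs.
shift = np.array([0.0, 0.05, 0.0])
strand = np.array([[0.0, 1.0, 0.0], [0.0, 1.1, 0.0], [0.02, 1.2, 0.05]])
registered = register_strands([strand], lambda root: root + shift)
```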
6. The method of claim 1, wherein,
the model key points comprise facial feature points;
the determining a mapping relationship between the at least one first model key point and at least one second model key point of a preset reference head model to obtain a key point mapping relationship between the target head model and the reference head model comprises:
extracting at least one first facial feature point in the target head model; and
determining a coordinate transformation relationship between the at least one first facial feature point and a corresponding second facial feature point in the reference head model as the key point mapping relationship.
7. The method of claim 1, wherein,
at least one second model key point of the reference head model and a hair root node of the reference hair style model have a projection mapping relationship;
the projection mapping relationship is determined by:
determining, in the reference head model, a reference model region matching a hair root node of the reference hair style model, wherein the reference model region comprises at least one second model sub-key point;
determining a coordinate mapping relationship between the at least one second model sub-key point and the corresponding hair root node; and
projecting the corresponding hair root node in the reference model region according to the coordinate mapping relationship and texture coordinates of the at least one second model sub-key point, to obtain the projection mapping relationship between the corresponding hair root node and the at least one second model sub-key point.
8. The method according to claim 7, wherein the determining, in the reference head model, a reference model region matching a hair root node of the reference hair style model comprises:
determining, in the reference head model, a model region closest to the hair root node as the reference model region matched with the hair root node.
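(Illustrative note: "the model region closest to the hair root node" admits a simple nearest-triangle reading, for example by centroid distance; the mesh layout and names below are assumptions of this sketch.)

```python
import numpy as np

def nearest_region(root, vertices, triangles):
    """Pick the triangle (model region) whose centroid lies closest to the
    hair root node; one simple reading of 'closest model region'."""
    centroids = vertices[triangles].mean(axis=1)           # (m, 3)
    return int(np.argmin(np.linalg.norm(centroids - root, axis=1)))

# Toy scalp mesh: two triangles over a unit quad.
verts = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
tris = np.array([[0, 1, 2], [0, 2, 3]])
idx = nearest_region(np.array([0.2, 0.8, 0.1]), verts, tris)   # -> 1
```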
9. The method of claim 7, wherein the determining a coordinate mapping relationship between the at least one second model sub-key point and the corresponding hair root node comprises:
determining projection coordinates of the corresponding hair root node in the reference model region; and
determining, according to the projection coordinates and vertex coordinates of the at least one second model sub-key point, a vertex proportion coefficient between the projection coordinates and at least one of the vertex coordinates, as the coordinate mapping relationship.
10. The method of claim 9, wherein the projecting the corresponding hair root node in the reference model region according to the coordinate mapping relationship and the texture coordinates of the at least one second model sub-key point comprises:
determining texture coordinates associated with the corresponding hair root node according to the vertex proportion coefficient and the texture coordinates of the at least one second model sub-key point; and
projecting the corresponding hair root node in the reference model region according to the projection coordinates and the texture coordinates associated with the corresponding hair root node.
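(Illustrative note: the vertex proportion coefficients of claims 9 and 10 behave like barycentric coordinates: project the root onto the region's plane, express the projection as coefficients over the region's vertices, and reuse the same coefficients to interpolate a texture coordinate. The sketch below is one conventional construction, assumed rather than quoted from the disclosure.)

```python
import numpy as np

def project_root(root, tri_xyz, tri_uv):
    """Project a hair root onto a triangle's plane, express the projection
    as vertex proportion (barycentric) coefficients over the triangle's
    vertices, and reuse those coefficients to interpolate a texture
    coordinate for the root."""
    a, b, c = tri_xyz
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    proj = root - np.dot(root - a, n) * n          # projection coordinates
    v0, v1, v2 = b - a, c - a, proj - a            # barycentric setup
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    w1 = (d11 * d20 - d01 * d21) / denom
    w2 = (d00 * d21 - d01 * d20) / denom
    coeffs = np.array([1.0 - w1 - w2, w1, w2])     # vertex proportion coeffs
    return proj, coeffs, coeffs @ tri_uv           # interpolated UV

tri_xyz = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
tri_uv = np.array([[0., 0.], [1., 0.], [0., 1.]])
proj, coeffs, uv = project_root(np.array([0.2, 0.3, 0.4]), tri_xyz, tri_uv)
# proj = (0.2, 0.3, 0.0), coeffs = (0.5, 0.2, 0.3), uv = (0.2, 0.3)
```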
11. An image processing apparatus comprising:
a first processing module configured to construct a target head model based on a received head image of an object, wherein the target head model comprises at least one first model key point;
a second processing module configured to determine a mapping relationship between the at least one first model key point and at least one second model key point of a preset reference head model to obtain a key point mapping relationship between the target head model and the reference head model; and
a third processing module configured to generate a target hair style matched with the head of the object according to the key point mapping relationship and a preset reference hair style model, wherein the reference hair style model is adapted to the reference head model;
the second processing module comprises:
a first processing sub-module, configured to determine, according to a texture mapping relationship between the target head model and the reference head model, a matching keypoint pair of the at least one first model keypoint and the at least one second model keypoint; and
a second processing submodule, configured to determine a key point mapping relationship between the target head model and the reference head model according to the coordinate information of the matching key point pair,
the third processing module comprises:
and the third processing submodule is used for carrying out hair registration on the reference hair style model according to the key point mapping relation and the hair node coordinates in the reference hair style model so as to obtain a target hair style matched with the head of the object.
12. The apparatus of claim 11, wherein,
the texture mapping relationship indicates texture coordinates, in the target head model and the reference head model, of matching key point pairs having a preset mapping relationship;
the first processing sub-module comprises:
a first processing unit configured to take, according to the texture mapping relationship, model key points whose corresponding texture coordinates in the target head model and the reference head model have the preset mapping relationship as the matching key point pairs.
13. The apparatus of claim 12, wherein the first processing unit comprises:
a first processing subunit configured to take model key points having the same texture coordinates in the target head model and the reference head model as the matching key point pairs.
14. The apparatus of claim 11, wherein the second processing sub-module comprises:
a second processing unit configured to determine a coordinate transformation relationship between the matching key point pairs according to vertex coordinates of the matching key point pairs in their respective head models, as the key point mapping relationship between the target head model and the reference head model.
15. The apparatus of claim 11, wherein,
the model key points comprise scalp layer key points, and the hair nodes comprise hair root nodes and non-hair root nodes;
the third processing sub-module comprises:
a third processing unit configured to register, according to the key point mapping relationship and hair root node coordinates, hair root nodes matched with at least one scalp layer key point of the reference head model, to obtain registered hair root nodes; and
a fourth processing unit configured to register non-hair root nodes in the reference hair style model according to the registered hair root nodes, to obtain a target hair style matched with the head of the object,
wherein the at least one scalp layer key point of the reference head model and the corresponding hair root nodes have a projection mapping relationship.
16. The apparatus of claim 11, wherein,
the model key points comprise facial feature points;
the second processing module further comprises:
a fourth processing sub-module configured to extract at least one first facial feature point in the target head model; and
a fifth processing sub-module configured to determine a coordinate transformation relationship between the at least one first facial feature point and a corresponding second facial feature point in the reference head model as the key point mapping relationship.
17. The apparatus of claim 11, wherein,
at least one second model key point of the reference head model and a hair root node of the reference hair style model have a projection mapping relationship;
the apparatus further comprises a fourth processing module configured to determine the projection mapping relationship,
wherein the fourth processing module comprises:
a sixth processing sub-module configured to determine, in the reference head model, a reference model region matching a hair root node of the reference hair style model, wherein the reference model region comprises at least one second model sub-key point;
a seventh processing sub-module configured to determine a coordinate mapping relationship between the at least one second model sub-key point and the corresponding hair root node; and
an eighth processing sub-module configured to project the corresponding hair root node in the reference model region according to the coordinate mapping relationship and texture coordinates of the at least one second model sub-key point, to obtain the projection mapping relationship between the corresponding hair root node and the at least one second model sub-key point.
18. The apparatus of claim 17, wherein the sixth processing sub-module comprises:
a fifth processing unit configured to determine, in the reference head model, a model region closest to the hair root node as the reference model region matched with the hair root node.
19. The apparatus of claim 17, wherein the seventh processing sub-module comprises a sixth processing unit configured to:
determine projection coordinates of the corresponding hair root node in the reference model region; and
determine, according to the projection coordinates and vertex coordinates of the at least one second model sub-key point, a vertex proportion coefficient between the projection coordinates and at least one of the vertex coordinates, as the coordinate mapping relationship.
20. The apparatus of claim 19, wherein the eighth processing sub-module comprises:
a seventh processing unit configured to determine, according to the vertex proportion coefficient and the texture coordinates of the at least one second model sub-key point, texture coordinates associated with the corresponding hair root node; and
an eighth processing unit configured to project the corresponding hair root node in the reference model region according to the projection coordinates and the texture coordinates associated with the corresponding hair root node.
21. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 10.
22. A non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1 to 10.
CN202111494653.6A 2021-12-07 2021-12-07 Image processing method and apparatus, device, medium and product Active CN114202597B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111494653.6A CN114202597B (en) 2021-12-07 2021-12-07 Image processing method and apparatus, device, medium and product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111494653.6A CN114202597B (en) 2021-12-07 2021-12-07 Image processing method and apparatus, device, medium and product

Publications (2)

Publication Number Publication Date
CN114202597A CN114202597A (en) 2022-03-18
CN114202597B true CN114202597B (en) 2023-02-03

Family

ID=80651415

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111494653.6A Active CN114202597B (en) 2021-12-07 2021-12-07 Image processing method and apparatus, device, medium and product

Country Status (1)

Country Link
CN (1) CN114202597B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115311403B (en) * 2022-08-26 2023-08-08 北京百度网讯科技有限公司 Training method of deep learning network, virtual image generation method and device
CN115345981B (en) * 2022-10-19 2023-03-24 北京百度网讯科技有限公司 Image processing method, image processing device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106021550A (en) * 2016-05-27 2016-10-12 湖南拓视觉信息技术有限公司 Hair style designing method and system
CN107615337B (en) * 2016-04-28 2020-08-25 华为技术有限公司 Three-dimensional hair modeling method and device
CN111695471A (en) * 2020-06-02 2020-09-22 北京百度网讯科技有限公司 Virtual image generation method, device, equipment and storage medium
CN112541963A (en) * 2020-11-09 2021-03-23 北京百度网讯科技有限公司 Three-dimensional virtual image generation method and device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111510769B (en) * 2020-05-21 2022-07-26 广州方硅信息技术有限公司 Video image processing method and device and electronic equipment

Also Published As

Publication number Publication date
CN114202597A (en) 2022-03-18

Similar Documents

Publication Publication Date Title
CN113643412B (en) Virtual image generation method and device, electronic equipment and storage medium
CN114202597B (en) Image processing method and apparatus, device, medium and product
CN114187633B (en) Image processing method and device, and training method and device for image generation model
CN115409933B (en) Multi-style texture mapping generation method and device
CN115049799B (en) Method and device for generating 3D model and virtual image
US11941737B2 (en) Artificial intelligence-based animation character control and drive method and apparatus
CN115147265B (en) Avatar generation method, apparatus, electronic device, and storage medium
US20180276870A1 (en) System and method for mass-animating characters in animated sequences
CN114612600B (en) Virtual image generation method and device, electronic equipment and storage medium
CN114549710A (en) Virtual image generation method and device, electronic equipment and storage medium
CN113658309A (en) Three-dimensional reconstruction method, device, equipment and storage medium
CN112652057A (en) Method, device, equipment and storage medium for generating human body three-dimensional model
CN114723888A (en) Three-dimensional hair model generation method, device, equipment, storage medium and product
CN111899159A (en) Method, device, apparatus and storage medium for changing hairstyle
CN114998433A (en) Pose calculation method and device, storage medium and electronic equipment
CN115359166B (en) Image generation method and device, electronic equipment and medium
CN116993955A (en) Three-dimensional model heavy topology method, device, equipment and storage medium
CN115147306A (en) Image processing method, image processing device, electronic equipment and storage medium
CN114648601A (en) Virtual image generation method, electronic device, program product and user terminal
CN114581586A (en) Method and device for generating model substrate, electronic equipment and storage medium
CN114078184A (en) Data processing method, device, electronic equipment and medium
CN113608615B (en) Object data processing method, processing device, electronic device, and storage medium
CN111754632A (en) Business service processing method, device, equipment and storage medium
CN116030150B (en) Avatar generation method, device, electronic equipment and medium
CN114820908B (en) Virtual image generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant