CN115018749A - Image processing method, device, equipment, computer readable storage medium and product - Google Patents

Image processing method, device, equipment, computer readable storage medium and product

Info

Publication number
CN115018749A
Authority
CN
China
Prior art keywords
head
image
decoration
target
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210869301.2A
Other languages
Chinese (zh)
Inventor
刘佳成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202210869301.2A
Publication of CN115018749A

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Embodiments of the present disclosure provide an image processing method, apparatus, device, computer-readable storage medium, and product. The method includes: acquiring a head decoration request, where the head decoration request includes an image to be processed and a target head decoration; identifying and segmenting a head region in the image to be processed according to the head decoration request to obtain a head region image corresponding to the image to be processed; generating a head three-dimensional model according to the head region image and a target head model with a preset depth, where the target head model includes a neck region; and fusing the head three-dimensional model with the target head decoration to obtain a target decoration result. The neck can therefore be displayed normally in the target decoration result, optimizing the decorative effect of the target head decoration.

Description

Image processing method, device, equipment, computer readable storage medium and product
Technical Field
Embodiments of the present disclosure relate to the field of image processing technologies, and in particular, to an image processing method, apparatus, device, computer-readable storage medium, and product.
Background
As the hardware performance of terminal devices improves and artificial intelligence technology advances, more and more applications (APPs) run on terminal devices. Many of these applications provide decoration functions that users can use to optimize the display effect of images.
Existing decoration methods generally recognize the position of the user's head and then place a two-dimensional or three-dimensional decoration on the image or on a three-dimensional head model. However, when the head or face is decorated in this way, the decoration may block the user's neck, and the neck cannot be displayed with the correct occlusion relationship, resulting in a poor decoration effect.
Disclosure of Invention
Embodiments of the present disclosure provide an image processing method, apparatus, device, computer-readable storage medium, and product, which are used to solve the technical problem that existing head decoration methods block the neck, resulting in a poor decoration effect.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including:
acquiring a head decoration request, wherein the head decoration request comprises an image to be processed and target head decoration;
according to the head decoration request, identifying and segmenting a head region in the image to be processed to obtain a head region image corresponding to the image to be processed;
generating a head three-dimensional model according to the head area image and a target head model with a preset depth, wherein the target head model comprises a neck area;
and carrying out fusion operation on the head three-dimensional model and the target head decoration to obtain a target decoration result.
In a second aspect, an embodiment of the present disclosure provides an image processing apparatus, including:
an acquisition module, used for acquiring a head decoration request, wherein the head decoration request comprises an image to be processed and a target head decoration;
the segmentation module is used for identifying and segmenting a head region in the image to be processed according to the head decoration request to obtain a head region image corresponding to the image to be processed;
the generating module is used for generating a head three-dimensional model according to the head area image and a target head model with a preset depth, wherein the target head model comprises a neck area;
and the fusion module is used for carrying out fusion operation on the head three-dimensional model and the target head decoration to obtain a target decoration result.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, so that the processor performs the image processing method as set forth above in the first aspect and in the various possible designs of the first aspect.
In a fourth aspect, the present disclosure provides a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the image processing method according to the first aspect and various possible designs of the first aspect is implemented.
In a fifth aspect, embodiments of the present disclosure provide a computer program product comprising a computer program that, when executed by a processor, implements an image processing method as set forth in the first aspect above and in various possible designs of the first aspect.
The method first obtains a head decoration request, wherein the head decoration request comprises an image to be processed and a target head decoration. The image to be processed is then identified and segmented according to the head decoration request to obtain a head region image corresponding to the image to be processed. A head three-dimensional model with a neck region is obtained according to the head region image and a target head model with a preset depth, so that the neck can be displayed normally in the target decoration result and the decorative effect of the target head decoration is optimized.
Drawings
In order to more clearly illustrate the embodiments of the present disclosure or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present disclosure, and that those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a system architecture upon which the present disclosure is based;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the disclosure;
FIG. 3 is a schematic view of a head decoration provided by an embodiment of the present disclosure;
fig. 4 is a schematic flowchart of an image processing method according to an embodiment of the disclosure;
FIG. 5 is a schematic diagram of a head three-dimensional model when the user tilts the head, according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments are described clearly and completely below with reference to the drawings. It is obvious that the described embodiments are some, but not all, embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art from the disclosed embodiments without creative effort fall within the protection scope of the present disclosure.
To solve the above technical problem that existing head decoration methods block the neck and thus produce a poor decoration effect, the present disclosure provides an image processing method, apparatus, device, computer-readable storage medium, and product.
It should be noted that the present disclosure provides an image processing method, an image processing apparatus, an image processing device, a computer readable storage medium, and an image processing product, which can be applied to various scenes requiring head decoration.
Existing head decoration methods generally place the head decoration effect directly at the corresponding position on a 3D head model. However, some full-coverage head decorations surround the user's head, and when such decorations are applied in this way, the user's neck region cannot be displayed normally, resulting in a poor decoration effect.
In the course of solving the above technical problems, the inventors found through research that, in order to achieve normal display of the neck region in the decoration result, a target head model having a preset depth may be previously set, wherein the target head model has a neck portion. After a head decoration request triggered by a user is acquired, a head area image in an image to be processed can be identified, and a head three-dimensional model with a neck area is obtained according to the head area image and a target head model with a preset depth. Furthermore, after the head three-dimensional model is fused with the target head decoration, normal display of the neck region can be realized.
Fig. 1 is a schematic diagram of a system architecture on which the present disclosure is based. As shown in fig. 1, the system architecture at least includes a terminal device 11 and a server 12, wherein the server 12 is provided with an image processing apparatus that can be written in languages such as C/C++, Java, Shell, or Python; the terminal device 11 may be a desktop computer, a tablet computer, or the like.
Based on the above system architecture, the image processing device in the server 12 may obtain the head decoration request sent by the terminal device 11, and perform image processing operation according to the head decoration request, so that the neck region of the user can be normally displayed in the target decoration result 15 obtained after the user head region image 13 is merged with the target head decoration 14.
Fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure, and as shown in fig. 2, the method includes:
step 201, obtaining a head decoration request, wherein the head decoration request comprises an image to be processed and a target head decoration.
The execution subject of the embodiment is an image processing apparatus, and the image processing apparatus may be coupled to a server. The server can be in communication connection with the terminal equipment, a user can initiate a head decoration request on the terminal equipment according to actual requirements, and accordingly the image processing device can obtain the head decoration request sent by the terminal equipment. Optionally, the image processing apparatus may also be coupled to the terminal device, so that the user may initiate a head decoration request on the terminal device according to actual needs, and the image processing apparatus may perform a head decoration operation according to the head decoration request.
The image processing method provided by this embodiment can be applied to the decoration scene of digital ornaments, where the digital ornaments may differ in material, coverage area, pattern, and the like. Optionally, the method may also be applied to online fitting scenarios; for example, the decoration effect of a hat or other headwear selected by the user may be simulated as if the user were actually wearing it. Optionally, the method may also be applied to image editing scenarios; for example, a special effect or sticker selected by the user may be added to the user's head region. The present disclosure does not limit the application scenarios.
In this embodiment, a user may browse and select a head decoration on the terminal device. After the target head decoration selected by the user is obtained, an image to be processed that includes at least the user's head may be obtained, where the target head decoration is used to decorate part of the head region. Specifically, the image to be processed may be captured in real time by an image acquisition device provided on the terminal device, or a pre-stored image to be processed may be selected from a preset storage path according to a selection operation of the user, which is not limited in this disclosure.
Accordingly, after acquiring the target head decoration selected by the user and the image to be processed, the terminal device may initiate a head decoration request, and the image processing apparatus may then acquire the head decoration request.
Step 202, according to the head decoration request, performing identification and segmentation operations on a head region in the image to be processed to obtain a head region image corresponding to the image to be processed.
In this embodiment, to implement the head decoration operation, after the head decoration request is obtained, the head region in the image to be processed is identified and segmented according to the head decoration request to obtain the head region image corresponding to the image to be processed. Specifically, a head segmentation algorithm can be used to identify and segment the head region in the image to be processed. Such a head segmentation algorithm can recognize and segment not only features such as the face but also detailed parts such as the hair, thereby improving the accuracy of the obtained head region image.
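For illustration only, the following is a minimal Python sketch of this identification and segmentation step, assuming a hypothetical pretrained head-segmentation network seg_model whose predict() interface returns a per-pixel head probability; the model and its interface are assumptions, not part of this disclosure:

```python
import cv2
import numpy as np

def segment_head_region(image_bgr, seg_model):
    """Identify and segment the head region (face and hair) of the image to be processed.

    seg_model is a placeholder for any head-segmentation network; its predict()
    method is assumed to return an HxW float map of head probabilities in [0, 1].
    """
    prob = seg_model.predict(image_bgr)                      # hypothetical API
    mask = (prob > 0.5).astype(np.uint8)                     # binary head mask
    head_only = cv2.bitwise_and(image_bgr, image_bgr, mask=mask)
    x, y, w, h = cv2.boundingRect(mask)                      # crop to the head bounding box
    return head_only[y:y + h, x:x + w], mask[y:y + h, x:x + w]
```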
Step 203, generating a head three-dimensional model according to the head region image and a target head model with a preset depth, wherein the target head model comprises a neck region.
In the present embodiment, although the head segmentation algorithm can accurately recognize and segment the head, it cannot segment the neck. When the result of the head segmentation algorithm is fused with the target head decoration, the neck is therefore occluded by the target head decoration, and the decoration effect is poor.
To solve the above technical problem, a target head model with a preset depth may be set in advance. The target head model includes a neck region and may be a head model that has a preset depth but no color; the preset depth is set in advance so that the occlusion relationship between the user's head and the target head decoration can be reflected correctly. In this way, in addition to making the neck region visible, the neck region can also be displayed with the correct occlusion according to the preset depth.
And 204, carrying out fusion operation on the head three-dimensional model and the target head decoration to obtain a target decoration result.
In the present embodiment, after obtaining the head three-dimensional model with the neck region, the target decoration result may be obtained by performing a fusion operation on the head three-dimensional model and the target head decoration. In the target decoration result, the neck region can be normally displayed.
Fig. 3 is a schematic diagram of a head decoration according to an embodiment of the present disclosure, and as shown in fig. 3, after a head decoration request is obtained, a head region image in an image to be processed may be identified, and a head three-dimensional model 31 having a neck region is generated according to the head region image and a target head model with a preset depth. The three-dimensional head model 31 is fused with the target head decoration 32 selected by the user to obtain a target decoration result 33, and the neck region 34 can be displayed normally in the target decoration result 33.
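As a hedged illustration of the fusion operation, the sketch below alpha-composites an assumed RGBA render of the textured head three-dimensional model plus the target head decoration back onto the image to be processed; treating fusion as alpha compositing is one interpretation under stated assumptions, not the only possible implementation:

```python
import numpy as np

def fuse_decoration(image_to_process, rendered_rgba):
    """Composite the rendered head model and decoration onto the original image.

    rendered_rgba is assumed to be an HxWx4 uint8 render: opaque where the head
    model or decoration was drawn, transparent elsewhere, so the original neck
    pixels remain visible in the target decoration result.
    """
    alpha = rendered_rgba[..., 3:4].astype(np.float32) / 255.0
    fused = alpha * rendered_rgba[..., :3].astype(np.float32) \
            + (1.0 - alpha) * image_to_process.astype(np.float32)
    return fused.astype(np.uint8)
```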
In the image processing method provided by this embodiment, a head decoration request is first obtained, where the head decoration request includes an image to be processed and a target head decoration. The image to be processed is identified and segmented according to the head decoration request to obtain a head region image corresponding to the image to be processed. A head three-dimensional model with a neck region is obtained according to the head region image and the target head model with a preset depth, so that the neck can be displayed normally in the target decoration result and the decorative effect of the target head decoration is optimized.
Further, on the basis of any of the above embodiments, before step 204, the method further includes:
and respectively carrying out rendering operation on the head three-dimensional model and the target head decoration according to a preset rendering sequence.
And the preset rendering sequence is to render the head three-dimensional model preferentially, and render the target head decoration after the rendering of the head three-dimensional model is finished.
In this embodiment, the target head model may specifically be a head model made of an occlusion material. After the head three-dimensional model is generated from the target head model and the head region image, a rendering order may be preset so that the occlusion material takes effect correctly, and the head three-dimensional model and the target head decoration are rendered according to this preset rendering order. Specifically, the head three-dimensional model may be rendered first, and the target head decoration is rendered after the rendering of the head three-dimensional model is finished.
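A minimal sketch of this preset rendering order is given below using PyOpenGL, under the assumption that the occlusion material is realized as a depth-only pass (a common reading in AR-style rendering, not necessarily the disclosed implementation); draw_mesh stands in for whatever draw call the rendering engine actually uses and is not defined by this disclosure:

```python
from OpenGL.GL import (
    GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT, GL_DEPTH_TEST, GL_FALSE, GL_TRUE,
    glClear, glColorMask, glDepthMask, glEnable,
)

def render_in_preset_order(head_model_mesh, decoration_mesh, draw_mesh):
    """Render the head three-dimensional model first, then the target head decoration."""
    glEnable(GL_DEPTH_TEST)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

    # Pass 1: the head model acts as an occluder. Writing depth but no color makes it
    # invisible while still hiding decoration fragments that lie behind the head and neck.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
    glDepthMask(GL_TRUE)
    draw_mesh(head_model_mesh)

    # Pass 2: the decoration is rendered normally; fragments behind the occluder fail
    # the depth test, so the neck region is not blocked in the final result.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
    draw_mesh(decoration_mesh)
```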
In the image processing method provided by this embodiment, the head three-dimensional model is rendered first and the target head decoration is rendered after it, so that the correct occlusion relationship between the target head decoration and the head three-dimensional model can be handled, the neck region is displayed normally, and the decoration effect is further optimized.
Optionally, on the basis of any of the foregoing embodiments, step 203 includes:
and mapping the texture of the head region image to the target head model to obtain a head three-dimensional model comprising a neck region.
In this embodiment, in order to obtain a three-dimensional head model with a neck region, a texture mapping method may be specifically used to generate the three-dimensional head model. Specifically, the texture of the head region image may be recognized first, and the texture of the head region image is mapped onto the target head model with the neck region, so as to obtain a three-dimensional head model including the neck region.
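For illustration, the texture mapping step can be sketched as follows; the sketch assumes the target head model has already been aligned to the image via a hypothetical 3x4 projection matrix camera_matrix, and simply projects each vertex into the head region image to obtain UV coordinates:

```python
import numpy as np

def map_texture_to_head_model(vertices, camera_matrix, head_region_image):
    """Compute per-vertex UV coordinates so the head region image becomes the texture
    of the target head model (neck vertices included).

    camera_matrix is an assumed 3x4 projection aligning the model with the image.
    """
    h, w = head_region_image.shape[:2]
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])   # N x 4
    projected = homogeneous @ camera_matrix.T                          # N x 3
    pixels = projected[:, :2] / projected[:, 2:3]                      # perspective divide
    uv = pixels / np.array([w, h], dtype=np.float64)                   # normalize to [0, 1]
    uv[:, 1] = 1.0 - uv[:, 1]                                          # image rows grow downward
    return uv
```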
The image processing method according to the present embodiment can obtain a three-dimensional model of a head including a neck region by mapping a texture of a head region image onto a target head model including a neck region. Further, when head decoration is performed based on the head three-dimensional model, the neck region can be normally displayed, and the display effect can be optimized.
Optionally, on the basis of any of the foregoing embodiments, step 203 includes:
and generating a to-be-processed three-dimensional model corresponding to the head region image according to the head region image.
And splicing the three-dimensional model to be processed and the neck area corresponding to the target head model to obtain the head three-dimensional model.
In this embodiment, the coverage area of the target head model in practical applications is smaller than that of the head in the image to be processed. Therefore, to improve the accuracy of the generated head three-dimensional model, only the neck region corresponding to the target head model may be retained. Specifically, a to-be-processed three-dimensional model corresponding to the head region image may be generated from the head region image; the generation of this three-dimensional model may be implemented in any manner, which the present disclosure does not limit. The to-be-processed three-dimensional model and the neck region corresponding to the target head model are then spliced into the head three-dimensional model through a splicing operation.
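A sketch of the splicing operation is shown below; neck_vertex_ids, the index set marking the neck vertices of the target head model, is assumed to be known in advance (for example, authored together with the model) and is not specified by this disclosure:

```python
import numpy as np

def splice_head_and_neck(head_verts, head_faces, target_verts, target_faces, neck_vertex_ids):
    """Keep only the neck region of the preset target head model and splice it onto the
    to-be-processed three-dimensional model generated from the head region image."""
    neck_ids = np.asarray(sorted(neck_vertex_ids))

    # Extract the neck sub-mesh: keep faces whose three vertices all lie in the neck region,
    # and remap their indices into the reduced vertex array.
    lookup = np.full(len(target_verts), -1, dtype=np.int64)
    lookup[neck_ids] = np.arange(len(neck_ids))
    neck_verts = target_verts[neck_ids]
    keep = np.all(np.isin(target_faces, neck_ids), axis=1)
    neck_faces = lookup[target_faces[keep]]

    # Concatenate the two meshes; neck face indices are shifted past the head vertices.
    verts = np.vstack([head_verts, neck_verts])
    faces = np.vstack([head_faces, neck_faces + len(head_verts)])
    return verts, faces
```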
The image processing method provided by the embodiment generates the to-be-processed three-dimensional model corresponding to the head region image according to the head region image. And splicing the three-dimensional model to be processed and the neck area corresponding to the target head model, so that a head three-dimensional model comprising the neck area can be obtained. Further, when head decoration is performed based on the head three-dimensional model, the neck region can be normally displayed, and the display effect can be optimized.
Further, on the basis of any of the above embodiments, after the step 203, the method further includes:
and controlling the head three-dimensional model to rotate in real time according to the head action of the user through a preset face tracking algorithm.
In this embodiment, in many application scenarios, after decorating the head of the user according to the target head decoration selected by the user and displaying the decoration result, the user may perform head rotation to view the decoration result at different angles. Therefore, in order to meet the actual application requirements of the user, the head three-dimensional model can be controlled to rotate in real time according to the head action of the user through a preset face tracking algorithm when the head three-dimensional model is generated.
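As a hedged sketch only: one way to drive the head three-dimensional model from a face tracking algorithm is to convert the tracked Euler angles into a rotation matrix each frame, as below; face_tracker.estimate_pose is a placeholder interface, not an API defined by this disclosure:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def follow_user_head(face_tracker, frame, base_vertices):
    """One real-time update step: estimate the user's head pose from the current frame
    and rotate the head three-dimensional model to match it.

    base_vertices: N x 3 vertices of the head model in its neutral (untracked) pose.
    """
    pose = face_tracker.estimate_pose(frame)        # assumed to return (yaw, pitch, roll) in degrees
    if pose is None:                                # keep the neutral pose if no face is found
        return base_vertices
    yaw, pitch, roll = pose
    rot = Rotation.from_euler("yxz", [yaw, pitch, roll], degrees=True).as_matrix()
    return base_vertices @ rot.T                    # rotated vertices for this frame
```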
According to the image processing method provided by the embodiment, the preset face tracking algorithm is adopted to control the head three-dimensional model to rotate in real time according to the head action of the user, so that the head three-dimensional model can be controlled to rotate along with the head of the user in real time when the user rotates the head, the head decoration effect can be checked by the user in an all-around manner, and the use experience of the user is optimized.
Fig. 4 is a schematic flow chart of an image processing method according to an embodiment of the present disclosure, and on the basis of any one of the above embodiments, as shown in fig. 4, a bone driving point is further disposed between the head and the neck in the head three-dimensional model. Correspondingly, after step 203, the method further includes:
step 401, identifying a rotation angle corresponding to a neck portion in the image to be processed.
And step 402, controlling the bone driving point to rotate according to the rotation angle.
In this embodiment, after obtaining the head three-dimensional model with the neck region, if the head three-dimensional model does not include the bone driving point, the neck region cannot be correspondingly bent when the head rotates, resulting in poor display effect. Therefore, in order to ensure the display effect of the head decoration, a bone driving point can be further arranged between the head and the neck in the head three-dimensional model, and the corresponding rotation angle of the neck part in the image to be processed is identified. After the rotation angle is obtained, the bone driving point can be controlled to rotate according to the rotation angle.
Fig. 5 is a schematic diagram of the head three-dimensional model when the user tilts the head, according to an embodiment of the disclosure. As shown in fig. 5, starting from the upright state 51, if the user tilts the head to the right and the head three-dimensional model does not include a bone driving point, the neck region cannot bend, giving the tilt result 52, which does not conform to a normal physiological structure. After the bone driving point is added to the head three-dimensional model, if the user tilts the head to the right, the neck region bends accordingly (53) and conforms to the actual physiological structure.
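A minimal sketch of driving the bone point is given below: the identified neck rotation angle controls a rigid rotation about the driving point, blended onto the mesh with assumed per-vertex skinning weights for the neck bone (a simple linear-blend-skinning reading of the disclosure, not its only possible implementation):

```python
import numpy as np

def rotate_about_bone_driving_point(vertices, neck_weights, driving_point, angle_deg):
    """Bend the neck of the head three-dimensional model by rotating about the bone
    driving point placed between the head and the neck.

    neck_weights: per-vertex weights in [0, 1] for the neck bone (assumed to be part
    of the rigged model); angle_deg is the identified neck rotation angle, here an
    in-plane tilt about the z axis as in the head-tilt case of Fig. 5.
    """
    theta = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    rotated = (vertices - driving_point) @ rot.T + driving_point   # rigid rotation about the point
    w = neck_weights[:, None]
    return (1.0 - w) * vertices + w * rotated                      # linear blend skinning
```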
According to the image processing method provided by the embodiment, the bone driving point is added between the head and the neck in the head three-dimensional model, so that when the head of a user rotates, the neck region can be controlled to correctly rotate correspondingly according to the rotation of the user, and the head decoration effect is optimized.
Further, on the basis of any of the above embodiments, step 401 includes:
and rotating angles of all bone key points corresponding to the user body in the image to be processed through a preset bone rotating angle recognition algorithm.
And determining a rotation angle corresponding to the neck part according to the rotation angle of each bone key point.
In this embodiment, the identification of the rotation angle may be implemented by a bone rotation angle recognition algorithm. Specifically, a preset bone rotation angle recognition algorithm may be used to identify the rotation angle of each bone key point corresponding to the user's body in the image to be processed, and the rotation angle corresponding to the neck is then determined from the rotation angles of these bone key points.
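For illustration, assuming a body keypoint detector has already produced 2D coordinates for the shoulders and head (the keypoint names below are assumptions), the neck rotation angle can be estimated as the tilt of the shoulder-midpoint-to-head vector relative to the vertical:

```python
import numpy as np

def neck_angle_from_keypoints(keypoints):
    """Determine the neck rotation angle from body bone key points.

    keypoints is an assumed dict mapping names to (x, y) pixel coordinates, e.g.
    "left_shoulder", "right_shoulder" and "head".
    """
    mid_shoulder = (np.asarray(keypoints["left_shoulder"], dtype=float) +
                    np.asarray(keypoints["right_shoulder"], dtype=float)) / 2.0
    head = np.asarray(keypoints["head"], dtype=float)
    direction = head - mid_shoulder
    # angle relative to the vertical axis; image y grows downward, hence the sign flip
    return float(np.degrees(np.arctan2(direction[0], -direction[1])))
```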
According to the image processing method provided by the embodiment, the rotation angle corresponding to the neck part is identified by adopting the preset bone rotation angle identification algorithm, so that the rotation angle corresponding to the neck part can be accurately determined, and the neck part can be accurately controlled to rotate correspondingly according to the rotation angle.
Further, on the basis of any of the above embodiments, step 401 includes:
and inputting the image to be processed into a preset angle recognition model, and obtaining a rotation angle corresponding to a neck part in the image to be processed output by the angle recognition model.
In this embodiment, in order to identify the rotation angle, an angle recognition model may be set in advance. Specifically, a training data set may be obtained in advance, where the training data set includes a plurality of images with rotated heads and the rotation angle labeling information corresponding to each image. The training data set is randomly divided into a training set and a test set; the preset angle recognition model is iteratively trained on the training set and tested on the test set until the loss value of the angle recognition model reaches a preset convergence condition, yielding the trained angle recognition model. The trained angle recognition model can accurately recognize the rotation angle of the neck from the image to be processed. Therefore, the image to be processed can be input into the preset angle recognition model to obtain the rotation angle corresponding to the neck in the image to be processed output by the angle recognition model.
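The training procedure described above can be sketched with PyTorch as follows; the tiny convolutional backbone, the 80/20 split, and the fixed epoch count are illustrative assumptions rather than the disclosed design:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split

def train_angle_recognition_model(dataset, epochs=20, lr=1e-4, device="cpu"):
    """Train and test a model that regresses the neck rotation angle from an image.

    dataset is an assumed torch Dataset yielding (image_tensor, angle_label) pairs
    built from the head-rotation images and their rotation angle annotations.
    """
    train_len = int(0.8 * len(dataset))                         # random train/test split
    train_set, test_set = random_split(dataset, [train_len, len(dataset) - train_len])
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
    test_loader = DataLoader(test_set, batch_size=32)

    model = nn.Sequential(                                      # deliberately small stand-in backbone
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
    ).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()

    for _ in range(epochs):                                     # iterative training on the training set
        model.train()
        for images, angles in train_loader:
            images, angles = images.to(device), angles.float().to(device)
            loss = loss_fn(model(images).squeeze(1), angles)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    model.eval()                                                # test on the held-out split
    with torch.no_grad():
        test_loss = sum(loss_fn(model(x.to(device)).squeeze(1), y.float().to(device)).item()
                        for x, y in test_loader) / max(len(test_loader), 1)
    return model, test_loss
```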
According to the image processing method provided by the embodiment, the preset angle identification model is adopted to identify the rotation angle corresponding to the neck part, so that the rotation angle corresponding to the neck part can be accurately determined, and the neck part can be accurately controlled to rotate correspondingly according to the rotation angle.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure, and as shown in fig. 6, the apparatus includes: an acquisition module 61, a segmentation module 62, a generation module 63, and a fusion module 64. The obtaining module 61 is configured to obtain a head decoration request, where the head decoration request includes an image to be processed and a target head decoration. And a segmentation module 62, configured to perform recognition and segmentation operations on a head region in the image to be processed according to the head decoration request, so as to obtain a head region image corresponding to the image to be processed. And a generating module 63, configured to generate a three-dimensional head model according to the head region image and a target head model with a preset depth, where the target head model includes a neck region. And a fusion module 64, configured to perform fusion operation on the three-dimensional head model and the target head decoration to obtain a target decoration result.
Further, on the basis of any one of the above embodiments, the apparatus further includes: and the rendering module is used for respectively performing rendering operation on the head three-dimensional model and the target head decoration according to a preset rendering sequence. And the preset rendering sequence is to render the head three-dimensional model preferentially, and render the target head decoration after the rendering of the head three-dimensional model is finished.
Further, on the basis of any of the above embodiments, the generating module is configured to: and mapping the texture of the head region image to the target head model to obtain a head three-dimensional model comprising a neck region.
Further, on the basis of any of the above embodiments, the generating module is configured to: and generating a to-be-processed three-dimensional model corresponding to the head region image according to the head region image. And splicing the three-dimensional model to be processed and the neck area corresponding to the target head model to obtain the head three-dimensional model.
Further, on the basis of any one of the above embodiments, the apparatus further includes: and the tracking module is used for controlling the head three-dimensional model to rotate in real time according to the head action of the user through a preset face tracking algorithm.
Further, on the basis of any of the above embodiments, a bone driving point is further disposed between the head and the neck in the head three-dimensional model. The device further comprises: and the identification module is used for identifying the corresponding rotation angle of the neck part in the image to be processed. And the control module is used for controlling the bone driving point to rotate according to the rotation angle.
Further, on the basis of any one of the above embodiments, the identification module is configured to: and rotating angles of all bone key points corresponding to the user body in the image to be processed through a preset bone rotating angle recognition algorithm. And determining a rotation angle corresponding to the neck part according to the rotation angle of each bone key point.
Further, on the basis of any of the above embodiments, the identification module is configured to: and inputting the image to be processed into a preset angle recognition model, and obtaining a rotation angle corresponding to a neck part in the image to be processed output by the angle recognition model.
The device provided in this embodiment may be used to implement the technical solution of the above method embodiment, and the implementation principle and technical effect are similar, which are not described herein again.
In order to implement the above embodiments, an embodiment of the present disclosure further provides an electronic device, including: a processor and a memory.
The memory stores computer-executable instructions.
The processor executes computer-executable instructions stored by the memory, so that the processor executes the image processing method according to any one of the embodiments.
Fig. 7 is a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure. As shown in fig. 7, the electronic device 700 may be a terminal device or a server. The terminal device may include, but is not limited to, mobile terminals such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (PDA), a tablet computer (PAD), a portable multimedia player (PMP), and an in-vehicle terminal (e.g., a car navigation terminal), as well as fixed terminals such as a digital TV and a desktop computer. The electronic device shown in fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 700 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage means 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic apparatus 700 are also stored. The processing device 701, the ROM702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, or the like; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The embodiment of the present disclosure further provides a computer-readable storage medium, in which computer-executable instructions are stored, and when a processor executes the computer-executable instructions, the image processing method according to any one of the above embodiments is implemented.
Embodiments of the present disclosure also provide a computer program product, which includes a computer program, and when executed by a processor, the computer program implements the image processing method according to any of the above embodiments.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to perform the methods shown in the above embodiments.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. Where the name of a unit does not in some cases constitute a limitation of the unit itself, for example, the first retrieving unit may also be described as a "unit for retrieving at least two internet protocol addresses".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In a first aspect, according to one or more embodiments of the present disclosure, there is provided an image processing method including:
the method comprises the steps of obtaining a head decoration request, wherein the head decoration request comprises a to-be-processed image and target head decoration.
And according to the head decoration request, identifying and segmenting the head region in the image to be processed to obtain a head region image corresponding to the image to be processed.
And generating a head three-dimensional model according to the head area image and a target head model with a preset depth, wherein the target head model comprises a neck area.
And carrying out fusion operation on the head three-dimensional model and the target head decoration to obtain a target decoration result.
According to one or more embodiments of the present disclosure, the generating a three-dimensional head model according to the head region image and a target head model with a preset depth includes:
and mapping the texture of the head region image to the target head model to obtain a head three-dimensional model comprising a neck region.
According to one or more embodiments of the present disclosure, the generating a three-dimensional head model according to the head region image and a target head model with a preset depth includes:
and generating a to-be-processed three-dimensional model corresponding to the head region image according to the head region image.
And splicing the three-dimensional model to be processed and the neck area corresponding to the target head model to obtain the head three-dimensional model.
According to one or more embodiments of the present disclosure, after generating a three-dimensional head model according to the head region image and a target head model with a preset depth, the method further includes:
and controlling the head three-dimensional model to rotate in real time according to the head action of the user through a preset face tracking algorithm.
According to one or more embodiments of the present disclosure, a bone driving point is further disposed between the head and the neck in the head three-dimensional model.
After generating the head three-dimensional model according to the head region image and the target head model with the preset depth, the method further comprises the following steps:
and identifying a rotation angle corresponding to the neck part in the image to be processed.
And controlling the bone driving point to rotate according to the rotation angle.
According to one or more embodiments of the present disclosure, the identifying a rotation angle corresponding to a neck portion in the image to be processed includes:
and the rotation angle of each bone key point corresponding to the user body in the image to be processed is identified through a preset bone rotation angle identification algorithm.
And determining a rotation angle corresponding to the neck part according to the rotation angle of each bone key point.
According to one or more embodiments of the present disclosure, the identifying a rotation angle corresponding to a neck portion in the image to be processed includes:
and inputting the image to be processed into a preset angle recognition model, and obtaining a rotation angle corresponding to a neck part in the image to be processed output by the angle recognition model.
According to one or more embodiments of the present disclosure, before performing the fusion operation on the three-dimensional head model and the target head decoration to obtain the target decoration result, the method further includes:
and respectively carrying out rendering operation on the head three-dimensional model and the target head decoration according to a preset rendering sequence.
And the preset rendering sequence is to render the head three-dimensional model preferentially, and render the target head decoration after the rendering of the head three-dimensional model is finished.
In a second aspect, according to one or more embodiments of the present disclosure, there is provided an image processing apparatus including:
the head decoration processing device comprises an acquisition module, a processing module and a display module, wherein the acquisition module is used for acquiring a head decoration request, and the head decoration request comprises an image to be processed and a target head decoration.
And the segmentation module is used for identifying and segmenting the head region in the image to be processed according to the head decoration request to obtain a head region image corresponding to the image to be processed.
And the generating module is used for generating a head three-dimensional model according to the head area image and a target head model with a preset depth, wherein the target head model comprises a neck area.
And the fusion module is used for carrying out fusion operation on the head three-dimensional model and the target head decoration to obtain a target decoration result.
According to one or more embodiments of the present disclosure, the generating module is configured to:
and mapping the texture of the head region image to the target head model to obtain a head three-dimensional model comprising a neck region.
According to one or more embodiments of the present disclosure, the generating module is configured to:
and generating a to-be-processed three-dimensional model corresponding to the head region image according to the head region image.
And splicing the three-dimensional model to be processed and the neck area corresponding to the target head model to obtain the head three-dimensional model.
According to one or more embodiments of the present disclosure, the apparatus further comprises:
and the tracking module is used for controlling the head three-dimensional model to rotate in real time according to the head action of the user through a preset face tracking algorithm.
According to one or more embodiments of the present disclosure, a bone driving point is further disposed between the head and the neck in the head three-dimensional model.
The device further comprises:
and the identification module is used for identifying the corresponding rotation angle of the neck part in the image to be processed.
And the control module is used for controlling the bone driving point to rotate according to the rotation angle.
According to one or more embodiments of the present disclosure, the identification module is configured to:
and rotating angles of all bone key points corresponding to the user body in the image to be processed through a preset bone rotating angle recognition algorithm.
And determining a rotation angle corresponding to the neck part according to the rotation angle of each bone key point.
According to one or more embodiments of the present disclosure, the identification module is configured to:
and inputting the image to be processed into a preset angle recognition model, and obtaining a rotation angle corresponding to a neck part in the image to be processed output by the angle recognition model.
According to one or more embodiments of the present disclosure, the apparatus further comprises:
and the rendering module is used for respectively performing rendering operation on the head three-dimensional model and the target head decoration according to a preset rendering sequence.
And the preset rendering sequence is to render the head three-dimensional model preferentially, and render the target head decoration after the rendering of the head three-dimensional model is finished.
In a third aspect, according to one or more embodiments of the present disclosure, there is provided an electronic device including: at least one processor and memory;
the memory stores computer-executable instructions;
the at least one processor executes computer-executable instructions stored by the memory to cause the at least one processor to perform the image processing method as set forth in the first aspect above and in various possible designs of the first aspect.
In a fourth aspect, according to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein computer-executable instructions that, when executed by a processor, implement the image processing method as described in the first aspect above and in various possible designs of the first aspect.
In a fifth aspect, according to one or more embodiments of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the image processing method as described above in the first aspect and in various possible designs of the first aspect.
The foregoing description is merely illustrative of the preferred embodiments of the present disclosure and of the technical principles employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combinations of the features described above, and also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the concept of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (12)

1. An image processing method, comprising:
acquiring a head decoration request, wherein the head decoration request comprises an image to be processed and target head decoration;
according to the head decoration request, identifying and segmenting a head region in the image to be processed to obtain a head region image corresponding to the image to be processed;
generating a head three-dimensional model according to the head area image and a target head model with a preset depth, wherein the target head model comprises a neck area;
and performing a fusion operation on the head three-dimensional model and the target head decoration to obtain a target decoration result.
2. The method according to claim 1, wherein the generating a head three-dimensional model according to the head region image and a target head model with a preset depth comprises:
mapping the texture of the head region image onto the target head model to obtain the head three-dimensional model comprising the neck region.
3. The method according to claim 1, wherein the generating a head three-dimensional model according to the head region image and a target head model with a preset depth comprises:
generating a to-be-processed three-dimensional model corresponding to the head region image according to the head region image;
and stitching the to-be-processed three-dimensional model with the neck region corresponding to the target head model to obtain the head three-dimensional model.
4. The method according to claim 1, wherein after generating the head three-dimensional model according to the head region image and the target head model with the preset depth, the method further comprises:
controlling, through a preset face tracking algorithm, the head three-dimensional model to rotate in real time according to the head movement of the user.
5. The method according to any one of claims 1 to 4, wherein a bone driving point is further arranged between the head and the neck in the head three-dimensional model;
after generating the head three-dimensional model according to the head region image and the target head model with the preset depth, the method further comprises the following steps:
identifying a rotation angle corresponding to a neck part in the image to be processed;
and controlling the bone driving point to rotate according to the rotation angle.
6. The method according to claim 5, wherein the identifying a rotation angle corresponding to a neck part in the image to be processed comprises:
determining, through a preset bone rotation angle identification algorithm, the rotation angle of each bone key point corresponding to the user's body in the image to be processed;
and determining the rotation angle corresponding to the neck part according to the rotation angle of each bone key point.
7. The method according to claim 5, wherein the identifying a rotation angle corresponding to a neck part in the image to be processed comprises:
inputting the image to be processed into a preset angle recognition model, and obtaining the rotation angle, output by the angle recognition model, corresponding to the neck part in the image to be processed.
8. The method according to any one of claims 1 to 4, wherein before the performing a fusion operation on the head three-dimensional model and the target head decoration to obtain a target decoration result, the method further comprises:
performing rendering operations on the head three-dimensional model and the target head decoration respectively according to a preset rendering sequence;
wherein the preset rendering sequence is to render the head three-dimensional model first, and to render the target head decoration after the rendering of the head three-dimensional model is complete.
9. An image processing apparatus, comprising:
an acquisition module, configured to acquire a head decoration request, wherein the head decoration request comprises an image to be processed and a target head decoration;
a segmentation module, configured to identify and segment a head region in the image to be processed according to the head decoration request to obtain a head region image corresponding to the image to be processed;
a generation module, configured to generate a head three-dimensional model according to the head region image and a target head model with a preset depth, wherein the target head model comprises a neck region;
and a fusion module, configured to perform a fusion operation on the head three-dimensional model and the target head decoration to obtain a target decoration result.
10. An electronic device, comprising: a processor and a memory;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored in the memory, causing the processor to perform the image processing method according to any one of claims 1 to 8.
11. A computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement the image processing method according to any one of claims 1 to 8.
12. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the image processing method according to any one of claims 1 to 8.
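As an illustration of claims 5 to 7 only: once a neck rotation angle has been identified (whether from bone key points or from an angle recognition model), it can drive a bone driving point placed between the head and the neck. The sketch below assumes a simple rotation about the vertical axis and made-up vertex data; it is not the disclosed implementation.

import numpy as np

def rotation_about_y(angle_rad):
    # Standard 3x3 rotation matrix about the vertical (y) axis.
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def rotate_head_about_bone_point(head_vertices, bone_driving_point, neck_angle_rad):
    # Rotate the head vertices around the bone driving point between head and neck.
    rot = rotation_about_y(neck_angle_rad)
    return (head_vertices - bone_driving_point) @ rot.T + bone_driving_point

# Made-up example: rotate a tiny three-vertex "head" by 30 degrees.
head_vertices = np.array([[0.0, 1.0, 0.0], [0.1, 1.2, 0.0], [-0.1, 1.2, 0.0]])
bone_driving_point = np.array([0.0, 0.9, 0.0])
rotated = rotate_head_about_bone_point(head_vertices, bone_driving_point, np.radians(30.0))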
CN202210869301.2A 2022-07-22 2022-07-22 Image processing method, device, equipment, computer readable storage medium and product Pending CN115018749A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210869301.2A CN115018749A (en) 2022-07-22 2022-07-22 Image processing method, device, equipment, computer readable storage medium and product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210869301.2A CN115018749A (en) 2022-07-22 2022-07-22 Image processing method, device, equipment, computer readable storage medium and product

Publications (1)

Publication Number Publication Date
CN115018749A true CN115018749A (en) 2022-09-06

Family

ID=83081546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210869301.2A Pending CN115018749A (en) 2022-07-22 2022-07-22 Image processing method, device, equipment, computer readable storage medium and product

Country Status (1)

Country Link
CN (1) CN115018749A (en)

Similar Documents

Publication Publication Date Title
CN111242881A (en) Method, device, storage medium and electronic equipment for displaying special effects
US11594000B2 (en) Augmented reality-based display method and device, and storage medium
US11587280B2 (en) Augmented reality-based display method and device, and storage medium
CN110796721A (en) Color rendering method and device of virtual image, terminal and storage medium
TWI752473B (en) Image processing method and apparatus, electronic device and computer-readable storage medium
CN112073748A (en) Panoramic video processing method and device and storage medium
CN115131260A (en) Image processing method, device, equipment, computer readable storage medium and product
CN113613067B (en) Video processing method, device, equipment and storage medium
CN113806306B (en) Media file processing method, device, equipment, readable storage medium and product
CN112766215A (en) Face fusion method and device, electronic equipment and storage medium
JP7467780B2 (en) Image processing method, apparatus, device and medium
CN114842120A (en) Image rendering processing method, device, equipment and medium
CN113163135B (en) Animation adding method, device, equipment and medium for video
CN109816791B (en) Method and apparatus for generating information
CA3143817A1 (en) Sticker generating method and apparatus, and medium and electronic device
CN115760553A (en) Special effect processing method, device, equipment and storage medium
CN115018749A (en) Image processing method, device, equipment, computer readable storage medium and product
WO2022057576A1 (en) Facial image display method and apparatus, and electronic device and storage medium
CN114913058A (en) Display object determination method and device, electronic equipment and storage medium
CN113223128B (en) Method and apparatus for generating image
CN114049403A (en) Multi-angle three-dimensional face reconstruction method and device and storage medium
CN116527993A (en) Video processing method, apparatus, electronic device, storage medium and program product
CN118279460A (en) Ornament rendering method, ornament rendering device, ornament rendering equipment, computer readable storage medium and ornament rendering product
CN112053450B (en) Text display method and device, electronic equipment and storage medium
WO2024077792A1 (en) Video generation method and apparatus, device, and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination