CN109035288B - Image processing method and device, equipment and storage medium - Google Patents


Info

Publication number
CN109035288B
CN109035288B (application CN201810848300.3A)
Authority
CN
China
Prior art keywords
image, identifier, face, target object, adjusting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810848300.3A
Other languages
Chinese (zh)
Other versions
CN109035288A (en)
Inventor
陈晨
陶然
Current Assignee
Beijing Sensetime Technology Development Co Ltd
Original Assignee
Beijing Sensetime Technology Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sensetime Technology Development Co Ltd filed Critical Beijing Sensetime Technology Development Co Ltd
Priority to CN201810848300.3A priority Critical patent/CN109035288B/en
Publication of CN109035288A publication Critical patent/CN109035288A/en
Application granted granted Critical
Publication of CN109035288B publication Critical patent/CN109035288B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/194 — Segmentation; Edge detection involving foreground-background segmentation
    • G06T3/00 — Geometric image transformations in the plane of the image
    • G06T3/14 — Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T5/00 — Image enhancement or restoration
    • G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The embodiment of the invention discloses an image processing method comprising the following steps: acquiring a first image, wherein the first image includes an identifier; acquiring a second image; performing matting on a target object in the second image to obtain the target object; and synthesizing the target object with the first image according to the attribute values of the identifier to obtain a synthesized image. Embodiments of the invention also disclose an image processing device, equipment, and a storage medium.

Description

Image processing method and device, equipment and storage medium
Technical Field
The present invention relates to the field of artificial intelligence, and in particular, to an image processing method and apparatus, a device, and a storage medium.
Background
Currently, most social applications (APPs) and instant messaging applications, such as QQ and WeChat, support recording video or taking pictures, or directly selecting a locally stored video or picture, applying simple processing, and then sending it out. Recently, face detection and face recognition technology have matured, so dress-up APPs such as cute-sticker and beauty-camera applications have developed rapidly. Using face recognition and face detection techniques, these APPs can apply special processing to faces in pictures, such as automatic make-up and automatic decoration.
However, these clients offer only a single mode of video and picture processing, and effects such as automatic make-up and automatic decoration lack interest. How to synthesize recorded videos and pictures, locally selected videos and pictures, or content shot in real time with built-in materials, such as cartoon videos and decorations, so that the synthesized content is richer, more three-dimensional, and more interesting, is a problem urgently needing to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, embodiments of the present invention provide an image processing method, apparatus, device and storage medium for solving at least one problem existing in the prior art.
The technical scheme of the embodiment of the invention is realized as follows:
the embodiment of the invention provides an image processing method, which comprises the following steps:
acquiring a first image, wherein the first image comprises an identifier;
acquiring a second image;
carrying out matting on a target object in the second image to obtain the target object;
and synthesizing the target object and the first image according to the attribute value of the identifier to obtain a synthesized image.
In an embodiment of the present invention, the acquiring a first image, where the first image includes an identifier thereon includes:
carrying out matting on a target area in the acquired preprocessed image to obtain a blank area;
and adding the identifier to the blank area according to the attribute value of the identifier to obtain a first image.
In an embodiment of the present invention, the adding the identifier to the blank area according to the attribute value of the identifier, to obtain a first image includes:
adding the identifier to the blank area;
and adjusting the attribute value of the added identifier to obtain a first image.
In an embodiment of the present invention, the attribute of the identifier includes a position, a size and/or a direction of the identifier, and the adjusting the attribute value of the added identifier to obtain a first image includes:
setting the position, the size and/or the direction of the added identifier to obtain a first image;
in an embodiment of the present invention, the attribute of the identifier includes a position, a size and/or a direction of the identifier, and the adjusting the attribute value of the added identifier to obtain a first image further includes:
and adjusting the position, the size and/or the direction of the added identifier according to the image characteristic information of the preprocessed image to obtain a first image.
In an embodiment of the present invention, the attribute value of the identifier at least includes a location of the identifier, and the synthesizing the target object and the first image according to the attribute value of the identifier to obtain a synthesized image includes:
and adjusting the position of the target object on the first image according to the position of the identifier to obtain a synthesized image.
In an embodiment of the present invention, the attribute value further includes a size, and the synthesizing, according to the attribute value of the identifier, the target object and the first image to obtain a synthesized image further includes:
and adjusting the display size of the target object in the first image according to the size of the identifier to obtain a synthesized image.
In the embodiment of the present invention, the attribute value further includes a direction, and the synthesizing, according to the attribute value of the identifier, the target object and the first image to obtain a synthesized image further includes:
and adjusting the direction of the target object in the first image according to the direction of the identifier to obtain a synthesized image.
In an embodiment of the present invention, the second image includes a face image, the target object includes a face, and the adjusting the direction of the target object in the first image according to the direction of the identifier, to obtain a synthesized image includes:
determining key point information of a human face according to the human face image;
determining three-dimensional information of the face according to the key point information of the face;
determining the initial direction of the face according to the three-dimensional information of the face;
and adjusting the direction of the face in the first image according to the direction of the identifier and the initial direction of the face to obtain a synthesized image.
In an embodiment of the present invention, the acquiring a first image, where the first image includes an identifier thereon includes:
carrying out matting on a target area in the acquired frame image of the first video to obtain a frame image with a blank area;
adding the identifier to a blank area of the frame image according to the attribute value of the identifier;
each frame of image having an identifier is taken as the first image.
In an embodiment of the present invention, the acquiring the second image includes:
acquiring a second video acquired by an image acquisition device;
and taking the current frame in the second video as the second image.
In an embodiment of the invention, the identifier comprises a placeholder.
An embodiment of the present invention provides an image processing apparatus including: the system comprises a first acquisition module, a second acquisition module, a matting module and a synthesis module, wherein:
the first acquisition module is used for acquiring a first image, wherein the first image comprises an identifier;
the second acquisition module is used for acquiring a second image;
the matting module is used for matting the target object in the second image to obtain the target object;
and the synthesis module is used for synthesizing the target object and the first image according to the attribute value of the identifier to obtain a synthesized image.
In an embodiment of the present invention, the first obtaining module includes:
the first matting unit is used for matting the target area in the acquired preprocessed image to obtain a blank area;
and the first adding unit is used for adding the identifier to the blank area according to the attribute value of the identifier to obtain a first image.
In an embodiment of the present invention, the first adding unit includes:
first adding means for adding the identifier to the blank area;
and the first adjusting component is used for adjusting the attribute value of the identifier after being added to obtain a first image.
In an embodiment of the present invention, the attribute of the identifier includes a position, a size and/or a direction of the identifier, and the first adjusting part includes:
a first setting sub-component for setting the position, size and/or direction of the added identifier to obtain a first image;
in an embodiment of the present invention, the attribute of the identifier includes a position, a size, and/or a direction of the identifier, and the first adjusting unit further includes:
and the first adjustment sub-component is used for adjusting the position, the size and/or the direction of the added identifier according to the image characteristic information of the preprocessed image to obtain a first image.
In an embodiment of the present invention, the attribute value of the identifier includes at least a location of the identifier, and the synthesizing module includes:
and the first synthesis unit is used for adjusting the position of the target object on the first image according to the position of the identifier to obtain a synthesized image.
In an embodiment of the present invention, the attribute value further includes a size, and the synthesis module further includes:
and the second synthesis unit is used for adjusting the display size of the target object in the first image according to the size of the identifier to obtain a synthesized image.
In the embodiment of the present invention, the attribute value further includes a direction, and the synthesis module further includes:
and a third synthesizing unit, configured to adjust the direction of the target object in the first image according to the direction of the identifier, so as to obtain a synthesized image.
In an embodiment of the present invention, the second image includes a face image, the target object includes a face, and the third synthesizing unit includes:
the first determining part is used for determining key point information of the face according to the face image;
the second determining part is used for determining three-dimensional information of the face according to the key point information of the face;
a third determining unit, configured to determine an initial direction of the face according to three-dimensional information of the face;
and the first synthesis part is used for adjusting the direction of the face in the first image according to the direction of the identifier and the initial direction of the face to obtain a synthesized image.
In an embodiment of the present invention, the first obtaining module includes:
the second matting unit is used for matting the target area in the acquired frame image of the first video to obtain a frame image with a blank area;
a second adding unit configured to add the identifier to a blank area of the frame image according to an attribute value of the identifier;
a first execution unit for taking each frame image with an identifier as the first image.
In an embodiment of the present invention, the second obtaining module includes:
the first acquisition unit is used for acquiring a second video acquired by the image acquisition device;
and the second execution unit is used for taking the current frame in the second video as the second image.
In an embodiment of the invention, the identifier comprises a placeholder.
An embodiment of the present invention provides a computer storage medium, where computer executable instructions are stored on the computer storage medium, where the computer executable instructions, when executed, implement the method steps as described above.
An embodiment of the present invention provides a computer device, where the computer device includes a memory and a processor, where the memory stores computer executable instructions and the processor executes the computer executable instructions on the memory to implement the method steps as described above.
The embodiment of the invention provides an image processing method, an image processing device, equipment, and a storage medium: a first image including an identifier is acquired; a second image is acquired; a target object is matted out of the second image; and the target object is synthesized with the first image according to the attribute values of the identifier to obtain a synthesized image. In this way, images, target objects in images, or videos can be synthesized simply, conveniently, and quickly, so that the synthesized image is richer, more three-dimensional, and more interesting.
Drawings
FIG. 1 is a schematic diagram of a system architecture;
FIG. 2A is a schematic diagram of an implementation flow of an image processing method according to an embodiment of the present invention;
FIG. 2B is a first image of an embodiment of the present invention;
FIG. 2C is a second image of the embodiment of the invention;
FIG. 2D is a schematic diagram of a synthesized image according to an embodiment of the present invention;
FIG. 3A is a second schematic diagram of an implementation flow of an image processing method according to an embodiment of the present invention;
FIG. 3B is a schematic view of a face image according to an embodiment of the present invention;
FIG. 3C is a second image diagram after synthesis according to an embodiment of the present invention;
FIG. 4 is a schematic diagram showing the structure of an image processing apparatus according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a hardware entity of an image processing device according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further elaborated below with reference to the drawings and examples.
FIG. 1 is a schematic diagram of a system architecture. FIG. 1 shows a communication system that includes a terminal 11 and a server 12. Various clients (applications, or APPs) are installed and run on the terminal 11, including social software, instant messaging software, self-media software, and information sharing software; the server 12 is the server corresponding to these clients. In this example, the number of terminals 11 and servers 12 may each be one or more, so the system includes one or more terminals 11 with clients installed and one or more servers 12, connected through a network 13. In the embodiment of the present invention, the network-side server 12 may interact with the terminal 11 through a client: the terminal 11 sends material to be distributed to the server 12, and the server then distributes the received material. The material to be distributed includes images, videos, and combinations of images and videos.
It should be noted that some embodiments of the present invention may be based on the system architecture set forth in fig. 1.
The embodiment of the invention provides an image processing method applied to a terminal. The functions implemented by the method can be realized by a processor in the terminal calling program code, and the program code can be stored in a computer storage medium; the terminal therefore includes at least a processor and a storage medium. FIG. 2A is a schematic flow chart of an implementation of an image processing method according to an embodiment of the present invention. As shown in FIG. 2A, the method includes:
step S201, acquiring a first image, wherein the first image comprises an identifier;
here, the first image may be an image shot in real time, a locally stored image, a recorded video, or material built into the APP, for example a cartoon image or a cartoon video; this is not limited in the embodiment of the present invention.
In this embodiment, the clients include various clients capable of uploading and distributing multimedia information, such as social software, instant messaging software, self-media software, information sharing software, and information uploading software. A user can publish the synthesized picture or video through social software, send it to friends through instant messaging software, upload it through self-media software, or share it through information sharing software.
In general, the terminal may be any of various types of devices with information processing capability, including mobile terminals such as mobile phones, personal digital assistants (PDAs), navigators, digital phones, video phones, smart watches, smart bracelets, wearable devices, and tablet computers, as well as fixed terminals. The server may be a mobile device such as a mobile phone, tablet computer, or notebook computer, a fixed device such as a personal computer or a server cluster, or any other computing device with information processing capability.
Here, the identifier may be a placeholder or the like. A placeholder is, as the name implies, a symbol that occupies a fixed position at which content is later added; placeholders are widely used when editing various documents on a computer. One kind is the image placeholder: in a web page or article, a temporary substitute image is placed at the position of the final image as a stand-in; it is only a temporary, substitute graphic. An image placeholder reserves a position on the page, so that when the picture to be inserted is not yet ready, the placeholder can mark the position of the picture to be inserted, and it can also mark the size and the direction of that picture.
It should be noted that, in the embodiment of the present invention, when the identifier is a placeholder, the identifier refers to an image placeholder.
Step S202, acquiring a second image;
here, the second image may be an image shot in real time, a locally stored image, a recorded video, or material built into the APP, for example a cartoon image or a cartoon video; this is not limited in the embodiment of the present invention.
Step S203, performing matting on a target object in the second image to obtain the target object;
here, the target object may be any object that can be displayed in the second image. For example, in a face image, the target object may be a face; in an image taken while exercising, the target object may be a person exercising; in an animal photographic image, the target object may be one or several animals, or may be a part of the body of one animal. In the embodiment of the present invention, the target object is not particularly limited.
And step S204, synthesizing the target object and the first image according to the attribute value of the identifier to obtain a synthesized image.
Here, the attribute values of the identifier include, but are not limited to, a position value, a direction value, and a size value. The position value indicates the position of the picture to be inserted in the synthesized image; the direction value indicates its direction in the synthesized image; and the size value indicates its size in the synthesized image.
In the image processing method provided by the embodiment of the present invention, a first image including an identifier is acquired; a second image is acquired; a target object is matted out of the second image; and the target object is synthesized with the first image according to the attribute values of the identifier to obtain a synthesized image. In this way, images, target objects in images, or videos can be synthesized simply, conveniently, and quickly, so that the synthesized image is richer, more three-dimensional, and more interesting.
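The four steps S201–S204 can be sketched as a small pipeline. All function and field names below are illustrative assumptions, not terms from the patent; the matting and compositing operations are passed in as callables because the patent leaves their concrete implementation open:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Identifier:
    """Attribute values the method associates with an identifier."""
    position: tuple   # (x, y) where the target object should land
    size: tuple       # (w, h) display size for the target object
    direction: float  # rotation, in degrees

def process(first_image, second_image, ident: Identifier,
            matte: Callable, compose: Callable):
    # S201/S202: both images are acquired by the caller and passed in
    # S203: matte the target object out of the second image
    target = matte(second_image)
    # S204: synthesize according to the identifier's attribute values
    return compose(first_image, target, ident)
```

The value of this structure is that the same skeleton covers single images and video frames: for video, the caller simply invokes `process` once per frame.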
Based on the above method embodiment, the embodiment of the present invention further provides an image processing method, which includes:
Step S21, carrying out matting on a target area in the acquired preprocessed image to obtain a blank area;
here, the preprocessed image may be acquired network material or a picture taken by the user. FIG. 2B is a schematic diagram of a first image according to an embodiment of the present invention. As shown in FIG. 2B, picture 21 is the preprocessed image; the target area 22 in picture 21 is matted out to obtain a blank area 24. An identifier is then added to the blank area 24 and its attribute values are adjusted, yielding the first image 23. The target area 22 in the preprocessed image 21 is the area where the butterfly is located; in actual use, different target areas can be selected for different effects.
Step S22, adding the identifier to the blank area according to the attribute value of the identifier to obtain a first image;
here, the identifier may be added to the blank area 24; the image with the identifier added to the blank area 24 is the first image.
Here, the attribute value of the identifier includes, but is not limited to, a value of a position of the identifier mark, a value of a direction of the identifier mark, and a value of a size of the identifier mark.
In the embodiment of the present invention, adding the identifier to the blank area according to its attribute values can be done in two ways. In the first way, the identifier is placed anywhere in the first image, and its position, size, and/or direction values are then adjusted so that it lies in the blank area. In the second way, an identifier already carrying the desired attribute values is added directly to the blank area according to its position, size, and/or direction.
Here, the step S22 may be implemented by the following steps:
step S221, adding the identifier to the blank area;
step S222, the attribute value of the added identifier is adjusted, and a first image is obtained.
Here, after the identifier is added to the blank area, an attribute value of the identifier needs to be adjusted to obtain the first image.
Here, the attribute of the identifier includes a position, a size, and/or a direction of the identifier, and further, the step S222 may be implemented in two ways, wherein,
the first implementation is realized according to the following steps:
Step S2221a, setting the position, size and/or direction of the added identifier to obtain a first image;
in the embodiment of the invention, the position, size and/or direction of the added identifier are set to obtain the first image; that is, a user or tool developer can set the position, size and/or direction of the added identifier as desired, to achieve the effect the user or tool developer intends.
The second implementation is realized according to the following steps:
step S2221b, adjusting the position, size, and/or direction of the added identifier according to the image feature information of the preprocessed image, so as to obtain a first image.
In the embodiment of the invention, the position, size and/or direction of the added identifier are adjusted according to the image feature information of the preprocessed image to obtain the first image. That is, the position, size and/or direction of the added identifier can be adjusted manually according to the feature information of the preprocessed image, or they can be adjusted by a program according to that feature information, to obtain the first image.
The image feature information includes, but is not limited to, the color features, texture features, shape features, and spatial relationship features of the image. A color feature is a global feature describing the surface properties of the scene corresponding to the image or an image region. A texture feature is also such a global feature; however, since texture characterizes only the surface of an object and cannot fully reflect its intrinsic properties, high-level image content cannot be obtained from texture features alone. Unlike color features, texture features are not pixel-based and require statistical computation over regions containing multiple pixels. Shape features have two kinds of representation: contour features, which concern the outer boundary of an object, and region features, which concern the entire shape region. Spatial relationships refer to the spatial positions or relative directional relationships among the objects segmented from an image; these relationships can be classified as connection/adjacency, overlap/occlusion, inclusion/containment, and the like.
S23, acquiring a second image;
here, the second image may be an image captured by a camera, an image acquired locally, or an image acquired from a network.
Step S24, carrying out matting on the target object in the second image to obtain the target object;
FIG. 2C is a schematic diagram of a second image according to an embodiment of the present invention. As shown in FIG. 2C, picture 25 is the second image and picture 26 is the target object, here the facial-feature region of the face in the figure; the target object 26 is matted out of the second image 25. There are currently various matting methods: the target object in the second image can be matted out with a program, or with various kinds of APPs, for example PhotoShop, Knockout, and similar tools. A person skilled in the art can select a suitable method according to actual needs, which is not described further here.
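A minimal sketch of the matting step in step S24, assuming the binary segmentation mask is already available (estimating the mask is the hard part that the tools named above perform; the function name here is illustrative):

```python
import numpy as np

def matte_out_object(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Extract the target object: keep pixels where the (assumed, given)
    binary mask is 1, zero elsewhere. Real matting estimates the mask
    itself, e.g. via foreground-background segmentation (cf. G06T7/194)."""
    return image * mask
```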
Step S25, the attribute value of the identifier at least comprises the position of the identifier, and the position of the target object on the first image is adjusted according to the position of the identifier, so that a synthesized image is obtained;
Step S26, the attribute value further comprises a size, and the display size of the target object in the first image is adjusted according to the size of the identifier, so that a synthesized image is obtained;
step S27, the attribute value further includes a direction, and the direction of the target object in the first image is adjusted according to the direction of the identifier, so as to obtain a synthesized image.
FIG. 2D is a schematic diagram of an image synthesized according to an embodiment of the present invention, as shown in FIG. 2D, the size, direction and position of the identifier added to the blank area 24 in FIG. 2B are adjusted first; the position of the target object 26 in fig. 2C on the first image is then adjusted according to the position of the identifier, the display size of the target object in the first image is adjusted according to the size of the identifier, and the direction of the target object in the first image is adjusted according to the direction of the identifier, resulting in a synthesized image 27. Wherein the position of the target object in the synthesized image is consistent with the position of the identifier, the size of the target object in the synthesized image is consistent with the size of the identifier, and the direction of the target object in the synthesized image is consistent with the direction of the identifier.
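The adjustments of steps S25–S27 could be sketched as follows. This is a minimal illustration under several assumptions not mandated by the patent: a single-channel numpy image, a top-left position convention, crude nearest-neighbour resizing, and direction restricted to multiples of 90 degrees:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Identifier:
    x: int            # position marked by the identifier (top-left corner)
    y: int
    h: int            # display size marked by the identifier
    w: int
    angle: int = 0    # direction, here restricted to multiples of 90 degrees

def synthesize(first_image: np.ndarray, target: np.ndarray,
               ident: Identifier) -> np.ndarray:
    # Step S27: adjust the target's direction to the identifier's direction
    obj = np.rot90(target, k=ident.angle // 90)
    # Step S26: adjust the display size (nearest-neighbour resize)
    ys = np.arange(ident.h) * obj.shape[0] // ident.h
    xs = np.arange(ident.w) * obj.shape[1] // ident.w
    obj = obj[ys][:, xs]
    # Step S25: place the target at the identifier's position
    out = first_image.copy()
    out[ident.y:ident.y + ident.h, ident.x:ident.x + ident.w] = obj
    return out
```

As in FIG. 2D, the result has the target object at the identifier's position, at the identifier's size, and in the identifier's direction.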
In other embodiments, the method further comprises:
the second image includes a face image, the target object includes a face, the direction of the target object in the first image is adjusted according to the direction of the identifier, and a synthesized image is obtained, including:
determining key point information of a human face according to the human face image;
in the embodiment of the present invention, the facial key point information includes, but is not limited to, position information of the facial features, for example the eyes, nose, mouth and eyebrows, where a single facial feature may correspond to multiple key points.
Determining three-dimensional information of the face according to the key point information of the face;
determining the initial direction of the face according to the three-dimensional information of the face;
here, the initial direction of the face is the direction of the face before synthesis. For example, a picture taken with the face level and the camera also level, a picture taken with the face level and the camera tilted, and a picture taken with the face tilted and the camera level each yield a different initial direction of the face.
And adjusting the direction of the face in the first image according to the direction of the identifier and the initial direction of the face to obtain a synthesized image.
Here, the direction of the face in the first image may be adjusted according to the initial direction of the face and the direction of the identifier, so as to obtain a synthesized image, where the direction of the face in the first image is consistent with the direction of the identifier.
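Full 3-D pose estimation typically fits detected key points to a 3-D face model (for example via a perspective-n-point solver). As a minimal 2-D stand-in for that step, the in-plane rotation (roll) of a face can be estimated from two eye key points alone; the function names below are illustrative:

```python
import math

def face_roll_angle(left_eye, right_eye):
    """Estimate the in-plane rotation (roll) of a face, in degrees,
    from the left and right eye key points given as (x, y) pixels.
    A level face (eyes on a horizontal line) yields 0 degrees."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def rotation_to_identifier(initial_deg, identifier_deg):
    """Rotation to apply so the face's initial direction matches the
    identifier's direction."""
    return identifier_deg - initial_deg
```

The sketch captures only the roll component; yaw and pitch require the three-dimensional face information the patent refers to.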
Based on the above method embodiment, the embodiment of the present invention further provides an image processing method, which includes:
step S21, carrying out matting on a target area in the acquired frame image of the first video to obtain a frame image with a blank area;
step S22, adding the identifier to a blank area of the frame image according to the attribute value of the identifier;
step S23, each frame image with an identifier is taken as the first image;
here, the first image may be a frame image in a video after the first video is processed, where the first video may be a first video existing in a video material library in a program application, or may be a video uploaded by a user, and the present invention is not limited specifically.
Here, the step S22 may be implemented by the following steps:
step S221, adding the identifier to the blank area;
step S222, the attribute value of the added identifier is adjusted, and a first image is obtained.
Here, the attribute of the identifier includes a position, a size, and/or a direction of the identifier, and further, the step S222 may be implemented in two ways, wherein,
the first implementation is realized according to the following steps:
step S2221a, setting the position, size and/or direction of the added identifier to obtain a first image;
the second implementation is realized according to the following steps:
step S2221b, adjusting the position, size, and/or direction of the added identifier according to the image feature information of the preprocessed image, so as to obtain a first image.
Step S24, acquiring a second video acquired by the image acquisition device;
step S25, taking the current frame in the second video as the second image;
step S26, carrying out matting on the target object in the second image to obtain the target object;
here, the second image may be a frame image in a video after the second video is processed, where the second video may be a second video existing in a video material library in a program application, may be a video uploaded by a user, or may be a video collected by an image collector, for example a camera, which is not specifically limited in the present invention.
Step S27, the attribute value of the identifier at least comprises the position of the identifier, and the position of the target object on the first image is adjusted according to the position of the identifier, so that a synthesized image is obtained;
step S28, the attribute value further comprises a size, and the display size of the target object in the first image is adjusted according to the size of the identifier, so that a synthesized image is obtained;
step S29, the attribute value further comprises a direction, and the direction of the target object in the first image is adjusted according to the direction of the identifier, so that a synthesized image is obtained;
in an embodiment of the present invention, the identifier may include a placeholder, and the like.
In the embodiment of the invention, when the first image is a frame image from a first video and the second image is a frame image from a second video, the synthesized image is also a frame image in the video, that is, the frame image in the first video and the frame image from the second video are synthesized according to a certain rule, so as to obtain a synthesized image, and further obtain a synthesized video.
In the embodiment of the present invention, when the first image is a frame image from a first video (where the video includes N frames) and the second image is an independent picture, each frame image in the first video and a target object in the second image may be synthesized to obtain a synthesized N frame image, so as to obtain a synthesized video. The content of the background image (i.e., the image except the blank area in the first image) in the synthesized image may be continuously changed, while the content of the target object in the synthesized image is unchanged, and only the position, direction and size of the target object may be changed.
In the embodiment of the present invention, when the second image is a frame image from a second video (where the video includes N frames) and the first image is an independent picture, a target object in each frame image of the second video and the first image may be synthesized to obtain a synthesized N frame image, so as to obtain a synthesized video. The content of the background image (i.e., the image except the blank area in the first image) in the synthesized image is unchanged, and the content, position, direction and size of the target object in the synthesized image can be changed continuously.
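The three video/still combinations above reduce to pairing frames, objects, and identifiers element by element. A minimal sketch, assuming a compositing callable is supplied by the caller and using `itertools.cycle` to repeat a still image over the frames of a video:

```python
from itertools import cycle

def synthesize_video(background_frames, target_objects, identifiers, compose):
    """Pair each background frame with a matted target object and a
    per-frame identifier, composing them frame by frame.

    When one input is a still image (the still-plus-video cases above),
    wrap it with itertools.cycle so it repeats for every frame of the
    other input; zip stops at the finite stream's length.
    """
    return [compose(frame, obj, ident)
            for frame, obj, ident in zip(background_frames,
                                         target_objects,
                                         identifiers)]

# A still second image repeated over a 3-frame first video:
frames = ['bg0', 'bg1', 'bg2']
result = synthesize_video(frames,
                          cycle(['face']),
                          cycle([{'pos': (0, 0)}]),
                          lambda f, o, i: (f, o))
```

Here the placeholder strings stand in for actual frame arrays; `compose` would be a real compositing routine.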
Based on the above method embodiment, the embodiment of the present invention further provides an image processing method, and fig. 3A is a schematic diagram of a second implementation flow of the image processing method according to the embodiment of the present invention, as shown in fig. 3A, where the method includes:
step S301, obtaining basic materials;
step S302, carrying out matting on the frame images in the basic materials to obtain blank areas;
in the embodiment of the invention, the obtained basic material can be a video; a specific mark is added to the sequence frames corresponding to the video, and the marked sequence frames are designated for the face matting function.
Step S303, adding a face placeholder in the blank area;
step S304, adjusting the size and the direction of the face placeholder in each frame of image, and then adjusting the position of the face placeholder in the blank area;
in the embodiment of the present invention, steps S301 to S305 in the image processing method may be implemented in the form of a widget. Fig. 3B is a schematic view of a face image according to an embodiment of the present invention. As shown in fig. 3B, picture 31 is a face image captured by a mobile phone camera in real time, with the face occupying a large part of the image. In picture 32 the face image is placed at the specified position 33 on the screen; that is, no matter where the face appears on the screen during shooting, the captured face can be forced to the specified screen position at a fixed size, and the direction of the face can be selected anywhere within a 360-degree rotation, updated in real time.
Here, adjusting the size and direction of the placeholder before adjusting its position in the blank area means that the effect of adjusting the placeholder can be previewed, so that the designed placeholder effect and the effect actually seen are consistent.
Here, the basic material includes a plurality of frames, and considering that the position and size of the face detected in real time may differ from frame to frame, the placeholder in each frame needs to be set independently, frame by frame, until the face placeholders in all frame images are set. That is, the widget supports a different size and position for the placeholder in each frame, which helps the designer create more effects.
Step S305, storing and outputting the adjusted basic materials;
here, storing and outputting the adjusted basic material means collating the material into a material package that can be used by the rendering engine.
Step S306, detecting face images shot by a camera in real time;
step S307, adding the detected face to the output basic material according to the position of the face placeholder;
step S308, comparing the face placeholder with a face image detected in real time, and adjusting the size of the detected face to be consistent with the size of the placeholder;
step 309, comparing the face placeholder with the face image detected in real time, and adjusting the detected face direction to be consistent with the placeholder direction.
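Steps S307 to S309 amount to computing the scale factor and rotation that map the detected face onto the placeholder. A minimal sketch, with the placeholder modelled as a dictionary whose key names (`width`, `height`, `angle`) are illustrative assumptions:

```python
def match_to_placeholder(face_size, face_angle, placeholder):
    """Compute the uniform scale and the rotation (in degrees) that bring
    a detected face -- given as (width, height) and an in-plane angle --
    to the placeholder's size and direction.

    A single uniform scale is used (the smaller of the two axis ratios)
    so the face fits inside the placeholder without distortion.
    """
    fw, fh = face_size
    scale = min(placeholder['width'] / fw, placeholder['height'] / fh)
    rotation = placeholder['angle'] - face_angle
    return scale, rotation

# A 100x200 face tilted 10 degrees, mapped to a 50x50 placeholder at 40 degrees:
scale, rotation = match_to_placeholder(
    (100, 200), 10.0, {'width': 50, 'height': 50, 'angle': 40.0})
```

The resulting `scale` and `rotation` would then be fed to whatever warp the rendering engine applies when pasting the face at the placeholder's position.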
Fig. 3C is a schematic diagram of a second synthesized image according to an embodiment of the present invention. As shown in fig. 3C, pictures 34 to 38 are images synthesized, at different moments, from the faces detected in real time and the basic material. Both the basic-material content and the real-time detected face content differ across pictures 34 to 38. When the camera cannot capture a face image, the other captured content can still be displayed, shown as the basic-material content before matting. Every frame image in the basic material may be synthesized with the face image detected in real time, or only part of the frame images in the basic material may be. As can be seen from pictures 34 to 38, the synthesized images include only the facial feature parts rather than the whole captured face image; to achieve this effect, the facial feature parts can be detected and recognized so as to obtain them.
In the embodiment of the invention, comparing the face placeholder with the face image detected in real time and adjusting the detected face direction to be consistent with the placeholder direction includes performing face recognition, determining the face key points and face posture information, and adjusting the angle of the face using the face posture information, where the face posture information includes three-dimensional face information.
It should be noted that there are many face detection and face recognition methods applicable to the embodiment of the present invention, and at present many APPs can also perform face detection and face recognition directly; a person skilled in the art may select suitable face detection and face recognition methods according to actual needs, which are not described here.
Here, the steps S306 to S309 represent that the face is detected in real time, and the detected face is placed at a designated position in the screen, and the size and direction of the face are the same as those of the previous design.
In the image processing method provided by the embodiment of the invention, basic materials are obtained; carrying out matting on the frame image in the basic material to obtain a blank area; adding a face placeholder in the blank area; adjusting the size and the direction of the face placeholder in each frame of image, and then adjusting the position of the face placeholder in the blank area; storing and outputting the adjusted basic materials; detecting a face image shot by a camera in real time; adding the detected face into the output basic material according to the position of the face placeholder; comparing the face placeholder with a face image detected in real time, and adjusting the size of the detected face to be consistent with the size of the placeholder; comparing the face placeholder with a face image detected in real time, and adjusting the direction of the detected face to be consistent with the direction of the placeholder; therefore, the face image shot in real time can be simply, conveniently and quickly synthesized with the basic materials, so that the synthesized image is richer, three-dimensional and interesting.
An embodiment of the present invention provides an image processing apparatus, fig. 4 is a schematic diagram of a composition structure of the image processing apparatus according to the embodiment of the present invention, as shown in fig. 4, the apparatus 400 includes: a first acquisition module 401, a second acquisition module 402, a matting module 403 and a synthesis module 404, wherein:
the first obtaining module 401 is configured to obtain a first image, where the first image includes an identifier;
the second acquiring module 402 is configured to acquire a second image;
the matting module 403 is configured to perform matting on the target object in the second image to obtain a target object;
the synthesizing module 404 is configured to synthesize the target object with the first image according to the attribute value of the identifier, so as to obtain a synthesized image.
In other embodiments, the first acquisition module includes:
the first matting unit is used for matting the target area in the acquired preprocessed image to obtain a blank area;
and the first adding unit is used for adding the identifier to the blank area according to the attribute value of the identifier to obtain a first image.
In other embodiments, the first adding unit includes:
First adding means for adding the identifier to the blank area;
and the first adjusting component is used for adjusting the attribute value of the identifier after being added to obtain a first image.
In other embodiments, the attribute of the identifier includes a position, a size, and/or a direction of the identifier, and the first adjusting part includes:
a first setting sub-component for setting the position, size and/or direction of the added identifier to obtain a first image;
in other embodiments, the attribute of the identifier includes a position, a size, and/or a direction of the identifier, and the first adjusting part further includes:
and the first adjustment sub-component is used for adjusting the position, the size and/or the direction of the added identifier according to the image characteristic information of the preprocessed image to obtain a first image.
In other embodiments, the attribute value of the identifier includes at least a location of the identifier, and the synthesizing module includes:
and the first synthesis unit is used for adjusting the position of the target object on the first image according to the position of the identifier to obtain a synthesized image.
In other embodiments, the attribute values further include a size, and the composition module further includes:
And the second synthesis unit is used for adjusting the display size of the target object in the first image according to the size of the identifier to obtain a synthesized image.
In other embodiments, the attribute value further includes a direction, and the synthesizing module further includes:
and a third synthesizing unit, configured to adjust the direction of the target object in the first image according to the direction of the identifier, so as to obtain a synthesized image.
In other embodiments, the second image includes a face image, the target object includes a face, and the third synthesizing unit includes:
the first determining part is used for determining key point information of the face according to the face image;
the second determining part is used for determining three-dimensional information of the face according to the key point information of the face;
a third determining unit, configured to determine an initial direction of the face according to three-dimensional information of the face;
and the first synthesis part is used for adjusting the direction of the face in the first image according to the direction of the identifier and the initial direction of the face to obtain a synthesized image.
In other embodiments, the first acquisition module includes:
The second matting unit is used for matting the target area in the acquired frame image of the first video to obtain a frame image with a blank area;
a second adding unit configured to add the identifier to a blank area of the frame image according to an attribute value of the identifier;
a first execution unit for taking each frame image with an identifier as the first image.
In other embodiments, the second acquisition module includes:
the first acquisition unit is used for acquiring a second video acquired by the image acquisition device;
and the second execution unit is used for taking the current frame in the second video as the second image.
In other embodiments, the identifier comprises a placeholder.
It should be noted here that: the description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus of the present invention, please refer to the description of the embodiments of the method of the present invention.
In the embodiment of the present invention, if the above-described image processing method is implemented in the form of a software functional module and sold or used as a separate product, it may also be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of embodiments of the present invention may be embodied essentially or in part in the form of a software product stored in a storage medium, including instructions for causing a computing device to perform all or part of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program codes, such as a USB disk, a removable hard disk, a ROM (Read Only Memory), a magnetic disk, or an optical disk. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
Correspondingly, the embodiment of the invention provides a computer device comprising a memory and a processor, wherein the memory stores a computer program which can be run on the processor, and the processor realizes the steps in the image processing method when executing the program.
Accordingly, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements steps in an image processing method.
It should be noted here that: the description of the storage medium and apparatus embodiments above is similar to that of the method embodiments described above, with similar benefits as the method embodiments. For technical details not disclosed in the embodiments of the storage medium and the apparatus of the present invention, please refer to the description of the method embodiments of the present invention.
It should be noted that, fig. 5 is a schematic diagram of a hardware entity of an image processing apparatus according to an embodiment of the present invention, and as shown in fig. 5, the hardware entity of the image processing apparatus 500 includes: a memory 501, a communication bus 502, and a processor 503, wherein,
the memory 501 is configured to store instructions and applications executable by the processor 503, and may also cache data to be processed or already processed by each module in the processor 503 and the image processing apparatus 500; it may be implemented by FLASH memory or RAM (Random Access Memory).
The communication bus 502 may allow the image processing apparatus 500 to communicate with other terminals or servers through a network, and may also enable connection communication between the processor 503 and the memory 501.
The processor 503 generally controls the overall operation of the image processing apparatus 500.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method described in the embodiments of the present invention.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (20)

1. An image processing method, the method comprising:
acquiring a first image, wherein the first image comprises an identifier;
acquiring a second image;
carrying out matting on the target object in the second image to obtain a target object;
according to the attribute value of the identifier, adjusting the target object in the first image in the process of synthesizing the target object and the first image to obtain a synthesized image;
Wherein the acquiring a first image, wherein the first image includes an identifier thereon, includes:
carrying out matting on a target area in the obtained preprocessed image to obtain a blank area; adding the identifier to the blank area according to the attribute value of the identifier to obtain a first image;
or, the acquiring a first image, where the first image includes an identifier thereon, includes:
carrying out matting on a target area in the acquired frame image of the first video to obtain a frame image with a blank area; adding the identifier to a blank area of the frame image according to the attribute value of the identifier; each frame of image having an identifier is taken as the first image.
2. The method of claim 1, wherein adding the identifier to the blank area based on the attribute value of the identifier results in a first image, comprising:
adding the identifier to the blank area;
and adjusting the attribute value of the added identifier to obtain a first image.
3. The method according to claim 2, wherein the attribute of the identifier includes a position, a size and/or a direction of the identifier, and the adjusting the attribute value of the added identifier to obtain the first image includes:
And setting the position, the size and/or the direction of the added identifier to obtain a first image.
4. The method according to claim 2, wherein the attribute of the identifier includes a position, a size, and/or a direction of the identifier, and the adjusting the attribute value of the added identifier to obtain the first image further includes:
and adjusting the position, the size and/or the direction of the added identifier according to the image characteristic information of the preprocessed image to obtain a first image.
5. The method according to any one of claims 1 to 4, wherein the attribute value of the identifier includes at least a position and a size of the identifier, and the adjusting the target object in the first image during the synthesizing of the target object and the first image according to the attribute value of the identifier, to obtain the synthesized image includes:
and adjusting the position of the target object on the first image according to the position of the identifier, and adjusting the display size of the target object in the first image according to the size of the identifier to obtain a synthesized image.
6. The method according to any one of claims 1 to 4, wherein the attribute value of the identifier includes at least a position and a direction of the identifier, and the adjusting the target object in the first image during the synthesizing of the target object and the first image according to the attribute value of the identifier, to obtain the synthesized image includes:
And adjusting the position of the target object on the first image according to the position of the identifier, and adjusting the direction of the target object in the first image according to the direction of the identifier to obtain a synthesized image.
7. The method of claim 6, wherein the second image comprises a face image and the target object comprises a face, wherein the adjusting the direction of the target object in the first image according to the direction of the identifier results in a synthesized image, comprising:
determining key point information of a human face according to the human face image;
determining three-dimensional information of the face according to the key point information of the face;
determining the initial direction of the face according to the three-dimensional information of the face;
and adjusting the direction of the face in the first image according to the direction of the identifier and the initial direction of the face to obtain a synthesized image.
8. The method of any one of claims 1 to 4, wherein the acquiring the second image comprises:
acquiring a second video acquired by an image acquisition device;
and taking the current frame in the second video as the second image.
9. The method of any of claims 1 to 4, wherein the identifier comprises a placeholder.
10. An image processing apparatus, characterized in that the image processing apparatus comprises: the system comprises a first acquisition module, a second acquisition module, a matting module and a synthesis module, wherein:
the first acquisition module is used for acquiring a first image, wherein the first image comprises an identifier;
the second acquisition module is used for acquiring a second image;
the matting module is used for matting the target object in the second image to obtain the target object;
the synthesizing module is used for adjusting the target object positioned in the first image in the process of synthesizing the target object and the first image according to the attribute value of the identifier to obtain a synthesized image;
the first obtaining module specifically includes:
the first matting unit is used for matting the target area in the acquired preprocessed image to obtain a blank area;
a first adding unit, configured to add the identifier to the blank area according to an attribute value of the identifier, so as to obtain a first image;
or, the first obtaining module specifically includes:
The second matting unit is used for matting the target area in the acquired frame image of the first video to obtain a frame image with a blank area;
a second adding unit configured to add the identifier to a blank area of the frame image according to an attribute value of the identifier;
a first execution unit for taking each frame image with an identifier as the first image.
11. The apparatus of claim 10, wherein the first adding unit comprises:
first adding means for adding the identifier to the blank area;
and the first adjusting component is used for adjusting the attribute value of the identifier after being added to obtain a first image.
12. The apparatus according to claim 11, wherein the attribute of the identifier comprises a position, a size and/or a direction of the identifier, the first adjustment means comprising:
and the first setting sub-component is used for setting the position, the size and/or the direction of the added identifier to obtain a first image.
13. The apparatus of claim 11, wherein the attribute of the identifier comprises a position, a size, and/or a direction of the identifier, the first adjustment component further comprising:
And the first adjustment sub-component is used for adjusting the position, the size and/or the direction of the added identifier according to the image characteristic information of the preprocessed image to obtain a first image.
14. The apparatus according to any one of claims 10 to 13, wherein the attribute value of the identifier includes at least a position and a size of the identifier, and the synthesizing module comprises:
a first synthesizing unit configured to adjust the position of the target object on the first image according to the position of the identifier;
and a second synthesizing unit configured to adjust the display size of the target object in the first image according to the size of the identifier, to obtain a synthesized image.
15. The apparatus according to any one of claims 10 to 13, wherein the attribute value of the identifier includes at least a position and a direction of the identifier, and the synthesizing module comprises:
a first synthesizing unit configured to adjust the position of the target object on the first image according to the position of the identifier;
and a third synthesizing unit configured to adjust the direction of the target object in the first image according to the direction of the identifier, to obtain a synthesized image.
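Claims 14–15 adjust the target object's position and size on the first image according to the identifier's attribute values. A simplified sketch, assuming a square identifier and grayscale arrays, with a naive nearest-neighbour resize standing in for whatever resampler a real implementation would use:

```python
import numpy as np

def resize_nn(img: np.ndarray, size: int) -> np.ndarray:
    """Nearest-neighbour resize to size x size (illustrative stand-in for a real resampler)."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def composite(first_image: np.ndarray, target: np.ndarray,
              x: int, y: int, size: int) -> np.ndarray:
    """Paste the target object into first_image at the identifier's position,
    scaled to the identifier's size."""
    out = first_image.copy()
    out[y:y + size, x:x + size] = resize_nn(target, size)
    return out
```

The identifier's `(x, y)` drives the position adjustment of claim 14's first synthesizing unit, and `size` drives the display-size adjustment of the second.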
16. The apparatus of claim 15, wherein the second image comprises a face image, the target object comprises a face, and the third synthesizing unit comprises:
a first determining component configured to determine keypoint information of the face from the face image;
a second determining component configured to determine three-dimensional information of the face from the keypoint information of the face;
a third determining component configured to determine an initial direction of the face from the three-dimensional information of the face;
and a first synthesizing component configured to adjust the direction of the face in the first image according to the direction of the identifier and the initial direction of the face, to obtain a synthesized image.
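Claim 16's pipeline (keypoints → three-dimensional information → initial direction → direction adjustment) could look, in a deliberately simplified two-dimensional form, like this; a real implementation would recover full 3-D pose from the keypoints (e.g. PnP-style fitting), whereas this sketch only estimates in-plane roll from two hypothetical eye keypoints:

```python
import numpy as np

def initial_roll(left_eye, right_eye) -> float:
    """Initial in-plane face direction (roll, degrees) estimated from two eye keypoints."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return float(np.degrees(np.arctan2(dy, dx)))

def rotation_to_apply(identifier_dir: float, left_eye, right_eye) -> float:
    """Angle the face must be rotated so that its direction matches the identifier's:
    the difference between the identifier's direction and the face's initial direction."""
    return identifier_dir - initial_roll(left_eye, right_eye)
```

The returned angle would then drive an image-rotation step (e.g. a rotation matrix applied to the face region) before compositing, mirroring the first synthesizing component of claim 16.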
17. The apparatus according to any one of claims 10 to 13, wherein the second acquisition module comprises:
a first acquisition unit configured to acquire a second video captured by an image acquisition device;
and a second execution unit configured to take the current frame of the second video as the second image.
18. The apparatus of any of claims 10 to 13, wherein the identifier comprises a placeholder.
19. A computer storage medium having stored thereon computer executable instructions which, when executed, implement the method of any one of claims 1 to 9.
20. A computer device comprising a memory storing computer executable instructions and a processor configured to execute the instructions to perform the method of any one of claims 1 to 9.
CN201810848300.3A 2018-07-27 2018-07-27 Image processing method and device, equipment and storage medium Active CN109035288B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810848300.3A CN109035288B (en) 2018-07-27 2018-07-27 Image processing method and device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109035288A CN109035288A (en) 2018-12-18
CN109035288B true CN109035288B (en) 2024-04-16

Family

ID=64647437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810848300.3A Active CN109035288B (en) 2018-07-27 2018-07-27 Image processing method and device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109035288B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110012229B (en) * 2019-04-12 2021-01-08 维沃移动通信有限公司 Image processing method and terminal
CN110111238A (en) * 2019-04-24 2019-08-09 薄涛 Image processing method, device, equipment and its storage medium
CN110136363A (en) * 2019-05-08 2019-08-16 深圳市朗形网络科技有限公司 Self-service entertainment video production and DIY text create souvenir equipment for customizing
CN112308809B (en) * 2019-08-20 2024-06-14 北京字节跳动网络技术有限公司 Image synthesis method, device, computer equipment and storage medium
CN112529765A (en) * 2019-09-02 2021-03-19 阿里巴巴集团控股有限公司 Image processing method, apparatus and storage medium
CN111738930A (en) * 2020-05-12 2020-10-02 北京三快在线科技有限公司 Face image synthesis method and device, electronic equipment and storage medium
CN112291590A (en) * 2020-10-30 2021-01-29 北京字节跳动网络技术有限公司 Video processing method and device
CN113791721A (en) * 2021-08-31 2021-12-14 北京达佳互联信息技术有限公司 Picture processing method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102763099A * 2009-10-05 2012-10-31 Fabtale Productions Pty Ltd Interactive electronic document
CN105701762A * 2015-12-30 2016-06-22 Lenovo (Beijing) Co., Ltd. Picture processing method and electronic equipment
CN107330408A * 2017-06-30 2017-11-07 Beijing Kingsoft Internet Security Software Co., Ltd. Video processing method and device, electronic equipment and storage medium
CN107959843A * 2017-12-25 2018-04-24 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and device, computer-readable recording medium and computer equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8072472B2 (en) * 2006-06-26 2011-12-06 Agfa Healthcare Inc. System and method for scaling overlay images


Also Published As

Publication number Publication date
CN109035288A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN109035288B (en) Image processing method and device, equipment and storage medium
US11410457B2 (en) Face reenactment
EP3815042B1 (en) Image display with selective depiction of motion
CN112419170B (en) Training method of shielding detection model and beautifying processing method of face image
KR102624635B1 (en) 3D data generation in messaging systems
CN107944420B (en) Illumination processing method and device for face image
CN106791380B (en) Method and device for shooting dynamic photos
US8903139B2 (en) Method of reconstructing three-dimensional facial shape
CN113261013A (en) System and method for realistic head rotation and facial animation synthesis on mobile devices
JP7247327B2 (en) Techniques for Capturing and Editing Dynamic Depth Images
CN111667420B (en) Image processing method and device
US20160086365A1 (en) Systems and methods for the conversion of images into personalized animations
CN110580733A (en) Data processing method and device and data processing device
CN107766803B (en) Video character decorating method and device based on scene segmentation and computing equipment
Reinhuber Synthography–An invitation to reconsider the rapidly changing toolkit of digital image creation as a new genre beyond photography
CN114359453A (en) Three-dimensional special effect rendering method and device, storage medium and equipment
CN112906553B (en) Image processing method, apparatus, device and medium
CN110084306B (en) Method and apparatus for generating dynamic image
CN111314627B (en) Method and apparatus for processing video frames
CN114785957A (en) Shooting method and device thereof
Lai et al. Correcting face distortion in wide-angle videos
CN110147511B (en) Page processing method and device, electronic equipment and medium
CN114302071B (en) Video processing method and device, storage medium and electronic equipment
US20240161362A1 (en) Target-augmented material maps
CN114915730B (en) Shooting method and shooting device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant