CN110310318B - Special effect processing method and device, storage medium and terminal - Google Patents


Info

Publication number: CN110310318B
Application number: CN201910594665.2A
Authority: CN (China)
Prior art keywords: image, special effect, pixel, initial, processing
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110310318A
Inventor: 邓涵
Original and current assignee: Beijing ByteDance Network Technology Co., Ltd
Application filed by Beijing ByteDance Network Technology Co., Ltd; priority to CN201910594665.2A
Published as application CN110310318A; granted and published as CN110310318B

Classifications

    • G06T17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T7/50: Image analysis; depth or shape recovery
    • G06V40/168: Human faces; feature extraction, face representation
    • G06T2207/30201: Indexing scheme for image analysis; subject of image: human face


Abstract

The disclosure provides a special effect processing method and apparatus, a storage medium, and a terminal. The method comprises the following steps: in response to receiving a special effect processing instruction, performing dimension-raising processing on an initial image to obtain a first three-dimensional image of the initial image; performing special effect processing on the first three-dimensional image to obtain a second three-dimensional image, wherein the special effect processing includes deformation processing; obtaining a special effect image according to the initial image and the second three-dimensional image; and displaying the special effect image on a display screen. The disclosed method improves how well the visual special effect matches the actual image, improves the special effect display, and to a certain extent broadens the development and application of the visual special effect field.

Description

Special effect processing method and device, storage medium and terminal
Technical Field
The present disclosure relates to visual special effect technologies, and in particular, to a special effect processing method and apparatus, a storage medium, and a terminal.
Background
With the popularization of intelligent terminals and the continuous development of visual special effect technologies, more and more application programs pay attention to visual special effects which provide more individuation and better visual perception for users.
At present, visual special effect technology, especially deformation special effects, is generally implemented by treating the image to be processed as a two-dimensional plane; that is, the deformation is performed within the two-dimensional image. In the specific processing flow, two-dimensional plane features of the acquired image are typically extracted, special effect action points are determined in the image according to those features, and the corresponding special effect processing is then performed.
However, the existing special effect processing method discards the depth information of the image. As a result, existing visual special effect technology is limited to deformation effects on a two-dimensional plane and cannot apply special effects along the image depth. This leads to a poor match between the special effect image and the actual image and a correspondingly poor display effect, and, because depth-related special effects cannot be realized, it also limits the development and application of the visual special effect field to a certain extent.
Disclosure of Invention
The disclosure provides a special effect processing method and device, a storage medium and a terminal, which are used for improving the matching degree of a visual special effect and an actual image and the special effect display effect and expanding the development and application of the field of the visual special effect to a certain extent.
In a first aspect, the present disclosure provides a special effect processing method, including:
performing dimension raising processing on an initial image in response to receiving a special effect processing instruction to obtain a first three-dimensional image of the initial image;
carrying out special effect processing on the first three-dimensional image to obtain a second three-dimensional image; wherein the special effect processing includes: deformation treatment;
obtaining a special effect image according to the initial image and the second three-dimensional image;
and displaying the special effect image on a display screen.
In a second aspect, the present disclosure provides a special effects processing apparatus, including:
the first processing module is used for responding to the received special effect processing instruction and performing dimension increasing processing on the initial image to obtain a first three-dimensional image of the initial image;
the second processing module is used for carrying out special effect processing on the first three-dimensional image to obtain a second three-dimensional image; wherein the special effect processing includes: deformation treatment;
the acquisition module is used for acquiring a special effect image according to the initial image and the second three-dimensional image;
and the display module is used for displaying the special effect image on a display screen.
In a third aspect, the present disclosure provides a special effect processing apparatus, including:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of the first aspect.
In a fourth aspect, the present disclosure provides a terminal comprising:
special effects processing means for implementing the method according to the first aspect;
a terminal body.
In a fifth aspect, the present disclosure provides a computer-readable storage medium having a computer program stored thereon,
the computer program is executed by a processor to implement the method as described in the first aspect.
According to the special effect processing method and apparatus, the storage medium, and the terminal provided by the disclosure, after a special effect processing instruction is received, a first three-dimensional image is constructed from the two-dimensional initial image, special effect processing is performed on the first three-dimensional image to obtain a second three-dimensional image, and a special effect image is then obtained from the second three-dimensional image and the initial image and displayed. Compared with prior-art schemes that apply special effects directly to a two-dimensional image, this scheme takes image depth into account and performs the special effect processing on a three-dimensional image, avoiding the poor match between effect and image that results from neglecting depth; the special effect image finally obtained and displayed is therefore more natural and has a better display effect. Moreover, the scheme can display three-dimensional special effects based on a two-dimensional image, which broadens the development and application of the visual special effect field to a certain extent.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a schematic diagram of a terminal according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of a special effect processing method according to an embodiment of the disclosure;
fig. 3 is a schematic flow chart of another special effect processing method provided in the embodiment of the present disclosure;
fig. 4 is a schematic flowchart of another special effect processing method provided in the embodiment of the present disclosure;
fig. 5 is a schematic flowchart of another special effect processing method provided in the embodiment of the present disclosure;
fig. 6 is a schematic flowchart of another special effect processing method provided in the embodiment of the present disclosure;
fig. 7 is a functional block diagram of an effect processing apparatus according to an embodiment of the disclosure;
fig. 8 is a schematic physical structure diagram of a special effect processing apparatus according to an embodiment of the disclosure;
fig. 9 is a schematic physical structure diagram of a special effect processing apparatus according to an embodiment of the present disclosure.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
The specific application scenario of the present disclosure is: applying visual special effect processing to two-dimensional images.
As mentioned above, existing visual special effect processing is generally implemented based on the two-dimensional features of a two-dimensional image, discarding the depth features contained in the image. This affects the resulting special effect in two ways: first, for the visual effects that can currently be realized, ignoring the depth features makes the effect match the actual image poorly, so the effect looks unnatural and displays badly; second, ignoring the depth features means no special effect can act on the three-dimensional features in the image, which limits the development and application of visual special effects.
For ease of understanding, the embodiments of the present disclosure take visual special effects on a human face as an example in the description below.
The main realization mode of the existing human face special effect processing is to carry out shape processing on two-dimensional planes of human face organs such as eyes, mouth shapes, face shapes and the like or to superpose and display special effect ornaments. For example, one possible visual effect is for facial modification of a person's face, while another possible visual effect is primarily for adding eye ornamentation to the eyes of the person. The special effect processing modes are realized on the basis of the two-dimensional plane features of the face images.
However, such visual effects do not involve adjusting the depth of the image. Take the nose in a face image as an example: the nose is a stereoscopic organ with relatively high depth in the face, yet current adjustments to the nose mainly concern its display state in the two-dimensional plane, for example, making the nose appear crooked. Visual effects on the height of the nose are not involved; for instance, the effect of raising the bridge of the nose (or simply "humping the nose") cannot be achieved.
To solve the above technical problems in the prior art, the technical scheme provided by the disclosure adopts the following idea: construct a three-dimensional image corresponding to the initial image, perform the special effect processing on that three-dimensional image, and obtain and display the final special effect image from the displacement of each pixel before and after the special effect processing.
The special effect processing method provided by the disclosure can be applied to the terminal shown in fig. 1. As shown in fig. 1, the terminal 100 includes: a terminal body 110 and a special effects processing apparatus 700, wherein the special effects processing apparatus 700 is used for executing the special effects processing method.
The embodiments of the present disclosure are not particularly limited with respect to components included in the terminal body. In a practical implementation scenario, one or more of the following components may be included: a processing component, a memory, a power component, a multimedia component, an audio component, an input/output (I/O) interface, a sensor component, and a communication component.
The terminal related to the embodiments of the present disclosure may be a wireless terminal or a wired terminal. A wireless terminal may refer to a device that provides voice and/or other traffic data connectivity to a user, a handheld device having wireless connection capability, or another processing device connected to a wireless modem. A wireless terminal may be a mobile terminal such as a mobile phone (or "cellular" phone) or a computer with a mobile terminal, for example, a portable, pocket, hand-held, computer-embedded, or vehicle-mounted mobile device, which communicates with one or more core network devices via a Radio Access Network (RAN) and exchanges voice and/or data with the RAN. For another example, the wireless terminal may be a Personal Communication Service (PCS) phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA), and the like. A wireless terminal may also be referred to as a system, a Subscriber Unit, a Subscriber Station, a Mobile Station, a Remote Station, a Remote Terminal, an Access Terminal, a User Terminal, a User Agent, or User Equipment, without limitation herein. Optionally, the terminal device may also be a smart watch, a tablet computer, or the like.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
Example one
The embodiment of the disclosure provides a special effect processing method. Referring to fig. 2, the method includes the following steps:
s202, responding to the received special effect processing instruction, performing dimension increasing processing on the initial image to obtain a first three-dimensional image of the initial image.
The dimension-increasing processing in the embodiment of the disclosure is to perform dimension-increasing conversion on a two-dimensional initial image, and construct a corresponding three-dimensional image, that is, a first three-dimensional image. After the processing, the depth features in the initial image can be directly represented in the first three-dimensional image.
The special effect processing instruction may be issued by the user through an operation on the terminal. In the embodiment of the disclosure, the user's operation information on the terminal can be continuously detected and compared with a preset special effect processing instruction; if they match, it is determined that a special effect processing instruction has been received, and the subsequent processing is executed. The embodiment places no particular limitation on whether the terminal is outputting other visual effects, or acquiring or outputting images, before the special effect processing instruction is received; that is, the trigger scenario of the scheme is not limited.
In addition, the content, the representation form and the acquisition source of the initial image are not particularly limited in the embodiments of the present disclosure.
The initial image may include: a face image.
The initial image may be a still image, such as a photograph or picture, or one or more frames of a multimedia image, such as frames of a video. If the initial image contains multiple frames, it may include at least two consecutive frames and/or at least two non-consecutive frames. For brevity, the embodiment of the present disclosure describes the processing of a single frame; each remaining frame can be processed in the same way, and the details are not repeated.
The initial image may be an image currently acquired in real time by an image acquisition device such as a camera, or an image that was acquired and stored in advance. That is, the method provided by the embodiment of the present disclosure applies to both real-time images and historical images. When the acquisition step is performed, for a real-time image, the image currently captured by the acquisition device is obtained or received directly; for a historical image, the image stored at the location indicated by the user may be determined through the user's operation.
S204, carrying out special effect processing on the first three-dimensional image to obtain a second three-dimensional image.
The special effect processing provided by the embodiment of the present disclosure may include, but is not limited to: and (5) deformation treatment.
In this step, the special effect processing may be performed on only a local region of the first three-dimensional image to obtain the second three-dimensional image; for example, when processing a human face, only the region where the nose is located may be processed. Alternatively, the special effect processing may be performed on the whole of the first three-dimensional image; for example, the whole face image may be processed to obtain the second three-dimensional image.
S206, acquiring a special effect image according to the initial image and the second three-dimensional image.
Specifically, since the special effect processing in the preceding step is performed in three dimensions while the special effect image must be displayed on a plane, this step must also convert the three-dimensional image into a two-dimensional plane image.
In one implementation, since the special effect processing in the embodiment of the present disclosure includes deformation processing, the special effect image can be obtained from the displacement of each pixel before and after the special effect processing.
In another implementation, the second three-dimensional image may be mapped directly onto the two-dimensional plane of the initial image, and the mapped image used as the special effect image.
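The first implementation above, which derives the effect image from per-pixel displacements, can be sketched as follows. The function and array names are illustrative assumptions, not from the patent, and a real implementation would use interpolated warping rather than nearest-neighbour rounding:

```python
import numpy as np

def effect_from_displacement(initial, before_xy, after_xy):
    """Warp the 2D initial image using the per-pixel displacement of its
    points before/after the 3D deformation (nearest-neighbour gather,
    for brevity)."""
    h, w = initial.shape[:2]
    disp = after_xy - before_xy  # (h, w, 2) displacement in x, y
    out = np.zeros_like(initial)
    for y in range(h):
        for x in range(w):
            # Pull each output pixel from where it was before the deformation.
            sx = int(round(x - disp[y, x, 0]))
            sy = int(round(y - disp[y, x, 1]))
            if 0 <= sy < h and 0 <= sx < w:
                out[y, x] = initial[sy, sx]
    return out

# With zero displacement the effect image equals the initial image.
img = np.arange(16.0).reshape(4, 4)
zero = np.zeros((4, 4, 2))
result = effect_from_displacement(img, zero, zero)
```
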
And S208, displaying the special effect image on a display screen.
In this step, the special effect image may be directly output on a display screen.
By the method shown in fig. 2, special effects can be processed and displayed in three dimensions. Because the image depth dimension is taken into account, the poor match between effect and image caused by neglecting depth in the prior art is avoided, so the special effect image finally obtained and displayed is more natural and displays better. Moreover, the scheme can display three-dimensional special effects based on a two-dimensional image, which broadens the development and application of the visual special effect field to a certain extent.
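The four steps S202 to S208 can be summarised as a minimal, purely illustrative pipeline. Every function body below is a placeholder assumption (a flat depth map, a uniform depth raise, an orthographic projection back to the image plane), not the patent's actual processing:

```python
import numpy as np

def raise_dimension(initial):                       # S202: dimension raising
    depth = np.zeros_like(initial)                  # assume a flat depth map
    return np.dstack([initial, depth])              # first 3D image

def apply_effect(image_3d):                         # S204: deformation effect
    out = image_3d.copy()
    out[..., -1] += 0.5                             # raise the depth channel
    return out

def to_effect_image(initial, image_3d):             # S206: back to 2D
    return image_3d[..., 0]                         # project onto image plane

initial = np.ones((4, 4))
first_3d = raise_dimension(initial)
second_3d = apply_effect(first_3d)                  # second 3D image
effect_image = to_effect_image(initial, second_3d)  # S208 would display this
```
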
Hereinafter, a specific implementation of each step in the flow shown in fig. 2 will be further described.
First, with respect to the method for constructing the first three-dimensional image described in S202, referring to fig. 3, S202 may include the following steps:
s2022, performing feature extraction on the initial image to obtain an initial feature vector of the initial image.
In one possible design, the initial image may be processed by a trained keypoint identification model, whose input is the initial image and whose output is the keypoints in the initial image. The keypoints output by the model are then converted into a high-dimensional vector, which serves as the initial feature vector.
In another possible design, the initial image may be processed by a trained second neural network model, whose input is the initial image and whose output is the initial feature vector of the initial image directly; that is, the keypoints are identified automatically by the second neural network model and combined into the initial feature vector.
Whichever design is used, the neural network model must be trained on sample data in advance; the details are not repeated here.
S2024, acquiring a target feature vector of the initial image according to a preset standard feature vector corresponding to the standard image and the initial feature vector.
The standard image can be chosen according to actual needs; it may be obtained by averaging images of the same category, or any image may be designated as the standard image. Once the standard image is determined, the standard feature vector is determined as well; it may be obtained by processing the standard image according to the method of S2022.
In the embodiment of the present disclosure, the target feature vector characterizes the difference between the initial image and the standard image. Therefore, after the standard feature vector and the initial feature vector are obtained, the difference between the initial feature vector and the standard feature vector can be taken as the target feature vector.
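As a minimal sketch of S2024, assuming the feature vectors are flattened keypoint coordinates (an assumption, not stated in the patent), the target feature vector is simply the element-wise difference:

```python
import numpy as np

# Hypothetical flattened keypoint coordinates (x1, y1, x2, y2, ...).
initial_feature = np.array([10.0, 12.0, 30.0, 28.0])
standard_feature = np.array([9.0, 12.0, 31.0, 27.0])

# The target feature vector characterises how the initial image
# deviates from the standard image (S2024).
target_feature = initial_feature - standard_feature
```
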
S2026, performing principal component analysis on the target feature vector to obtain the shape principal components of the initial image.
Principal Component Analysis (PCA) is a multivariate statistical method for examining correlations among multiple variables; it aims to reveal their internal structure through a small number of principal components.
In the embodiment of the present disclosure, principal component analysis can be applied to the target feature vector to obtain the feature values of the initial image over a number of shapes, and the shapes with the largest-magnitude feature values are then taken as the shape principal components of the initial image.
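A sketch of the principal component analysis step, under the assumption that the shape principal components are the strongest PCA directions learned from a set of sample target feature vectors; the data here is random and purely illustrative:

```python
import numpy as np

# Hypothetical training set: one target feature vector per sample face.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(50, 6))

# PCA via SVD of the mean-centred data; rows of `components` are the
# principal directions, ordered by decreasing singular value.
mean = shapes.mean(axis=0)
_, singular_values, components = np.linalg.svd(shapes - mean,
                                               full_matrices=False)

# Keep the strongest directions as the shape principal components.
k = 3
shape_principal = components[:k]

# A new target feature vector is then summarised by k coefficients.
target = rng.normal(size=6)
coeffs = shape_principal @ (target - mean)
```
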
S2028, constructing the first three-dimensional image according to the shape principal components.
The shape principal components are combined to form the first three-dimensional image.
Based on the method shown in fig. 3, a two-dimensional initial image can be converted into a three-dimensional image, so that the converted first three-dimensional image has features in the depth dimension.
Based on the first three-dimensional image determined in the foregoing step, the embodiment of the present disclosure further provides an implementation manner for performing special effect processing on the first three-dimensional image. Specifically, the morphing process may be implemented based on the target processing manner indicated by the aforementioned special effect processing instruction.
Referring to fig. 4, S204 may specifically include the following steps:
and S2042, performing first deformation processing of key deformation points on the first three-dimensional image according to a target processing mode indicated by the special effect processing instruction.
The key deformation points are the key deformation positions corresponding to the target processing mode, and therefore depend on that mode.
This is described below together with how the target processing mode is obtained.
In a specific implementation scenario, the target processing manner indicated by the special effect processing instruction may include at least the following design:
in one implementation, the special effect processing may be automatically implemented in a preset special effect processing manner. At this time, from the user perspective, the user only needs to select from preset processing modes and send the special effect processing instruction, and at this time, the special effect processing instruction includes the special effect processing mode instructed by the user.
For example, the preset processing method includes: eyes and hump nose are enlarged, the currently detected operation information indicates that a user clicks a 'hump nose' virtual key on a display interface, the operation information can be used as a special effect processing instruction, and the operation information also carries a special effect processing mode: and (4) humping the nose, thus treating the humped nose only according to a preset special treatment mode corresponding to the humped nose.
Among the preset processing modes, the key deformation points corresponding to each mode are fixed. For example, for nose-humping processing, the key deformation points may be preset as the pixels at the tip of the nose; in some scenarios they may further include pixels at positions such as the bridge of the nose. As another example, for a face-slimming effect, the key deformation points may be preset as the pixels that directly affect the face contour.
This mode realizes automatic special effect processing without requiring much user operation; it is simple and convenient for the user, processes quickly on the system side, and helps shorten the wait before the special effect image is displayed.
In another implementation, the user may be given a greater degree of freedom: the first three-dimensional image is output, the user's operation information on it (from at least one operation) is received, the target processing mode is determined from that operation information, and the special effect processing is then performed on the first three-dimensional image according to the target processing mode to obtain the second three-dimensional image.
For example, a face image stored in the terminal gallery may be used as the initial image; the corresponding first three-dimensional image is obtained and output on the terminal's display screen, and the user's target processing mode is then obtained from the user's operation. If, say, the user is detected to raise the tip of the nose (the position of greatest image depth on the nose) by 0.5 cm along the depth dimension of the first three-dimensional image, then the subsequent special effect processing step performs nose-humping on the first three-dimensional image with a hump height of 0.5 cm, yielding the second three-dimensional image.
In particular, in this implementation the key deformation point is determined by the position indicated by the user. For example, if it is detected that the user pulls the nose tip up by 0.5 cm as in the foregoing example, the pixel point at the nose tip can be determined as the key deformation point. If the user inputs a specific requirement by operating directly on the first three-dimensional image, the key deformation point can likewise be determined according to that requirement.
In this implementation, the user has a higher degree of freedom and can carry out custom special effect design, which provides greater flexibility.
S2044, performing second deformation processing on the other pixel points of the first three-dimensional image by using the Laplace transform to obtain the second three-dimensional image.
The Laplace transform is a linear transformation technique; through it, the deformation of the key deformation points can be extended to the other pixel points in the first three-dimensional image.
In a specific implementation of this step, for any pixel point, the point cloud group surface coordinate B of the pixel point can first be obtained, together with a preset Laplacian matrix A of the standard image; the deformed position X of the pixel point is then solved using the relation A = BX. The point cloud group surface coordinate B of any pixel point can be obtained as follows: three-dimensional mesh division is performed on the first three-dimensional image according to the pixel points, and cotangent calculation is performed on the mesh surface corresponding to the pixel point to obtain its point cloud group surface coordinate B.
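As a non-limiting illustration (the patent discloses no code; the function and variable names below are the editor's, and a uniform-weight graph Laplacian stands in for the cotangent-weighted matrix described above), Laplacian-based deformation of this kind can be sketched as a linear least-squares solve that preserves the Laplacian (differential) coordinates of the mesh while constraining the key deformation points to their target positions:

```python
import numpy as np

def laplacian_deform(vertices, edges, anchors, anchor_targets):
    """Propagate the movement of anchor (key deformation) vertices to the
    rest of a mesh by solving a Laplacian least-squares system.

    vertices:       (n, 3) original vertex positions
    edges:          list of (i, j) index pairs
    anchors:        indices of the key deformation points
    anchor_targets: (len(anchors), 3) target positions after deformation
    """
    n = len(vertices)
    # Uniform-weight graph Laplacian L (a simplification of cotangent weights).
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] = L[j, i] = -1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    delta = L @ vertices            # Laplacian (differential) coordinates

    # Stack soft positional constraints for the anchors under the Laplacian rows.
    w = 10.0                        # constraint weight, illustrative
    C = np.zeros((len(anchors), n))
    for row, idx in enumerate(anchors):
        C[row, idx] = w
    A = np.vstack([L, C])
    b = np.vstack([delta, w * np.asarray(anchor_targets, dtype=float)])

    # Least-squares solve: preserve local shape, satisfy anchor positions.
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X

# Tiny example: a 4-vertex strip; lift the last vertex by 0.5 in z.
verts = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]], dtype=float)
edges = [(0, 1), (1, 2), (2, 3)]
new = laplacian_deform(verts, edges, anchors=[0, 3],
                       anchor_targets=[[0, 0, 0], [3, 0, 0.5]])
```

In this toy example, lifting one constrained vertex pulls the interior vertices up smoothly, mirroring how the deformation of a key point such as the nose tip propagates to the other pixel points.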
The disclosed embodiment further provides a possible implementation manner of S206 in the flow shown in fig. 2.
Referring to fig. 5, S206 may be implemented as follows:
S2062, obtaining the displacement difference of each pixel point before and after the special effect processing according to the initial image and the second three-dimensional image.
S2064, obtaining the special effect image according to the displacement difference of each pixel point and the initial image.
When processing is performed, the initial pixel coordinates and the pixel values of the pixels in the initial image can be obtained first, then the special effect pixel coordinates of the pixels are obtained according to the initial pixel coordinates and the displacement difference, and then rendering is performed on the special effect pixel coordinates of the pixels according to the pixel values of the pixels to obtain the special effect image.
The special effect pixel coordinate and the initial pixel coordinate are positions in the same pixel coordinate system, and their relationship is as follows: the special effect pixel coordinate is the sum of the initial pixel coordinate and the displacement difference of the pixel point.
Since the embodiment of the present disclosure does not delete pixel points from the image, but only performs deformation special effect processing on the three-dimensional image, i.e., changes the position of each pixel point, the special effect image can be obtained simply by rendering each pixel point at its special effect pixel coordinate using its pixel value from before the special effect processing.
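A minimal sketch of this rendering idea (illustrative only; a practical implementation would handle sub-pixel displacements and interpolation rather than the integer forward mapping shown here):

```python
import numpy as np

def render_effect(image, disp):
    """Forward-map each pixel of `image` by the per-pixel displacement
    `disp` and write its original value at the new coordinate.

    image: (h, w) or (h, w, c) array of pixel values
    disp:  (h, w, 2) integer displacement (rows, cols)
    """
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]
    # Special effect coordinate = initial pixel coordinate + displacement difference.
    new_y = np.clip(ys + disp[..., 0], 0, h - 1)
    new_x = np.clip(xs + disp[..., 1], 0, w - 1)
    out[new_y, new_x] = image[ys, xs]
    return out

# Toy example: swap two neighbouring pixels, leave the rest in place.
img = np.arange(16).reshape(4, 4)
disp = np.zeros((4, 4, 2), dtype=int)
disp[0, 0] = (0, 1)     # pixel (0,0) moves right
disp[0, 1] = (0, -1)    # pixel (0,1) moves left
warped = render_effect(img, disp)
```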
Hereinafter, the embodiment of the present disclosure further specifically describes an implementation manner of step S2062 in fig. 5. Referring to fig. 6, S2062 may be implemented as follows:
S20622, determining key pixel points in the initial image.
The key pixel points are pixel points in the initial image that directly affect the display shape of the displayed content. The embodiment of the disclosure provides at least the following two implementations:
In one implementation, this may be accomplished using a key point identification model. The key point identification model can be any neural network model whose input is an image and whose output is the key pixel points of that image.
Therefore, in a specific implementation of this step, the initial image only needs to be used as the input of the key point identification model, and the key pixel points are obtained from the model's output.
In another implementation, the key pixel points may be determined using a preset standard image: the difference condition between the initial image and the standard image is obtained, and the key pixel points of the initial image are determined according to that difference condition and the standard key points of the standard image.
Specifically, the initial image and the standard image can be characterized as vectors after feature extraction; a corresponding equation can then be established relating the initial image vector and its key pixel points to the standard image vector and the standard key points, and the key pixel points of the initial image are obtained by solving for the optimal solution.
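One plausible, purely illustrative reading of this step (the patent does not give the equation): fit a transform between landmarks of the standard image and the corresponding landmarks of the initial image, then carry the standard key points through that transform. The affine least-squares fit below is the editor's simplification, not the patented formulation:

```python
import numpy as np

def transfer_keypoints(std_pts, init_pts, std_keypoints):
    """Fit an affine map from standard-image landmarks (std_pts) to the
    corresponding landmarks found in the initial image (init_pts), then
    carry the standard key points through that map.

    All arrays are (n, 2) pixel coordinates.
    """
    n = len(std_pts)
    # Homogeneous design matrix [x, y, 1] for the affine least-squares fit.
    A = np.hstack([std_pts, np.ones((n, 1))])
    M, *_ = np.linalg.lstsq(A, init_pts, rcond=None)  # (3, 2) affine params
    K = np.hstack([std_keypoints, np.ones((len(std_keypoints), 1))])
    return K @ M

# Toy example: the initial image is the standard image shifted by (10, 5),
# so a standard key point at (50, 50) should land at (60, 55).
std = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
init = std + np.array([10.0, 5.0])
keys = transfer_keypoints(std, init, np.array([[50.0, 50.0]]))
```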
In addition, it should be noted that the key pixel points in this step and the key deformation points in the foregoing step may be the same pixel points or different pixel points.
S20624, obtaining the displacement difference of the key pixel points before and after the special effect processing according to the initial image and the second three-dimensional image.
This step obtains the displacement difference of the key pixel points in the two-dimensional plane dimension. For any key pixel point, the displacement difference can be obtained as follows: first, acquire the first pixel coordinate of the key pixel point in the initial image; then map the second three-dimensional image to a two-dimensional plane and acquire the second pixel coordinate of the key pixel point on the mapping plane; finally, take the difference between the second pixel coordinate and the first pixel coordinate as the displacement difference of the key pixel point before and after the special effect processing.
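Sketched minimally (with an orthographic projection that simply drops the depth axis standing in for the unspecified three-dimensional-to-two-dimensional mapping; names are illustrative):

```python
import numpy as np

def keypoint_displacement(initial_xy, deformed_xyz):
    """Displacement difference of a key pixel point before/after the effect.

    initial_xy:   (2,) first pixel coordinate in the initial image
    deformed_xyz: (3,) the same point on the second three-dimensional image

    The 3-D point is mapped to the two-dimensional plane (here: drop the
    depth axis, as a simplification) to obtain the second pixel
    coordinate; the displacement difference is the coordinate difference.
    """
    second_xy = np.asarray(deformed_xyz, dtype=float)[:2]  # project: drop z
    return second_xy - np.asarray(initial_xy, dtype=float)

# A nose-tip key point at (120, 80) ends up at (123, 78) after deformation.
d = keypoint_displacement([120.0, 80.0], [123.0, 78.0, 4.5])
```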
S20626, smoothing other pixel points according to the displacement difference of the key pixel points to obtain the displacement difference of each pixel point before and after the special effect processing.
At this point, only the key pixel points have known displacement differences, and the displacements of the other pixel points are still undetermined; therefore, when implementing the scheme, the displacement values of the other pixel points in the image need to be further acquired. The embodiment of the disclosure acquires the displacement differences of the other pixel points by smoothing.
Specifically, the smoothing method may include, but is not limited to, Gaussian filtering. That is, according to the displacement differences of the key pixel points, filtering processing is performed on the other pixel points in the target motion region, so that the displacement differences propagate to the other pixel points while the displacement differences of the key pixel points remain unchanged; in this way, the displacement difference of each pixel point in the target motion region is obtained.
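As a rough illustration of the propagation idea (the repeated four-neighbour averaging below is a crude stand-in for Gaussian filtering; names and parameters are the editor's assumptions):

```python
import numpy as np

def propagate_displacement(shape, key_idx, key_disp, iters=10):
    """Spread the displacement of key pixel points to the whole image by
    repeated neighbourhood smoothing, re-imposing the key displacements
    after every pass so they stay unchanged.

    shape:    (h, w) of the image
    key_idx:  list of (row, col) key pixel points
    key_disp: (len(key_idx), 2) their displacement differences
    """
    h, w = shape
    field = np.zeros((h, w, 2))
    rows, cols = zip(*key_idx)
    for _ in range(iters):
        # Four-neighbour average (a crude stand-in for a Gaussian filter pass).
        padded = np.pad(field, ((1, 1), (1, 1), (0, 0)), mode="edge")
        field = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        field[rows, cols] = key_disp   # keep key displacements fixed
    return field

# One key pixel point moved by (2, 0) in the middle of a 5x5 image.
f = propagate_displacement((5, 5), [(2, 2)], np.array([[2.0, 0.0]]))
```

The resulting field is largest at the key point and falls off with distance, which is the desired propagation behaviour.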
In this way, the displacement difference of each pixel point is determined, the special effect image is obtained, and the subsequent display step S208 is executed, thereby realizing the scheme.
It is to be understood that some or all of the steps or operations in the above-described embodiments are merely examples, and embodiments of the present application may perform other operations or variations of these operations. Further, the steps may be performed in a different order than presented in the above-described embodiments, and it is possible that not all of the operations described above are performed.
The words used in this application are words of description only and not of limitation of the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated listed items. Furthermore, the terms "comprises" and/or "comprising," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Example two
Based on the special effect processing method provided by the first embodiment, the embodiment of the present disclosure further provides an embodiment of a device for implementing each step and method in the embodiment of the method.
Referring to fig. 7, the special effect processing apparatus 700 according to an embodiment of the present disclosure includes:
the first processing module 71 is configured to perform, in response to receiving a special effect processing instruction, dimension-up processing on an initial image to obtain a first three-dimensional image of the initial image;
the second processing module 72 is configured to perform special effect processing on the first three-dimensional image to obtain a second three-dimensional image; wherein the special effect processing includes: deformation treatment;
an obtaining module 73, configured to obtain a special effect image according to the initial image and the second three-dimensional image;
a display module 74, configured to display the special effect image on a display screen.
In one possible design, the obtaining module 73 is configured to:
obtaining the displacement difference of each pixel point before and after the special effect processing according to the initial image and the second three-dimensional image;
and acquiring the special effect image according to the displacement difference of each pixel point and the initial image.
In another possible design, the obtaining module 73 is configured to:
determining key pixel points in the initial image;
obtaining the displacement difference of the key pixel points before and after the special effect processing according to the initial image and the second three-dimensional image;
and smoothing other pixel points according to the displacement difference of the key pixel points to obtain the displacement difference of each pixel point before and after the special effect processing.
In an implementation manner, the obtaining module 73 is specifically configured to:
and taking the initial image as the input of a key point identification model, and acquiring the output of the key point identification model to obtain the key pixel points.
In another implementation manner, the obtaining module 73 is specifically configured to:
acquiring the difference condition between the initial image and a preset standard image;
and determining the key pixel points of the initial image according to the difference condition and the standard key points of the standard image.
In another implementation manner, the obtaining module 73 is specifically configured to:
acquiring a first pixel coordinate of the key pixel point in the initial image;
mapping the second three-dimensional image to a two-dimensional plane, and acquiring a second pixel coordinate of the key pixel point on the mapping plane;
and acquiring a difference value between the second pixel coordinate and the first pixel coordinate to be used as a displacement difference of the key pixel point before and after the special effect processing.
The smoothing process according to the embodiment of the present disclosure includes: Gaussian filtering processing.
In another possible design, the obtaining module 73 is further specifically configured to:
acquiring initial pixel coordinates and pixel values of all pixel points in the initial image;
obtaining special effect pixel coordinates of each pixel point according to the initial pixel coordinates and the displacement difference;
and rendering is respectively carried out on the special effect pixel coordinates of each pixel point according to the pixel value of each pixel point to obtain the special effect image.
In another possible design, the first processing module 71 is specifically configured to:
performing feature extraction on the initial image to obtain an initial feature vector of the initial image;
acquiring a target characteristic vector of the initial image according to a standard characteristic vector corresponding to a preset standard image and the initial characteristic vector; wherein the target feature vector is used for representing the difference situation between the initial image and the standard image;
performing principal component analysis on the target feature vector to obtain a shape principal element of the initial image;
and constructing the first three-dimensional image according to the shape pivot.
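A very rough, 3DMM-style sketch of the shape-pivot construction described by this module (the patent does not specify the model; the mean shape, the orthonormal shape principal components, and the reconstruction formula below are the editor's assumptions):

```python
import numpy as np

def build_shape(mean_shape, components, target_vec):
    """Reconstruct a 3-D shape from a target feature vector in the style
    of a principal-component shape model: project the target vector onto
    the shape principal components, then form mean + sum(coeff_i * comp_i).

    mean_shape: (3n,) mean 3-D shape of the standard model
    components: (k, 3n) orthonormal shape principal components
    target_vec: (3n,) difference vector between initial and standard image
    """
    coeffs = components @ target_vec          # principal component coefficients
    shape = mean_shape + components.T @ coeffs
    return shape.reshape(-1, 3)               # (n, 3) vertex positions

# Toy example with 2 vertices (6 coordinates) and 2 unit components.
S = build_shape(np.zeros(6), np.eye(6)[:2],
                np.array([1.0, 2.0, 0.0, 0.0, 0.0, 0.0]))
```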
In another possible design, the second processing module 72 is specifically configured to:
performing the special effect processing on a local image in the first three-dimensional image to obtain a second three-dimensional image; or,
and carrying out special effect processing on the whole image in the first three-dimensional image to obtain the second three-dimensional image.
In another possible design, the second processing module 72 is specifically configured to:
according to a target processing mode indicated by the special effect processing instruction, performing first deformation processing of key deformation points on the first three-dimensional image;
and performing second deformation processing on other pixel points of the first three-dimensional image by using Laplace transform to obtain a second three-dimensional image.
The initial image according to the embodiment of the present disclosure includes: a face image.
The special effect processing apparatus 700 in the embodiment shown in fig. 7 may be used to implement the technical solution of the above method embodiment, and the implementation principle and the technical effect of the technical solution may further refer to the relevant description in the method embodiment, and optionally, the special effect processing apparatus 700 may be a terminal.
It should be understood that the above division of the modules of the special effect processing apparatus 700 shown in fig. 7 is only a logical division, and the actual implementation may be wholly or partially integrated into one physical entity, or may be physically separated. And these modules can be realized in the form of software called by processing element; or can be implemented in the form of hardware; and part of the modules can be realized in the form of software called by the processing element, and part of the modules can be realized in the form of hardware. For example, the obtaining module 73 may be a processing element separately set up, or may be integrated into the special effect processing apparatus 700, for example, implemented in a chip of a terminal, or may be stored in a memory of the special effect processing apparatus 700 in the form of a program, and a certain processing element of the special effect processing apparatus 700 calls and executes the functions of the above modules. The other modules are implemented similarly. In addition, all or part of the modules can be integrated together or can be independently realized. The processing element described herein may be an integrated circuit having signal processing capabilities. In implementation, each step of the above method or each module above may be implemented by an integrated logic circuit of hardware in a processor element or an instruction in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above methods, such as one or more Application Specific Integrated Circuits (ASICs), one or more Digital Signal Processors (DSPs), or one or more Field Programmable Gate Arrays (FPGAs). As another example, when one of the above modules is implemented in the form of a processing element scheduling program code, the processing element may be a general purpose processor, such as a Central Processing Unit (CPU) or another processor capable of invoking programs. As another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SOC).
Also, an embodiment of the present disclosure provides a special effect processing apparatus, referring to fig. 8, the special effect processing apparatus 700 includes:
a memory 710;
a processor 720; and
a computer program;
wherein the computer program is stored in the memory 710 and configured to be executed by the processor 720 to implement the methods as described in the above embodiments.
The number of the processors 720 in the special effect processing apparatus 700 may be one or more, and the processors 720 may also be referred to as processing units, which may implement a certain control function. The processor 720 may be a general purpose processor, a special purpose processor, or the like. In an alternative design, the processor 720 may also store instructions, which can be executed by the processor 720, so that the special effect processing apparatus 700 executes the method described in the above method embodiment.
In yet another possible design, the special effects processing apparatus 700 may include a circuit, which may implement the functions of transmitting or receiving or communicating in the foregoing method embodiments.
Optionally, the number of the memories 710 in the special effect processing apparatus 700 may be one or more, and the memories 710 have instructions or intermediate data stored thereon, and the instructions may be executed on the processor 720, so that the special effect processing apparatus 700 performs the method described in the above method embodiment. Optionally, other related data may also be stored in the memory 710. Optionally, instructions and/or data may also be stored in processor 720. The processor 720 and the memory 710 may be provided separately or may be integrated together.
In addition, as shown in fig. 8, a transceiver 730 is further disposed in the special effect processing apparatus 700, where the transceiver 730 may be referred to as a transceiver unit, a transceiver circuit, or a transceiver, and is used for data transmission or communication with a test device or other terminal devices, and is not described herein again.
As shown in fig. 8, the memory 710, the processor 720 and the transceiver 730 are connected by a bus and communicate.
If the special effects processing apparatus 700 is used to implement the method corresponding to fig. 2, the processor 720 is used to perform corresponding determination or control operations, and optionally, corresponding instructions may also be stored in the memory 710. The specific processing manner of each component can be referred to the related description of the foregoing embodiment.
In another possible design, referring to fig. 9, the special effect processing apparatus 700 may further include: an image acquisition device 740 and a display device 750;
the image acquisition device 740 is configured to acquire the image;
a display device 750 for displaying the special effect image.
The image capturing device 740 includes any device capable of capturing an image, such as a camera; and the display device 750 may include, but is not limited to: a terminal screen, a projection display device, other portable display devices connected to the terminal, and the like.
Furthermore, the disclosed embodiments provide a readable storage medium, on which a computer program is stored, the computer program being executed by a processor to implement the method according to the first embodiment.
Also, an embodiment of the present disclosure provides a terminal, please refer to fig. 1, where the terminal 100 includes: a special effects processing apparatus 700 and a terminal body 110.
The terminal body 110 is generally configured with an image capturing device (e.g., a camera), a display device (e.g., a display screen), and the like. At this time, the image capturing device and/or the display device in the special effect processing device 700 shown in fig. 9 may reuse the existing devices of the terminal.
Since each module in this embodiment can execute the method shown in the first embodiment, reference may be made to the related description of the first embodiment for a part of this embodiment that is not described in detail.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (11)

1. A special effect processing method is characterized by comprising the following steps:
performing dimension raising processing on an initial image in response to receiving a special effect processing instruction to obtain a first three-dimensional image of the initial image;
carrying out special effect processing on the first three-dimensional image to obtain a second three-dimensional image; wherein the special effect processing includes: deformation treatment;
acquiring the difference condition between the initial image and a preset standard image;
determining key pixel points of the initial image according to the difference condition and the standard key points of the standard image;
acquiring a first pixel coordinate of the key pixel point in the initial image;
mapping the second three-dimensional image to a two-dimensional plane, and acquiring a second pixel coordinate of the key pixel point on the mapping plane;
acquiring a difference value between the second pixel coordinate and the first pixel coordinate to serve as a displacement difference of the key pixel point before and after the special effect processing;
according to the displacement difference of the key pixel points, smoothing other pixel points to obtain the displacement difference of each pixel point before and after the special effect processing;
obtaining a special effect image according to the displacement difference of each pixel point and the initial image;
and displaying the special effect image on a display screen.
2. The method of claim 1, wherein the smoothing process comprises: Gaussian filtering processing.
3. The method according to claim 1, wherein the obtaining a special effect image according to the displacement difference of each pixel point and the initial image comprises:
acquiring initial pixel coordinates and pixel values of all pixel points in the initial image;
obtaining special effect pixel coordinates of each pixel point according to the initial pixel coordinates and the displacement difference;
and rendering is respectively carried out on the special effect pixel coordinates of each pixel point according to the pixel value of each pixel point to obtain the special effect image.
4. The method of claim 1, wherein the performing the upscaling process on the initial image to obtain the first three-dimensional image of the initial image comprises:
extracting features of the initial image to obtain an initial feature vector of the initial image;
acquiring a target characteristic vector of the initial image according to a standard characteristic vector corresponding to a preset standard image and the initial characteristic vector; wherein the target feature vector is used for representing the difference situation between the initial image and the standard image;
performing principal component analysis on the target feature vector to obtain a shape principal element of the initial image;
and constructing the first three-dimensional image according to the shape pivot.
5. The method according to any one of claims 1 to 3, wherein the performing a special effect process on the first three-dimensional image to obtain a second three-dimensional image comprises:
performing the special effect processing on a local image in the first three-dimensional image to obtain a second three-dimensional image; or,
and carrying out the special effect processing on the whole image in the first three-dimensional image to obtain the second three-dimensional image.
6. The method according to any one of claims 1 to 3, wherein the performing a special effect process on the first three-dimensional image to obtain a second three-dimensional image comprises:
performing first deformation processing of key deformation points on the first three-dimensional image according to a target processing mode indicated by the special effect processing instruction;
and performing second deformation processing on other pixel points of the first three-dimensional image by using Laplace transform to obtain a second three-dimensional image.
7. The method of any of claims 1-3, wherein the initial image comprises: a face image.
8. A special effect processing apparatus, comprising:
the first processing module is used for responding to the received special effect processing instruction and performing dimension-increasing processing on the initial image to obtain a first three-dimensional image of the initial image;
the second processing module is used for carrying out special effect processing on the first three-dimensional image to obtain a second three-dimensional image; wherein the special effect processing includes: deformation treatment;
the acquisition module is used for acquiring the difference condition between the initial image and a preset standard image;
determining key pixel points of the initial image according to the difference condition and the standard key points of the standard image; acquiring a first pixel coordinate of the key pixel point in the initial image; mapping the second three-dimensional image to a two-dimensional plane, and acquiring a second pixel coordinate of the key pixel point on the mapping plane; obtaining a difference value between the second pixel coordinate and the first pixel coordinate to serve as a displacement difference of the key pixel point before and after the special effect processing; according to the displacement difference of the key pixel points, smoothing other pixel points to obtain the displacement difference of each pixel point before and after the special effect processing; acquiring a special effect image according to the displacement difference of each pixel point and the initial image;
and the display module is used for displaying the special effect image on a display screen.
9. A special effect processing apparatus, comprising:
a memory;
a processor; and
a computer program;
wherein the computer program is stored in the memory and configured to be executed by the processor to implement the method of any one of claims 1-7.
10. The apparatus of claim 9, further comprising:
the image acquisition device is used for acquiring the image;
and the display device is used for displaying the special effect image.
11. A computer-readable storage medium, having stored thereon a computer program,
the computer program is executed by a processor to implement the method of any one of claims 1-7.
CN201910594665.2A 2019-07-03 2019-07-03 Special effect processing method and device, storage medium and terminal Active CN110310318B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910594665.2A CN110310318B (en) 2019-07-03 2019-07-03 Special effect processing method and device, storage medium and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910594665.2A CN110310318B (en) 2019-07-03 2019-07-03 Special effect processing method and device, storage medium and terminal

Publications (2)

Publication Number Publication Date
CN110310318A CN110310318A (en) 2019-10-08
CN110310318B true CN110310318B (en) 2022-10-04

Family

ID=68079679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910594665.2A Active CN110310318B (en) 2019-07-03 2019-07-03 Special effect processing method and device, storage medium and terminal

Country Status (1)

Country Link
CN (1) CN110310318B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102374B (en) * 2020-11-23 2021-03-12 北京蜜莱坞网络科技有限公司 Image processing method, image processing apparatus, electronic device, and medium
CN113920282B (en) * 2021-11-15 2022-11-04 广州博冠信息科技有限公司 Image processing method and device, computer readable storage medium, and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104008564A (en) * 2014-06-17 2014-08-27 河北工业大学 Human face expression cloning method
CN106203400A (en) * 2016-07-29 2016-12-07 广州国信达计算机网络通讯有限公司 A kind of face identification method and device
CN108062791A (en) * 2018-01-12 2018-05-22 北京奇虎科技有限公司 A kind of method and apparatus for rebuilding human face three-dimensional model
CN108833791A (en) * 2018-08-17 2018-11-16 维沃移动通信有限公司 A kind of image pickup method and device
CN109147037A (en) * 2018-08-16 2019-01-04 Oppo广东移动通信有限公司 Effect processing method, device and electronic equipment based on threedimensional model
CN109636888A (en) * 2018-12-05 2019-04-16 网易(杭州)网络有限公司 2D special effect making method and device, electronic equipment, storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5592599A (en) * 1991-12-18 1997-01-07 Ampex Corporation Video special effects system with graphical operator interface
CN105100775B (en) * 2015-07-29 2017-12-05 努比亚技术有限公司 A kind of image processing method and device, terminal
CN108960020A (en) * 2017-05-27 2018-12-07 富士通株式会社 Information processing method and information processing equipment
CN108765273B (en) * 2018-05-31 2021-03-09 Oppo广东移动通信有限公司 Virtual face-lifting method and device for face photographing
CN109685915B (en) * 2018-12-11 2023-08-15 维沃移动通信有限公司 Image processing method and device and mobile terminal

Also Published As

Publication number Publication date
CN110310318A (en) 2019-10-08

Similar Documents

Publication Publication Date Title
CN108846793B (en) Image processing method and terminal equipment based on image style conversion model
CN109903217B (en) Image deformation method and device
KR101141643B1 (en) Apparatus and method for a caricature function in a mobile terminal based on feature-point detection
WO2021078001A1 (en) Image enhancement method and apparatus
WO2021004322A1 (en) Head special effect processing method and apparatus, and storage medium
WO2020102978A1 (en) Image processing method and electronic device
US11769286B2 (en) Beauty processing method, electronic device, and computer-readable storage medium
CN110287836B (en) Image classification method and device, computer equipment and storage medium
WO2022001806A1 (en) Image transformation method and apparatus
CN116048244B (en) Gaze point estimation method and related equipment
WO2016165614A1 (en) Method for expression recognition in instant video and electronic equipment
CN110310318B (en) Special effect processing method and device, storage medium and terminal
CN112799508A (en) Display method and device, electronic equipment and storage medium
CN113066497A (en) Data processing method, device, system, electronic equipment and readable storage medium
CN110298326A (en) Image processing method and device, storage medium, and terminal
CN110298327A (en) Visual effect processing method and device, storage medium, and terminal
KR102389457B1 (en) Image Transformation Apparatus, Method and Computer Readable Recording Medium Thereof
CN115908120B (en) Image processing method and electronic device
CN111988525A (en) Image processing method and related device
CN115147524B (en) 3D animation generation method and electronic equipment
CN115937938A (en) Training method of face identity recognition model, face identity recognition method and device
CN111294518B (en) Portrait composition limb truncation detection method, device, terminal and storage medium
CN108765321A (en) Photograph restoration method and device, storage medium, and terminal device
CN114066724A (en) Image processing method, intelligent terminal and storage medium
CN112183217A (en) Gesture recognition method, interaction method based on gesture recognition and mixed reality glasses

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant