CN113986105B - Face image deformation method and device, electronic equipment and storage medium - Google Patents

Face image deformation method and device, electronic equipment and storage medium

Info

Publication number
CN113986105B
Authority
CN
China
Prior art keywords
face image
displacement
deformed
track
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010732569.2A
Other languages
Chinese (zh)
Other versions
CN113986105A (en)
Inventor
闫鑫 (Yan Xin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010732569.2A priority Critical patent/CN113986105B/en
Priority to PCT/CN2021/104093 priority patent/WO2022022220A1/en
Publication of CN113986105A publication Critical patent/CN113986105A/en
Application granted granted Critical
Publication of CN113986105B publication Critical patent/CN113986105B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/18Image warping, e.g. rearranging pixels individually

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a face image deformation method and device, an electronic device, and a storage medium, and belongs to the technical field of computer vision processing. The method comprises the following steps: acquiring an initial trigger position selected by a user for a face image, and determining a region to be deformed in the face image according to the initial trigger position; acquiring a trigger track taking the initial trigger position as a trigger starting point, and extracting the track length and track direction of the trigger track; determining the displacement length of each pixel point in the region to be deformed according to the track length, and determining the displacement direction of each pixel point according to the track direction; and adjusting the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction so as to generate a deformed face image. In this way, the face image is deformed in a personalized manner in response to the user's triggers, a face deformation function with a higher degree of freedom is realized, and user stickiness to the product is increased.

Description

Face image deformation method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of computer vision processing, and in particular relates to a method and a device for deforming a face image, electronic equipment and a storage medium.
Background
With the progress of computer vision processing technology, the functions for visually processing face images have become more and more diversified, such as special effect addition and filter superposition in beautification applications.
In the related art, the functions for visually processing a face image are preset by the system. After the user selects a function, the processing effect is rendered according to the default effect parameters preset for that function, so the user's personalized visual-processing needs cannot be met and user stickiness to the product is low.
Disclosure of Invention
The disclosure provides a face image deformation method and device, an electronic device, and a storage medium, so as to at least solve the problem in the related art that face image processing depends on default effect parameters preset by the system and cannot meet the personalized requirements of users.
The technical scheme of the present disclosure is as follows:
According to a first aspect of an embodiment of the present disclosure, there is provided a method for deforming a face image, including: acquiring an initial trigger position selected by a user for a face image, and determining a region to be deformed in the face image according to the initial trigger position; acquiring a trigger track taking the initial trigger position as a trigger starting point, and extracting the track length and track direction of the trigger track; determining the displacement length of each pixel point in the region to be deformed according to the track length, and determining the displacement direction of each pixel point according to the track direction; and adjusting the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction so as to generate and display a deformed face image.
In addition, the method for deforming the face image according to the embodiment of the present disclosure further includes the following additional technical features:
in one embodiment of the present disclosure, the determining the region to be deformed in the face image according to the initial trigger position of the trigger operation includes: calculating the distance between the initial trigger position and a plurality of preset control areas; and determining a preset control area with the minimum distance from the initial trigger position as the area to be deformed.
In one embodiment of the present disclosure, the determining the region to be deformed in the face image according to the initial trigger position of the trigger operation includes: determining target identification information corresponding to the initial trigger position in a plurality of pieces of identification information preset on the face image, wherein each piece of identification information in the plurality of pieces of identification information corresponds to an image area on the face image; and determining an image area corresponding to the target identification information as the area to be deformed.
In one embodiment of the present disclosure, the determining the region to be deformed in the face image according to the initial trigger position of the trigger operation includes: determining target face key points in the face image according to the initial trigger position; acquiring a preset deformation radius corresponding to the target face key point; and determining the region to be deformed by taking the target face key point as a circle center and taking the preset deformation radius as a circular radius.
In one embodiment of the present disclosure, the determining the displacement length of each pixel point in the region to be deformed according to the track length includes: calculating a first distance between each pixel point in the region to be deformed and the target face key point; determining a first deformation coefficient corresponding to each pixel point according to the first distance; and calculating a first product of the track length and the first deformation coefficient, and taking the first product as the displacement length of each pixel point.
In one embodiment of the present disclosure, the determining the displacement length of each pixel point in the to-be-deformed area according to the track length includes: calculating a second distance between each pixel point in the region to be deformed and a preset reference key point in the face image; determining a second deformation coefficient corresponding to each pixel point according to the second distance; and calculating a second product of the track length and the second deformation coefficient, and taking the second product as the displacement length of each pixel point.
In one embodiment of the present disclosure, the adjusting the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction includes: in response to the displacement direction belonging to a preset direction, determining a target adjustment position of each pixel point according to the displacement direction and the displacement length of each pixel point, wherein the preset direction comprises a horizontal direction and a vertical direction of the face image; and adjusting each pixel point to the target adjustment position.
In one embodiment of the present disclosure, further comprising: responding to the displacement direction not belonging to the preset direction, splitting the displacement direction into a horizontal direction and a vertical direction; determining a first displacement in the horizontal direction and a second displacement in the vertical direction according to the displacement length; and controlling each pixel point in the region to be deformed to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction so as to generate and display a deformed face image.
According to a second aspect of the embodiments of the present disclosure, there is provided a morphing device of a face image, including: the first determining module is configured to acquire an initial trigger position selected by a user for a face image, and determine a region to be deformed in the face image according to the initial trigger position; the extraction module is configured to acquire a trigger track taking the initial trigger position as a trigger starting point and extract the track length and track direction of the trigger track; the second determining module is configured to determine the displacement length and the displacement direction of each pixel point in the region to be deformed according to the track length and the track direction; and the deformation adjustment module is configured to adjust the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction so as to generate a deformed face image.
In addition, the facial image deforming device of the embodiment of the disclosure further includes the following additional technical features:
in one embodiment of the disclosure, the first determining module is specifically configured to: calculating the distance between the initial trigger position and a plurality of preset control areas; and determining a preset control area with the minimum distance from the initial trigger position as the area to be deformed.
In one embodiment of the disclosure, the first determining module is specifically configured to: determining target identification information corresponding to the initial trigger position in a plurality of pieces of identification information preset on the face image, wherein each piece of identification information in the plurality of pieces of identification information corresponds to an image area on the face image; and determining an image area corresponding to the target identification information as the area to be deformed.
In one embodiment of the present disclosure, the first determining module includes: a first determining unit configured to determine a target face key point in the face image according to the initial trigger position; an acquisition unit configured to acquire a preset deformation radius corresponding to the target face key point; and the second determining unit is configured to determine the to-be-deformed area by taking the target face key point as a circle center and taking the preset deformation radius as a circular radius.
In one embodiment of the disclosure, the second determining module is specifically configured to: calculating a first distance between each pixel point in the region to be deformed and the target face key point; determining a first deformation coefficient corresponding to each pixel point according to the first distance; and calculating a first product of the track length and the first deformation coefficient, and taking the first product as the displacement length of each pixel point.
In one embodiment of the disclosure, the second determining module is specifically configured to: calculating a second distance between each pixel point in the region to be deformed and a preset reference key point in the face image; determining a second deformation coefficient corresponding to each pixel point according to the second distance; and calculating a second product of the track length and the second deformation coefficient, and taking the second product as the displacement length of each pixel point.
In one embodiment of the present disclosure, the deformation adjustment module specifically includes: a third determining unit configured to determine, in response to the displacement direction belonging to the preset direction, a target adjustment position of each pixel according to the displacement direction and the displacement length of each pixel, where the preset direction includes a horizontal direction and a vertical direction of the face image; and a first adjustment unit configured to adjust each pixel point to the target adjustment position.
In one embodiment of the present disclosure, the deformation adjustment module further includes: a fourth determining unit configured to split the displacement direction into a horizontal direction and a vertical direction in response to the displacement direction not belonging to the preset direction; a fifth determining unit configured to determine a first displacement in the horizontal direction and a second displacement in the vertical direction according to the displacement length; and the second adjusting unit is configured to control each pixel point in the region to be deformed to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction so as to generate and display a deformed face image.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, comprising: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement a method of morphing facial images as described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the method of morphing a face image as described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product which, when executed by a processor of an electronic device, enables the electronic device to perform the method of morphing a face image as described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
An initial trigger position selected by the user for the face image is acquired, and the region to be deformed in the face image is determined according to the initial trigger position; a trigger track taking the initial trigger position as a trigger starting point is then acquired, and the track length and track direction of the trigger track are extracted; the displacement length and displacement direction of each pixel point in the region to be deformed are determined according to the track length and the track direction, respectively; and the position of each pixel point in the region to be deformed is adjusted according to the displacement length and the displacement direction to generate and display the deformed face image. In this way, the face image is deformed in a personalized manner in response to the user's triggers, a face deformation function with a higher degree of freedom is realized, and user stickiness to the product is increased.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a flowchart of a face image deformation method according to a first exemplary embodiment;
Fig. 2 is a schematic diagram of a track direction according to a second exemplary embodiment;
Fig. 3 is a schematic diagram of a track direction according to a third exemplary embodiment;
Fig. 4 is a schematic diagram of a face image deformation scene according to a fourth exemplary embodiment;
Fig. 5 is a flowchart of a face image deformation method according to a fifth exemplary embodiment;
Fig. 6-1 is a schematic diagram of the distance between an initial trigger position and a corresponding preset control area according to a sixth exemplary embodiment;
Fig. 6-2 is a schematic diagram of the distance between an initial trigger position and a corresponding preset control area according to a seventh exemplary embodiment;
Fig. 7 is a flowchart of a face image deformation method according to an eighth exemplary embodiment;
Fig. 8 is a schematic diagram of identification information according to a ninth exemplary embodiment;
Fig. 9 is a flowchart of a face image deformation method according to a tenth exemplary embodiment;
Fig. 10 is a schematic diagram of a region-to-be-deformed determination scene according to an eleventh exemplary embodiment;
Fig. 11 is a schematic diagram of a region-to-be-deformed determination scene according to a twelfth exemplary embodiment;
Fig. 12 is a flowchart of a face image deformation method according to a thirteenth exemplary embodiment;
Fig. 13 is a flowchart of a face image deformation method according to a fourteenth exemplary embodiment;
Fig. 14 is a flowchart of a face image deformation method according to a fifteenth exemplary embodiment;
Fig. 15-1 is a schematic diagram of a target adjustment position scene according to a sixteenth exemplary embodiment;
Fig. 15-2 is a schematic diagram of a deformed face image according to a seventeenth exemplary embodiment;
Fig. 16-1 is a schematic diagram of a first direction and second direction scene according to an eighteenth exemplary embodiment;
Fig. 16-2 is a schematic diagram of a deformed face image according to a nineteenth exemplary embodiment;
Fig. 17 is a schematic structural diagram of a face image deformation device according to a twentieth exemplary embodiment;
Fig. 18 is a schematic structural diagram of a face image deformation device according to a twenty-first exemplary embodiment;
Fig. 19 is a schematic structural diagram of a face image deformation device according to a twenty-second exemplary embodiment;
Fig. 20 is a schematic structural diagram of a face image deformation device according to a twenty-third exemplary embodiment;
Fig. 21 is a schematic structural diagram of a terminal device according to a twenty-fourth exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
In view of the problem in the related art that the visual-processing effect is determined by processing-effect parameters preset by the system and can hardly meet users' personalized processing requirements, the present disclosure provides a method that realizes personalized deformation of a face image through interaction with the user.
Of course, the face image processing method according to the embodiment of the present disclosure may be applied to a deformation process of any subject image, such as a building image, a fruit image, etc., in addition to a face image, and for convenience of explanation, the following embodiments mainly focus on deformation of a face image.
Fig. 1 is a flowchart illustrating a method for deforming a face image according to an exemplary embodiment, and as shown in fig. 1, the method for deforming a face image is used in an electronic device, which may be a smart phone, a portable computer, or the like, and includes the following steps.
In step 101, an initial trigger position selected by a user for a face image is obtained, and a region to be deformed in the face image is determined according to the initial trigger position.
The user may select the initial trigger position with a finger, a stylus, or the like.
In the embodiment of the disclosure, an initial trigger position selected by a user for a face image is obtained, for example, a position that is initially clicked by the user with a finger is an initial trigger position, and then, a region to be deformed in the face image is determined according to the initial trigger position, where the region to be deformed includes a face region to be deformed, and the like.
In some possible embodiments, an image area which is a preset range from the initial trigger position may be directly determined as the area to be deformed; in other possible embodiments, the corresponding relationship between each trigger position and the to-be-deformed region may be pre-constructed, and if the initial trigger position belongs to the pre-constructed trigger position, the to-be-deformed region corresponding to the pre-constructed trigger position is determined as the to-be-deformed region. The specific details of how to determine the region to be deformed in the face image according to the initial trigger position will be described in the following embodiments, and will not be described herein.
In step 102, a trigger track with an initial trigger position as a trigger start point is acquired, and a track length and a track direction of the trigger track are extracted.
It should be understood that in this embodiment the face deformation display is driven by the user's drag operation, so the trigger track starting from the initial trigger position is obtained, for example by detecting it from the capacitance values reported by the terminal device's touch screen, and then the track length and track direction of the trigger track are extracted.
In one embodiment of the present disclosure, as shown in fig. 2, an end trigger position of a trigger track may be identified, a direction from an initial trigger position to the end trigger position is constructed as a track direction, and a track length may be obtained as a distance from the initial trigger position to the end trigger position.
In another embodiment of the present disclosure, as shown in fig. 3, a plurality of trigger points may be sampled from the current trigger track at a preset time interval, and a direction is constructed from each trigger point to its previous adjacent trigger point as a track direction; in this case there are a plurality of track directions, and the distance between each trigger point and its previous adjacent trigger point is used as the track length in the corresponding track direction.
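By way of illustration only (the following sketch is not part of the patent disclosure, and the coordinate and sampling conventions are assumptions), the two extraction schemes above can be expressed in Python as:

```python
import math

def track_from_endpoints(start, end):
    """Scheme of fig. 2: one length/direction from start and end trigger positions."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)        # track length = Euclidean distance
    direction = math.atan2(dy, dx)     # track direction as an angle in radians
    return length, direction

def track_from_samples(points):
    """Scheme of fig. 3: one length/direction per pair of adjacent sampled points."""
    return [track_from_endpoints(p, q) for p, q in zip(points, points[1:])]

# Example: a drag from (100, 100) to (140, 130)
print(track_from_endpoints((100, 100), (140, 130)))  # (50.0, ~0.64 rad)
```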
In step 103, the displacement length and the displacement direction of each pixel point in the region to be deformed are determined according to the track length and the track direction, respectively.
In an embodiment of the present disclosure, in order to perform a deformation operation on a face in response to a trigger operation of a user, a displacement length of each pixel point in a region to be deformed is determined according to a track length, and a displacement direction of each pixel point is determined according to a track direction, so that a trigger track of the user is reflected on movement of each pixel point in the region to be deformed.
It should be noted that the track length and the displacement length of each pixel point in the region to be deformed are in direct proportion, so the farther the user drags, the larger the offset, the stronger the pull on the region to be deformed, and the more exaggerated the resulting deformation. The displacement direction may be the same as the track direction, or may be set differently according to the user's individual needs; for example, the user may set a leftward track direction to produce a rightward displacement direction, and so on.
In step 104, the position of each pixel point in the region to be deformed is adjusted according to the displacement length and the displacement direction, so as to generate a deformed face image.
In this embodiment, the position of each pixel point in the region to be deformed is adjusted according to the displacement length and the displacement direction to generate a deformed face image. For example, as shown in the left part of fig. 4, when the user's initial trigger position is on the eyelid, the determined region to be deformed is the whole eye region; the user's trigger track moves downward by a certain track length, and as shown in the right part of fig. 4, the eye region of the face image is deformed accordingly.
In summary, the face image deformation method of the embodiments of the present disclosure obtains an initial trigger position selected by the user for the face image and determines a region to be deformed in the face image according to the initial trigger position; it then obtains a trigger track taking the initial trigger position as a trigger starting point, extracts the track length and track direction of the trigger track, determines the displacement length of each pixel point in the region to be deformed according to the track length and the displacement direction of each pixel point according to the track direction, and adjusts the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction to generate and display the deformed face image. Based on this interaction with the user, personalized deformation of the face image is achieved, a face deformation function with a higher degree of freedom is realized, and user stickiness to the product is increased.
It should be noted that, in different application scenarios, the manner of determining the region to be deformed in the face image according to the initial trigger position is different, and the following is illustrated as follows:
Example one:
As shown in fig. 5, the determining the region to be deformed in the face image according to the initial trigger position of the trigger operation includes:
in step 201, distances between the initial trigger position and a plurality of preset control areas are calculated.
In step 202, a preset control area with the smallest distance from the initial trigger position is determined as an area to be deformed.
In this embodiment, it may be understood that a plurality of control areas are set in the face image in advance, each control area corresponding to a face region; for example, control area 1 corresponds to the left-eye region and control area 2 corresponds to the mouth region. The preset control areas may be calibrated from big data or set by the user according to personal needs; when set by the user, the different preset control areas may be divided according to dividing tracks the user traces on the face image.
After the initial trigger position is obtained, the distance between the initial trigger position and a plurality of preset control areas is calculated, for example, as shown in fig. 6-1, the center point coordinates of the initial trigger position and the center point coordinates of each preset control area are extracted, and the distance between the initial trigger position and the corresponding preset control area is calculated based on the distance between the center point coordinates of the initial trigger position and the center point coordinates of each preset control area. For another example, as shown in fig. 6-2, the closest distance between the edge profile of the initial trigger position and the edge profile of each preset control area is calculated as the distance between the initial trigger position and the corresponding preset control area.
Further, the preset control area with the smallest distance from the initial trigger position is determined as the region to be deformed. Of course, in this embodiment, if the initial trigger position falls entirely within one preset control area, that control area is directly used as the region to be deformed; if the initial trigger position spans several preset control areas, the proportion of the trigger area falling in each control area to the total trigger area is counted, and the preset control area with the highest proportion is used as the region to be deformed. In this way, a "deform where you tap" effect is achieved.
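A minimal sketch of the center-distance variant of fig. 6-1, assuming each preset control area is represented by its center coordinates (the area names and coordinates below are illustrative, not from the patent):

```python
import math

# Hypothetical preset control areas, each named and given a center (x, y).
CONTROL_AREAS = {
    "left_eye": (120, 150),
    "right_eye": (200, 150),
    "mouth": (160, 240),
}

def region_to_deform(initial_trigger_pos):
    """Return the preset control area whose center is nearest the trigger."""
    return min(CONTROL_AREAS,
               key=lambda name: math.dist(initial_trigger_pos, CONTROL_AREAS[name]))

print(region_to_deform((150, 230)))  # -> "mouth"
```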
Example two:
as shown in fig. 7, the determining the region to be deformed in the face image according to the initial trigger position of the trigger operation includes:
in step 301, target identification information corresponding to an initial trigger position is determined from a plurality of pieces of identification information preset on a face image, wherein each piece of identification information in the plurality of pieces of identification information corresponds to an image area on the face image.
In this embodiment, as shown in fig. 8, a plurality of identification information (may or may not be displayed) is preset on the face image, where each identification information is used as a control module of an image area in the face image, and a correspondence between each identification information and the image area is stored in a preset database, and for example, for identification information 1, the corresponding image area is a forehead area.
In step 302, an image area corresponding to the target identification information is determined as an area to be deformed.
In this embodiment, the target identification information corresponding to the initial trigger position is determined; for example, identification information whose position overlaps the initial trigger position is taken as the target identification information. Alternatively, the distance between the center point of the initial trigger position and the center point of each piece of identification information may be calculated, and the piece closest to the initial trigger position taken as the target identification information.
As mentioned above, there is a correspondence between the identification information and the image areas; thus, in this embodiment, the image area corresponding to the target identification information is determined as the region to be deformed by querying a preset database or the like. In this way, the image area to be deformed can be determined directly from the identification information triggered by the user, and since image areas and identification information are bound in advance, deformation efficiency is improved.
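Since the binding is stored in a preset database, the lookup reduces to a table query; a sketch under the assumption of a simple in-memory mapping (all identifiers and rectangles below are hypothetical):

```python
# Hypothetical binding of identification information to image areas (x, y, w, h).
ID_TO_AREA = {
    1: (80, 40, 160, 60),   # e.g. forehead area for identification information 1
    2: (100, 120, 50, 30),
}

def area_for_identification(target_id):
    """Query the preset 'database' for the image area bound to the identifier."""
    return ID_TO_AREA.get(target_id)
```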
Example three:
as shown in fig. 9, the determining the region to be deformed in the face image according to the initial trigger position of the trigger operation includes:
Step 401, determining a target face key point in the face image according to the initial trigger position.
In this embodiment, a target face key point is determined in the face image according to the initial trigger position. The target face key point may be the center point of the region corresponding to the initial trigger position, or it may be determined with a pre-trained convolutional neural network: as shown in fig. 10, after the face image is input, the convolutional neural network outputs a plurality of face key points (101 in the drawing), and the face key point associated with the initial trigger position is then taken as the target face key point. When the initial trigger position overlaps exactly one face key point, that key point is taken as the target face key point; when it overlaps a plurality of face key points, one of them is selected at random as the target face key point; and when it overlaps no face key point, the key point closest to the center point or edge point of the initial trigger position is determined as the target face key point.
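The selection rules in this paragraph can be sketched as follows (illustrative only; the trigger region is modeled as a circle of radius trigger_radius, an assumption, and the face key points are assumed to come from a separately trained detector not shown here):

```python
import math
import random

def target_face_key_point(trigger_center, trigger_radius, key_points):
    """Pick the target face key point by the overlap rules above.

    key_points: list of (x, y) positions, assumed to come from a
    pre-trained convolutional network.
    """
    overlapping = [kp for kp in key_points
                   if math.dist(trigger_center, kp) <= trigger_radius]
    if len(overlapping) == 1:
        return overlapping[0]              # exactly one overlap: take it
    if overlapping:
        return random.choice(overlapping)  # several overlaps: pick one at random
    # No overlap: fall back to the key point nearest the trigger center.
    return min(key_points, key=lambda kp: math.dist(trigger_center, kp))
```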
Step 402, obtaining a preset deformation radius corresponding to the target face key point.
In some possible embodiments, a fixed preset deformation radius is set for the face image, where the preset deformation radius in this embodiment may be calibrated according to preset big data, or may be set according to the size of the face image, and the larger the size of the face image, the larger the corresponding preset deformation radius, which may, of course, be set by the personal preference of the user in some possible embodiments.
In other possible examples, the preset deformation radius may vary across the face image. For example, the face image is divided into a plurality of regions, such as a forehead region, a cheek region, and a mouth region, and a preset deformation radius is set for each region, with different regions having different radii so that different regions yield different deformation effects. The region where the target face key point is located is then determined, and the preset deformation radius of that region is used as the preset deformation radius of the target face key point.
In this example, a corresponding preset deformation radius may be preset for each key point in the face image, a corresponding relation between the coordinates of the key point and the preset deformation radius is stored, the corresponding relation is queried, and the preset deformation radius corresponding to the target face key point is determined, so that the flexibility of face deformation is further improved, and the diversity of deformation effects is ensured.
And step 403, determining a region to be deformed by taking the target face key point as a circle center and taking the preset deformation radius as a circular radius.
In this embodiment, as shown in fig. 11, a target face key point is used as a center of a circle, and a preset deformation radius is used as a circular radius, so as to determine a region to be deformed, where the region to be deformed may include a background region in addition to a face region.
In summary, according to the face image deformation method above, the region to be deformed corresponding to the initial trigger position is determined flexibly according to the requirements of the application scenario, which improves user stickiness to the product.
In order to enable those skilled in the art to more clearly understand how to determine the displacement length of each pixel point in the region to be deformed according to the track length in the embodiments of the present disclosure, the following description is given with reference to specific examples:
Example one:
In this example, the manner of determining the region to be deformed is the example three shown in fig. 9 described above, and thus, as shown in fig. 12, determining the displacement length of each pixel point in the region to be deformed according to the track length includes:
In step 501, a first distance between each pixel point in the region to be deformed and the target face key point is calculated.
In some possible embodiments, the first distance between each pixel point in the region to be deformed and the target face key point is calculated from the coordinates of the pixel point and the coordinates of the target face key point. Since the target face key point lies within the face region of the image, the deformation is guaranteed to center on the face region.
In step 502, a first deformation coefficient for each pixel is determined based on the first distance.
In step 503, a first product of the track length and the first deformation coefficient is calculated, and the first product is taken as the displacement length of each pixel point.
In some possible examples, the first distance and the first deformation coefficient are in inverse proportion: the larger the first distance, the smaller the corresponding first deformation coefficient. The first product of the track length and the first deformation coefficient is then computed as the displacement length of each pixel point, so that after each pixel is displaced by its displacement length, a liquify effect centered on the target face key point is achieved.
In another possible example, the first distance and the first deformation coefficient are in direct proportion: the larger the first distance, the larger the corresponding first deformation coefficient. The first product of the track length and the first deformation coefficient is again taken as the displacement length of each pixel point, and after displacement by this length a focus-blurring effect is realized.
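As an illustrative sketch of the inverse-proportional case (the concrete falloff formula and the clamp value are assumptions; the patent only fixes that the coefficient decreases with the first distance), the displacement of every pixel in the circular region can be computed with numpy as:

```python
import numpy as np

def liquify(image, center, radius, track_vec, max_len=30.0):
    """Displace every pixel in the circle (center, radius) along track_vec.

    The first deformation coefficient falls off with the first distance from
    the target face key point, so the key point itself moves the most
    (the liquify-style effect described above).
    """
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    dist = np.hypot(xs - center[0], ys - center[1])      # first distance
    coef = np.clip(1.0 - dist / radius, 0.0, 1.0)        # inverse-proportional coefficient
    track_len = min(float(np.hypot(*track_vec)), max_len)  # clamp to a limit value
    unit = np.asarray(track_vec, np.float32) / (np.hypot(*track_vec) + 1e-6)
    # Backward mapping: each output pixel samples the source pixel "behind" it.
    src_x = np.clip(xs - coef * track_len * unit[0], 0, w - 1).astype(int)
    src_y = np.clip(ys - coef * track_len * unit[1], 0, h - 1).astype(int)
    return image[src_y, src_x]
```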
Example two:
In this example, as shown in fig. 13, determining the displacement length of each pixel point in the region to be deformed according to the track length includes:
In step 601, a second distance between each pixel point in the region to be deformed and a preset reference key point in the face image is calculated.
The preset reference key point in the face image is a key point in a preset, fixed face area; it may be a key point on the nose tip, on the forehead, or elsewhere, and different positions of the preset reference key point naturally yield different deformation effects.
In step 602, a second deformation coefficient for each pixel is determined based on the second distance.
In step 603, a second product of the track length and the second deformation coefficient is calculated, and the second product is taken as the displacement length of each pixel point.
In some possible examples, the second distance and the second deformation coefficient are in inverse proportion: the larger the second distance, the smaller the corresponding second deformation coefficient. The second product of the track length and the second deformation coefficient is then computed as the displacement length of each pixel point, so that after each pixel is displaced by its displacement length, a liquify effect centered on the preset reference key point is achieved.
In another possible example, the second distance and the second deformation coefficient are in direct proportion: the larger the second distance, the larger the corresponding second deformation coefficient. The second product of the track length and the second deformation coefficient is again taken as the displacement length of each pixel point, and after displacement by this length a focus-blurring effect is realized.
Example three:
in this example, the track length is directly taken as the displacement length, so that after each pixel point moves according to the displacement length, the effect of integrally moving the region to be deformed is achieved.
Of course, in actual operation, to ensure the processing effect of the deformed image, a limit value may be set for the displacement length: whenever the displacement length exceeds the limit value, it is clamped to the limit value. The limit value may differ according to the user's initial trigger position, and the user may set it according to personal preference; for example, when the initial trigger position is on the eyes, the corresponding limit value may be larger, and so on.
Therefore, in this embodiment, the displacement of each pixel point may be determined in different manners, so as to achieve different deformation effects.
Further, after determining the displacement length, different implementation manners may be adopted to adjust the position of each pixel point in the to-be-deformed area according to the displacement length and the displacement direction.
In one embodiment of the present disclosure, as shown in fig. 14, the step of adjusting the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction includes:
In step 701, in response to the displacement direction belonging to the preset direction, determining a target adjustment position of each pixel according to the displacement direction and the displacement length of each pixel, where the preset direction includes a horizontal direction and a vertical direction of the face image.
In one embodiment of the present disclosure, the end trigger position of the trigger track is obtained, the direction from the initial trigger position to the end trigger position is taken as the displacement direction, and it is determined whether the displacement direction is horizontal or vertical; if so, the displacement direction is determined to belong to the preset direction. In other words, this embodiment checks whether the displacement direction is one of the four cardinal directions (up, down, left, right) to decide whether the preset-direction condition is satisfied.
Of course, in different application scenarios, the preset-direction condition may cover any one to three of up, down, left, and right, or any other directions, as the scenario requires.
Specifically, if the displacement direction belongs to the preset direction, the target adjustment position of each pixel point is determined according to the displacement direction and the displacement length of each pixel point; as shown in fig. 15-1, the target position of each pixel point is the position offset from it by the displacement length along the corresponding direction.
In step 702, each pixel point is adjusted to a target adjustment position.
For example, as shown in fig. 15-2, when the user triggers the identification information 1 on the forehead shown in fig. 8 and the corresponding displacement points straight up in the face image, each pixel point of the region to be deformed is adjusted to its target adjustment position, yielding a deformed face image with an elongated forehead.
In one embodiment of the present disclosure, with continued reference to fig. 14, after step 701 described above, further includes:
in step 703, in response to the displacement direction not belonging to the preset direction, splitting the displacement direction into a horizontal direction and a vertical direction.
It should be understood that in a single trigger track, the displacement direction may be roughly lower-left, lower-right, upper-left, upper-right, and so on, each of which can be split into a horizontal direction and a vertical direction; for example, lower-left splits into left and down.
In step 704, a first displacement in the horizontal direction and a second displacement in the vertical direction are determined from the displacement length.
In the embodiment of the present disclosure, the first displacement in the horizontal direction and the second displacement in the vertical direction are determined from the displacement length. For example, if the displacement direction is lower-left, then as shown in fig. 16-1 the displacement length is decomposed in the rectangular coordinate system into a first displacement in the left direction and a second displacement in the downward direction.
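The decomposition is ordinary vector arithmetic; a sketch (the screen-coordinate angle convention, with y growing downward, is an assumption):

```python
import math

def split_displacement(length, direction_rad):
    """Split a displacement into a first (horizontal) and second (vertical) part.

    direction_rad: track angle in screen coordinates (x right, y down),
    so a lower-left drag has an angle between pi/2 and pi.
    """
    first = length * math.cos(direction_rad)    # first displacement (horizontal)
    second = length * math.sin(direction_rad)   # second displacement (vertical)
    return first, second

# A 10-pixel drag toward the lower left (135 degrees in screen coordinates)
print(split_displacement(10, math.radians(135)))  # approx. (-7.07, 7.07)
```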
In step 705, each pixel point in the region to be deformed is controlled to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction, so as to generate and display a deformed face image.
In this embodiment, the idea of a frame buffer object (FBO) may be adopted to generate the deformed face image. First, each pixel point in the region to be deformed is controlled to move to a first pixel position in the horizontal direction according to the first displacement, producing a reference face image; this reference face image is not displayed but cached in an off-screen buffer. Second, a second pixel position corresponding to each pixel point is determined in the vertical direction according to the second displacement, the reference face image is used as the input texture map, and each pixel point in the reference face image is adjusted from the first pixel position to the second pixel position. In this way, only the face image finally adjusted to the second pixel positions is rendered and displayed, which greatly reduces the processing overhead and improves the generation efficiency of the image.
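The two-pass FBO idea can be imitated on the CPU for illustration: the first pass remaps only horizontally into an intermediate buffer standing in for the off-screen FBO, and the second pass remaps that buffer vertically; only the second result would be rendered. A numpy sketch with a uniform displacement (illustrative only, not the patent's GPU implementation):

```python
import numpy as np

def two_pass_shift(image, first_disp, second_disp):
    """Move the image by first_disp horizontally, then second_disp vertically,
    mimicking the render-to-FBO-then-sample pipeline on the CPU."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = int(round(first_disp)), int(round(second_disp))
    # Pass 1: horizontal displacement into an intermediate (off-screen) buffer;
    # this buffer is never displayed, only used as the next pass's input texture.
    buf = image[ys, np.clip(xs - dx, 0, w - 1)]
    # Pass 2: vertical displacement, sampling the intermediate buffer.
    out = buf[np.clip(ys - dy, 0, h - 1), xs]
    return out  # only this final image is rendered and displayed
```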
Therefore, the face image deformation method of this embodiment can deform the face arbitrarily in response to trigger operations such as the user's drags. After the trigger track of each trigger operation ends, the deformation effect does not rebound but is retained, so the face image can accumulate the user's multiple trigger operations; for example, in fig. 16-2, the user deforms the face by dragging the identification information 1, 4, and 5, producing the personalized effect shown in the figure.
In summary, the face image deformation method of the embodiments of the present disclosure can flexibly adjust the position of each pixel point in the region to be deformed, thereby meeting the personalized demand for generating deformed face images.
In order to achieve the above embodiments, the embodiments of the present disclosure further provide a structural schematic diagram of a deformation device for a face image.
Fig. 17 is a schematic structural view of a morphing means for face images according to an exemplary embodiment. As shown in fig. 17, the face image morphing device includes: a first determination module 171, an extraction module 172, a second determination module 173, and a deformation adjustment module 174, wherein,
A first determining module 171 configured to obtain an initial trigger position selected by a user for the face image, and determine a region to be deformed in the face image according to the initial trigger position;
An extraction module 172 configured to acquire a trigger track taking the initial trigger position as a trigger start point, and extract a track length and a track direction of the trigger track;
a second determining module 173 configured to determine a displacement length and a displacement direction of each pixel point in the region to be deformed according to the track length and the track direction, respectively;
The deformation adjustment module 174 is configured to adjust the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction, so as to generate a deformed face image.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the method embodiments and will not be detailed here.
In summary, the face image deformation device of the embodiments of the present disclosure obtains an initial trigger position selected by the user for the face image and determines a region to be deformed in the face image according to the initial trigger position; it then obtains a trigger track taking the initial trigger position as a trigger starting point, extracts the track length and track direction of the trigger track, determines the displacement length of each pixel point in the region to be deformed according to the track length and the displacement direction of each pixel point according to the track direction, and adjusts the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction to generate and display the deformed face image. In this way, the face image is deformed in a personalized manner in response to the user's triggers, a face deformation function with a higher degree of freedom is realized, and user stickiness to the product is increased.
It should be noted that, in different application scenarios, the manner in which the first determining module 171 determines the region to be deformed in the face image according to the initial trigger position is different, which is illustrated as follows:
Example one:
in this example, the first determination module 171 is specifically configured to:
Calculating the distance between the initial trigger position and a plurality of preset control areas;
And determining a preset control area with the minimum distance from the initial trigger position as an area to be deformed.
Example two:
in this example, the first determination module 171 is specifically configured to:
Determining target identification information corresponding to an initial trigger position in a plurality of pieces of identification information preset on a face image, wherein each piece of identification information in the plurality of pieces of identification information corresponds to an image area on the face image;
Inquiring a preset database, and determining an image area corresponding to the target identification information as an area to be deformed.
Example three:
In this example, as shown in fig. 18, the first determining module 171, on the basis of that shown in fig. 17, includes: a first determination unit 1711, a first acquisition unit 1712, a second determination unit 1713, wherein,
A first determining unit 1711 configured to determine a target face key point in the face image according to the initial trigger position;
An acquiring unit 1712 configured to acquire a preset deformation radius corresponding to a target face key point;
The second determining unit 1713 is configured to determine the area to be deformed with the target face key point as a center and a preset deformation radius as a circular radius.
In one embodiment of the present disclosure, the first determining unit 1711 is specifically configured to:
inputting the face image into a pre-trained convolutional neural network to generate a plurality of face key points;
And determining the face key point associated with the initial trigger position as a target face key point.
In summary, the face image deformation device of the embodiments of the present disclosure flexibly determines the region to be deformed corresponding to the initial trigger position according to the requirements of the application scenario, which improves user stickiness to the product.
In order to enable those skilled in the art to more clearly understand how to determine the displacement length of each pixel point in the region to be deformed according to the track length in the embodiments of the present disclosure, the following description is given with reference to specific examples:
Example one:
in this example, the second determining module 173 is specifically configured to:
Calculating a first distance between each pixel point in the region to be deformed and a target face key point;
Determining a first deformation coefficient corresponding to each pixel point according to the first distance;
And calculating a first product of the track length and the first deformation coefficient, and taking the first product as the displacement length of each pixel point.
Example two:
In this example, the second determining module 173 is specifically configured to:
Calculating a second distance between each pixel point in the region to be deformed and a preset reference key point in the face image;
Determining a second deformation coefficient corresponding to each pixel point according to the second distance;
And calculating a second product of the track length and the second deformation coefficient, and taking the second product as the displacement length of each pixel point.
Therefore, in this embodiment, the displacement of each pixel point may be determined in different manners, so as to achieve different deformation effects.
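The following sketch covers both examples with a single helper: the displacement length is the track length scaled by a coefficient that decays with the pixel's distance to a reference point (the target face key point in example one, the preset reference key point in example two). The linear falloff is an assumption; the disclosure does not fix the coefficient's functional form:

    import math

    def displacement_length(pixel, reference_point, track_length, region_radius):
        # Distance from the pixel to the reference point (the target face key
        # point in example one, the preset reference key point in example two).
        d = math.hypot(pixel[0] - reference_point[0], pixel[1] - reference_point[1])
        # Assumed linear falloff: coefficient is 1 at the reference point, 0 at the rim.
        coefficient = max(0.0, 1.0 - d / region_radius)
        return track_length * coefficient

With region_radius set to the preset deformation radius of example three, the deformation fades smoothly toward the edge of the region to be deformed.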
Further, after determining the displacement length, different implementation manners may be adopted to adjust the position of each pixel point in the to-be-deformed area according to the displacement length and the displacement direction.
In one embodiment of the present disclosure, as shown in fig. 19, the deformation adjustment module 174 further includes, on the basis of the structure shown in fig. 17: a third determining unit 1741 and a first adjusting unit 1742, wherein,
A third determining unit 1741 configured to, in response to the displacement direction belonging to a preset direction, determine a target adjustment position of each pixel point according to the displacement direction and displacement length of the pixel point, where the preset direction includes a horizontal direction and a vertical direction of the face image;
The first adjusting unit 1742 is configured to adjust each pixel point to the target adjustment position.
In one embodiment of the present disclosure, referring to fig. 20, the deformation adjustment module 174 further includes, on the basis of the structure shown in fig. 19: a fourth determining unit 1743, a fifth determining unit 1744 and a second adjusting unit 1745, wherein,
A fourth determining unit 1743 configured to split the displacement direction into a horizontal direction and a vertical direction in response to the displacement direction not belonging to the preset direction;
a fifth determining unit 1744 configured to determine a first displacement in the horizontal direction and a second displacement in the vertical direction according to the displacement length;
The second adjusting unit 1745 is configured to control each pixel point in the region to be deformed to move according to the first displacement in the horizontal direction and the second displacement in the vertical direction so as to generate and display the deformed face image.
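The two branches above can be summarized in a short sketch: a displacement along a preset direction moves the pixel in a single step, while any other direction is split into a horizontal and a vertical component applied as two successive passes. Representing the displacement direction as an angle is an assumption made for illustration:

    import math

    def adjust_pixel(x, y, length, angle_rad):
        # First displacement: horizontal component of the split direction.
        dx = length * math.cos(angle_rad)
        # Second displacement: vertical component of the split direction.
        dy = length * math.sin(angle_rad)
        # Pass 1 moves the pixel to the first pixel position (horizontal only);
        # pass 2 then moves it to the second pixel position (vertical only).
        # A purely horizontal or vertical displacement degenerates to one pass.
        first_position = (x + dx, y)
        second_position = (first_position[0], y + dy)
        return second_position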
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method and will not be elaborated here.
In summary, the face image deformation device according to the embodiments of the present disclosure can flexibly adjust the position of each pixel point in the region to be deformed, thereby providing a personalized service for generating deformed face images.
In order to implement the above embodiment, the present disclosure also proposes an electronic device. Fig. 21 is a block diagram of an electronic device according to the present disclosure.
As shown in fig. 21, the electronic device 200 includes:
a memory 210 and a processor 220, and a bus 230 connecting different components (including the memory 210 and the processor 220). The memory 210 stores a computer program, and the processor 220, when executing the program, implements the face image deformation method according to the embodiments of the present disclosure.
Bus 230 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA (EISA) bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
Electronic device 200 typically includes a variety of electronic device readable media. Such media can be any available media that is accessible by electronic device 200 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 210 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 240 and/or cache memory 250. The electronic device 200 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 260 may be used to read from or write to non-removable, nonvolatile magnetic media (not shown in FIG. 21, commonly referred to as a "hard disk drive"). Although not shown in fig. 21, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk"), and an optical disk drive for reading from or writing to a removable non-volatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In such cases, each drive may be coupled to bus 230 via one or more data medium interfaces. Memory 210 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of the various embodiments of the disclosure.
Program/utility 280 having a set (at least one) of program modules 270 may be stored in, for example, memory 210, such program modules 270 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment. Program modules 270 generally perform the functions and/or methods in the embodiments described in this disclosure.
The electronic device 200 may also communicate with one or more external devices 290 (e.g., keyboard, pointing device, display 291, etc.), one or more devices that enable a user to interact with the electronic device 200, and/or any device (e.g., network card, modem, etc.) that enables the electronic device 200 to communicate with one or more other computing devices. Such communication may occur through an input/output (I/O) interface 292. Also, electronic device 200 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 293. As shown in fig. 21, the network adapter 293 communicates with other modules of the electronic device 200 over the bus 230. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 200, including, but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
The processor 220 executes various functional applications and data processing by running programs stored in the memory 210.
It should be noted that, for the implementation process and technical principle of the electronic device in this embodiment, reference may be made to the foregoing explanation of the face image deformation method in the embodiments of the present disclosure, which is not repeated here.
In summary, the electronic device according to the embodiments of the present disclosure obtains an initial trigger position selected by a user on a face image and determines a region to be deformed in the face image according to the initial trigger position. It then obtains a trigger track taking the initial trigger position as a trigger starting point, extracts the track length and track direction of the trigger track, determines a displacement length for each pixel point in the region to be deformed according to the track length, determines a displacement direction for each pixel point according to the track direction, and adjusts the position of each pixel point in the region to be deformed according to the displacement length and displacement direction to generate and display the deformed face image. In this way, personalized deformation of the face image is achieved through interaction with the user, a face deformation function with a higher degree of freedom is realized, and user stickiness to the product is increased.
In order to implement the above-described embodiments, the present disclosure also proposes a storage medium.
When the instructions in the storage medium are executed by a processor of the electronic device, the electronic device is enabled to perform the face image deformation method described above.
To implement the above embodiments, the present disclosure further provides a computer program product which, when executed by a processor of an electronic device, enables the electronic device to perform the face image deformation method described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (12)

1. A method for deforming a face image, comprising:
acquiring an initial trigger position selected by a user for a face image, and determining a region to be deformed in the face image according to the initial trigger position;
Acquiring a trigger track taking the initial trigger position as a trigger starting point, and extracting the track length and track direction of the trigger track;
Respectively determining the displacement length and the displacement direction of each pixel point in the region to be deformed according to the track length and the track direction;
Adjusting the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction so as to generate a deformed face image;
the method for determining the region to be deformed in the face image according to the initial trigger position of the trigger operation comprises the following steps:
Calculating the distance between the initial trigger position and a plurality of preset control areas, wherein different preset control areas correspond to different face areas, and the plurality of preset control areas are generated according to the dividing track triggered by a user on a face image;
determining a preset control area with the minimum distance from the initial trigger position as the area to be deformed;
the extracting the track length and track direction of the trigger track includes:
Selecting a plurality of trigger points from the trigger track at a preset time interval, taking the direction from each trigger point to its adjacent previous trigger point as a track direction, and determining the distance between each trigger point and its adjacent previous trigger point as the track length in the corresponding track direction;
the adjusting the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction comprises:
In response to the displacement direction belonging to a preset direction, determining a target adjustment position of each pixel point according to the displacement direction of the pixel point and the displacement length of the pixel point in the corresponding displacement direction, wherein the preset direction comprises a horizontal direction and a vertical direction of the face image;
adjusting each pixel point to the target adjustment position;
The method further comprises the steps of:
In response to the displacement direction not belonging to the preset direction, splitting the displacement direction into a horizontal direction and a vertical direction;
Determining a first displacement in the horizontal direction and a second displacement in the vertical direction according to the displacement length;
Controlling each pixel point in the region to be deformed to move to a first pixel position according to the first displacement in the horizontal direction, acquiring a reference face image of the face image, and caching the reference face image in advance in an off-screen control;
And determining a second pixel position corresponding to each pixel point according to the second displacement in the vertical direction, taking the cached reference face image as an input texture map, and controlling each pixel point in the reference face image to be adjusted from the first pixel position to the second pixel position so as to generate and display the deformed face image.
2. The method according to claim 1, wherein the determining the region to be deformed in the face image according to the initial trigger position of the trigger operation includes:
determining target identification information corresponding to the initial trigger position in a plurality of pieces of identification information preset on the face image, wherein each piece of identification information in the plurality of pieces of identification information corresponds to an image area on the face image;
And determining an image area corresponding to the target identification information as the area to be deformed.
3. The method according to claim 1, wherein the determining the region to be deformed in the face image according to the initial trigger position of the trigger operation includes:
determining target face key points in the face image according to the initial trigger position;
acquiring a preset deformation radius corresponding to the target face key point;
And determining the region to be deformed by taking the target face key point as a circle center and taking the preset deformation radius as a circular radius.
4. A method according to claim 3, wherein said determining the displacement length of each pixel in the region to be deformed from the track length comprises:
Calculating a first distance between each pixel point in the region to be deformed and the key point of the target face;
Determining a first deformation coefficient corresponding to each pixel point according to the first distance;
And calculating a first product of the track length and the first deformation coefficient, and taking the first product as the displacement length of each pixel point.
5. The method of claim 1, wherein determining a displacement length of each pixel in the region to be deformed from the track length comprises:
Calculating a second distance between each pixel point in the region to be deformed and a preset reference key point in the face image;
determining a second deformation coefficient corresponding to each pixel point according to the second distance;
and calculating a second product of the track length and the second deformation coefficient, and taking the second product as the displacement length of each pixel point.
6. A face image deformation device, comprising:
The first determining module is configured to acquire an initial trigger position selected by a user for a face image, and determine a region to be deformed in the face image according to the initial trigger position;
The extraction module is configured to acquire a trigger track taking the initial trigger position as a trigger starting point and extract the track length and track direction of the trigger track;
the second determining module is configured to determine the displacement length and the displacement direction of each pixel point in the region to be deformed according to the track length and the track direction;
the deformation adjustment module is configured to adjust the position of each pixel point in the region to be deformed according to the displacement length and the displacement direction so as to generate a deformed face image;
Wherein the first determining module is specifically configured to:
Calculating the distance between the initial trigger position and a plurality of preset control areas, wherein different preset control areas correspond to different face areas, and the plurality of preset control areas are generated according to the dividing track triggered by a user on a face image;
determining a preset control area with the minimum distance from the initial trigger position as the area to be deformed;
the extracting the track length and track direction of the trigger track includes:
Selecting a plurality of trigger points from the trigger track at a preset time interval, taking the direction from each trigger point to its adjacent previous trigger point as a track direction, and determining the distance between each trigger point and its adjacent previous trigger point as the track length in the corresponding track direction;
the deformation adjustment module specifically comprises:
A third determining unit configured to determine a target adjustment position of each pixel point according to the displacement direction and the displacement length of each pixel point, in response to the displacement direction belonging to a preset direction, wherein the preset direction includes a horizontal direction and a vertical direction of the face image;
A first adjustment unit configured to adjust each pixel point to the target adjustment position;
the deformation adjustment module further includes:
a fourth determining unit configured to split the displacement direction into a horizontal direction and a vertical direction in response to the displacement direction not belonging to a preset direction;
A fifth determining unit configured to determine a first displacement in the horizontal direction and a second displacement in the vertical direction according to the displacement length;
The second adjusting unit is configured to control each pixel point in the region to be deformed to move to a first pixel position according to the first displacement in the horizontal direction, acquire a reference face image of the face image, and cache the reference face image in advance in an off-screen control;
the second adjusting unit is further configured to determine a second pixel position corresponding to each pixel point according to the second displacement in the vertical direction, and, taking the cached reference face image as an input texture map, control each pixel point in the reference face image to be adjusted from the first pixel position to the second pixel position so as to generate and display the deformed face image.
7. The apparatus of claim 6, wherein the first determining module is specifically configured to:
determining target identification information corresponding to the initial trigger position in a plurality of pieces of identification information preset on the face image, wherein each piece of identification information in the plurality of pieces of identification information corresponds to an image area on the face image;
And determining an image area corresponding to the target identification information as the area to be deformed.
8. The apparatus of claim 6, wherein the first determining module comprises:
A first determining unit configured to determine a target face key point in the face image according to the initial trigger position;
An acquisition unit configured to acquire a preset deformation radius corresponding to the target face key point;
And the second determining unit is configured to determine the to-be-deformed area by taking the target face key point as a circle center and taking the preset deformation radius as a circular radius.
9. The apparatus of claim 8, wherein the second determining module is specifically configured to:
Calculating a first distance between each pixel point in the region to be deformed and the key point of the target face;
Determining a first deformation coefficient corresponding to each pixel point according to the first distance;
And calculating a first product of the track length and the first deformation coefficient, and taking the first product as the displacement length of each pixel point.
10. The apparatus of claim 6, wherein the second determining module is specifically configured to:
Calculating a second distance between each pixel point in the region to be deformed and a preset reference key point in the face image;
determining a second deformation coefficient corresponding to each pixel point according to the second distance;
and calculating a second product of the track length and the second deformation coefficient, and taking the second product as the displacement length of each pixel point.
11. An electronic device, comprising:
A processor;
A memory configured to store the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the face image deformation method according to any one of claims 1-5.
12. A storage medium having instructions stored thereon which, when executed by a processor of an electronic device, cause the electronic device to perform the face image deformation method according to any one of claims 1-5.
CN202010732569.2A 2020-07-27 2020-07-27 Face image deformation method and device, electronic equipment and storage medium Active CN113986105B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010732569.2A CN113986105B (en) 2020-07-27 2020-07-27 Face image deformation method and device, electronic equipment and storage medium
PCT/CN2021/104093 WO2022022220A1 (en) 2020-07-27 2021-07-01 Morphing method and morphing apparatus for facial image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010732569.2A CN113986105B (en) 2020-07-27 2020-07-27 Face image deformation method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113986105A CN113986105A (en) 2022-01-28
CN113986105B true CN113986105B (en) 2024-05-31

Family

ID=79731505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010732569.2A Active CN113986105B (en) 2020-07-27 2020-07-27 Face image deformation method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN113986105B (en)
WO (1) WO2022022220A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118096870B (en) * 2024-03-27 2024-07-09 深圳大学 Resolution method measuring camera, storage medium, and electronic device
CN118015685B (en) * 2024-04-09 2024-07-02 湖北楚天龙实业有限公司 Method and system for identifying one-card

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107154030A (en) * 2017-05-17 2017-09-12 腾讯科技(上海)有限公司 Image processing method and device, electronic equipment and storage medium
CN109087239A (en) * 2018-07-25 2018-12-25 腾讯科技(深圳)有限公司 A kind of face image processing process, device and storage medium
CN109242765A (en) * 2018-08-31 2019-01-18 腾讯科技(深圳)有限公司 A kind of face image processing process, device and storage medium
CN110069191A (en) * 2019-01-31 2019-07-30 北京字节跳动网络技术有限公司 Image based on terminal pulls deformation implementation method and device
CN110502993A (en) * 2019-07-18 2019-11-26 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104965654B (en) * 2015-06-15 2019-03-15 广东小天才科技有限公司 Head portrait adjusting method and system
US9940753B1 (en) * 2016-10-11 2018-04-10 Disney Enterprises, Inc. Real time surface augmentation using projected light

Also Published As

Publication number Publication date
CN113986105A (en) 2022-01-28
WO2022022220A1 (en) 2022-02-03

Similar Documents

Publication Publication Date Title
Memo et al. Head-mounted gesture controlled interface for human-computer interaction
US10599914B2 (en) Method and apparatus for human face image processing
US20200020173A1 (en) Methods and systems for constructing an animated 3d facial model from a 2d facial image
US20190384967A1 (en) Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium
CN107831900B (en) human-computer interaction method and system of eye-controlled mouse
CN113986105B (en) Face image deformation method and device, electronic equipment and storage medium
CN109242765B (en) Face image processing method and device and storage medium
US20110317874A1 (en) Information Processing Device And Information Processing Method
Arcoverde Neto et al. Enhanced real-time head pose estimation system for mobile device
WO2021083133A1 (en) Image processing method and device, equipment and storage medium
CN111489311A (en) Face beautifying method and device, electronic equipment and storage medium
CN104063039A (en) Human-computer interaction method of wearable computer intelligent terminal
CN105988566B (en) A kind of information processing method and electronic equipment
CN111527468A (en) Air-to-air interaction method, device and equipment
CN111432267A (en) Video adjusting method and device, electronic equipment and storage medium
Sun et al. Real-time gaze estimation with online calibration
CN111429338B (en) Method, apparatus, device and computer readable storage medium for processing video
US11216067B2 (en) Method for eye-tracking and terminal for executing the same
WO2021185110A1 (en) Method and device for eye tracking calibration
CN113822965A (en) Image rendering processing method, device and equipment and computer storage medium
CN110941337A (en) Control method of avatar, terminal device and computer readable storage medium
Perra et al. Adaptive eye-camera calibration for head-worn devices
CN114904268A (en) Virtual image adjusting method and device, electronic equipment and storage medium
CN111652795A (en) Face shape adjusting method, face shape adjusting device, live broadcast method, live broadcast device, electronic equipment and storage medium
CN113223125B (en) Face driving method, device, equipment and medium for virtual image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant