CN114820285A - Method for simulating facial image change and terminal equipment


Info

Publication number: CN114820285A
Application number: CN202110086216.4A
Authority: CN (China)
Prior art keywords: image, target, user, simulated, facial
Legal status: Pending
Original language: Chinese (zh)
Inventors: Wu Lei (吴磊), Yang Zhonghua (杨忠华)
Original and current assignee: Guangdong Genius Technology Co Ltd
Priority: CN202110086216.4A, filed by Guangdong Genius Technology Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/18 Image warping, e.g. rearranging pixels individually
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the invention discloses a method for simulating facial image changes and a terminal device, which can solve the problem that the display form of a terminal device is monotonous. The method comprises the following steps: acquiring a first user facial image; determining, from the first user facial image, M first user image areas where M preset facial feature points are located; displaying a first simulated facial image based on the first user facial image; acquiring a second user facial image and determining M second user image areas where the M facial feature points are located; determining, for a target facial feature point, the position variation between a first target user image area and a second target user image area; determining a second simulated image area corresponding to the first simulated image area according to the position variation; and adjusting the pixel value of the second simulated image area according to the pixel value of the first simulated image area to obtain an updated second simulated facial image.

Description

Method for simulating facial image change and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of terminal equipment, in particular to a method for simulating facial image change and terminal equipment.
Background
Usually, a display screen of a terminal device can display prompt information such as battery level and time. The background picture is either a pre-stored picture or a picture set by the user; once the setting is completed, the background picture on the display interface does not change, so the display form of the terminal device is monotonous.
Disclosure of Invention
The embodiment of the invention provides a method for simulating facial image changes and a terminal device, which are used for solving the problem in the prior art that the display form of a terminal device is monotonous. In order to solve the above technical problem, the embodiment of the present invention is implemented as follows:
in a first aspect, a method for simulating a change in facial appearance is provided, the method comprising: acquiring a first user face image;
determining M first user image areas where preset M face feature points are located from the first user face image;
displaying a first simulated facial avatar based on the first user facial image;
acquiring a second user face image, and determining M second user image areas where the M face feature points are located;
determining, for a target facial feature point, the position variation between a first target user image area and a second target user image area, wherein the target facial feature point is any one of the M facial feature points;
determining a second simulated image area corresponding to a first simulated image area according to the position variation, wherein the first simulated image area is an area corresponding to the target facial feature point in the first simulated facial image;
adjusting the pixel value of the second simulated image area according to the pixel value of the first simulated image area to obtain an updated second simulated face image;
the first target user image area and the second target user image area correspond to the target facial feature point respectively, the first target user image area is located in the first user facial image, and the second target user image area is located in the second user facial image.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, after the adjusting of the pixel value of the second simulated image area according to the pixel value of the first simulated image area to obtain an updated second simulated facial image, the method further includes:
and updating and displaying the first simulated face image as the second simulated face image on a display screen of the terminal equipment.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the determining, for the target facial feature point, a position variation amount of the first target user image area and the second target user image area includes:
acquiring a first target coordinate value, wherein the first target coordinate value is an average value of coordinate values of each pixel point in the first target user image area;
acquiring a second target coordinate value, wherein the second target coordinate value is an average value of the coordinate values of each pixel point in the second target user image area;
and obtaining the position variation according to the first target coordinate value and the second target coordinate value.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the determining, for the target facial feature point, a position variation amount of the first target user image area and the second target user image area includes:
acquiring a third target coordinate value, wherein the third target coordinate value is a coordinate value of a pixel point corresponding to the central point of the first target user image area;
acquiring a fourth target coordinate value, wherein the fourth target coordinate value is the coordinate value of a pixel point corresponding to the central point of the second target user image area;
and obtaining the position variation according to the third target coordinate value and the fourth target coordinate value.
As an alternative implementation manner, in the first aspect of the embodiment of the present invention, the determining, from the first user face image, M first user image regions where M preset face feature points are located includes:
determining P face feature points according to the first user face image;
selecting the M face feature points from the P face feature points, wherein the M face feature points correspond to pixel points in a preset change area in a target face image;
and determining the M first user image areas where the M face feature points are located according to the M face feature points.
As an alternative implementation, in the first aspect of the embodiments of the present invention, the displaying a first simulated facial avatar according to the first user facial image includes:
acquiring a target face image;
for the target facial feature point, determining position information of the first target user image area, wherein the target facial feature point is any one of the M facial feature points;
determining the first simulated image area according to the position information;
adjusting the pixel value of the first simulated image area according to the pixel value of a preset image area to obtain the first simulated face image; the preset image area is an area corresponding to the target face feature point in the target face image;
displaying the first simulated facial avatar;
wherein the first target user image region corresponds to the target facial feature point, the first target user image region being in the first user facial image.
As an alternative implementation, in the first aspect of the embodiment of the present invention, the obtaining a target face image includes:
acquiring personal information of a user, wherein the personal information at least comprises: user age, user gender, user interests;
displaying N preset facial images according to the personal information;
and responding to the touch operation of the user to obtain the target face image, wherein the touch operation is used for selecting the target face image from the N preset face images.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the M first user image areas include at least one pixel point;
the M second user image regions include at least one pixel point.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the pixel value includes: one of an RGB value and a YUV value.
In a second aspect, a terminal device is provided, which includes: the acquisition module is used for acquiring a first user face image;
the processing module is used for determining, from the first user facial image, M first user image areas where M preset facial feature points are located;
the display module is used for displaying a first simulated face image according to the first user face image;
the processing module is further configured to acquire a second user face image and determine M second user image regions where the M face feature points are located;
the processing module is further configured to determine, for a target facial feature point, a position variation amount between a first target user image area and a second target user image area, where the target facial feature point is any one of the M facial feature points;
the processing module is further used for determining a second simulated image area corresponding to a first simulated image area according to the position variation, wherein the first simulated image area is an area corresponding to the target facial feature point in the first simulated facial image;
the processing module is further configured to adjust a pixel value of the second simulated image region according to a pixel value of the first simulated image region to obtain an updated second simulated face image;
the first target user image area and the second target user image area correspond to the target facial feature point respectively, the first target user image area is located in the first user facial image, and the second target user image area is located in the second user facial image.
In a third aspect, a terminal device is provided, including:
a memory storing executable program code;
a processor coupled with the memory;
the processor calls the executable program code stored in the memory to execute the method for simulating facial image change in the first aspect of the embodiment of the present invention.
In a fourth aspect, there is provided a computer-readable storage medium storing a computer program for causing a computer to execute the method for simulating a change in facial appearance in the first aspect of the embodiment of the present invention. The computer readable storage medium includes a ROM/RAM, a magnetic or optical disk, or the like.
In a fifth aspect, there is provided a computer program product which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
A sixth aspect provides an application publishing platform for publishing a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
in the embodiment of the invention, the terminal device can analyze two different user face images, obtain the position variation of a target image area corresponding to the target face feature point in the two different user face images, determine a first simulated image area corresponding to the target face feature point in the first simulated face image, determine a second simulated image area corresponding to the first simulated image area according to the position variation, and adjust the pixel value of the second simulated image area according to the pixel value of the first simulated image area to obtain the updated second simulated face image. Therefore, the terminal equipment can analyze the facial image of the user, when the expression of the user is detected to change, the terminal equipment can update the simulated facial image according to different facial features of the user, so that the simulated facial image with the facial features of the user can be dynamically displayed in real time, and the terminal equipment can provide real-time interactive experience and increase interestingness.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a first schematic flow chart of a method for simulating facial image changes according to an embodiment of the present invention;
FIG. 2 is a first schematic diagram of a user facial image according to an embodiment of the present invention;
FIG. 3 is a second schematic diagram of a user facial image according to an embodiment of the present invention;
FIG. 4 is a first schematic diagram of a simulated facial image according to an embodiment of the present invention;
FIG. 5 is a second schematic diagram of a simulated facial image according to an embodiment of the present invention;
FIG. 6 is a second schematic flow chart of a method for simulating facial image changes according to an embodiment of the present invention;
FIG. 7 is a first schematic structural diagram of a terminal device according to an embodiment of the present invention;
FIG. 8 is a second schematic structural diagram of a terminal device according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first" and "second," and the like, in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular order of the objects. For example, the first simulated face figure, the second simulated face figure, and so on are for distinguishing different simulated face figures, rather than for describing a particular order of simulated face figures.
The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present related concepts in a concrete fashion.
Generally, a display screen of a terminal device can display prompt information such as battery level and time. When using the terminal device, a user can set a favorite picture, or use a picture pre-stored in the terminal device, as the background picture. After the setting is completed, the background picture does not change, and the user can only change it by setting it again. Therefore, the terminal device cannot display a personalized animated image with the user's facial features according to those features, and its display form is monotonous.
The embodiment of the invention provides a method for changing a simulated facial image and terminal equipment, which can update the simulated facial image according to different facial features of a user, so that the simulated facial image with the facial features of the user can be dynamically displayed in real time, and the terminal equipment can provide real-time interactive experience and increase interestingness.
The terminal device according to the embodiment of the present invention may be an electronic device such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA). The wearable device may be a smart watch, a smart bracelet, a watch phone, a smart foot ring, a smart earring, a smart necklace, a smart headset, or the like, and the embodiment of the present invention is not limited.
The execution subject of the method for changing the simulated facial image provided by the embodiment of the present invention may be the terminal device, or may also be a functional module and/or a functional entity capable of implementing the method for changing the simulated facial image in the terminal device, which may be determined specifically according to actual use requirements, and the embodiment of the present invention is not limited. The following takes a terminal device as an example to exemplarily describe the method for changing the simulated facial image provided by the embodiment of the present invention.
The method for simulating the change of the facial image provided by the embodiment of the invention can be applied to scenes in which the user expression changes in the process of simulating the facial features of the user by the terminal equipment.
Example one
As shown in fig. 1, an embodiment of the present invention provides a method for simulating a change in facial image, which may include the steps of:
101. A first user facial image is acquired.
In the embodiment of the invention, the terminal equipment can acquire the first user face image through the camera.
Optionally, the acquiring, by the terminal device, of the first user facial image may specifically include: continuously acquiring multiple frames of user facial images; comparing the multiple frames of user facial images to obtain a comparison result; and when it is determined according to the comparison result that a resident abnormal display area exists in the multiple frames of user facial images, repairing the resident abnormal display area according to the normal display areas of each frame of user facial image.
The resident abnormal display area is an abnormal display area that appears in the multiple frames of user facial images with a frequency higher than a frequency threshold.
It should be noted that, for an abnormal display area caused by dirt or cracks on the camera, or by dirt or cracks on the screen area corresponding to an under-screen camera, the difference between the pixel values of the abnormal display area and its adjacent areas is generally large. Therefore, the terminal device may identify the edge of each image area in the user facial image through an edge recognition algorithm. After determining the edge of an image area, it may compare the pixel values on the edge with the pixel values of adjacent pixel points outside the edge; if the difference between them is greater than a preset threshold, the image area within the edge is determined to be an abnormal display area.
Further, after determining that the resident abnormal display area exists in the face images of the plurality of frames of users according to the comparison result, the terminal device may further: determining the size of an inscribed circle of the resident abnormal display area and the size of an circumscribed circle of the resident abnormal display area; calculating a first ratio of the size of the inscribed circle to the size of the circumscribed circle; if the first ratio is smaller than the first ratio threshold, determining that the resident abnormal display area is caused by damage of the camera, or caused by damage of a screen area corresponding to the camera; and if the first ratio is larger than or equal to the first ratio threshold, determining that the resident abnormal display area is caused by the dirt of the camera or the dirt of the screen area corresponding to the camera.
It should be noted that a resident abnormal display area caused by dirt is generally close to circular, so the difference between the sizes of its inscribed circle and circumscribed circle is small; a resident abnormal display area caused by damage to the camera, or to the screen area corresponding to the camera, tends to be close to linear, so the difference between the sizes of its inscribed circle and circumscribed circle is large. The size of a circle in the embodiment of the present invention may be the area, radius or diameter of the circle.
In this optional implementation, the terminal device may detect whether an abnormal display area exists in the currently acquired user facial image and, if so, repair it, so that a normal user facial image can be acquired. The terminal device may also determine the cause of the abnormal display according to the size information of the abnormal display area, which further facilitates the repair.
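For illustration only, the shape test described above can be sketched in a few lines of Python with OpenCV. This is a minimal sketch, not part of the original disclosure: it assumes the resident abnormal display area is already available as a binary mask, and the 0.5 ratio threshold is an illustrative value, not one specified in the embodiment.
```python
import cv2
import numpy as np

def classify_abnormal_region(mask: np.ndarray, ratio_threshold: float = 0.5) -> str:
    """Classify a resident abnormal display area as dirt or damage.

    mask: binary uint8 image in which the abnormal area is 255.
    A roughly circular area (inscribed/circumscribed radius ratio >= threshold)
    suggests dirt; an elongated one suggests a crack or other damage.
    """
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)

    # Circumscribed circle: minimum enclosing circle of the contour.
    _, r_out = cv2.minEnclosingCircle(contour)

    # Inscribed circle: the maximum of the distance transform inside the area
    # is the radius of the largest circle that fits inside it.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    r_in = float(dist.max())

    first_ratio = r_in / r_out
    return "dirt" if first_ratio >= ratio_threshold else "damage"
```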
102. The M first user image areas where the M preset facial feature points are located are determined.
In the embodiment of the present invention, the terminal device may determine, from the first user face image, M first user image regions where M preset face feature points are located.
The M facial feature points may be points which are stored in advance by the terminal device and can represent facial features of the user, as shown in fig. 2, which is a schematic diagram of a facial image of the user, and the M facial feature points may be facial feature points shown in 201 to 216 in fig. 2.
It should be noted that M facial feature points are used to indicate facial features of the user (e.g., corners of the mouth, corners of the eyes, and eyebrows) in the first user facial image, and are not specific pixel points in the first user facial image, and M is an integer greater than or equal to 1.
It should be noted that M facial feature points correspond to M first user image regions, where each facial feature point may correspond to one first user image region, and the first user image region is formed by specific pixel points in the first user facial image.
For example, point 212 and point 214 in FIG. 2 indicate the user's right mouth corner and left mouth corner, respectively. When the user smiles, both mouth corners may be raised; at this time, points 212 and 214 still indicate the right and left mouth corners, respectively, but the first user image areas corresponding to them change.
Optionally, in the first user face image, each face feature point may correspond to one first user image region, and the number of pixel points in each first user image region may be one or multiple.
Optionally, the terminal device may determine, through a face recognition algorithm, M first user image regions where M preset face feature points are located.
Optionally, the face recognition algorithm may be the InsightFace algorithm. InsightFace obtains its face recognition capability by training on a large amount of face data; in the embodiment of the present invention, after the terminal device acquires the first user facial image, the InsightFace algorithm can quickly and accurately mark the M facial feature points on the first user facial image, thereby determining the M first user image areas.
Optionally, the face recognition algorithm may also be the FaceNet algorithm. FaceNet learns, end to end, a mapping from face images to a Euclidean embedding space; in the embodiment of the present invention, after the terminal device acquires the first user facial image, the FaceNet algorithm can quickly and accurately mark the M facial feature points on the first user facial image, thereby determining the M first user image areas.
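The mapping from detected feature points to user image areas can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: detect_landmarks is a hypothetical placeholder for an InsightFace- or FaceNet-style detector, and the 5x5 pixel window is an assumed way of realizing "the user image area where a feature point is located".
```python
import numpy as np

def detect_landmarks(image: np.ndarray) -> list[tuple[int, int]]:
    """Hypothetical stand-in for an InsightFace/FaceNet-style detector.

    A real implementation would return the M (x, y) facial feature points;
    here we raise to make the assumption explicit.
    """
    raise NotImplementedError("plug in a real face-landmark model here")

def feature_point_regions(image: np.ndarray, points, half: int = 2):
    """Map each facial feature point to the small pixel region around it.

    Returns, per feature point, the (row, col) coordinates of the pixels in a
    (2*half+1) x (2*half+1) window clipped to the image bounds -- one possible
    realization of the user image area where the feature point is located.
    """
    h, w = image.shape[:2]
    regions = []
    for (x, y) in points:
        rows = range(max(0, y - half), min(h, y + half + 1))
        cols = range(max(0, x - half), min(w, x + half + 1))
        regions.append([(r, c) for r in rows for c in cols])
    return regions

img = np.zeros((100, 100, 3), dtype=np.uint8)
regions = feature_point_regions(img, [(30, 40), (60, 40)])
print(len(regions), len(regions[0]))  # 2 regions of 25 pixel coordinates each
```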
103. A first simulated facial image is displayed.
In an embodiment of the present invention, the terminal device may display the first simulated face figure based on the first user face image.
It should be noted that the terminal device determines the first simulated image area in the target facial image according to the position information of each target image area of the first user facial image, and adjusts the pixel value of the first simulated image area according to the pixel value of the preset image area corresponding to the target facial feature point in the target facial image, so as to obtain the first simulated facial image. The target facial image is an image pre-stored by the terminal device.
For example, FIG. 4 is a schematic diagram of a simulated facial image; points 201 to 216 in FIG. 4 may be facial feature points and are the same as points 201 to 216 in FIG. 2. When the user smiles and raises the right mouth corner, point 212 in FIG. 2 moves upward. If the terminal device shows a mirrored display, point 212 in FIG. 4 also moves upward; if the terminal device shows a directly corresponding (non-mirrored) display, point 214 in FIG. 4 moves upward instead.
104. A second user facial image is acquired.
In the embodiment of the invention, the terminal equipment can acquire the second user face image through the camera.
105. The M second user image areas where the M facial feature points are located are determined.
In the embodiment of the present invention, the terminal device may determine, from the second user face image, M second user image regions where M preset face feature points are located.
It should be noted that the M facial feature points are not specific pixel points in the second user facial image, and M is an integer greater than or equal to 1. The M second user image regions are each in a second user face image.
Optionally, in the second user face image, each face feature point may correspond to one second user image region, and the number of pixel points in each second user image region may be one or multiple, which is not limited in the embodiment of the present invention.
106. The position variation between the first target user image area and the second target user image area is determined.
In the embodiment of the present invention, the terminal device may determine the amount of change in the positions of the first target user image area and the second target user image area with respect to the target facial feature point.
The target facial feature point is any one of the M facial feature points; the first target user image area and the second target user image area correspond to the target facial feature points respectively; the first target user image area is positioned in the first user face image, and the first target user image area is any one of the M first user image areas; the second target user image area is in the second user face image, and the second target user image area is any one of the M second user image areas.
For example, FIG. 3 is a second schematic diagram of a user facial image. Assuming that the terminal device uses point 212 in FIG. 2 as the target facial feature point, area 217 in FIG. 2 may be the first target user image area, and area 317 in FIG. 3 may be the second target user image area. The terminal device can then acquire the position variation between areas 217 and 317.
Optionally, determining the position variation of the first target user image area and the second target user image area may specifically include two optional implementation manners:
a first alternative implementation: acquiring a first target coordinate value, wherein the first target coordinate value is an average value of coordinate values of each pixel point in a first target user image area; acquiring a second target coordinate value, wherein the second target coordinate value is an average value of coordinate values of each facial feature point in a second target user image area; and obtaining the position variation according to the first target coordinate value and the second target coordinate value.
The terminal device can establish a first coordinate system for the first user face image and a second coordinate system for the second user face image, the coordinate origins of the first coordinate system and the second coordinate system are the same, and the scales of the first coordinate system and the second coordinate system are the same.
It should be noted that the terminal device may obtain a coordinate value of each pixel point in the first target user image area, and then calculate to obtain an average value as a first target coordinate value; the terminal equipment can acquire the coordinate value of each pixel point in the second target user image area, and then calculate to obtain an average value as a second target coordinate value; and then, carrying out subtraction calculation on the first target coordinate value and the second target coordinate value to obtain a position variation, wherein the position variation comprises the size and the direction of the coordinate.
Exemplarily, it is assumed that the first target user image area includes four pixel points, and the coordinate values are pixel point one (2.1, 5.9), pixel point two (2.2, 6.1), pixel point three (2.1, 6.5), and pixel point four (2.3, 6.3), respectively; the second target user image area comprises four pixel points, and the coordinate values are respectively pixel point five (8.6, 10.8), pixel point six (8.9, 10.5), pixel point seven (8.7, 9.8) and pixel point eight (9.2, 11.1). Then, the terminal device may average the coordinate values of the pixel point one, the pixel point two, the pixel point three, and the pixel point four to obtain a first target coordinate value of (2.175, 6.2); and averaging the coordinate values of the pixel point five, the pixel point six, the pixel point seven and the pixel point eight to obtain a second target coordinate value (8.85, 10.55), and then subtracting the first target coordinate value from the second target coordinate value to obtain a position variation value (6.675, 4.35), wherein the direction is from the first target coordinate value to the second target coordinate value.
A second alternative implementation: acquiring a third target coordinate value, wherein the third target coordinate value is the coordinate value of a pixel point corresponding to the central point of the first target user image area; acquiring a fourth target coordinate value, wherein the fourth target coordinate value is the coordinate value of a pixel point corresponding to the central point of the second target user image area; and obtaining the position variation according to the third target coordinate value and the fourth target coordinate value.
The terminal device may establish a third coordinate system for the first user face image and a fourth coordinate system for the second user face image, where the coordinate origins of the third coordinate system and the fourth coordinate system are the same, and the scales of the third coordinate system and the fourth coordinate system are the same.
It should be noted that this alternative implementation is applicable to the case where the first target user image area and the second target user image area are regular image areas. The terminal equipment can acquire a coordinate value of a pixel point corresponding to a central point of the first target user image area as a third target coordinate value; the terminal equipment can acquire the coordinate value of the pixel point corresponding to the central point of the second target user image area as a fourth target coordinate value; and performing subtraction calculation on the third target coordinate value and the fourth target coordinate value to obtain a position variation, wherein the position variation comprises the size and the direction of the coordinate.
Exemplarily, it is assumed that the first target user image area is a circle, a center point of the circle is a circle center, and coordinate values of pixel points corresponding to the circle center are (5.5, 7.9); the second target user image area is a circle, the center point of the circle is the circle center, and the coordinate value of the pixel point corresponding to the circle center is (9.2, 11.1). The terminal device may determine the third target coordinate value to be (5.5, 7.9), determine the fourth target coordinate value to be (9.2, 11.1), and then subtract the third target coordinate value from the fourth target coordinate value to obtain a magnitude of the position change amount to be (3.7, 3.2) and a direction to be the direction from the third target coordinate value to the fourth target coordinate value.
Through the above two optional implementations, the terminal device can obtain the position variation between the first target user image area and the second target user image area. These implementations convert the position variation between two target image areas into a difference between two coordinate values, so that the position variation between image areas can be represented more accurately and concretely.
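Both optional implementations reduce to a coordinate subtraction. A minimal Python sketch, reproducing the worked examples above (the function names are illustrative, not from the patent):
```python
import numpy as np

def variation_by_mean(region1, region2):
    """Variant 1: displacement between the mean pixel coordinates of two regions."""
    return np.mean(np.asarray(region2, float), axis=0) - np.mean(np.asarray(region1, float), axis=0)

def variation_by_center(center1, center2):
    """Variant 2: displacement between the center-point pixels of two regions."""
    return np.asarray(center2, float) - np.asarray(center1, float)

# Worked examples from the text.
r1 = [(2.1, 5.9), (2.2, 6.1), (2.1, 6.5), (2.3, 6.3)]
r2 = [(8.6, 10.8), (8.9, 10.5), (8.7, 9.8), (9.2, 11.1)]
print(variation_by_mean(r1, r2))                      # -> [6.675 4.35]
print(variation_by_center((5.5, 7.9), (9.2, 11.1)))   # -> approx. [3.7 3.2]
```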
107. A second simulated image area corresponding to the first simulated image area is determined.
In the embodiment of the present invention, the terminal device may determine the second simulated image area corresponding to the first simulated image area according to the position variation.
The first simulated image area is the area corresponding to the target facial feature point in the first simulated facial image.
For example, FIG. 4 is a schematic diagram of a simulated facial image in which points 201 to 216 may be facial feature points, and FIG. 5 is a second schematic diagram of a simulated facial image in which points 201 to 216 may be facial feature points. Taking point 212 as the target facial feature point, area 417 in FIG. 4 is the first simulated image area, and area 517 in FIG. 5 is the second simulated image area.
It should be noted that both the first simulated image area and the second simulated image area are areas in the first simulated facial image.
Optionally, the terminal device determines, according to the position variation, the second simulated image area corresponding to the first simulated image area, which may specifically include two implementation modes:
Implementation mode one: the terminal device can obtain the size ratio (area ratio, perimeter ratio, ratio of the numbers of pixel points, and the like) between the user facial image and the simulated facial image, obtain the second position variation on the simulated facial image according to the size ratio and the first position variation on the user facial image, and add the second position variation to the coordinate values of the first simulated image area to obtain the coordinate values of the second simulated image area.
Illustratively, it is assumed that the terminal device detects that the user facial image is 0.7 times the simulated facial image, the first position variation is (5.6, 9.0), and the first simulated image area includes four pixel points, namely pixel point one (5.6, 8.4), pixel point two (5.9, 8.1), pixel point three (5.7, 8.6), and pixel point four (5.6, 8.5). Then, the terminal device may calculate that the second position variation is (5.6, 9.0)/0.7 ≈ (8.0, 12.9), and then add the second position variation to the coordinate values of the pixel points of the first simulated image area, so as to obtain pixel point five (13.6, 21.3), pixel point six (13.9, 21.0), pixel point seven (13.7, 21.5), and pixel point eight (13.6, 21.4). The area composed of pixel points five, six, seven and eight is the second simulated image area.
Implementation mode two: the terminal device may directly add the first position variation to the coordinate values of the first simulated image area to obtain the coordinate values of the second simulated image area.
It should be noted that this implementation is applicable to the case where the user facial image and the simulated facial image have the same size, that is, the size ratio between them is 1; in this case, the first position variation between the first target user image area and the second target user image area is equal to the second position variation between the first simulated image area and the second simulated image area.
Illustratively, assuming that the first position variation is (5.6, 9.0) and the first simulated image area includes four pixel points, namely pixel point one (5.6, 8.4), pixel point two (5.9, 8.1), pixel point three (5.7, 8.6), and pixel point four (5.6, 8.5), the second position variation is also (5.6, 9.0), and the terminal device adds it to the coordinate values of the pixel points in the first simulated image area to obtain pixel point five (11.2, 17.4), pixel point six (11.5, 17.1), pixel point seven (11.3, 17.6), and pixel point eight (11.2, 17.5). The area composed of pixel points five, six, seven and eight is the second simulated image area.
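A minimal sketch covering both implementation modes; size_ratio is an assumed parameter name for the size ratio between the user facial image and the simulated facial image, and with size_ratio = 1 the function reduces to implementation mode two:
```python
import numpy as np

def second_sim_region(first_region, user_variation, size_ratio=1.0):
    """Shift the first simulated image region by the (rescaled) position variation.

    size_ratio = (user face image size) / (simulated face image size).
    """
    shift = np.asarray(user_variation, float) / size_ratio
    return [tuple(np.asarray(p, float) + shift) for p in first_region]

first = [(5.6, 8.4), (5.9, 8.1), (5.7, 8.6), (5.6, 8.5)]
# ~ (13.6, 21.3), ... -- the text rounds the rescaled variation to (8.0, 12.9) first.
print(second_sim_region(first, (5.6, 9.0), size_ratio=0.7))
# (11.2, 17.4), ... -- implementation mode two, direct addition.
print(second_sim_region(first, (5.6, 9.0)))
```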
108. The pixel value of the second simulated image area is adjusted to obtain the updated second simulated facial image.
In the embodiment of the present invention, the terminal device may adjust the pixel value of the second simulated image area according to the pixel value of the first simulated image area to obtain the updated second simulated facial image.
It should be noted that the terminal device may adjust the pixel values of the pixels in the second simulated image region to the pixel values of the pixels in the first simulated image region, and perform the same operation on other facial feature points, so that the terminal device may obtain the updated second simulated facial image.
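As an illustration, this pixel-value adjustment amounts to copying values between corresponding coordinates. A minimal sketch, assuming the two regions are given as equal-length lists of (row, col) pairs:
```python
import numpy as np

def copy_region_pixels(avatar: np.ndarray, src_coords, dst_coords) -> np.ndarray:
    """Write the pixel values of the first simulated image region into the
    second simulated image region, pixel by corresponding pixel.

    avatar: H x W x 3 array; src_coords/dst_coords: equal-length lists of
    (row, col) pairs. Returns the updated avatar (a copy).
    """
    out = avatar.copy()
    for (sr, sc), (dr, dc) in zip(src_coords, dst_coords):
        out[dr, dc] = avatar[sr, sc]
    return out
```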
Optionally, the pixel value includes: one of an RGB value and a YUV value.
Optionally, RGB is a color standard; various colors are obtained by varying the three primary color channels of red (R), green (G) and blue (B) and superimposing them on one another. The colors visible to the naked eye are formed by mixing red, green and blue light in different proportions; the RGB values refer to the brightness of the three primary color lights, and each channel has 256 brightness levels, represented numerically by 0, 1, 2, ..., 255.
Alternatively, the RGB color space can be regarded as a unit cube in a three-dimensional rectangular coordinate color system, three axes respectively represent three primary colors of red, green and blue, and any color that can be seen by the naked eye can be represented by a point in the three-dimensional space in the RGB color space.
For example, Table 1 shows the correspondence between different colors and their RGB values.
TABLE 1
(Table 1 is reproduced as an image in the original publication; its contents are not recoverable from the text.)
For example, the color of a typical user's lips is reddish, so the RGB value of a pixel point on the lips may be (250, 0, 0); the color of a typical user's pupils is black, so the RGB value of a pixel point on a pupil may be (5, 3, 0); the color of a typical user's teeth is white, so the RGB value of a pixel point on the teeth may be (250, 254, 239); and the color of a typical user's cheeks is yellowish, so the RGB value of a pixel point on a cheek may be (250, 245, 0).
Optionally, YUV is a color coding method mainly used to optimize the transmission of color signals. Compared with the transmission of RGB signals, its great advantage is that it occupies very little bandwidth (RGB requires three independent color signals to be transmitted at the same time). Here, "Y" is the baseband signal and represents luminance, i.e., the gray-scale value. "U" and "V" represent chrominance, which describes the color and saturation of the image and specifies the color of a pixel; U and V are quadrature-modulated signals. The YUV color model is derived from the RGB model; its characteristic of separating luminance from chrominance makes it well suited to the field of image processing.
Optionally, a conversion relationship is provided between the RGB value and the YUV value.
The conversion relationship is generally given by the following equations: Y = 0.299R + 0.587G + 0.114B; U = -0.147R - 0.289G + 0.436B; V = 0.615R - 0.515G - 0.100B; and conversely, R = Y + 1.140V; G = Y - 0.394U - 0.581V; B = Y + 2.032U.
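As an illustration only (not part of the original disclosure), the conversion above can be written directly in Python; the coefficients are taken verbatim from the equations in the preceding paragraph:
```python
def rgb_to_yuv(r, g, b):
    """RGB -> YUV with the coefficients given in the text."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b
    v = 0.615 * r - 0.515 * g - 0.100 * b
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Inverse conversion, YUV -> RGB."""
    r = y + 1.140 * v
    g = y - 0.394 * u - 0.581 * v
    b = y + 2.032 * u
    return r, g, b

print(rgb_to_yuv(250, 0, 0))  # a reddish pixel: high V, slightly negative U
```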
In the embodiment of the present invention, taking the pixel value as the RGB value as an example, it is assumed that the first avatar region includes four pixels, where the pixel value of the first pixel is (250, 0, 0), the pixel value of the second pixel is (255, 3, 0), the pixel value of the third pixel is (248, 9, 2), and the pixel value of the fourth pixel is (252, 0, 7). Then, the terminal device may adjust the pixel value of the pixel point in the second simulated image region to the pixel value of the pixel point in the first simulated image region, that is, adjust the pixel value of the pixel point five corresponding to the pixel point one in the second simulated image region to (250, 0, 0), adjust the pixel value of the pixel point six corresponding to the pixel point two in the second simulated image region to (255, 3, 0), adjust the pixel value of the pixel point seven corresponding to the pixel point three in the second simulated image region to (248, 9, 2), adjust the pixel value of the pixel point eight corresponding to the pixel point four in the second simulated image region to (252, 0, 7), so as to obtain the updated second simulated image region, and perform the same operation on other facial feature points, so as to obtain the updated second simulated facial image.
The embodiment of the invention provides a method for changing an image of a simulated face, wherein terminal equipment can analyze two different user face images, obtain the position variation of a target image area corresponding to a target face characteristic point in the two different user face images, determine a first simulated image area corresponding to the target face characteristic point in a first simulated face image, determine a second simulated image area corresponding to the first simulated image area according to the position variation, and adjust the pixel value of the second simulated image area according to the pixel value of the first simulated image area to obtain an updated second simulated face image. Therefore, the terminal equipment can analyze the facial image of the user, when the expression of the user is detected to change, the terminal equipment can update the simulated facial image according to different facial features of the user, so that the simulated facial image with the facial features of the user can be dynamically displayed in real time, and the terminal equipment can provide real-time interactive experience and increase interestingness.
Example two
As shown in fig. 6, an embodiment of the present invention provides a method for simulating a change in facial image, which may further include the following steps:
601. A first user facial image is acquired.
602. The M first user image areas where the M preset facial feature points are located are determined.
In the embodiment of the present invention, for the description of steps 601 to 602, please refer to the detailed description of steps 101 to 102 in the first embodiment, which is not repeated herein.
Optionally, the determining, by the terminal device, M first user image regions where the preset M facial feature points are located may specifically include: determining P face feature points according to the first user face image; selecting M face feature points from the P face feature points, wherein the M face feature points correspond to pixel points in a preset change area in the target face image; and determining M first user image areas where the M face feature points are located according to the M face feature points.
It should be noted that M facial feature points selected by the terminal device from the P facial feature points correspond to pixel points in a preset change area in the target facial image. The preset change area is stored by the terminal equipment in advance according to the target face image, and when the face image of the user changes, the pixel values of the pixel points in the preset change area can be changed; the pixel values of the pixel points not in the preset change area are not changed.
With this optional implementation, in the target facial image on the terminal device, pixel points inside the preset change area change along with changes in the user's facial image, while pixel points outside the preset change area do not. This allows the terminal device to display the user's facial image more accurately and avoids large-scale distortion of the simulated facial image.
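A minimal sketch of this selection step, assuming (purely for illustration) that the preset change area is stored as a set of pixel coordinates of the target facial image and that feature points are given in the same coordinate system:
```python
def select_changeable_points(all_points, preset_change_area):
    """Keep only the facial feature points whose corresponding target-image
    pixels fall inside the preset change area.

    all_points: the P detected feature points as (x, y) tuples.
    preset_change_area: a set of (x, y) pixel coordinates stored in advance
    for the target facial image (an assumed representation).
    Returns the M selected feature points.
    """
    return [p for p in all_points if p in preset_change_area]
```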
603. A target face image is acquired.
In an embodiment of the present invention, the terminal device may acquire a target face figure, which may be a face figure displayed on a display screen of the terminal device before the terminal device simulates the user face figure.
Optionally, the obtaining, by the terminal device, the target face image specifically includes: acquiring personal information of a user, wherein the personal information at least comprises: user age, user gender, user interests; displaying N preset facial images according to the personal information; and responding to the touch operation of the user to obtain a target face image, wherein the touch operation is used for selecting the target face image from the N preset face images.
It should be noted that the terminal device may obtain personal information of a user, and display N preset facial images matched with the user according to the personal information, so that the user can select from the N preset facial images; after the terminal device receives the touch operation of the user, the terminal device can obtain the target face image selected by the user in response to the touch operation.
For example, if the terminal device learns that the user is a 6-year-old girl who likes the SpongeBob SquarePants cartoon, the terminal device can display the pre-stored preset facial images of SpongeBob, Grandma, Squidward and Mr. Krabs on its display screen; after the user taps the facial image of SpongeBob, the terminal device can take the facial image of SpongeBob as the target facial image.
604. Position information of the first target user image area is determined.
In the embodiment of the present invention, the terminal device may determine the position information of the first target user image area with respect to the target facial feature point.
The target face feature point is any one of the M face feature points, the first target user image area corresponds to the target face feature point, and the first target user image area is in the first user face image.
Optionally, the determining, by the terminal device, the position information of the first target user image area may specifically include: and establishing a fifth coordinate system aiming at the first user face image, and acquiring a fifth target coordinate value, wherein the fifth target coordinate value is an average value of coordinate values of each pixel point in the first target user image area.
Illustratively, it is assumed that the first target user image area includes four pixel points with coordinate values pixel point one (2.1, 5.9), pixel point two (2.2, 6.1), pixel point three (2.1, 6.5), and pixel point four (2.3, 6.3). The terminal device may average the coordinate values of pixel points one, two, three and four to obtain the fifth target coordinate value (2.175, 6.2), which is the position information of the first target user image area.
605. The first simulated image area is determined.
In the embodiment of the invention, the terminal device can determine the first simulated image area according to the position information.
It should be noted that, just as a fifth coordinate system is established for the first user facial image to obtain the fifth target coordinate value in the above steps, the terminal device may further establish a sixth coordinate system for the target facial image, and determine the area on the target facial image having the same coordinate value as the fifth target coordinate value as the first simulated image area.
And the coordinate origin of the fifth coordinate system is the same as that of the sixth coordinate system, and the scales of the fifth coordinate system are the same as those of the sixth coordinate system.
For example, assuming that the fifth target coordinate value is (2.175, 6.2), the terminal device may use the area where the pixel point with coordinate value (2.175, 6.2) on the target facial image is located as the first simulated image area.
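Steps 604 and 605 can be sketched together: because the fifth and sixth coordinate systems share the same origin and scale, the mean coordinate computed on the first user facial image directly locates the first simulated image area on the target facial image. A minimal, illustrative sketch:
```python
import numpy as np

def first_sim_region_anchor(user_region_coords):
    """Return the fifth target coordinate value: the mean coordinate of the
    first target user image area. The same coordinate, interpreted in the
    sixth coordinate system, locates the first simulated image area on the
    target facial image.
    """
    return np.mean(np.asarray(user_region_coords, float), axis=0)

# Example from the text: -> approx. [2.175 6.2]
print(first_sim_region_anchor([(2.1, 5.9), (2.2, 6.1), (2.1, 6.5), (2.3, 6.3)]))
```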
606. The pixel value of the first simulated image area is adjusted to obtain the first simulated facial image.
In the embodiment of the present invention, the terminal device may adjust the pixel value of the first simulated image region according to the pixel value of the preset image region to obtain the first simulated face image.
The preset image area is an area corresponding to the target face characteristic point in the target face image.
It should be noted that the terminal device may adjust the pixel values of the pixels in the first simulated image region to the pixel values of the pixels in the preset image region, and perform the same operation on other facial feature points, so that the terminal device may obtain the first simulated facial image.
For example, it is assumed that the preset image area includes four pixels, where the pixel value of the first pixel is (0, 0, 4), the pixel value of the second pixel is (2, 5, 8), the pixel value of the third pixel is (3, 0, 2), and the pixel value of the fourth pixel is (0, 9, 0). Then, the terminal device may adjust the pixel value of the pixel point of the first simulated image region to the pixel value of the pixel point of the preset image region, that is, adjust the pixel value of the pixel point five corresponding to the pixel point one in the first simulated image region to (0, 0, 4), adjust the pixel value of the pixel point six corresponding to the pixel point two in the first simulated image region to (2, 5, 8), adjust the pixel value of the pixel point seven corresponding to the pixel point three in the first simulated image region to (3, 0, 2), adjust the pixel value of the pixel point eight corresponding to the pixel point four in the first simulated image region to (0, 9, 0), so as to obtain the first simulated image region, and perform the same operation on other facial feature points, so as to obtain the updated first simulated facial image.
607. The first simulated facial image is displayed.
608. A second user facial image is acquired.
609. The M second user image areas where the M facial feature points are located are determined.
610. The position variation between the first target user image area and the second target user image area is determined.
611. A second avatar region corresponding to the first avatar region is determined.
612. The pixel value of the second simulated image area is adjusted to obtain the updated second simulated facial image.
In the embodiment of the present invention, for the description of steps 607 to 612, please refer to the detailed description of steps 103 to 108 in the first embodiment, which is not repeated herein.
613. The first simulated facial image is updated and displayed as the second simulated facial image.
In an embodiment of the present invention, the terminal device may update and display the first simulated face figure as the second simulated face figure on a display screen of the terminal device.
The embodiment of the invention provides a method for changing a simulated face image, wherein terminal equipment can firstly analyze a first user face image to obtain a target face image and position information of a target image area corresponding to a target face characteristic point in the target face image, determine a first simulated image area according to the position information, and adjust the pixel value of the first simulated image area according to the pixel value of a preset image area corresponding to the target face characteristic point in the target face image to obtain and display the first simulated face image; the terminal equipment analyzes the second user face image, obtains the position variation of a target image area corresponding to the target face characteristic point in two different user face images, determines a first simulated image area corresponding to the target face characteristic point in the first simulated face image, determines a second simulated image area corresponding to the first simulated image area according to the position variation, and adjusts the pixel value of the second simulated image area according to the pixel value of the first simulated image area so as to obtain and update and display the second simulated face image. Therefore, the terminal equipment can analyze the facial image of the user firstly and display the simulated facial image with the facial features of the user, and when the expression of the user is detected to change, the terminal equipment can update and display the simulated facial image according to the different facial features of the user, so that the simulated facial image with the facial features of the user can be dynamically displayed in real time, and the terminal equipment can provide real-time interactive experience and increase interestingness.
EXAMPLE III
As shown in fig. 7, an embodiment of the present invention provides a terminal device, where the terminal device includes:
an obtaining module 701, configured to obtain a first user face image;
a processing module 702, configured to determine, from a first user facial image, M first user image regions where preset M facial feature points are located;
a display module 703 for displaying a first simulated facial image according to the first user facial image;
the processing module 702 is further configured to acquire a second user facial image, and determine M second user image regions where the M facial feature points are located;
a processing module 702, further configured to determine, for a target facial feature point, a position variation amount between a first target user image area and a second target user image area, where the target facial feature point is any one of the M facial feature points;
the processing module 702 is further configured to determine, according to the position variation, a second simulated image region corresponding to the first simulated image region, where the first simulated image region is a region corresponding to the target facial feature point in the first simulated facial image;
the processing module 702 is further configured to adjust a pixel value of the second simulated image region according to a pixel value of the first simulated image region to obtain an updated second simulated face image;
the first target user image area and the second target user image area each correspond to the target facial feature point; the first target user image area is located in the first user facial image, and the second target user image area is located in the second user facial image.
Optionally, the display module 703 is further configured to update and display the first simulated facial image as the second simulated facial image on a display screen of the terminal device.
Optionally, the obtaining module 701 is specifically configured to obtain a first target coordinate value, where the first target coordinate value is an average value of coordinate values of each pixel point in the first target user image area;
an obtaining module 701, configured to obtain a second target coordinate value, where the second target coordinate value is an average value of coordinate values of each pixel point in the second target user image area;
the processing module 702 is specifically configured to obtain a position variation according to the first target coordinate value and the second target coordinate value.
Optionally, the obtaining module 701 is specifically configured to obtain a third target coordinate value, where the third target coordinate value is a coordinate value of a pixel point corresponding to a center point of the first target user image area;
the obtaining module 701 is specifically configured to obtain a fourth target coordinate value, where the fourth target coordinate value is a coordinate value of a pixel point corresponding to a center point of the second target user image area;
the processing module 702 is specifically configured to obtain a position variation according to the third target coordinate value and the fourth target coordinate value.
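For illustration only, the two optional ways of obtaining the position variation described above might be sketched in Python as follows. The (row, column) coordinate layout is an assumption, and the "center point" is approximated here by the midpoint of each area's bounding box, which the embodiment does not prescribe.

```python
import numpy as np

def position_delta_mean(first_area, second_area):
    """First variant: difference of the average coordinate values of the
    pixel points in the two target user image areas."""
    c1 = np.mean(np.asarray(first_area, dtype=float), axis=0)
    c2 = np.mean(np.asarray(second_area, dtype=float), axis=0)
    return c2 - c1  # position variation as (dy, dx)

def position_delta_center(first_area, second_area):
    """Second variant: difference of the coordinate values of the pixel
    points at the center point of each area (approximated here by the
    midpoint of the area's bounding box, an illustrative assumption)."""
    a1 = np.asarray(first_area, dtype=float)
    a2 = np.asarray(second_area, dtype=float)
    center1 = (a1.min(axis=0) + a1.max(axis=0)) / 2.0
    center2 = (a2.min(axis=0) + a2.max(axis=0)) / 2.0
    return center2 - center1
```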
Optionally, the processing module 702 is specifically configured to determine P facial feature points according to the first user facial image;
a processing module 702, specifically configured to select M facial feature points from the P facial feature points, where the M facial feature points correspond to pixel points in a preset change region in the target facial image;
the processing module 702 is specifically configured to determine, according to the M facial feature points, M first user image regions where the M facial feature points are located.
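For illustration only, the selection of the M facial feature points from the P detected points could be sketched as follows; representing the preset change region as a boolean mask, and the specific sizes and coordinates, are assumptions of the example.

```python
import numpy as np

def select_feature_points(p_points, change_region_mask):
    """Keep only the facial feature points whose coordinates fall inside
    the preset change region of the target facial image."""
    return [(row, col) for (row, col) in p_points
            if change_region_mask[row, col]]

# Hypothetical usage: detected points filtered by a 256x256 mask.
mask = np.zeros((256, 256), dtype=bool)
mask[100:160, 80:180] = True                    # assumed changeable region
m_points = select_feature_points([(120, 90), (10, 10)], mask)  # -> [(120, 90)]
```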
Optionally, the obtaining module 701 is specifically configured to obtain a target face image;
a processing module 702, specifically configured to determine, for a target facial feature point, position information of a first target user image area, where the target facial feature point is any one of the M facial feature points;
a processing module 702, specifically configured to determine a first simulated image region according to the position information;
a processing module 702, specifically configured to adjust a pixel value of the first simulated image region according to a pixel value of a preset image region, so as to obtain a first simulated face image; the preset image area is an area corresponding to the target face characteristic point in the target face image;
a display module 703, specifically configured to display a first simulated facial image;
wherein the first target user image area corresponds to the target facial feature point, and the first target user image area is located in the first user facial image.
Optionally, the obtaining module 701 is specifically configured to obtain personal information of the user, where the personal information at least includes: user age, user gender, user interests;
a display module 703, configured to display N preset facial images according to personal information;
the processing module 702 is specifically configured to obtain a target face image in response to a touch operation of a user, where the touch operation is used to select the target face image from N preset face images.
Optionally, each of the M first user image areas includes at least one pixel point;
each of the M second user image areas includes at least one pixel point.
Optionally, the pixel values include: one of an RGB value and a YUV value.
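Since the pixel values may be expressed either as RGB values or as YUV values, a standard conversion between the two is sketched below for reference. The BT.601 full-range coefficients are an assumption of the example; the embodiment does not fix a particular RGB-YUV convention.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an (R, G, B) triple to (Y, U, V) using BT.601 full-range
    coefficients; other conventions would change the matrix."""
    m = np.array([[ 0.299,    0.587,    0.114  ],
                  [-0.14713, -0.28886,  0.436  ],
                  [ 0.615,   -0.51499, -0.10001]])
    return m @ np.asarray(rgb, dtype=float)
```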
In the embodiment of the present invention, each module can implement the method for simulating facial image change provided by the method embodiments above and can achieve the same technical effect; to avoid repetition, details are not described here again.
As shown in fig. 8, an embodiment of the present invention further provides a terminal device, where the terminal device may include:
a memory 801 in which executable program code is stored;
a processor 802 coupled with the memory 801;
the processor 802 calls the executable program code stored in the memory 801 to execute the simulated facial image change method executed by the terminal device in the above embodiments of the methods.
As shown in fig. 9, an embodiment of the present invention further provides a terminal device, where the terminal device includes, but is not limited to: a radio frequency (RF) circuit 901, a memory 902, an input unit 903, a display unit 904, a sensor 905, an audio circuit 906, a wireless fidelity (WiFi) module 907, a processor 908, a power supply 909, and a camera 910. The radio frequency circuit 901 includes a receiver 9011 and a transmitter 9012. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 9 does not constitute a limitation of the terminal device, and the terminal device may include more or fewer components than those shown, or combine some components, or arrange the components differently.
The RF circuit 901 is used for receiving and transmitting signals during information transmission and reception or during a call. In particular, it receives downlink information from a base station and forwards it to the processor 908 for processing, and transmits uplink data to the base station. In general, the RF circuit 901 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 901 may also communicate with a network and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Message Service (SMS), and the like.
The memory 902 may be used to store software programs and modules, and the processor 908 executes various functional applications and data processing of the terminal device by running the software programs and modules stored in the memory 902. The memory 902 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function and an image playing function), and the like; the data storage area may store data (such as audio data and a phonebook) created according to the use of the terminal device, and the like. Further, the memory 902 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 903 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the input unit 903 may include a touch panel 9031 and other input devices 9032. The touch panel 9031, also called a touch screen, may collect a touch operation performed by a user on or near it (for example, an operation performed by the user on or near the touch panel 9031 with a finger, a stylus, or any other suitable object or accessory) and drive a corresponding connection device according to a preset program. Optionally, the touch panel 9031 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 908, and receives and executes commands sent by the processor 908. In addition, the touch panel 9031 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. Besides the touch panel 9031, the input unit 903 may include other input devices 9032. Specifically, the other input devices 9032 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, a joystick, and the like.
The display unit 904 may be used to display information input by the user or information provided to the user and various menus of the terminal device. The display unit 904 may include a display panel 9041. Optionally, the display panel 9041 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 9031 may cover the display panel 9041; when the touch panel 9031 detects a touch operation on or near it, the touch operation is transmitted to the processor 908 to determine the type of the touch event, and the processor 908 then provides a corresponding visual output on the display panel 9041 according to the type of the touch event. Although in fig. 9 the touch panel 9031 and the display panel 9041 are implemented as two independent components to realize the input and output functions of the terminal device, in some embodiments the touch panel 9031 and the display panel 9041 may be integrated to realize the input and output functions of the terminal device.
The terminal device may also include at least one sensor 905, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 9041 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 9041 and/or the backlight when the terminal device is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the attitude of the terminal device (such as switching between landscape and portrait modes, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer and tapping). Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor may also be configured in the terminal device, and details are not described here. In the embodiment of the present invention, the terminal device may include an acceleration sensor, a depth sensor, a distance sensor, or the like.
The audio circuit 906, a speaker 9061, and a microphone 9062 may provide an audio interface between the user and the terminal device. The audio circuit 906 may transmit an electrical signal, converted from received audio data, to the speaker 9061, and the speaker 9061 converts the electrical signal into a sound signal and outputs it. Conversely, the microphone 9062 converts a collected sound signal into an electrical signal, which is received by the audio circuit 906 and converted into audio data; the audio data is output to the processor 908 for processing and then sent, for example, to another terminal device via the RF circuit 901, or output to the memory 902 for further processing.
WiFi is a short-distance wireless transmission technology. Through the WiFi module 907, the terminal device can help the user receive and send e-mails, browse web pages, access streaming media, and the like; it provides wireless broadband Internet access for the user. Although fig. 9 shows the WiFi module 907, it is understood that the module is not an essential component of the terminal device and may be omitted as needed within the scope not changing the essence of the invention.
The processor 908 is a control center of the terminal device, connects various parts of the entire terminal device by various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 902 and calling the data stored in the memory 902, thereby performing overall monitoring of the terminal device. Optionally, the processor 908 may include one or more processing units. Preferably, the processor 908 may integrate an application processor, which mainly handles the operating system, user interfaces, applications, and the like, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 908.
The terminal device also includes a power supply 909 (such as a battery) for supplying power to the various components, which may preferably be logically connected to the processor 908 via a power management system, so as to manage charging, discharging, and power consumption via the power management system. Although not shown, the terminal device may further include a bluetooth module or the like, which is not described in detail herein.
Embodiments of the present invention provide a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute some or all of the steps of the method as in the above method embodiments.
Embodiments of the present invention also provide a computer program product, wherein the computer program product, when run on a computer, causes the computer to perform some or all of the steps of the method as in the above method embodiments.
Embodiments of the present invention further provide an application publishing platform, where the application publishing platform is configured to publish a computer program product, where the computer program product, when running on a computer, causes the computer to perform some or all of the steps of the method in the above method embodiments.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are exemplary embodiments, and that the acts and modules illustrated are not necessarily required to practice the invention.
The terminal device provided by the embodiment of the present invention can implement each process shown in the above method embodiments, and is not described herein again to avoid repetition.
In various embodiments of the present invention, it should be understood that the sequence numbers of the above-mentioned processes do not imply an inevitable order of execution, and the execution order of the processes should be determined by their functions and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented as a software functional unit and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute all or part of the steps of the method of each embodiment of the present invention.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc storage, magnetic disk storage, tape storage, or any other computer-readable medium that can be used to carry or store data.

Claims (12)

1. A method for simulating facial image change, applied to a terminal device, the method comprising:
acquiring a first user facial image;
determining, from the first user facial image, M first user image areas where preset M facial feature points are located;
displaying a first simulated facial image based on the first user facial image;
acquiring a second user facial image, and determining M second user image areas where the M facial feature points are located;
determining, for a target facial feature point, a position variation of a first target user image area and a second target user image area, wherein the target facial feature point is any one of the M facial feature points;
determining, according to the position variation, a second simulated image area corresponding to a first simulated image area, wherein the first simulated image area is an area corresponding to the target facial feature point in the first simulated facial image; and
adjusting the pixel value of the second simulated image area according to the pixel value of the first simulated image area to obtain an updated second simulated facial image;
wherein the first target user image area and the second target user image area each correspond to the target facial feature point, the first target user image area is located in the first user facial image, and the second target user image area is located in the second user facial image.
2. The method of claim 1, wherein after the adjusting the pixel value of the second simulated image area according to the pixel value of the first simulated image area to obtain the updated second simulated facial image, the method further comprises:
updating and displaying the first simulated facial image as the second simulated facial image on a display screen of the terminal device.
3. The method of claim 1, wherein the determining, for the target facial feature point, the position variation of the first target user image area and the second target user image area comprises:
acquiring a first target coordinate value, wherein the first target coordinate value is an average value of coordinate values of each pixel point in the first target user image area;
acquiring a second target coordinate value, wherein the second target coordinate value is an average value of coordinate values of each pixel point in the second target user image area;
and obtaining the position variation according to the first target coordinate value and the second target coordinate value.
4. The method of claim 1, wherein the determining, for the target facial feature point, the position variation of the first target user image area and the second target user image area comprises:
acquiring a third target coordinate value, wherein the third target coordinate value is the coordinate value of a pixel point corresponding to the central point of the first target user image area;
acquiring a fourth target coordinate value, wherein the fourth target coordinate value is the coordinate value of a pixel point corresponding to the central point of the second target user image area;
and obtaining the position variation according to the third target coordinate value and the fourth target coordinate value.
5. The method according to claim 1, wherein the determining, from the first user facial image, the M first user image areas where the preset M facial feature points are located comprises:
determining P facial feature points according to the first user facial image;
selecting the M facial feature points from the P facial feature points, wherein the M facial feature points correspond to pixel points in a preset change area in a target facial image; and
determining, according to the M facial feature points, the M first user image areas where the M facial feature points are located.
6. The method of claim 1, wherein the displaying a first simulated facial image based on the first user facial image comprises:
acquiring a target facial image;
determining, for the target facial feature point, position information of the first target user image area, wherein the target facial feature point is any one of the M facial feature points;
determining the first simulated image area according to the position information;
adjusting the pixel value of the first simulated image area according to the pixel value of a preset image area to obtain the first simulated facial image, wherein the preset image area is an area corresponding to the target facial feature point in the target facial image; and
displaying the first simulated facial image;
wherein the first target user image area corresponds to the target facial feature point, and the first target user image area is located in the first user facial image.
7. The method of claim 6, wherein the acquiring a target facial image comprises:
acquiring personal information of the user, wherein the personal information at least comprises: user age, user gender, and user interests;
displaying N preset facial images according to the personal information; and
obtaining the target facial image in response to a touch operation of the user, wherein the touch operation is used for selecting the target facial image from the N preset facial images.
8. The method according to any one of claims 1 to 7, wherein each of the M first user image areas includes at least one pixel point; and
each of the M second user image areas includes at least one pixel point.
9. The method according to any one of claims 1 to 8, wherein the pixel values comprise:
one of an RGB value and a YUV value.
10. A terminal device, comprising:
the acquisition module is used for acquiring a first user face image;
the processing module is used for determining M first user image areas where preset M face feature points are located from the first user face image;
the display module is used for displaying a first simulated face image according to the first user face image;
the processing module is further configured to acquire a second user face image and determine M second user image regions where the M face feature points are located;
the processing module is further configured to determine, for a target facial feature point, a position variation amount between a first target user image area and a second target user image area, where the target facial feature point is any one of the M facial feature points;
the processing module is further used for determining a second simulated image area corresponding to a first simulated image area according to the position variation, wherein the first simulated image area is an area corresponding to the target facial feature point in the first simulated facial image;
the processing module is further configured to adjust the pixel value of the second simulated image area according to the pixel value of the first simulated image area to obtain an updated second simulated facial image;
wherein the first target user image area and the second target user image area each correspond to the target facial feature point, the first target user image area is located in the first user facial image, and the second target user image area is located in the second user facial image.
11. A terminal device, comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, causes the processor to implement the method for simulating facial image change according to any one of claims 1 to 9.
12. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method for simulating facial image change according to any one of claims 1 to 9.
CN202110086216.4A 2021-01-21 2021-01-21 Method for simulating facial image change and terminal equipment Pending CN114820285A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110086216.4A CN114820285A (en) 2021-01-21 2021-01-21 Method for simulating facial image change and terminal equipment

Publications (1)

Publication Number Publication Date
CN114820285A 2022-07-29


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331572A (en) * 2016-08-26 2017-01-11 乐视控股(北京)有限公司 Image-based control method and device
CN110557625A (en) * 2019-09-17 2019-12-10 北京达佳互联信息技术有限公司 live virtual image broadcasting method, terminal, computer equipment and storage medium
CN112040223A (en) * 2020-08-25 2020-12-04 RealMe重庆移动通信有限公司 Image processing method, terminal device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No.168, Dongmen Middle Road, Xiaobian community, Chang'an Town, Dongguan City, Guangdong Province, 528850

Applicant after: Guangdong GENIUS Technology Co., Ltd.

Address before: Rao Shengtian, 168 Dongmen Middle Road, Xiaobian community, Chang'an Town, Dongguan City, Guangdong Province, 528850

Applicant before: Guangdong GENIUS Technology Co., Ltd.