CN109584150B - Image processing method and terminal equipment - Google Patents


Info

Publication number: CN109584150B (application CN201811434146.1A)
Authority: CN (China)
Prior art keywords: target, region, area, target area, value
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201811434146.1A
Other languages: Chinese (zh); other versions: CN109584150A
Inventor: 孙向华
Current and original assignee: Vivo Mobile Communication Hangzhou Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Vivo Mobile Communication Hangzhou Co Ltd
Priority application: CN201811434146.1A
Published as CN109584150A; application granted and published as CN109584150B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/18: Image warping, e.g. rearranging pixels individually
    • G06T11/00: 2D [Two Dimensional] image generation
    • G06T11/60: Editing figures and text; Combining figures or text


Abstract

The embodiment of the invention discloses an image processing method and terminal equipment, relates to the technical field of communication, and aims to solve the problem that face beautifying algorithms in the prior art are not comprehensive. The method comprises the following steps: determining a first target area according to the depth image and the two-dimensional image of a target face area, wherein the first target area is an area within the region of the depth image other than the five sense organs (the facial features); in the case that the first target area does not satisfy a target condition, adjusting the contour of the first target area, the target condition including a first condition: the ratio of the length value of the area to the length value of the face area is within a first range, and the ratio of the width value of the area to the interpupillary distance value in the face area is within a second range; and mapping the adjusted contour of the first target area to the two-dimensional image to obtain a second target area, wherein the second target area is the corresponding area, other than the five sense organs, in the adjusted two-dimensional image. The scheme is particularly applicable to scenes in which images are beautified.

Description

Image processing method and terminal equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an image processing method and terminal equipment.
Background
With people's increasing demands on image display effects and the continuous development of terminal technology, photographing modes have become diversified; among them, the beauty mode is the most frequently used photographing mode.
However, most current beauty modes adjust and beautify the face, eyes, nose and chin through algorithms such as skin smoothing, whitening, eye enlargement and face thinning, and these algorithms cannot beautify every feature of every area of the face, for example the outline of the forehead area. Thus, the face beautifying algorithms in the prior art are not comprehensive.
Disclosure of Invention
The embodiment of the invention provides an image processing method and terminal equipment, and aims to solve the problem that face beautifying algorithms are incomplete in the prior art.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, where the method includes:
determining a first target area according to the depth image and the two-dimensional image of the target face area, wherein the first target area is an area within the region of the depth image other than the five sense organs;
adjusting the contour of the first target area if the first target area does not satisfy a target condition, the target condition comprising a first condition comprising: the ratio of the length value of the region to the length value of the face region is within a first range, and the ratio of the width value of the region to the interpupillary distance value in the face region is within a second range;
and mapping the adjusted contour of the first target area to the two-dimensional image to obtain a second target area, wherein the second target area is the corresponding area, other than the five sense organs, in the adjusted two-dimensional image.
In a second aspect, an embodiment of the present invention provides a terminal device, where the terminal device includes: the device comprises a determining module, an adjusting module and a mapping module;
the determining module is used for determining a first target area according to the depth image and the two-dimensional image of the target face area, wherein the first target area is an area within the region of the depth image other than the five sense organs;
the adjusting module is configured to adjust the contour of the first target area if the first target area determined by the determining module does not satisfy a target condition, where the target condition includes a first condition, and the first condition includes: the ratio of the length value of the region to the length value of the face region is within a first range, and the ratio of the width value of the region to the interpupillary distance value in the face region is within a second range;
the mapping module is configured to map the contour of the first target region adjusted by the adjusting module into the two-dimensional image to obtain a second target region, where the second target region is the corresponding region, other than the five sense organs, in the adjusted two-dimensional image.
In a third aspect, an embodiment of the present invention provides a terminal device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, and when executed by the processor, the computer program implements the steps of the image processing method in the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the image processing method as in the first aspect.
In the embodiment of the present invention, the terminal device may determine a first target region according to the depth image and the two-dimensional image of the target face region, where the first target region is an area within the region of the depth image other than the five sense organs; adjust the contour of the first target area if the first target area does not satisfy a target condition, the target condition comprising a first condition: the ratio of the length value of the area to the length value of the face area is within a first range, and the ratio of the width value of the area to the interpupillary distance value in the face area is within a second range; and map the adjusted contour of the first target area to the two-dimensional image to obtain a second target area, where the second target area is the corresponding area, other than the five sense organs, in the adjusted two-dimensional image. The terminal device combines the depth image and the two-dimensional image to determine the first target area in the depth image, adjusts the contour of the first target area when the first target area does not satisfy the target condition, and maps the adjusted contour into the two-dimensional image to obtain a contour-adjusted second target area in the two-dimensional image, thereby achieving the purpose of beautifying the face.
Drawings
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a second flowchart of an image processing method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present invention;
fig. 5 is a hardware schematic diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without inventive step based on the embodiments of the present invention, are within the scope of protection of the present invention.
The terms "first," "second," "third," and "fourth," etc. in the description and in the claims of the present invention are used for distinguishing between different objects and not for describing a particular sequential order of the objects. For example, the first condition, the second condition, the third condition, the fourth condition, and the like are for distinguishing different conditions, and are not for describing a specific order of the conditions.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as examples, illustrations or descriptions. Any embodiment or design described as "exemplary" or "such as" in an embodiment of the present invention is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present concepts related in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of processing units means two or more processing units; plural elements means two or more elements, and the like.
The embodiment of the invention provides an image processing method. A terminal device can determine a first target area according to the depth image and the two-dimensional image of a target face area, where the first target area is an area within the region of the depth image other than the five sense organs; adjust the contour of the first target area if the first target area does not satisfy a target condition, the target condition comprising a first condition: the ratio of the length value of the area to the length value of the face area is within a first range, and the ratio of the width value of the area to the interpupillary distance value in the face area is within a second range; and map the adjusted contour of the first target area to the two-dimensional image to obtain a second target area, where the second target area is the corresponding area, other than the five sense organs, in the adjusted two-dimensional image. The terminal device combines the depth image and the two-dimensional image to determine the first target area in the depth image, adjusts the contour of the first target area when the first target area does not satisfy the target condition, and maps the adjusted contour into the two-dimensional image to obtain a contour-adjusted second target area in the two-dimensional image, thereby achieving the purpose of beautifying the face.
The following describes a software environment to which the image processing method provided by the embodiment of the present invention is applied, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system operating environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the image processing method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the image processing method may operate based on the android operating system shown in fig. 1. That is, the processor or the terminal may implement the image processing method provided by the embodiment of the present invention by running the software program in the android operating system.
The terminal device in the embodiment of the invention can be a mobile terminal device and can also be a non-mobile terminal device. The mobile terminal device may be a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), etc.; the non-mobile terminal device may be a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present invention are not particularly limited.
The execution body of the image processing method provided by the embodiment of the present invention may be the terminal device (including a mobile terminal device and a non-mobile terminal device), or a functional module and/or a functional entity in the terminal device capable of implementing the method, which may be determined according to actual use requirements; the embodiment of the present invention is not limited. The following takes a terminal device as an example to exemplarily explain the image processing method provided by the embodiment of the present invention.
Referring to fig. 2, an embodiment of the present invention provides an image processing method, which may include steps 201 to 203 described below.
Step 201, the terminal device determines a first target area according to the depth image and the two-dimensional image of the target face area.
The first target region is an area within the region of the depth image other than the five sense organs.
In the terminal device, the depth image is an image that stores the depth information of the target face area, and the two-dimensional image is an image that stores the two-dimensional information of the target face area. The two-dimensional image is a planar image without depth information, and includes grayscale images, color images, and the like. Color images further include RGB (red, green and blue primary colors) color images, YUV (luminance and chrominance) color images, and the like; the embodiment of the present invention is not limited. At present, the commonly used depth image and two-dimensional image are the depth map and the RGB color image, together also called an RGB-D image.
The terminal device may obtain the depth image and the two-dimensional image from other sources (e.g., downloaded from a network), or may capture them via cameras on the terminal device (typically a combination of two cameras: one for capturing the depth image, such as an infrared camera or a structured-light camera, and one for capturing the two-dimensional image, such as a color camera). It should be noted that in the embodiment of the present invention the depth image and the two-dimensional image are registered, that is, the coordinates of the pixel points in the depth image and the two-dimensional image correspond one to one; for the specific registration process, refer to the prior art, which is not described herein again.
For example, the first target region may be a forehead region, a chin region, a face region, a lower eyelid region, and the like in the target face region, and the embodiment of the present invention is not limited thereto.
The terminal device may determine the first target region in the depth image by combining the depth image and the two-dimensional image through an image recognition method, and the specific image recognition method may refer to the related art, which is not described herein again.
This step may be implemented, for example, by step 201a described below.
Step 201a, determining the first target area according to the depth value and the depth gradient value of each pixel point of the depth image and the gray value of each pixel point of the two-dimensional image.
The first target area is taken as a forehead area as an example for illustration.
Illustratively, the terminal device detects the highest point of the hairline of the target face region in the two-dimensional image through an image detection technique (for example, the highest point of the hairline is pixel point (k, l) in the two-dimensional image) and maps the detected highest point of the hairline to the depth image (the highest point of the hairline is then pixel point (k, l) in the depth image), that is, obtains the point corresponding to the highest point of the hairline in the depth image. Similarly, the upper boundary of the forehead region in the depth image is obtained according to the above method. Key points such as the eyes, eyebrow center, nose tip, nose base, mandible and forehead are then detected on the depth image according to the depth value and the depth gradient value of each pixel point, from which the lower boundary and the left and right boundaries of the forehead area are obtained. For example, since the nose tip is the most prominent point in the target face region, the depth value at the nose tip is smaller than the depth values at other positions, so the terminal device may determine the position of the nose tip from the depth value of each pixel point in the depth image. Among the depth gradient values of the pixel points on any line parallel to the line connecting the two pupils in the forehead area, the depth gradient values of the pixel points on the left and right boundaries of the forehead area are larger. Therefore, the terminal device can calculate the depth gradient of each point in the forehead area, find the points on the extension of the line connecting the two pupils whose depth gradient value is larger than a threshold, and use them as the left and right boundary pixel points on that extension line in the forehead area; a rough sketch follows.
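As a rough illustration of this boundary search, the following Python sketch assumes a registered depth map as a NumPy array; the helper inputs (the face mask, the scan row parallel to the pupil line, and the gradient threshold) are hypothetical names introduced here for illustration, not part of the patent:

    import numpy as np

    def find_nose_tip(depth, face_mask):
        # The nose tip is the closest point to the camera, i.e. the pixel
        # with the smallest depth value inside the face region.
        masked = np.where(face_mask, depth.astype(np.float32), np.inf)
        return np.unravel_index(np.argmin(masked), depth.shape)

    def forehead_side_bounds(depth, row, col_center, grad_thresh):
        # Scan outward from the face center along a row parallel to the
        # pupil line; the first columns whose horizontal depth gradient
        # exceeds grad_thresh are taken as the left/right boundary pixels.
        grad = np.abs(np.gradient(depth[row].astype(np.float32)))
        left = right = col_center
        while left > 0 and grad[left] <= grad_thresh:
            left -= 1
        while right < depth.shape[1] - 1 and grad[right] <= grad_thresh:
            right += 1
        return left, right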
The terminal device may also determine the first target area according to the depth value and the depth gradient value of each pixel point of the depth image and the gray value of each pixel point of the two-dimensional image by other methods, which is not limited in the embodiment of the present invention.
Compared with determining the region other than the five sense organs directly in the two-dimensional image, determining the first target region from both the depth image and the two-dimensional image of the target face region makes the obtained first target region more accurate.
Step 202, the terminal device adjusts the outline of the first target area when the first target area does not satisfy the target condition.
The target condition includes a first condition comprising: the ratio of the length value of a region to the length value of a face region is within a first range, and the ratio of the width value of a region to the interpupillary distance value in a face region is within a second range.
Before the terminal device adjusts the contour of the first target area, it is determined whether the first target area satisfies a target condition.
The first target area is taken as a forehead area for example.
Generally, the length value of the forehead area is the distance from the highest point of the forehead hairline to the eyebrow center, and the length value of the face area is the distance from the highest point of the forehead hairline to the lower jaw; the width value of the forehead area is the maximum distance from the left hairline of the forehead area to the right hairline (along a straight line parallel to the line connecting the two eyes), and the interpupillary distance value is the straight-line distance between the left and right pupil points when both eyes look straight ahead.
The first range is an optimal range in which the ratio of the forehead area length value to the face area length value is located, that is, if the ratio of the forehead area length value to the face area length value is within the first range, it can be considered that the ratio of the forehead area length value to the face area length value is optimal, and the length of the forehead area is beautiful.
The second range is an optimal range in which the ratio of the width value of the forehead area to the interpupillary distance value is located, that is, if the ratio of the width value of the forehead area to the interpupillary distance value is within the second range, the ratio of the width value of the forehead area to the interpupillary distance value can be considered as optimal, and the width of the forehead area is good-looking and beautiful.
The size of the first range and the size of the second range may be known in the prior art, or may be obtained by analyzing a large number of human face samples, and the specific analysis process may refer to the prior art, which is not described herein again.
The terminal device obtains, according to the depth image and the two-dimensional image of the target face area, the ratio K2 of the length value of the forehead area to the length value of the face area, and the ratio K3 of the width value of the forehead area to the interpupillary distance value. For the specific process by which the terminal device acquires K2 and K3, refer to the method for determining the first target region above, or to the related art; details are not repeated here.
After calculating K2 and K3 according to the above method, the terminal device judges whether K2 is within the first range and whether K3 is within the second range, thereby determining whether the first target area satisfies the first condition, and hence whether it satisfies the target condition.
In the case that the first target region satisfies the target condition (i.e., K2 and K3 are each within the corresponding range), the contour of the first target area is considered aesthetic and does not need to be adjusted; in the case that the first target region does not satisfy the target condition (i.e., at least one of K2 and K3 is not within the corresponding range), the contour of the first target region is adjusted. A minimal check is sketched below.
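A minimal sketch of this check, assuming the four measurements have already been extracted as plain numbers; the numeric ranges are illustrative placeholders, since the patent leaves the first and second ranges to the prior art or to face-sample statistics:

    def satisfies_first_condition(forehead_len, face_len, forehead_width,
                                  pupil_dist, first_range=(0.28, 0.36),
                                  second_range=(1.3, 1.6)):
        # K2: forehead length relative to face length;
        # K3: forehead width relative to interpupillary distance.
        k2 = forehead_len / face_len
        k3 = forehead_width / pupil_dist
        return (first_range[0] <= k2 <= first_range[1]
                and second_range[0] <= k3 <= second_range[1])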
For example, the terminal device may store an adjustment table for adjusting the contour of the first target area, and when the terminal device determines that the contour of the first target area needs to be adjusted, the method for adjusting the contour of the first target area may be obtained by looking up the table. The terminal device may further adjust the contour of the first target area by using other methods for adjusting the contour of the area, and the specific method may refer to the related art, which is not described herein again.
For example, the terminal device may perform surface deformation processing on the first target area, and for a specific implementation process of the surface deformation processing, reference may be made to the related art, which is not described herein again.
Step 203, the terminal device maps the adjusted contour of the first target area to the two-dimensional image to obtain a second target area.
The second target region is the corresponding region, other than the five sense organs, in the adjusted two-dimensional image. For the description of the second target area, refer to the above description of the first target area; details are not repeated here.
The terminal device maps the adjusted contour of the first target area to the two-dimensional image to obtain the second target area. That is, the mapping process makes the contour of the second target region in the two-dimensional image the same as the adjusted contour of the first target region in the depth image; any processing method that achieves this effect may be called mapping processing. For details, refer to the related art; the embodiment of the present invention is not limited.
Illustratively, this step may be specifically realized by step 203a described below.
Step 203a, adjusting the two-dimensional image according to the adjusted contour of the first target area to obtain a second target area whose contour is the same as the adjusted contour of the first target area.
The terminal device acquires the adjusted contour of the first target area and, taking that contour as the target, performs corresponding adjustment processing on the two-dimensional image to obtain a second target area whose contour is the same as the adjusted contour of the first target area. For example, the two-dimensional image may be triangulated and deformed, with the adjusted contour of the first target region as the target, to obtain such a second target region; see the sketch below. For the specific implementation of the triangulation deformation processing, refer to the related art; details are not repeated here.
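One common realization of such triangulation deformation is a piecewise-affine warp driven by the original and adjusted contour landmarks. The sketch below uses scikit-image as an assumed tool; the patent does not prescribe any particular library:

    from skimage.transform import PiecewiseAffineTransform, warp

    def map_contour_to_2d(image, src_points, dst_points):
        # src_points/dst_points: (N, 2) arrays of (x, y) landmarks before
        # and after adjustment; in practice image-corner anchors are
        # appended so the triangulation covers the whole image.
        tform = PiecewiseAffineTransform()
        # warp() expects a map from output to input coordinates, hence dst -> src.
        tform.estimate(dst_points, src_points)
        return warp(image, tform, preserve_range=True).astype(image.dtype)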
Compared with a second target area obtained by directly adjusting the two-dimensional image, the second target area obtained by firstly adjusting the first target area in the depth image and then mapping the first target area to the two-dimensional image is more accurate and more harmonious.
Optionally, the target condition further includes: a second condition; the second condition includes: the flatness value of the region is within a third range. The flatness value of a region is a weighted average of the depth gradients of the region, and is given by the formula:

K1 = Σ_(i,j) weight(i, j) · gradient(i, j)

wherein K1 is the flatness value of the region and (i, j) are the coordinates of a pixel point of the region. The weights may, for example, be normalized Gaussian weights centered on the region:

weight(i, j) = exp(-((i - i0)^2 + (j - j0)^2) / (2σ^2)) / Σ_(m,n) exp(-((m - i0)^2 + (n - j0)^2) / (2σ^2))

wherein (i0, j0) is the coordinate of the central point of the region, weight(i, j) is the weight value of pixel point (i, j), and σ is a preset constant value. The depth gradient may be taken as the discrete gradient magnitude:

gradient(i, j) = sqrt((depth(i+1, j) - depth(i, j))^2 + (depth(i, j+1) - depth(i, j))^2)

wherein depth(i, j) is the depth value of pixel point (i, j), and gradient(i, j) is the depth gradient value of pixel point (i, j).
The value of σ may refer to the related art and is not limited here. weight(i, j) may also be the reciprocal of the number of all pixel points in the target region, or take other forms; the embodiment of the present invention is not limited.
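Under the reconstruction above, the flatness value K1 can be computed as follows; the Gaussian weighting and the discrete gradient are the assumed forms discussed above, with the region given as a boolean mask:

    import numpy as np

    def flatness_value(depth, mask, sigma):
        ii, jj = np.nonzero(mask)
        i0, j0 = ii.mean(), jj.mean()              # region center (i0, j0)
        w = np.exp(-((ii - i0) ** 2 + (jj - j0) ** 2) / (2 * sigma ** 2))
        w /= w.sum()                               # weights sum to 1
        gi, gj = np.gradient(depth.astype(np.float32))
        grad_mag = np.sqrt(gi ** 2 + gj ** 2)      # depth gradient magnitude
        return float(np.sum(w * grad_mag[ii, jj]))  # K1: weighted average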
The first target area is taken as a forehead area for an exemplary explanation.
The flatness degree value of the forehead region is a weighted average of the depth gradients of the forehead region. The third range is the optimal range where the flatness degree value of the forehead area is located, that is, if the flatness degree value of the forehead area is within the third range, the flatness degree value of the forehead area can be considered as optimal, and the flatness degree of the forehead area is good-looking and beautiful.
The size of the third range may be known in the prior art, or may be obtained by analyzing a large number of human face samples, and the specific analysis process may refer to the prior art, which is not described herein again.
The weighted average of the depth gradients of the forehead area is calculated according to the above formula, so as to obtain the flatness degree value of the forehead area.
Illustratively, with reference to fig. 2, as shown in fig. 3, step 202 may be specifically implemented by the following step 202a; after step 203, the image processing method provided by the embodiment of the present invention may further include step 204 described below.
Step 202a, under the condition that the first target area does not meet the target condition, the terminal equipment adjusts at least one of the following items: the contour of the first target region and the flatness degree value of the first target region.
After calculating K1, K2 and K3 according to the above methods, the terminal device judges whether K2 is within the first range and whether K3 is within the second range, thereby determining whether the first target region satisfies the first condition; it determines whether the first target area satisfies the second condition by judging whether K1 is within the third range; it then determines whether the first target area satisfies the target condition by combining whether the first condition and the second condition are satisfied.
In the case that the first target region satisfies the target condition (i.e., K1, K2 and K3 are each within the corresponding range), the contour of the first target area is considered aesthetic and does not need to be adjusted; in the case that the first target region does not satisfy the target condition (i.e., at least one of K1, K2 and K3 is not within the corresponding range), at least one of the following is adjusted: the contour of the first target region and the flatness degree value of the first target region.
Illustratively, in the case that the first target region does not satisfy the first condition (i.e., at least one of K2 and K3 is not within the corresponding range), the contour of the first target area is adjusted, or both the contour and the flatness value of the first target area are adjusted; in the case that the first target region does not satisfy the second condition (K1 is not within the third range), the flatness value of the first target area is adjusted; in the case that the first target region satisfies neither the first condition nor the second condition, both the contour and the flatness degree value of the first target region are adjusted. The terminal device can adjust at least one of the contour of the first target region and the flatness degree value of the first target region by performing surface deformation processing on the first target area, as sketched below. For the specific implementation of the surface deformation processing, refer to the related art; details are not repeated here.
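As one hedged illustration of such a surface deformation step (the patent itself defers the details to the related art), the flatness of the region can be reduced by blending its depth values toward a smoothed copy of themselves; the box blur and the strength parameter here are assumptions for illustration only:

    import numpy as np

    def flatten_region(depth, mask, strength=0.5, k=9):
        # Blend depth inside the masked region toward a k x k box-blurred
        # copy; strength in [0, 1] controls how strongly it is flattened.
        d = depth.astype(np.float32)
        pad = k // 2
        padded = np.pad(d, pad, mode="edge")
        blurred = np.zeros_like(d)
        for di in range(k):                # sliding-window mean (box blur)
            for dj in range(k):
                blurred += padded[di:di + d.shape[0], dj:dj + d.shape[1]]
        blurred /= k * k
        out = d.copy()
        out[mask] = (1 - strength) * d[mask] + strength * blurred[mask]
        return out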
In this way, the adjustment can be performed in a targeted manner in accordance with the problem in the first target region, and the adjustment efficiency is improved.
Step 204, the terminal device adjusts the gray value of the second target area according to the depth variation of the adjusted first target area.
The depth variation is the change in depth of the first target region between before and after the adjustment of its contour. That is, for each pixel point within the range of the adjusted first target region, the depth variation is the ratio of the difference between the adjusted depth value and the depth value before adjustment to the average of such differences over all pixel points in the range.
The formula of the depth variation is as follows:

Δgray(i, j) = T · (depth_after(i, j) - depth_before(i, j)) / avg(depth_after(i, j) - depth_before(i, j))

where T is a constant value that a person skilled in the art can obtain from practical experience and extensive experiments; for example, its value may be between 10 and 20.

The formula for adjusting the gray scale of the target region is as follows:

gray′(i, j) = gray(i, j) + Δgray(i, j)

wherein avg(depth_after(i, j) - depth_before(i, j)) represents the mean value of the depth change of the forehead region and gray′(i, j) represents the adjusted gray value; if the terminal device has adjusted the contour of the target region in the two-dimensional image, gray(i, j) is the gray value of the target region of the adjusted two-dimensional image.
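A sketch of this gray adjustment under the reconstructed formulas; T = 15 sits inside the 10 to 20 range suggested above, and the final clip to [0, 255] is an added safeguard not stated in the patent:

    import numpy as np

    def adjust_gray(gray, depth_before, depth_after, mask, T=15.0):
        # delta_gray = T * (depth_after - depth_before)
        #              / avg(depth_after - depth_before),
        # computed over the adjusted first target region given by mask.
        diff = depth_after.astype(np.float32) - depth_before.astype(np.float32)
        avg = diff[mask].mean()            # mean depth change of the region
        out = gray.astype(np.float32)
        out[mask] += T * diff[mask] / avg
        return np.clip(out, 0, 255).astype(np.uint8)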
Therefore, the gray scale of the target area is slightly adjusted according to the depth variation, so that the contrast of the target area can be enhanced, and the target area looks more natural, stereoscopic, harmonious and beautiful.
The embodiment of the invention provides an image processing method. A terminal device can determine a first target area according to the depth image and the two-dimensional image of a target face area, where the first target area is an area within the region of the depth image other than the five sense organs; adjust the contour of the first target area if the first target area does not satisfy a target condition, the target condition including a first condition: the ratio of the length value of the area to the length value of the face area is within a first range, and the ratio of the width value of the area to the interpupillary distance value in the face area is within a second range; and map the adjusted contour of the first target area to the two-dimensional image to obtain a second target area, where the second target area is the corresponding area, other than the five sense organs, in the adjusted two-dimensional image. The terminal device combines the depth image and the two-dimensional image to determine the first target area in the depth image, adjusts the contour of the first target area when the first target area does not satisfy the target condition, and maps the adjusted contour into the two-dimensional image to obtain a contour-adjusted second target area in the two-dimensional image, thereby achieving the purpose of beautifying the face.
As shown in fig. 4, an embodiment of the present invention provides a terminal device 120, where the terminal device 120 includes: a determination module 121, an adjustment module 122, and a mapping module 123;
the determining module 121 is configured to determine a first target region according to the depth image and the two-dimensional image of the target face region, where the first target region is an area within the region of the depth image other than the five sense organs;
the adjusting module 122 is configured to, in a case that the first target area determined by the determining module does not satisfy a target condition, adjust the contour of the first target area, where the target condition includes a first condition that includes: the ratio of the length value of the region to the length value of the face region is within a first range, and the ratio of the width value of the region to the interpupillary distance value in the face region is within a second range;
the mapping module 123 is configured to map the contour of the first target region adjusted by the adjusting module into the two-dimensional image to obtain a second target region, where the second target region is a region in the two-dimensional image after adjustment, except for five sense organs.
Optionally, the determining module 121 is specifically configured to determine the first target area according to a depth value and a depth gradient value of each pixel of the depth image, and a gray value of each pixel of the two-dimensional image.
Optionally, the target condition further includes: a second condition; the second condition includes: the flatness degree value of the area is within a third range. The adjusting module 122 is specifically configured to, when the first target area determined by the determining module does not satisfy the target condition, adjust at least one of the following: the contour of the first target region and the flatness degree value of the first target region; and is further configured to, after the adjusted contour of the first target region is mapped to the two-dimensional image to obtain the second target region, adjust the gray value of the second target region according to the depth variation of the adjusted first target region, where the depth variation is the change in depth of the first target region between before and after the adjustment of its contour.
Optionally, the flatness value of the region is a weighted average of the depth gradients of the region, and the formula is as follows:

K1 = Σ_(i,j) weight(i, j) · gradient(i, j)

wherein K1 is the flatness value of the region and (i, j) are the coordinates of a pixel point of the region, with, for example, normalized Gaussian weights

weight(i, j) = exp(-((i - i0)^2 + (j - j0)^2) / (2σ^2)) / Σ_(m,n) exp(-((m - i0)^2 + (n - j0)^2) / (2σ^2))

wherein (i0, j0) is the coordinate of the central point of the region, weight(i, j) is the weight value of pixel point (i, j), and σ is a preset constant value, and the discrete gradient magnitude

gradient(i, j) = sqrt((depth(i+1, j) - depth(i, j))^2 + (depth(i, j+1) - depth(i, j))^2)

wherein depth(i, j) is the depth value of pixel point (i, j), and gradient(i, j) is the depth gradient value of pixel point (i, j).
Optionally, the mapping module is specifically configured to adjust the two-dimensional image according to the contour of the first target area adjusted by the adjusting module, so as to obtain a second target area whose contour is the same as the adjusted contour of the first target area.
The terminal device provided in the embodiment of the present invention can implement each process shown in any one of fig. 2 to fig. 3 in the above method embodiment, and details are not described here again to avoid repetition.
The embodiment of the invention provides terminal equipment that can determine a first target area according to the depth image and the two-dimensional image of a target face area, where the first target area is an area within the region of the depth image other than the five sense organs; adjust the contour of the first target area if the first target area does not satisfy a target condition, the target condition comprising a first condition: the ratio of the length value of the area to the length value of the face area is within a first range, and the ratio of the width value of the area to the interpupillary distance value in the face area is within a second range; and map the adjusted contour of the first target area to the two-dimensional image to obtain a second target area, where the second target area is the corresponding area, other than the five sense organs, in the adjusted two-dimensional image. The terminal equipment combines the depth image and the two-dimensional image to determine the first target area in the depth image, adjusts the contour of the first target area when it does not satisfy the target condition, and maps the adjusted contour into the two-dimensional image to obtain a contour-adjusted second target area, thereby achieving the purpose of beautifying the face.
Fig. 5 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention. As shown in fig. 5, the terminal device 100 includes but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 5 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal device, a wearable device, a pedometer, and the like.
The processor 110 is configured to determine a first target region according to the depth image and the two-dimensional image of the target face region, where the first target region is an area within the region of the depth image other than the five sense organs; adjust the contour of the first target area if the first target area does not satisfy a target condition, the target condition including a first condition: the ratio of the length value of the area to the length value of the face area is within a first range, and the ratio of the width value of the area to the interpupillary distance value in the face area is within a second range; and map the adjusted contour of the first target area to the two-dimensional image to obtain a second target area, where the second target area is the corresponding area, other than the five sense organs, in the adjusted two-dimensional image.
According to the embodiment of the invention, the terminal device can determine a first target area according to the depth image and the two-dimensional image of the target face area, where the first target area is an area within the region of the depth image other than the five sense organs; adjust the contour of the first target area if it does not satisfy the target condition, the target condition including a first condition: the ratio of the length value of the area to the length value of the face area is within a first range, and the ratio of the width value of the area to the interpupillary distance value in the face area is within a second range; and map the adjusted contour of the first target area to the two-dimensional image to obtain a second target area, the corresponding area, other than the five sense organs, in the adjusted two-dimensional image. By combining the depth image and the two-dimensional image to determine the first target area in the depth image, adjusting its contour when the target condition is not satisfied, and mapping the adjusted contour into the two-dimensional image, a contour-adjusted second target area is obtained in the two-dimensional image.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during message transmission or a call; specifically, it receives downlink data from a base station and then delivers it to the processor 110 for processing, and it sends uplink data to the base station. Typically, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The terminal device provides wireless broadband internet access to the user through the network module 102, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the terminal device 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive audio or video signals. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042; the graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and be capable of processing the sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output.
The terminal device 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the terminal device 100 moves to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the attitude of the terminal device (such as horizontal and vertical screen switching, related games, magnetometer attitude calibration), and vibration identification related functions (such as pedometer and tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The Display unit 106 may include a Display panel 1061, and the Display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on touch panel 1071 or near touch panel 1071 using a finger, a stylus, or any other suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, receives a command from the processor 110, and executes the command. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 5, the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the terminal device, which is not limited herein.
The interface unit 108 is an interface for connecting an external device to the terminal apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 100 or may be used to transmit data between the terminal apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, etc. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the terminal device. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
Terminal device 100 may further include a power supply 111 (e.g., a battery) for providing power to various components, and optionally, power supply 111 may be logically connected to processor 110 via a power management system, so as to implement functions of managing charging, discharging, and power consumption via the power management system.
In addition, the terminal device 100 includes some functional modules that are not shown, and are not described in detail here.
Optionally, an embodiment of the present invention further provides a terminal device, which may include the processor 110 shown in fig. 5, the memory 109, and a computer program stored on the memory 109 and capable of being executed on the processor 110, where the computer program, when executed by the processor 110, implements each process of the image processing method shown in any one of fig. 2 to fig. 3 in the foregoing method embodiments, and can achieve the same technical effect, and details are not described here to avoid repetition.
An embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the image processing method shown in any one of fig. 2 to 3 in the foregoing method embodiments, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of another like element in the process, method, article, or apparatus that comprises the element.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the particular illustrative embodiments, it is to be understood that the invention is not limited to the disclosed embodiments, but is intended to cover various modifications, equivalent arrangements, and equivalents thereof, which may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
For example, each drawing in the embodiments of the present invention is illustrated as an independent embodiment; in specific implementation, each drawing may also be implemented in combination with any other drawing with which it can be combined, and the embodiments of the present invention are not limited.

Claims (10)

1. An image processing method, characterized in that the method comprises:
determining a first target region according to a depth image and a two-dimensional image of a target face region, wherein the first target region is a region within the portion of the depth image other than the five sense organs;
adjusting a contour of the first target region if the first target region does not satisfy a target condition, wherein the target condition comprises a first condition, the first condition comprising: the ratio of the length value of the region to the length value of the face region being within a first range, and the ratio of the width value of the region to the interpupillary distance value in the face region being within a second range; and
mapping the adjusted contour of the first target region into the two-dimensional image to obtain a second target region, wherein the second target region is a region of the adjusted two-dimensional image other than the five sense organs.
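By way of illustration only, the following Python/NumPy sketch shows how the first condition of claim 1 could be checked. The Region class, the boolean-mask representation, and the numeric ranges are assumptions introduced here for the example; the claim itself leaves the first and second ranges unspecified. Sketches of the region determination (claim 2), the gray-value adjustment (claim 3), the flatness value (claim 4), and the contour mapping (claim 5) follow the corresponding claims below.

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class Region:
        # Candidate first target region, stored as a boolean mask over the
        # depth image (an assumed representation, not mandated by the claim).
        mask: np.ndarray

        @property
        def length(self) -> float:
            # Vertical extent of the region, in pixels.
            rows = np.flatnonzero(self.mask.any(axis=1))
            return float(rows[-1] - rows[0] + 1) if rows.size else 0.0

        @property
        def width(self) -> float:
            # Horizontal extent of the region, in pixels.
            cols = np.flatnonzero(self.mask.any(axis=0))
            return float(cols[-1] - cols[0] + 1) if cols.size else 0.0

    def satisfies_first_condition(region: Region,
                                  face_length: float,
                                  pupil_distance: float,
                                  first_range=(0.28, 0.36),
                                  second_range=(0.9, 1.1)) -> bool:
        # First condition of claim 1: the length/face-length ratio must lie
        # in a first range and the width/interpupillary-distance ratio in a
        # second range. The numeric ranges here are invented placeholders.
        r_len = region.length / face_length
        r_wid = region.width / pupil_distance
        return (first_range[0] <= r_len <= first_range[1]
                and second_range[0] <= r_wid <= second_range[1])

When the check fails, the contour is adjusted (for example, scaled toward the nearest in-range size) before the mapping step; the claim does not prescribe a particular adjustment operator.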
2. The method of claim 1, wherein the determining the first target region according to the depth image and the two-dimensional image of the target face region comprises:
determining the first target region according to the depth value and the depth gradient value of each pixel point of the depth image and the gray value of each pixel point of the two-dimensional image.
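One plausible reading of claim 2, again as a non-authoritative NumPy sketch: keep the pixels whose depth value, depth gradient value, and gray value all fall inside preset bands. The threshold values (depth_rng, grad_max, gray_rng) are hypothetical and exist only to make the example runnable.

    import numpy as np

    def candidate_region_mask(depth: np.ndarray, gray: np.ndarray,
                              depth_rng=(300.0, 600.0),
                              grad_max=4.0,
                              gray_rng=(60, 200)) -> np.ndarray:
        # Per-pixel depth gradient magnitude via central differences.
        gy, gx = np.gradient(depth.astype(np.float64))
        grad = np.hypot(gx, gy)
        # A pixel belongs to the candidate region only if all three
        # quantities named in claim 2 fall inside their (assumed) bands.
        return ((depth >= depth_rng[0]) & (depth <= depth_rng[1]) &
                (grad <= grad_max) &
                (gray >= gray_rng[0]) & (gray <= gray_rng[1]))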
3. The method of claim 1, wherein the target condition further comprises a second condition, the second condition comprising: the flatness degree value of the region being within a third range;
wherein the adjusting the contour of the first target region if the first target region does not satisfy the target condition comprises:
if the first target region does not satisfy the target condition, adjusting at least one of: the contour of the first target region and the flatness degree value of the first target region;
and wherein, after the mapping the adjusted contour of the first target region into the two-dimensional image to obtain the second target region, the method further comprises:
adjusting the gray value of the second target region according to the depth variation of the adjusted first target region, wherein the depth variation is the change in depth of the first target region from before to after the contour of the first target region is adjusted.
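A hedged sketch of the gray-value adjustment in claim 3, assuming a simple linear model in which brightness shifts in proportion to each pixel's depth change; the claim fixes only the input (the depth variation) and the output (an adjusted gray value), so the gain and the linear relationship are assumptions of this example.

    import numpy as np

    def adjust_gray_by_depth_change(gray: np.ndarray, mask: np.ndarray,
                                    depth_before: np.ndarray,
                                    depth_after: np.ndarray,
                                    gain: float = 0.1) -> np.ndarray:
        # Depth variation of claim 3: depth after the contour adjustment
        # minus depth before the contour adjustment, per pixel.
        delta = depth_after.astype(np.float64) - depth_before.astype(np.float64)
        out = gray.astype(np.float64)
        # Shift the gray value of the second target region in proportion to
        # the depth change; 'gain' is an invented illustration parameter.
        out[mask] += gain * delta[mask]
        return np.clip(out, 0, 255).astype(gray.dtype)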
4. The method of claim 3, wherein the flatness degree value of the region is a weighted average of the depth gradients of the region, as given by:

K_1 = \frac{\sum_{(i,j)} \mathrm{weight}(i,j)\,\mathrm{gradient}(i,j)}{\sum_{(i,j)} \mathrm{weight}(i,j)}

wherein K_1 is the flatness degree value of the region, (i, j) are the coordinates of a pixel point of the region, (i_0, j_0) are the coordinates of the central point of the region, and weight(i, j) is the weight value of the pixel point (i, j),

\mathrm{weight}(i,j) = \exp\left(-\frac{(i-i_0)^2 + (j-j_0)^2}{2\sigma^2}\right)

\sigma is a preset constant value, depth(i, j) is the depth value of the pixel point (i, j), and gradient(i, j) is the depth gradient value of the pixel point (i, j),

\mathrm{gradient}(i,j) = \sqrt{\left(\mathrm{depth}(i+1,j)-\mathrm{depth}(i,j)\right)^2 + \left(\mathrm{depth}(i,j+1)-\mathrm{depth}(i,j)\right)^2}
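The following NumPy sketch computes K_1 under the reconstruction given above. Because the granted text renders the weight and gradient formulas as images, the Gaussian weight and the finite-difference gradient are assumed forms; np.gradient additionally uses central rather than forward differences, so treat this as illustrative rather than definitive.

    import numpy as np

    def flatness_value(depth: np.ndarray, mask: np.ndarray,
                       sigma: float = 25.0) -> float:
        # Depth gradient magnitude, a central-difference approximation of
        # gradient(i, j) in claim 4.
        gy, gx = np.gradient(depth.astype(np.float64))
        grad = np.hypot(gx, gy)
        ii, jj = np.nonzero(mask)
        if ii.size == 0:
            return 0.0
        # (i0, j0): central point of the region, taken here as the centroid.
        i0, j0 = ii.mean(), jj.mean()
        # Gaussian weight centered on (i0, j0); sigma is the preset constant.
        w = np.exp(-((ii - i0) ** 2 + (jj - j0) ** 2) / (2.0 * sigma ** 2))
        # Weighted average of the depth gradients over the region.
        return float(np.sum(w * grad[ii, jj]) / np.sum(w))

A small flatness value indicates that the weighted depth gradients are small, i.e. the region is relatively flat, which is what the third range of claim 3 tests.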
5. The method according to any one of claims 1 to 4, wherein the mapping the adjusted contour of the first target region into the two-dimensional image to obtain the second target region comprises:
adjusting the two-dimensional image in correspondence with the adjusted contour of the first target region, to obtain a second target region having the same contour as the adjusted contour of the first target region.
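If the depth image and the two-dimensional image are pixel-aligned (a common arrangement for depth cameras, though the claims do not state it), the mapping of claim 5 reduces to transferring the adjusted contour mask onto the two-dimensional image; the sketch below shows only that transfer and omits the content warping a real implementation would perform so that the region keeps plausible texture.

    import numpy as np

    def second_target_region(adjusted_mask: np.ndarray,
                             gray: np.ndarray) -> np.ndarray:
        # Copy the gray-image content inside the adjusted contour; pixels
        # outside the contour are zeroed. Registration between the depth
        # and two-dimensional images is assumed, not derived.
        region = np.zeros_like(gray)
        region[adjusted_mask] = gray[adjusted_mask]
        return region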
6. A terminal device, characterized in that the terminal device comprises: the device comprises a determining module, an adjusting module and a mapping module;
the determining module is configured to determine a first target region according to a depth image and a two-dimensional image of a target face region, wherein the first target region is a region within the portion of the depth image other than the five sense organs;
the adjusting module is configured to adjust a contour of the first target region if the first target region determined by the determining module does not satisfy a target condition, wherein the target condition comprises a first condition, the first condition comprising: the ratio of the length value of the region to the length value of the face region being within a first range, and the ratio of the width value of the region to the interpupillary distance value in the face region being within a second range; and
the mapping module is configured to map the contour of the first target region adjusted by the adjusting module into the two-dimensional image to obtain a second target region, wherein the second target region is a region of the adjusted two-dimensional image other than the five sense organs.
7. The terminal device of claim 6,
wherein the determining module is specifically configured to determine the first target region according to the depth value and the depth gradient value of each pixel point of the depth image and the gray value of each pixel point of the two-dimensional image.
8. The terminal device of claim 6, wherein the target condition further comprises a second condition, the second condition comprising: the flatness degree value of the region being within a third range;
the adjusting module is specifically configured to, when the first target area determined by the determining module does not satisfy the target condition, adjust at least one of the following: a contour of the first target region and a flatness degree value of the first target region; and after the adjusted contour of the first target region is mapped to the two-dimensional image to obtain a second target region, adjusting a gray value of the second target region according to a depth variation of the adjusted first target region, where the depth variation is a depth variation of the first target region after the contour of the first target region is adjusted and before the contour of the first target region is adjusted.
9. A terminal device, characterized in that it comprises a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 5.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 5.
CN201811434146.1A 2018-11-28 2018-11-28 Image processing method and terminal equipment Active CN109584150B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811434146.1A CN109584150B (en) 2018-11-28 2018-11-28 Image processing method and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811434146.1A CN109584150B (en) 2018-11-28 2018-11-28 Image processing method and terminal equipment

Publications (2)

Publication Number Publication Date
CN109584150A CN109584150A (en) 2019-04-05
CN109584150B true CN109584150B (en) 2023-03-14

Family

ID=65925269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811434146.1A Active CN109584150B (en) 2018-11-28 2018-11-28 Image processing method and terminal equipment

Country Status (1)

Country Link
CN (1) CN109584150B (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6029021B2 (en) * 2012-01-27 2016-11-24 パナソニックIpマネジメント株式会社 Image processing apparatus, imaging apparatus, and image processing method
BR112014028135A2 (en) * 2012-05-14 2017-06-27 Koninklijke Philips Nv portable depth characterization apparatus for characterizing a depth of a surface of a target object; method of characterizing a depth of a surface of a target object using a handheld apparatus; and a half computer readable storage
CN107833177A (en) * 2017-10-31 2018-03-23 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN107835367A (en) * 2017-11-14 2018-03-23 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal
CN108038836B (en) * 2017-11-29 2020-04-17 维沃移动通信有限公司 Image processing method and device and mobile terminal
CN108052878B (en) * 2017-11-29 2024-02-02 上海图漾信息科技有限公司 Face recognition device and method

Also Published As

Publication number Publication date
CN109584150A (en) 2019-04-05

Similar Documents

Publication Publication Date Title
EP3965003A1 (en) Image processing method and device
CN110969981B (en) Screen display parameter adjusting method and electronic equipment
CN109685915B (en) Image processing method and device and mobile terminal
CN108712603B (en) Image processing method and mobile terminal
CN107644396B (en) Lip color adjusting method and device
CN107248137B (en) Method for realizing image processing and mobile terminal
CN109241832B (en) Face living body detection method and terminal equipment
CN107730460B (en) Image processing method and mobile terminal
CN111031234B (en) Image processing method and electronic equipment
CN109544445B (en) Image processing method and device and mobile terminal
CN107153500B (en) Method and equipment for realizing image display
CN109819166B (en) Image processing method and electronic equipment
JP2023518548A (en) Detection result output method, electronic device and medium
CN111080747B (en) Face image processing method and electronic equipment
CN111031178A (en) Video stream clipping method and electronic equipment
CN110602390B (en) Image processing method and electronic equipment
CN109840476B (en) Face shape detection method and terminal equipment
CN110555815B (en) Image processing method and electronic equipment
CN109639981B (en) Image shooting method and mobile terminal
CN109727212B (en) Image processing method and mobile terminal
CN110944112A (en) Image processing method and electronic equipment
CN109104573B (en) Method for determining focusing point and terminal equipment
CN107563353B (en) Image processing method and device and mobile terminal
CN111432122B (en) Image processing method and electronic equipment
CN109584150B (en) Image processing method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant