CN107886516B - Method and computing equipment for computing hair trend in portrait

Publication number: CN107886516B (application CN201711240025.9A)
Authority: CN (China)
Prior art keywords: hair, pixel point, pixel, connected region, region
Legal status: Active
Application number: CN201711240025.9A
Other languages: Chinese (zh)
Other versions: CN107886516A
Inventors: 吴善思源, 王晓晶, 李启东, 李志阳, 洪炜冬
Current Assignee: Xiamen Meitu Technology Co Ltd
Original Assignee: Xiamen Meitu Technology Co Ltd
Application filed by Xiamen Meitu Technology Co Ltd; priority to CN201711240025.9A; published as CN107886516A; granted and published as CN107886516B.

Classifications

    • G — Physics; G06 — Computing; Calculating or Counting; G06T — Image Data Processing or Generation, in General; G06T7/00 — Image analysis; G06T7/10 — Segmentation; Edge detection
    • G06T7/11 — Region-based segmentation
    • G06T7/136 — Segmentation; Edge detection involving thresholding
    • G06T7/187 — Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement; G06T2207/30 — Subject of image; Context of image processing; G06T2207/30196 — Human being; Person; G06T2207/30201 — Face


Abstract

The invention discloses a method for calculating the trend of hair in a portrait, comprising the following steps: calculating the gradient of each pixel point in the hair region of the portrait; calculating the hair direction of each pixel point from its gradient; generating at least one connected region according to the hair direction of each pixel point; merging the pixel points outside the at least one connected region into an adjacent connected region according to a first predetermined condition to generate new connected regions; and determining the forward or reverse orientation of the hair direction in each new connected region to obtain the hair direction of each new connected region. The invention also discloses a computing device for executing the method.

Description

Method and computing equipment for computing hair trend in portrait
Technical Field
The invention relates to the technical field of image processing, in particular to a method and computing equipment for computing the trend of hair in a portrait.
Background
In face-image processing applications, it is often necessary to obtain texture information of a particular part of the image for further processing. For example, when a portrait (in this disclosure, a portrait is an image containing a face and a hair region) is rendered with a hand-drawn special effect, the trend of the hair in the portrait must be acquired; likewise, hair reconstruction, hair-style modeling, and similar operations on the hair region also require the hair trend. Therefore, in practical applications it is often necessary to determine the running direction (i.e., the trend) of the hair in the hair region of a portrait image.
There are several existing approaches to calculating the hair trend in a portrait. One approach matches hair texture across multiple images shot from different angles, computes a rough 3D hair model from the parallax, and derives the hair trend from the geometry of that model. Another fits and adjusts an existing hair model to obtain an approximate model and computes the trend from it. A third uses deep learning to train a model that takes a portrait picture as input and outputs a hair-trend heat map. However, building a 3D model requires a huge amount of computation, and machine learning requires large amounts of labeled data and a large model file, so these methods are not only time-consuming but also place very high demands on the performance of the hardware.
In summary, an efficient, time-saving scheme for calculating the hair trend in a portrait that still guarantees accuracy is needed, to serve as a general-purpose module for the many portrait-processing algorithms that operate on the hair region.
Disclosure of Invention
To this end, the present invention provides a method and a computing device for calculating the hair trend in a portrait, in an attempt to solve, or at least alleviate, at least one of the problems identified above.
According to one aspect of the present invention, there is provided a method of calculating the hair trend in a portrait, adapted to be executed in a computing device and comprising the steps of: calculating the gradient of each pixel point in the hair region of the portrait; calculating the hair direction of each pixel point from its gradient; generating at least one connected region according to the hair direction of each pixel point; merging the pixel points outside the at least one connected region into an adjacent connected region according to a first predetermined condition to generate new connected regions; and determining the forward or reverse orientation of the hair direction in each new connected region to obtain the hair direction of each new connected region.
Optionally, in the method for calculating the hair trend in a portrait according to the present invention, the step of calculating the gradient of each pixel point in the hair region includes: acquiring the hair region in the portrait by a hair-region identification method; and calculating the gradient of each pixel point in the hair region with a predetermined gradient operator.
Optionally, in the method for calculating the hair trend in a portrait according to the present invention, after the step of obtaining the hair direction of each new connected region, the method further includes: smoothing the hair direction of each new connected region, and taking the smoothed direction as the hair direction of that region.
Optionally, in the method for calculating the hair trend in a portrait according to the present invention, the hair direction of each pixel point is represented by the angle between the hair direction at that pixel and the x-axis, denoted θ: θ = arctan(Gy/Gx), where Gy is the gradient of the pixel point in the y-axis direction and Gx is the gradient in the x-axis direction.
Optionally, in the method for calculating the hair trend in a portrait according to the present invention, the step of generating at least one connected region according to the hair direction of each pixel point includes: calculating the pixel connection weight between each pixel point and its adjacent pixel points according to their hair directions; and generating at least one connected region according to the pixel connection weights.
Optionally, in the method for calculating the hair trend in a portrait according to the present invention, the step of calculating the pixel connection weight between a pixel point and an adjacent pixel point includes: if the gradient values of the pixel point and of the adjacent pixel point are both larger than a gradient threshold, and the absolute difference between the hair directions of the two pixel points is smaller than an angle threshold, the pixel connection weight between them is 1; otherwise it is 0.
Optionally, in the method for calculating the hair trend in a portrait according to the present invention, the adjacent pixel points of pixel point (y, x) include pixel point (y, x+1) and pixel point (y+1, x).
Optionally, in the method for calculating the hair trend in a portrait according to the present invention, the pixel connection weight L_{y,x+1} of pixel point (y, x) and pixel point (y, x+1) is expressed as:

L_{y,x+1} = 1, if G_{y,x} > ε and G_{y,x+1} > ε and |θ_{y,x} − θ_{y,x+1}| < π/2; otherwise L_{y,x+1} = 0,

and the pixel connection weight L_{y+1,x} of pixel point (y, x) and pixel point (y+1, x) is expressed as:

L_{y+1,x} = 1, if G_{y,x} > ε and G_{y+1,x} > ε and |θ_{y,x} − θ_{y+1,x}| < π/2; otherwise L_{y+1,x} = 0,

where G_{y,x}, G_{y,x+1}, and G_{y+1,x} are the gradient values of pixel points (y, x), (y, x+1), and (y+1, x); θ_{y,x}, θ_{y,x+1}, and θ_{y+1,x} are the hair directions of the respective pixel points; ε is the gradient threshold; and π/2 is the angle threshold.
Optionally, in the method for calculating the hair trend in a portrait according to the present invention, the step of generating at least one connected region according to the pixel connection weights includes: if a pixel connection weight satisfies a second predetermined condition, the two adjacent pixel points corresponding to that weight are considered connected; generating at least one connected block from the mutually connected pixel points; and evaluating the block area of each connected block, taking a connected block as a connected region when its block area is larger than an area threshold.
Alternatively, in the method of calculating hair run in a portrait according to the invention, the area threshold is determined from the area of the hair region.
Optionally, in the method for calculating the hair trend in a portrait according to the present invention, the step of generating a new connected region includes: merging the pixel points outside the at least one connected region into the adjacent connected region with the largest gradient value, thereby generating corresponding new connected regions, wherein the pixel points outside the at least one connected region include connected blocks that were not taken as connected regions and/or pixel points that do not satisfy the second predetermined condition.
Optionally, in the method for calculating the hair trend in a portrait according to the present invention, the step of determining the forward or reverse orientation of the hair direction in each new connected region includes: determining the orientation by minimizing the difference between the initial hair direction of each new connected region and the initial hair directions of the new connected regions around it.
Optionally, in the method of calculating the hair trend in a portrait according to the present invention, minimizing the difference between the initial hair direction of each new connected region and the initial hair directions of its surrounding new connected regions is achieved by the following formula:

min over (ω_1, …, ω_n ∈ {+1, −1}) of Σ_{i=1..n} Σ_{R̃_j ∈ N(R̃_i)} | ω_i·θ̃_i − ω_j·θ̃_j |

where n is the number of new connected regions, θ̃_i is the initial hair direction of new connected region R̃_i, R̃_j ∈ N(R̃_i) denotes a new connected region around R̃_i, and ω takes the value 1 or −1.
According to yet another aspect of the present invention, there is provided a computing device comprising: one or more processors; and a memory; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods described above.
According to a further aspect of the invention there is provided a computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods described above.
According to the scheme for calculating the hair trend in a portrait, no model file or large set of training samples is required: the hair direction of each pixel point in the hair region is calculated, adjacent pixel points with similar hair directions are grouped into a connected region, and the hair direction of each connected region is determined by minimizing the difference between its hair direction and those of the surrounding connected regions, thereby obtaining the hair-trend information of the portrait. Moreover, most of the computation can be performed in parallel, so the method is highly efficient.
Drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in connection with the following description and the annexed drawings, which are indicative of various ways in which the principles disclosed herein may be practiced, and all aspects and equivalents thereof are intended to be within the scope of the claimed subject matter. The above and other objects, features and advantages of the present disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout this disclosure, like reference numerals generally refer to like parts or elements.
FIG. 1 shows a schematic diagram of a configuration of a computing device 100 according to one embodiment of the invention;
FIG. 2 shows a flow diagram of a method 200 of calculating hair strike in a portrait according to one embodiment of the present invention; and
fig. 3A shows a schematic diagram of a hair region with connected regions according to one embodiment of the invention, and fig. 3B shows a schematic diagram of a hair region with new connected regions according to one embodiment of the invention.
Detailed Description
Exemplary embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a block diagram of an example computing device 100. In a basic configuration 102, computing device 100 typically includes system memory 106 and one or more processors 104. A memory bus 108 may be used for communication between the processor 104 and the system memory 106.
Depending on the desired configuration, the processor 104 may be any type of processor, including but not limited to: a microprocessor (μ P), a microcontroller (μ C), a Digital Signal Processor (DSP), or any combination thereof. The processor 104 may include one or more levels of cache, such as a level one cache 110 and a level two cache 112, a processor core 114, and registers 116. The example processor core 114 may include an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a digital signal processing core (DSP core), or any combination thereof. The example memory controller 118 may be used with the processor 104, or in some implementations the memory controller 118 may be an internal part of the processor 104.
Depending on the desired configuration, system memory 106 may be any type of memory, including but not limited to: volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.), or any combination thereof. System memory 106 may include an operating system 120, one or more applications 122, and program data 124. In some embodiments, application 122 may be arranged to operate with program data 124 on an operating system. In some embodiments, computing device 100 is configured to perform a method of calculating hair strike in a portrait, with program data 124 including instructions for performing the method.
Computing device 100 may also include an interface bus 140 that facilitates communication from various interface devices (e.g., output devices 142, peripheral interfaces 144, and communication devices 146) to the basic configuration 102 via the bus/interface controller 130. The example output device 142 includes a graphics processing unit 148 and an audio processing unit 150. They may be configured to facilitate communication with various external devices, such as a display or speakers, via one or more a/V ports 152. Example peripheral interfaces 144 may include a serial interface controller 154 and a parallel interface controller 156, which may be configured to facilitate communication with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, image input device) or other peripherals (e.g., printer, scanner, etc.) via one or more I/O ports 158. An example communication device 146 may include a network controller 160, which may be arranged to facilitate communications with one or more other computing devices 162 over a network communication link via one or more communication ports 164. In this embodiment, the to-be-processed portrait may be obtained in real time through an image input device such as a camera, or may be obtained through the communication device 146.
A network communication link may be one example of a communication medium. Communication media may typically be embodied by computer readable instructions, data structures, program modules, and may include any information delivery media, such as carrier waves or other transport mechanisms, in a modulated data signal. A "modulated data signal" may be a signal that has one or more of its data set or its changes made in such a manner as to encode information in the signal. By way of non-limiting example, communication media may include wired media such as a wired network or private-wired network, and various wireless media such as acoustic, Radio Frequency (RF), microwave, Infrared (IR), or other wireless media. The term computer readable media as used herein may include both storage media and communication media. In some embodiments, one or more programs are stored in the computer-readable medium, including instructions for performing certain methods, such as a method for calculating hair strike in a portrait by computing device 100 through the instructions, according to embodiments of the present invention.
Computing device 100 may be implemented as part of a small-form factor portable (or mobile) electronic device such as a cellular telephone, a digital camera, a Personal Digital Assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application specific device, or a hybrid device that include any of the above functions. Computing device 100 may also be implemented as a personal computer including both desktop and notebook computer configurations.
The flow of the method 200 for calculating the hair trend in a portrait according to one embodiment of the present invention will be described in detail below with reference to FIG. 2. In summary, the method 200 calculates the hair direction of each pixel point in the hair region, groups adjacent pixel points with similar hair directions into connected regions, and computes the hair direction of each connected region by an iteratively converging procedure, thereby obtaining the hair-trend information of the portrait.
As shown in FIG. 2, the method 200 begins with step S210, where the gradient of each pixel point in the hair region in the portrait is calculated. As previously mentioned, a portrait as used herein refers to an image that includes a region of human face hair.
According to an embodiment of the present invention, for a portrait acquired in real time or received through a communication interface, the hair region in the portrait is first acquired by a hair-region identification method, and all subsequent processing operates on the acquired hair-region image. Optionally, the hair region may be identified by segmenting, on the basis of a detected face region, according to hair characteristics such as color, texture, shape, and position (for example, a method based on skin-color model matching); by constructing a Gaussian mixture model from hair color and position information; or by constructing a machine-learning model based on a neural network. The invention is not limited in this regard: any hair-region identification method may be combined with embodiments of the present invention to complete the calculation of the hair trend in a portrait.
After the hair region is obtained, the gradient of each pixel point in the hair region is calculated with a predetermined gradient operator. According to one embodiment of the invention, the gradient is computed on the luminance-channel image of the hair region to obtain gradient information at the hair edges. Alternatively, the predetermined gradient operator may be the Sobel, Roberts, Prewitt, or Laplacian operator. Taking the Sobel operator as an example, the gradient of pixel point (y, x) may be calculated by:

Gx_{y,x} = Σ_{i=1..3} Σ_{j=1..3} Sx_{i,j} · I_{y+i−2, x+j−2}
Gy_{y,x} = Σ_{i=1..3} Σ_{j=1..3} Sy_{i,j} · I_{y+i−2, x+j−2}

where Gx_{y,x} and Gy_{y,x} are the gradients of pixel point (y, x) in the x-axis and y-axis directions (i.e., horizontal and vertical), I_{y,x} is the pixel value at image position (y, x), and Sx_{i,j} and Sy_{i,j} are the weights at position (i, j) of the x-direction and y-direction convolution kernels, with i ≤ 3 and j ≤ 3 for a 3 × 3 kernel.

The 3 × 3 gradient operators in the horizontal (Sx) and vertical (Sy) directions are, respectively:

Sx = [ −1 0 1 ; −2 0 2 ; −1 0 1 ]    Sy = [ −1 −2 −1 ; 0 0 0 ; 1 2 1 ]
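The convolution above can be sketched in plain Python. This is an illustrative aid, not part of the patent; the function name and the list-of-lists image representation are assumptions, and border pixels are simply left at zero:

```python
# Illustrative sketch: per-pixel Sobel gradients Gx, Gy on a luminance
# image stored as a 2-D list of numbers. Borders are left at 0.

SX = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]   # horizontal (x-direction) Sobel kernel
SY = [[-1, -2, -1],
      [ 0,  0,  0],
      [ 1,  2,  1]]  # vertical (y-direction) Sobel kernel

def sobel_gradients(img):
    """Return (gx, gy) gradient maps for the interior pixels of img."""
    h, w = len(img), len(img[0])
    gx = [[0.0] * w for _ in range(h)]
    gy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sx = sy = 0.0
            for i in range(3):
                for j in range(3):
                    v = img[y + i - 1][x + j - 1]  # kernel centred on (y, x)
                    sx += SX[i][j] * v
                    sy += SY[i][j] * v
            gx[y][x], gy[y][x] = sx, sy
    return gx, gy
```

On an image with a vertical edge, gx responds strongly while gy stays zero, matching the role of the two kernels.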
then, in step S220, the hair direction of each pixel point is calculated according to the gradient of each pixel point. According to the embodiment of the invention, the hair direction of each pixel point is represented by the included angle between the hair direction on the pixel point and the x axis, which is recorded as theta, and the theta is calculated by the following formula:
θ=arctan(Gy/Gx)
wherein, Gy is the gradient of the pixel point in the y-axis direction, Gx is the gradient of the pixel point in the x-axis direction, and taking the pixel point (y, x) as an example, the hair direction θ of the pixel pointy,x=arctan(Gyy,x/Gxy,x)。
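As a small illustrative sketch (not from the patent), the per-pixel angle can be computed directly from the two gradients; handling of Gx = 0 is an implementation assumption, since arctan(Gy/Gx) is otherwise undefined there:

```python
import math

def hair_direction(gx, gy):
    """Angle between the hair direction at a pixel and the x-axis,
    theta = arctan(Gy / Gx) as in the text; Gx == 0 is mapped to pi/2
    (an assumed convention for a vertical gradient)."""
    if gx == 0:
        return math.pi / 2
    return math.atan(gy / gx)
```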
In practice, it was found that the hair region does not exhibit clear strands everywhere; that is, much of the θ calculated in step S220 is unreliable, often disturbed by factors such as stray hairs and noise. Meanwhile, in the course of implementation, the inventors of the present application observed that:
A. regions with stronger gradients exhibit higher directional confidence;
B. the more uniform the direction within a region, the higher the region's overall directional confidence;
C. in most cases, the direction of the hair in one area is similar to the direction of the surrounding area.
To suppress this interference, the problem is simplified according to the embodiment of the present invention as follows: search, based on the hair direction of each pixel point, for sub-regions of high gradient strength and uniform direction.
In the following step S230, at least one connected region is generated according to the hair direction of each pixel point. Specifically, step S230 comprises steps 1) and 2) below.
Step 1): calculate the pixel connection weight between each pixel point and its adjacent pixel points according to their hair directions. The adjacent pixel points of pixel point (y, x) are its two directly adjacent pixels in the x and y directions: (y, x+1) and (y+1, x).
According to an embodiment of the present invention, if the gradient values of a pixel point and of an adjacent pixel point are both greater than the gradient threshold, and the absolute difference between their hair directions is less than the angle threshold, the pixel connection weight between them is 1; otherwise it is 0.
The calculation of the pixel connection weights is expressed by the following formulas. The pixel connection weight L_{y,x+1} between pixel point (y, x) and pixel point (y, x+1) is:

L_{y,x+1} = 1, if G_{y,x} > ε and G_{y,x+1} > ε and |θ_{y,x} − θ_{y,x+1}| < π/2; otherwise L_{y,x+1} = 0.

The pixel connection weight L_{y+1,x} between pixel point (y, x) and pixel point (y+1, x) is:

L_{y+1,x} = 1, if G_{y,x} > ε and G_{y+1,x} > ε and |θ_{y,x} − θ_{y+1,x}| < π/2; otherwise L_{y+1,x} = 0.

In the formulas above, θ_{y,x}, θ_{y,x+1}, and θ_{y+1,x} are the hair directions of pixel points (y, x), (y, x+1), and (y+1, x); ε is the gradient threshold; π/2 is the angle threshold; and G_{y,x}, G_{y,x+1}, and G_{y+1,x} are the gradient values of the respective pixel points. Taking G_{y,x} as an example, it is the gradient magnitude:

G_{y,x} = sqrt(Gx_{y,x}² + Gy_{y,x}²)

where, as before, Gx_{y,x} and Gy_{y,x} are the gradients of pixel point (y, x) in the x-axis and y-axis directions.
According to an embodiment of the invention, the gradient threshold ε is determined based on the global gradient values of the hair region, optionally ε is taken to be 30% of the maximum value of the global gradient.
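The connection-weight rule can be sketched as a small predicate. This is illustrative only; the function name and argument order are assumptions:

```python
import math

def connection_weight(g_a, g_b, theta_a, theta_b, eps, angle_thresh=math.pi / 2):
    """Pixel connection weight between two adjacent pixels: 1 when both
    gradient magnitudes exceed the threshold eps and the hair directions
    differ by less than the angle threshold, else 0 (per the text)."""
    if g_a > eps and g_b > eps and abs(theta_a - theta_b) < angle_thresh:
        return 1
    return 0
```

Evaluating this once for each (y, x)/(y, x+1) pair and each (y, x)/(y+1, x) pair yields the two weight maps used in step 2).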
Step 2): generate at least one connected region according to the pixel connection weights obtained in step 1).
According to an embodiment of the present invention, if a pixel connection weight satisfies the second predetermined condition (namely, the pixel connection weight is 1), the two adjacent pixel points corresponding to that weight are considered connected. At least one connected block is then generated from the mutually connected pixel points (i.e., each connected block satisfies: any pixel point in the block is connected to at least one other pixel point in the same block and to no pixel point outside the block). Finally, the block area of each connected block is evaluated; when the block area of a connected block is larger than the area threshold, the connected block is taken as a connected region, denoted R_i.
According to an embodiment of the invention, the area threshold is determined according to the area of the hair region, optionally, the area threshold is 0.5-1.0% of the area of the hair region.
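Steps 1) and 2) together amount to connected-component labelling over the two weight maps followed by an area filter. A hedged sketch follows (the names and the breadth-first traversal are implementation choices, not prescribed by the patent):

```python
from collections import deque

def connected_regions(h, w, right, down, area_thresh):
    """Group pixels into connected blocks using the horizontal (`right`)
    and vertical (`down`) connection-weight maps, then keep only blocks
    whose area exceeds `area_thresh` as connected regions.
    right[y][x] == 1 links (y, x) to (y, x+1); down[y][x] links (y, x)
    to (y+1, x)."""
    label = [[-1] * w for _ in range(h)]
    blocks = []
    for sy in range(h):
        for sx in range(w):
            if label[sy][sx] != -1:
                continue
            block, queue = [], deque([(sy, sx)])
            label[sy][sx] = len(blocks)
            while queue:
                y, x = queue.popleft()
                block.append((y, x))
                neighbours = []
                if x + 1 < w and right[y][x]:
                    neighbours.append((y, x + 1))
                if x - 1 >= 0 and right[y][x - 1]:
                    neighbours.append((y, x - 1))
                if y + 1 < h and down[y][x]:
                    neighbours.append((y + 1, x))
                if y - 1 >= 0 and down[y - 1][x]:
                    neighbours.append((y - 1, x))
                for ny, nx in neighbours:
                    if label[ny][nx] == -1:
                        label[ny][nx] = label[sy][sx]
                        queue.append((ny, nx))
            blocks.append(block)
    # Area filter: only sufficiently large blocks become connected regions.
    return [b for b in blocks if len(b) > area_thresh]
```

Pixels whose blocks fail the area filter are exactly the "leftover" pixels that step S240 merges later.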
After the processing of step S230, there is at least one connected region in the hair region. An example hair region is shown in FIG. 3A, where 301 and 303 each represent a connected region R_i generated by step S230.

Subsequently, in step S240, the pixel points outside the at least one connected region are merged, under a first predetermined condition, into a connected region adjacent to them, generating new connected regions. The pixel points outside the at least one connected region include connected blocks that were not taken as connected regions and/or pixel points that do not satisfy the second predetermined condition (i.e., isolated pixels not connected to any other pixel); in FIG. 3A, 304 represents a connected block that was not taken as a connected region, and 305 represents an isolated pixel. According to one embodiment of the invention, the first predetermined condition is: the non-merged connected blocks and the pixel points that do not satisfy the second predetermined condition are merged into the adjacent connected region with the largest gradient value.

After the processing of step S240, the hair region is divided into a plurality of connected regions, which are recorded as the new connected regions R̃_i. FIG. 3B illustrates the hair region of FIG. 3A after step S240, where 310, 320, and 330 each represent a new connected region R̃_i. It should be noted that FIGS. 3A and 3B are only schematic illustrations of the results of steps S230 and S240; in actual processing, the distribution of the connected regions may be considerably more complicated.
After steps S230 and S240, pixel points with the same hair trend have been grouped into one region; that is, each new connected region already has a relatively accurate hair trend. Optionally, the average of the hair directions of the pixel points in a new connected region is taken as the initial hair direction of that region, recorded as θ̃_i. Of course, probability statistics may also be applied to the hair directions of the pixel points in the new connected region (for example, eliminating individual pixel points whose hair direction deviates from the overall direction) before computing the initial hair direction. The invention is not limited in this regard.
Subsequently, in step S250, the forward or reverse orientation of the hair direction in each new connected region is determined, yielding the hair direction of each new connected region. Following the preceding steps, it remains to distinguish, for each new connected region R̃_i, whether its determined initial hair direction θ̃_i points in the forward or the reverse direction.
According to one embodiment of the invention, the step of determining the forward and reverse directions of the direction of the hair in each new connected region comprises: the forward and reverse directions of the hair directions in each new connected region are determined by minimizing the difference between the initial hair direction of each new connected region and the initial hair direction of its surrounding new connected region (i.e., making the hair direction of each new connected region and the hair direction of its surrounding new connected region as co-directional as possible). Specifically, minimizing the difference between the initial hairline direction of each new connected region and the initial hairline direction of its surrounding new connected region is achieved by the following formula:
Figure BDA0001489655290000111
wherein n represents the number of new connected regions,
Figure BDA0001489655290000112
representing a new connected region
Figure BDA0001489655290000113
In the direction of the initial hair strand of the hair,
Figure BDA0001489655290000114
representing a new connected region
Figure BDA0001489655290000115
New connected area around
Figure BDA0001489655290000116
ω is 1 or-1.
After the above minimization, the direction of the hair in each new connected region is determined again and recorded as
Figure BDA0001489655290000117
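The sign assignment above can be illustrated with a brute-force search over ω ∈ {+1, −1} per region. The patent does not name an optimizer, so exhaustive enumeration (practical only for a handful of regions) stands in for it here; `neighbors` maps each region index to the indices of its adjacent regions and is an assumed data layout:

```python
import itertools

def resolve_signs(theta, neighbors):
    """Choose omega_i in {+1, -1} minimizing the sum, over adjacent region
    pairs, of |omega_i*theta_i - omega_j*theta_j|, then return the
    sign-resolved directions omega_i * theta_i.

    theta:     list of initial region directions.
    neighbors: dict mapping region index -> list of adjacent region indices.
    """
    n = len(theta)
    best_cost, best = float("inf"), None
    for omega in itertools.product((1, -1), repeat=n):
        # count each adjacent pair once (j > i)
        cost = sum(abs(omega[i] * theta[i] - omega[j] * theta[j])
                   for i in range(n) for j in neighbors[i] if j > i)
        if cost < best_cost:
            best_cost, best = cost, omega
    return [o * t for o, t in zip(best, theta)]
```

For two adjacent regions with directions 0.3 and −0.3, flipping the second region's sign zeroes the cost, making both regions co-directional.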
According to another embodiment of the present invention, after the hair direction of each new connected region is obtained through the processing of step S250, the method further comprises a step (not shown) of further smoothing the hair direction: smoothing the hair direction of each new connected region, and taking the smoothed hair direction as the hair direction of that new connected region. Specifically, the hair direction θ'_i of a new connected region R_i is smoothed with the hair directions of the surrounding new connected regions adjoining it:

θ''_i = (1/n) · Σ_{R_j ∈ N(R_i)} θ'_j

in the formula, n represents the number of new connected regions participating in the smoothing, and optionally n is 5 × 5.
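A sketch of one smoothing pass follows: each region's direction is replaced by the average of its own direction and its adjacent regions' directions. Whether the region's own direction participates, and how the 5 × 5 neighborhood is chosen, are not spelled out in the text, so this adjacency average is an approximation:

```python
def smooth_directions(theta, neighbors):
    """One smoothing pass over region directions.

    theta:     list of sign-resolved region directions.
    neighbors: dict mapping region index -> list of adjacent region indices.
    Returns a new list; the input is left unchanged so passes can be chained.
    """
    out = []
    for i, t in enumerate(theta):
        vals = [t] + [theta[j] for j in neighbors[i]]  # self + adjacent regions
        out.append(sum(vals) / len(vals))
    return out
```

Applying the pass repeatedly converges neighboring regions toward a common direction.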
According to the scheme for calculating hair run in a portrait provided by the invention, no model file or large set of training samples is required. Adjacent pixel points with the same hair direction are grouped into a connected region simply by calculating the hair direction of each pixel point in the hair region, and the hair direction of each connected region is determined by minimizing the difference between its hair direction and those of the surrounding connected regions, thereby obtaining the hair run information of the hair in the portrait. Most of the calculation can be performed in parallel, so the calculation efficiency is high.
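Since the per-pixel work is independent, much of it vectorizes naturally. As an illustration of the parallel-friendly claim, the pixel connection weights of claim 8 below (right and down neighbors, gradient threshold ε, angle threshold π/2) can be computed for every pixel pair at once; the array names are assumptions:

```python
import numpy as np

def connection_weights(grad, theta, eps):
    """Compute all pixel connection weights at once with array slicing.

    A pixel is connected to its right (and lower) neighbor iff both gradient
    values exceed eps and the absolute hair-direction difference is below pi/2.

    grad, theta: 2-D arrays of gradient values and hair directions.
    Returns (right, down): weight maps of shape (h, w-1) and (h-1, w).
    """
    right = ((grad[:, :-1] > eps) & (grad[:, 1:] > eps) &
             (np.abs(theta[:, :-1] - theta[:, 1:]) < np.pi / 2)).astype(np.uint8)
    down = ((grad[:-1, :] > eps) & (grad[1:, :] > eps) &
            (np.abs(theta[:-1, :] - theta[1:, :]) < np.pi / 2)).astype(np.uint8)
    return right, down
```

Every pair is evaluated in one vectorized sweep, so the computation parallelizes across the whole hair region.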
It should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, this method of disclosure should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules or units or components of the devices in the examples disclosed herein may be arranged in a device as described in this embodiment or alternatively may be located in one or more devices different from the devices in this example. The modules in the foregoing examples may be combined into one module or may be further divided into multiple sub-modules.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
In the case of program code execution on programmable computers, the computing device will generally include a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. Wherein the memory is configured to store program code; the processor is configured to perform the method of the present invention according to instructions in the program code stored in the memory.
By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media store information such as computer readable instructions, data structures, program modules or other data. Communication media typically embody computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and include any information delivery media. Combinations of any of the above are also included within the scope of computer readable media.
The invention also discloses:
a9, the method of any one of A1-8, wherein the step of generating at least one connected region according to pixel connection weights comprises: if the pixel connection right meets a second preset condition, two adjacent pixel points corresponding to the pixel connection right are considered to be communicated; generating at least one connected block by the mutually connected pixel points; and judging the block area of the at least one connected block, and taking the connected block as a connected region when the block area of the connected block is larger than an area threshold value.
A10, the method of A9, wherein the area threshold is determined from the area of the hair region.
A11, the method of A9 or A10, wherein the step of generating a new connected region comprises: merging the pixel points outside the at least one connected region into the adjacent connected region with the maximum gradient value to generate a corresponding new connected region, wherein the pixel points outside the at least one connected region comprise connected blocks that are not merged into a connected region and/or pixel points that do not meet the second preset condition.
A12, the method of any one of A1-11, wherein the step of determining the forward and reverse directions of the hair direction in each new connected region comprises: determining the forward and reverse directions of the hair direction in each new connected region by minimizing the difference between the initial hair direction of each new connected region and the initial hair directions of the new connected regions surrounding it.
A13, the method of A12, wherein minimizing the difference between the initial hair direction of each new connected region and the initial hair directions of its surrounding new connected regions is achieved by the following formula:

min_{ω_1, …, ω_n} Σ_{i=1}^{n} Σ_{R_j ∈ N(R_i)} | ω_i · θ_i − ω_j · θ_j |

wherein n represents the number of new connected regions, θ_i represents the initial hair direction of new connected region R_i, N(R_i) represents the set of new connected regions surrounding R_i, and ω = ±1.
Furthermore, some of the described embodiments are described herein as a method or combination of method elements that can be performed by a processor of a computer system or by other means of performing the described functions. A processor having the necessary instructions for carrying out the method or method elements thus forms a means for carrying out the method or method elements. Further, the elements of the apparatus embodiments described herein are examples of the following apparatus: the apparatus is used to implement the functions performed by the elements for the purpose of carrying out the invention.
As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. Furthermore, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.

Claims (15)

1. A method of calculating hair run in a portrait, the method being adapted to be executed in a computing device, comprising the steps of:
calculating the gradient of each pixel point in the hair area in the portrait;
calculating the hair direction of each pixel point according to the gradient of each pixel point;
generating at least one connected region according to the hairline direction of each pixel point;
merging the pixel points outside the at least one connected region into one connected region adjacent to the pixel points according to a first preset condition to generate a new connected region; and
determining the forward and reverse directions of the hair direction in each new connected region to obtain the hair direction of each new connected region, wherein the forward and reverse directions of the hair direction in the new connected region indicate that the actual hair direction of the new connected region and the initial hair direction of the new connected region are the same direction or opposite directions, and the initial hair direction of the new connected region is the mean value of the hair direction of each pixel point in the new connected region, or the initial hair direction of the new connected region is obtained by carrying out probability statistics on the hair direction of each pixel point in the new connected region.
2. The method of claim 1, wherein the step of calculating the gradient of each pixel point within the hair region in the portrait comprises:
acquiring the hair region in the portrait by a hair region identification method; and
calculating the gradient of each pixel point in the hair region by adopting a preset gradient operator.
3. The method of claim 2, wherein after the step of obtaining the hair direction of each new connected region, the method further comprises the step of:
smoothing the hair direction of each new connected region, and taking the smoothed hair direction as the hair direction of the new connected region.
4. The method of claim 3, wherein the direction of the hair at each pixel point is represented by the angle between the direction of the hair at the pixel point and the x-axis, which is denoted as θ:
θ=arctan(Gy/Gx),
wherein, Gy is the gradient of the pixel point in the y-axis direction, and Gx is the gradient of the pixel point in the x-axis direction.
5. The method of claim 4, wherein the step of generating at least one connected region according to the hair direction of each pixel point comprises:
calculating the pixel connection weight of each pixel point and its adjacent pixel points according to the hair direction of each pixel point; and
generating at least one connected region according to the pixel connection weights.
6. The method of claim 5, wherein the step of calculating the pixel connection weight of a pixel point and its adjacent pixel point comprises:
if the gradient value of the pixel point and the gradient value of the adjacent pixel point are both larger than a gradient threshold value, and the absolute difference between the hair direction of the pixel point and the hair direction of the adjacent pixel point is smaller than an angle threshold value, the pixel connection weight of the pixel point and the adjacent pixel point is 1;
otherwise, the pixel connection weight of the pixel point and the adjacent pixel point is 0.
7. The method of claim 6, wherein the adjacent pixel points of pixel point (y, x) comprise: pixel point (y, x+1) and pixel point (y+1, x).
8. The method of claim 7, wherein the pixel connection weight L_{y,x+1} of pixel point (y, x) and pixel point (y, x+1) is expressed as:

L_{y,x+1} = 1, if G_{y,x} > ε and G_{y,x+1} > ε and |θ_{y,x} − θ_{y,x+1}| < π/2; otherwise L_{y,x+1} = 0;

and the pixel connection weight L_{y+1,x} of pixel point (y, x) and pixel point (y+1, x) is expressed as:

L_{y+1,x} = 1, if G_{y,x} > ε and G_{y+1,x} > ε and |θ_{y,x} − θ_{y+1,x}| < π/2; otherwise L_{y+1,x} = 0;

wherein G_{y,x} is the gradient value of pixel point (y, x), G_{y,x+1} is the gradient value of pixel point (y, x+1), G_{y+1,x} is the gradient value of pixel point (y+1, x), θ_{y,x} is the hair direction of pixel point (y, x), θ_{y,x+1} is the hair direction of pixel point (y, x+1), θ_{y+1,x} is the hair direction of pixel point (y+1, x), ε is the gradient threshold value, and π/2 is the angle threshold value.
9. The method of claim 8, wherein the step of generating at least one connected region from the pixel connection weights comprises:
if a pixel connection weight meets a second preset condition, the two adjacent pixel points corresponding to the pixel connection weight are considered to be connected;
generating at least one connected block from the mutually connected pixel points; and
judging the block area of the at least one connected block, and taking a connected block as a connected region when its block area is larger than an area threshold value.
10. The method of claim 9, wherein the area threshold is determined based on an area of the hair region.
11. The method of claim 10, wherein the step of generating a new connected region comprises:
merging the pixel points outside the at least one connected region into the adjacent connected region with the maximum gradient value to generate a corresponding new connected region,
wherein the pixel points outside the at least one connected region comprise connected blocks that are not merged into a connected region and/or pixel points that do not meet the second preset condition.
12. The method of any one of claims 1-11, wherein the step of determining the forward and reverse directions of the hair direction in each new connected region comprises:
determining the forward and reverse directions of the hair direction in each new connected region by minimizing the difference between the initial hair direction of each new connected region and the initial hair directions of the new connected regions surrounding it.
13. The method of claim 12, wherein minimizing the difference between the initial hair direction of each new connected region and the initial hair directions of its surrounding new connected regions is achieved by the following formula:

min_{ω_1, …, ω_n} Σ_{i=1}^{n} Σ_{R_j ∈ N(R_i)} | ω_i · θ_i − ω_j · θ_j |

wherein n represents the number of new connected regions, θ_i represents the initial hair direction of new connected region R_i, N(R_i) represents the set of new connected regions surrounding R_i, and ω = ±1.
14. A computing device, comprising:
one or more processors; and
a memory;
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 1-13.
15. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by a computing device, cause the computing device to perform any of the methods of claims 1-13.
CN201711240025.9A 2017-11-30 2017-11-30 Method and computing equipment for computing hair trend in portrait Active CN107886516B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711240025.9A CN107886516B (en) 2017-11-30 2017-11-30 Method and computing equipment for computing hair trend in portrait

Publications (2)

Publication Number Publication Date
CN107886516A CN107886516A (en) 2018-04-06
CN107886516B true CN107886516B (en) 2020-05-15

Family

ID=61776306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711240025.9A Active CN107886516B (en) 2017-11-30 2017-11-30 Method and computing equipment for computing hair trend in portrait

Country Status (1)

Country Link
CN (1) CN107886516B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629781B (en) * 2018-04-24 2022-04-22 成都品果科技有限公司 Hair drawing method
CN109087377B (en) * 2018-08-03 2019-11-12 北京字节跳动网络技术有限公司 Method and apparatus for handling image
CN109408653B (en) * 2018-09-30 2022-01-28 叠境数字科技(上海)有限公司 Human body hairstyle generation method based on multi-feature retrieval and deformation
CN109816764B (en) 2019-02-02 2021-06-25 深圳市商汤科技有限公司 Image generation method and device, electronic equipment and storage medium
CN111540021B (en) * 2020-04-29 2023-06-13 网易(杭州)网络有限公司 Hair data processing method and device and electronic equipment
CN113763228B (en) * 2020-06-01 2024-03-19 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN114187633B (en) * 2021-12-07 2023-06-16 北京百度网讯科技有限公司 Image processing method and device, and training method and device for image generation model
CN114758391B (en) * 2022-04-08 2023-09-12 北京百度网讯科技有限公司 Hair style image determining method, device, electronic equipment, storage medium and product

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7418371B2 (en) * 2005-03-30 2008-08-26 Seoul National University Industry Foundation Method and system for graphical hairstyle generation using statistical wisp model and pseudophysical approaches
JP4619112B2 (en) * 2004-12-27 2011-01-26 花王株式会社 Hair shape measurement method
CN102800129A (en) * 2012-06-20 2012-11-28 浙江大学 Hair modeling and portrait editing method based on single image
CN103606186A (en) * 2013-02-02 2014-02-26 浙江大学 Virtual hair style modeling method of images and videos
CN104376597A (en) * 2014-12-05 2015-02-25 北京航空航天大学 Multi-direction constrained hair reconstruction method
CN105405163A (en) * 2015-12-28 2016-03-16 北京航空航天大学 Vivid static-state hair modeling method based on multiple direction fields
CN107103619A (en) * 2017-04-19 2017-08-29 腾讯科技(上海)有限公司 A kind of processing method of hair grain direction, apparatus and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9317970B2 (en) * 2010-01-18 2016-04-19 Disney Enterprises, Inc. Coupled reconstruction of hair and skin
US9262857B2 (en) * 2013-01-16 2016-02-16 Disney Enterprises, Inc. Multi-linear dynamic hair or clothing model with efficient collision handling

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A generative model of human hair for hair sketching; H. Chen, et al.; 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05); 20050725; Vol. 2; full text *
A Generative Sketch Model for Human Hair Analysis and Synthesis; Hong Chen, et al.; IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE; 20060731; Vol. 28, No. 7; pp. 1025-1040 *
Hair Segmentation Using Heuristically-Trained Neural Networks; Wenzhangzhi Guo; https://tspace.library.utoronto.ca/bitstream/1807/72673/3/Guo_Wenzhangzhi_201606_MAS_thesis.pdf; 20160630; full text *
Research on Partition-based Craniofacial Reconstruction Technology and Realistic Rendering Methods; Li Kang; China Doctoral Dissertations Full-text Database, Information Science and Technology; 20150415; No. 04; full text *
Research on Learning-based Facial Expression Animation Generation Methods; Liu Sha; China Masters' Theses Full-text Database, Information Science and Technology; 20130115; No. 01; full text *

Also Published As

Publication number Publication date
CN107886516A (en) 2018-04-06

Similar Documents

Publication Publication Date Title
CN107886516B (en) Method and computing equipment for computing hair trend in portrait
CN107808147B (en) Face confidence discrimination method based on real-time face point tracking
CN109978063B (en) Method for generating alignment model of target object
CN106780512B (en) Method, application and computing device for segmenting image
AU2017421316B2 (en) Systems and methods for verifying authenticity of ID photo
CN110096964B (en) Method for generating image recognition model
CN108898142B (en) Recognition method of handwritten formula and computing device
WO2022199583A1 (en) Image processing method and apparatus, computer device, and storage medium
CN107909016B (en) Convolutional neural network generation method and vehicle system identification method
CN110020600B (en) Method for generating a data set for training a face alignment model
CN108154509B (en) Cancer identification method, device and storage medium
CN109859217B (en) Segmentation method and computing device for pore region in face image
WO2015106700A1 (en) Method and apparatus for implementing image denoising
CN109840912B (en) Method for correcting abnormal pixels in image and computing equipment
US20150363645A1 (en) Method and apparatus for roof type classification and reconstruction based on two dimensional aerial images
CN107749062B (en) Image processing method and device
CN109671061B (en) Image analysis method and device, computing equipment and storage medium
CN110287857B (en) Training method of feature point detection model
CN112150371B (en) Image noise reduction method, device, equipment and storage medium
CN108960012B (en) Feature point detection method and device and electronic equipment
CN111582267B (en) Text detection method, computing device and readable storage medium
CN111357034A (en) Point cloud generation method, system and computer storage medium
Cheng et al. A pre-saliency map based blind image quality assessment via convolutional neural networks
CN111260655A (en) Image generation method and device based on deep neural network model
CN107808394B (en) Image processing method based on convolutional neural network and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant