CN109389562B - Image restoration method and device - Google Patents

Image restoration method and device

Info

Publication number
CN109389562B
CN109389562B (application number CN201811148733.4A)
Authority
CN
China
Prior art keywords
image
repaired
area
face
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811148733.4A
Other languages
Chinese (zh)
Other versions
CN109389562A (en)
Inventor
李传俊
严琼
王昊然
鲍旭
戴立根
Current Assignee
Shenzhen Sensetime Technology Co Ltd
Original Assignee
Shenzhen Sensetime Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Sensetime Technology Co Ltd filed Critical Shenzhen Sensetime Technology Co Ltd
Priority to CN201811148733.4A priority Critical patent/CN109389562B/en
Publication of CN109389562A publication Critical patent/CN109389562A/en
Application granted granted Critical
Publication of CN109389562B publication Critical patent/CN109389562B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/77 Retouching; Inpainting; Scratch removal
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image restoration method and device. The method includes the following steps: acquiring a face region of an image to be processed; segmenting the face region to obtain a first face region of the face region; acquiring the set of pixel points in the first face region whose gray value is greater than a threshold, to obtain a first region to be repaired of the face region; and repairing the first region to be repaired according to the first face region to obtain a repaired image. The first region to be repaired, i.e. the spots and acne marks in the image to be processed, is identified automatically and quickly by a neural network, and the detected first region to be repaired is filled in with the intact surrounding skin of the first face region, thereby repairing the blemishes in the image. The repaired region is thus indistinguishable from the surrounding skin, so the repaired image as a whole looks attractive and natural.

Description

Image restoration method and device
Technical Field
The present application relates to the field of image processing, and in particular, to an image restoration method and apparatus.
Background
At present, the camera performance of mobile terminals such as mobile phones keeps improving, and more and more people use their phones to take selfies. However, not everyone's skin is fair, smooth, and blemish-free; most faces have some spots or acne marks. How to repair these blemishes in an image has therefore become one of the main problems in image beautification.
Spot repair in traditional methods has two difficulties. On the one hand, the user must select each spot manually before it can be repaired, which is inefficient and gives poor results. On the other hand, the repaired area looks different from the adjacent skin, giving a poor visual effect.
Disclosure of Invention
The application provides an image restoration method and device for repairing spots and acne marks in an image.
In a first aspect, an image restoration method is provided, including: acquiring a face region of an image to be processed; segmenting the face region to obtain a first face region of the face region; acquiring the set of pixel points in the first face region whose gray value is greater than a threshold, to obtain a first region to be repaired of the face region; and repairing the first region to be repaired according to the first face region to obtain a repaired image.
In a possible implementation manner, before the acquiring a face region of an image to be processed, the method further includes: carrying out face detection on the image to be processed; and responding to the result of the face detection, and acquiring the face area in the image to be processed.
In another possible implementation manner, the segmenting the face region to obtain a first face region of the face region includes: acquiring a face region mask according to the image to be processed and the face region; performing key point detection on the face region to obtain a second face region containing predetermined key points, wherein the predetermined key points include at least one of eyes, eyebrows, and a mouth; and subtracting the second face region from the face region mask to obtain the first face region.
In another possible implementation manner, the acquiring a face region mask according to the image to be processed and the face region includes: extracting the features of the face region to obtain a skin feature image; classifying the skin feature image according to the features therein to obtain a skin region segmentation map; and determining the face region mask based on the skin region segmentation map and the face region.
In another possible implementation manner, the obtaining a pixel point set in the first face region whose gray value is greater than a threshold value to obtain a first region to be repaired of the face region includes: performing gradient calculation on the first face area to obtain a gradient map of the first face area; and determining a pixel point set of which the gray value is greater than the threshold value in the first face region according to the gradient map to obtain a first region to be repaired in the face region.
In another possible implementation manner, the performing a gradient calculation on the first face region to obtain a gradient map of the first face region includes: performing image down-sampling processing on the first face area to obtain a reduced first face area; performing gradient calculation on the reduced first face area to respectively obtain a gradient in the x direction and a gradient in the y direction; and fusing the gradient in the x direction and the gradient in the y direction to obtain a gradient map of the first face area.
In another possible implementation manner, the determining, according to the gradient map, a pixel point set in the first face region whose gray value is greater than the threshold to obtain a first region to be repaired in the face region includes: finding out pixel points with gradient value changes larger than a set threshold value from the gradient map to obtain a pixel point set with the gray value larger than the threshold value in the first face region; and obtaining a first region to be repaired in the face region according to the pixel point set.
In another possible implementation manner, after obtaining the first region to be repaired in the face region according to the pixel point set, the method further includes: combining the first area to be repaired with the image to be processed to obtain an updated first area to be repaired; comparing the updated first area to be repaired with the peripheral area of the updated first area to be repaired to obtain an optimized first area to be repaired; and repairing the optimized first area to be repaired according to the first face area to obtain a repaired image.
In another possible implementation manner, after obtaining the first region to be repaired in the face region according to the pixel point set, the method further includes: obtaining a mask of the first region to be repaired according to the image to be processed, the second face region, and the face region mask.
In another possible implementation manner, the repairing the first region to be repaired according to the first face region to obtain a repaired image includes: reducing the first region to be repaired and the mask of the first region to be repaired by a preset number of stages to obtain a minimum image to be repaired; repairing the minimum image to be repaired according to the first face region to obtain a repaired minimum image; enlarging the repaired minimum image stage by stage to obtain enlarged images; performing image restoration on each stage of enlarged image until the image size of the enlarged image is the same as that of the image to be processed, obtaining a final repair region; and replacing the corresponding region in the image to be processed with the repair region to obtain a repaired image.
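A minimal sketch of this coarse-to-fine scheme follows. The 2× scale factor per stage, the nearest-neighbour resizing, the toy `mean_fill` repair function, and all names are assumptions; the text specifies only the pattern: shrink, repair the minimum image, then enlarge and re-repair stage by stage.

```python
import numpy as np

def downsample2(img):
    # drop every other row/column (a stand-in for proper image reduction)
    return img[::2, ::2]

def upsample2(img, shape):
    # nearest-neighbour enlargement back to `shape` (the interpolation
    # method is an assumption; the text does not fix it)
    out = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return out[:shape[0], :shape[1]]

def coarse_to_fine_repair(img, mask, repair_fn, levels=2):
    """Shrink the image and repair mask `levels` times, repair the minimum
    image, then enlarge stage by stage, re-repairing at each scale."""
    imgs, masks = [img], [mask]
    for _ in range(levels):
        imgs.append(downsample2(imgs[-1]))
        masks.append(downsample2(masks[-1]))
    result = repair_fn(imgs[-1], masks[-1])       # repair the minimum image
    for lvl in range(levels - 1, -1, -1):         # enlarge step by step
        result = upsample2(result, imgs[lvl].shape)
        merged = np.where(masks[lvl], result, imgs[lvl])
        result = repair_fn(merged, masks[lvl])    # re-repair at this scale
    return result

def mean_fill(im, m):
    # toy repair: set flagged pixels to the mean of the intact pixels
    out = im.copy()
    if (~m).any():
        out[m] = im[~m].mean()
    return out

img = np.full((8, 8), 50.0)
img[2, 2] = 250.0                                 # one "spot"
mask = np.zeros((8, 8), dtype=bool)
mask[2, 2] = True
result = coarse_to_fine_repair(img, mask, mean_fill)
```

Working at the smallest scale first keeps the expensive search cheap; the per-stage re-repair then removes the blockiness that plain upsampling would leave.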
In another possible implementation manner, the repairing the minimum image to be repaired according to the first face region to obtain a repaired minimum image includes: performing a block matching search on the region to be repaired of the minimum image to be repaired, wherein the search range is the peripheral area of that region; for a preset number of pixels to be repaired covered by a matched image block, calculating the average value of the preset number of pixels; taking the average value as the pixel value of the pixel to be repaired to obtain a repaired pixel; and obtaining the repaired minimum image from the repaired pixels.
In a second aspect, there is provided an image repair apparatus comprising: the first acquisition unit is used for acquiring a face area of an image to be processed; the segmentation unit is used for segmenting a face region to obtain a first face region of the face region; the second obtaining unit is used for obtaining a pixel point set of which the gray value is greater than a threshold value in the first face area to obtain a first to-be-repaired area of the face area; and the repairing unit is used for repairing the first area to be repaired according to the first face area to obtain a repaired image.
In one possible implementation manner, the image restoration apparatus further includes: the detection unit is used for carrying out face detection on the image to be processed; and the third acquisition unit is used for responding to the result of the face detection and acquiring the face area in the image to be processed.
In another possible implementation manner, the segmentation unit includes: the acquisition subunit is used for acquiring a face area mask according to the image to be processed and the face area; the detection subunit is configured to perform key point detection on the face region to obtain a second face region including predetermined key points, where the predetermined key points include at least one of eyes, eyebrows, and a mouth; and the processing subunit is configured to subtract the second face region from the face region mask to obtain the first face region.
In another possible implementation manner, the obtaining subunit is further configured to: extracting the features of the face region to obtain a skin feature image; distinguishing the skin characteristic image according to the characteristics in the skin characteristic image to obtain a skin area segmentation image; and determining the face region mask based on the skin region segmentation map and the face region.
In another possible implementation manner, the second obtaining unit includes: the calculating subunit is configured to perform gradient calculation on the first face region to obtain a gradient map of the first face region; and the determining subunit is configured to determine, according to the gradient map, a pixel point set in the first face region, where the gray value is greater than the threshold, to obtain a first region to be repaired in the face region.
In yet another possible implementation manner, the calculating subunit is further configured to: performing image down-sampling processing on the first face area to obtain a reduced first face area; performing gradient calculation on the reduced first face area to respectively obtain a gradient in the x direction and a gradient in the y direction; and fusing the gradient in the x direction and the gradient in the y direction to obtain a gradient map of the first face area.
In yet another possible implementation manner, the determining subunit is further configured to: finding out pixel points with gradient value changes larger than a set threshold value from the gradient map to obtain a pixel point set with the gray value larger than the threshold value in the first face region; and obtaining a first region to be repaired in the face region according to the pixel point set.
In yet another possible implementation manner, the determining subunit is further configured to: combining the first area to be repaired with the image to be processed to obtain an updated first area to be repaired; comparing the updated first area to be repaired with the peripheral area of the updated first area to be repaired to obtain an optimized first area to be repaired; and repairing the optimized first region to be repaired according to the first face region to obtain a repaired image.
In another possible implementation manner, the determining subunit is further configured to: obtain a mask of the first region to be repaired according to the image to be processed, the second face region, and the face region mask.
In another possible implementation manner, the repair unit includes: a reduction subunit, configured to reduce the first region to be repaired and the mask of the first region to be repaired by a preset number of stages to obtain a minimum image to be repaired; a repair subunit, configured to repair the minimum image to be repaired according to the first face region to obtain a repaired minimum image; and an enlargement subunit, configured to enlarge the repaired minimum image stage by stage to obtain enlarged images, and further configured to perform image restoration on each stage of enlarged image until the image size of the enlarged image is the same as that of the image to be processed, obtaining a final repair region; and a replacement subunit, configured to replace the corresponding region in the image to be processed with the repair region to obtain a repaired image.
In yet another possible implementation manner, the repair subunit is further configured to: perform a block matching search on the region to be repaired of the minimum image to be repaired, wherein the search range is the peripheral area of that region; for a preset number of pixels to be repaired covered by a matched image block, calculate the average value of the preset number of pixels; take the average value as the pixel value of the pixel to be repaired to obtain a repaired pixel; and obtain the repaired minimum image from the repaired pixels.
In a third aspect, there is provided an image restoration apparatus including a processor and a memory. The processor is configured to support the apparatus in performing the corresponding functions of the method of the first aspect. The memory is configured to couple with the processor and stores the programs (instructions) and data necessary for the apparatus. Optionally, the apparatus may further include an input/output interface for supporting communication between the apparatus and other apparatuses.
In a fourth aspect, there is provided a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of the above aspects.
In a fifth aspect, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the aspects.
In the image restoration method of the embodiments, a face region of an image to be processed is acquired; the face region is segmented to obtain a first face region of the face region; the set of pixel points in the first face region whose gray value is greater than a threshold is acquired to obtain a first region to be repaired of the face region; and the first region to be repaired is repaired according to the first face region to obtain a repaired image. The method can automatically and quickly identify the first region to be repaired, i.e. the spots and acne marks, in the image to be processed, and fill the detected region with the intact surrounding skin of the first face region, yielding a repaired image from which the blemishes have been removed. The repaired region is indistinguishable from the surrounding skin, so the repaired image as a whole looks attractive and natural.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments or the background art of the present application, the drawings required to be used in the embodiments or the background art of the present application will be described below.
Fig. 1 is a schematic flowchart of an image restoration method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of face detection according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for acquiring a face region in an image according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a method for determining an area to be repaired according to an embodiment of the present application;
fig. 5 is a schematic flow chart of a method for repairing spots and acne marks according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of an image restoration apparatus according to an embodiment of the present application;
fig. 7 is a schematic diagram of a hardware structure of an image restoration apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein may be combined with other embodiments.
The embodiments of the present application will be described below with reference to the drawings.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image repairing method according to an embodiment of the present disclosure.
101. And acquiring a face area of the image to be processed.
This application repairs the spots and acne marks (blemishes) on the face in an image through a neural network, so the subsequent processing is based on the face region of the image to be processed. The image to be processed is input into the neural network; a face detection algorithm extracts face features from it and decides, by comparing the extracted features with preset face features, whether a face exists in the image. If a face exists, a face frame corresponding to the face is obtained according to the face frame features among the preset face features, and the length and width of the face frame are enlarged by a set multiple to obtain the face region. The set multiple can be chosen according to user requirements; an optional value is 1.5. In this embodiment, the face detection algorithm may be an algorithm based on set features or any other common algorithm, which are not listed here.
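The frame-enlargement step can be sketched as follows. The function name `enlarge_face_box`, the clamping to image bounds, and the default image size are assumptions for illustration; the text only specifies scaling the frame's length and width by a set multiple such as 1.5.

```python
def enlarge_face_box(box, scale=1.5, img_w=1920, img_h=1080):
    """Enlarge a detected face frame (x, y, w, h) about its centre by
    `scale`, clamping to the image bounds. Clamping and defaults are
    assumptions; the text only specifies the x1.5 enlargement."""
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0
    new_w, new_h = w * scale, h * scale
    x0 = max(0.0, cx - new_w / 2.0)
    y0 = max(0.0, cy - new_h / 2.0)
    x1 = min(float(img_w), cx + new_w / 2.0)
    y1 = min(float(img_h), cy + new_h / 2.0)
    return (x0, y0, x1 - x0, y1 - y0)

face_region = enlarge_face_box((100, 100, 200, 200))  # → (50.0, 50.0, 300.0, 300.0)
```

Enlarging about the centre ensures the frame, which covers only the defined frame features, grows to contain the whole face.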
102. And segmenting the face area to obtain a first face area of the face area.
Skin region segmentation is performed on the image to be processed to obtain a skin segmentation region. Because the skin segmentation region includes both a face skin region and a torso skin region, a face mask obtained by combining the skin segmentation region with the face region contains only the face skin region, so that subsequent processing can act on the face skin alone. Face key points in the face mask are then detected with a face key point detection algorithm and removed from the face mask, giving the face mask with key points removed, i.e. the first face region. The face key points include at least one of the following: eyes, mouth, eyebrows. For example, the face key point detection algorithm may be SBR (Supervision-by-Registration), AAM (Active Appearance Model), ERT (Ensemble of Regression Trees), MTCNN (Multi-Task Cascaded Convolutional Networks), or the like.
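The mask subtraction in step 102 amounts to a boolean operation on two masks; a toy sketch (the array sizes and patch positions are invented for illustration):

```python
import numpy as np

face_mask = np.ones((6, 6), dtype=bool)       # 1 = facial skin, from segmentation
keypoint_mask = np.zeros((6, 6), dtype=bool)  # eyes / eyebrows / mouth regions
keypoint_mask[1:3, 1:3] = True                # a toy "eye" patch
keypoint_mask[4, 2:5] = True                  # a toy "mouth" patch

# Step 102: first face region = face mask minus the key point regions.
first_face_region = face_mask & ~keypoint_mask
```

Removing the key point regions keeps eyes, eyebrows, and mouth from being mistaken for blemishes in the later thresholding step.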
103. And acquiring a pixel point set of which the gray value is greater than a threshold value in the first face region to obtain a first region to be repaired of the face region.
Because spots and acne marks look very different from the surrounding skin, their gray values usually differ from those of normal skin by an order of magnitude or more; the gray value at a spot is typically much larger than that of normal skin. For example, the gray values of normal skin may lie in the range [0, 100], whereas the gray value at a spot is often greater than 1000. The first region to be repaired in the face region can therefore be determined by setting a threshold (an optional range is [300, 700]): the set of pixel points whose gray value is less than or equal to the threshold is normal skin, and the set of pixel points whose gray value is greater than the threshold is the first region to be repaired. The threshold can be set according to the desired effect and experience.
Since the gray value at a spot differs greatly from that of normal skin, the gradient values derived from the gray values likewise differ greatly between a spot and the skin around it. In general, where an image contains an edge or an abrupt change in content, the gradient values in that region are large, whereas in smooth parts of the image the gradient values are small. Finding the pixel points with large gradient values in the face gradient map is therefore equivalent to finding the pixel points whose gray value is greater than the threshold. Finally, the first region to be repaired of the face region, i.e. the spot region, is obtained from the set of pixel points in the first face region whose gray value is greater than the threshold.
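One plausible reading of this gradient-based detection, sketched with simple finite differences (the helper name `blemish_mask`, the difference operator, and the toy values are assumptions; the embodiment leaves the gradient operator unspecified):

```python
import numpy as np

def blemish_mask(gray, thresh):
    """Forward-difference gradients in x and y, fused into a magnitude
    map; pixels whose response exceeds `thresh` form the region to be
    repaired (a sketch of step 103 under the assumptions stated above)."""
    g = gray.astype(float)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:] = np.diff(g, axis=1)   # gradient in the x direction
    gy[1:, :] = np.diff(g, axis=0)   # gradient in the y direction
    grad = np.hypot(gx, gy)          # fuse the two directions
    return grad > thresh

skin = np.full((8, 8), 80.0)         # smooth "skin"
skin[3:5, 3:5] = 200.0               # a 2x2 "spot"
mask = blemish_mask(skin, thresh=100.0)
```

The smooth skin produces near-zero gradients, so only the abrupt spot boundary survives the threshold, matching the edge/smooth-region argument above.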
104. And repairing the first area to be repaired according to the first face area to obtain a repaired image.
In this embodiment, a mask of the first region to be repaired (the spots and acne marks) is obtained from the first region to be repaired and the first face region. The detected first region to be repaired is then filled in with the intact surrounding skin of the first face region, giving the repaired image, i.e. the image with the blemishes removed.
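The fill-with-surrounding-skin idea can be sketched as a windowed mean over intact pixels. The helper name `fill_blemish`, the window radius, and the plain mean fill are assumptions; a later embodiment describes a block-matching variant of this step.

```python
import numpy as np

def fill_blemish(gray, repair_mask, radius=2):
    """Replace each flagged pixel by the mean of the intact (non-flagged)
    pixels in a small window around it (a toy sketch of step 104)."""
    out = gray.astype(float).copy()
    h, w = gray.shape
    for r, c in zip(*np.nonzero(repair_mask)):
        r0, r1 = max(0, r - radius), min(h, r + radius + 1)
        c0, c1 = max(0, c - radius), min(w, c + radius + 1)
        window = gray[r0:r1, c0:c1].astype(float)
        intact = ~repair_mask[r0:r1, c0:c1]
        if intact.any():
            out[r, c] = window[intact].mean()
    return out

skin = np.full((7, 7), 80.0)          # smooth "skin"
skin[3, 3] = 220.0                    # one "spot"
mask = np.zeros((7, 7), dtype=bool)
mask[3, 3] = True
repaired = fill_blemish(skin, mask)   # → repaired[3, 3] == 80.0
```

Because the fill value comes only from intact neighbouring skin, the repaired pixel matches its surroundings, which is the stated goal of the method.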
In the image restoration method of this embodiment, a face region of the image to be processed is acquired; the face region is segmented to obtain a first face region; the set of pixel points in the first face region whose gray value is greater than a threshold is acquired to obtain a first region to be repaired; and the first region to be repaired is repaired according to the first face region to obtain a repaired image. The method automatically and quickly identifies the first region to be repaired, i.e. the spots and acne marks, in the image to be processed, and fills the detected region with the intact surrounding skin of the first face region, yielding the image with the blemishes removed. The repaired region is indistinguishable from the surrounding skin, so the repaired image as a whole looks attractive and natural.
Referring to fig. 2, fig. 2 is a schematic view illustrating a flow of face detection according to an embodiment of the present disclosure.
201. And acquiring an image to be processed.
In this embodiment, the image to be processed may be a stored image acquired from a terminal (e.g., a mobile phone, digital camera, or tablet computer) and input into the neural network, or an image captured by a camera and input into the neural network.
202. And carrying out face detection on the image to be processed.
Any mature face detection algorithm may determine whether a face region exists in the image to be processed, and a face frame is obtained for each face region found. Specifically, the features of the face frame are first defined: a convolution operation is performed on the image to be processed to extract a face feature image; a further convolution operation converts several local feature maps in the face feature image into corresponding low-dimensional features; and the neural network then judges and classifies these low-dimensional features, such as eyes, nose, mouth, eyebrows, and face contour points. The neural network selects face frame candidate regions according to the defined face features and outputs a corresponding frame, i.e. the face frame. The face frame contains only the face frame features defined above and does not necessarily contain the whole face region. When there are n faces in the image, n face frames are extracted correspondingly. It should be noted that the face detection algorithm is not specifically limited in the embodiments of the present application; the above process may be completed by any face detection algorithm.
203. And responding to the result of the face detection, and acquiring a face area of the image to be processed.
In this embodiment, if n = 0, i.e. no face is detected, the image to be processed is not processed further and is output by the neural network as is; if n > 0, n face frames are obtained in response to the face detection result.
In this embodiment, the face frame features of the image to be processed are extracted to determine whether a face exists in the image, so that images without faces are not processed further, which improves processing efficiency.
Referring to fig. 3, fig. 3 is a flowchart illustrating a method for obtaining a face region in an image according to an embodiment of the present disclosure.
301. And acquiring a face area of the image to be processed.
In response to the face detection result in 203, when n > 0, n face frames are obtained. As noted in 202, a face frame does not necessarily contain the whole face region, so the length and width of each face frame are enlarged by a set multiple; the enlarged face frame then contains the whole face region, and the area inside the enlarged frame is a face region of the image to be processed. The n face frames correspond one-to-one to n face regions of the image to be processed. The set multiple can be chosen according to user requirements; an optional value is 1.5.
302. And acquiring a face area mask according to the image to be processed and the face area.
Since the face region may contain non-facial-skin image content, such as hair at the top of the head or background around the head, this content must be removed before repairing the facial blemishes in the image to be processed.
Skin region segmentation is performed on the face region by a skin segmentation network. The skin segmentation network is a neural network consisting of an encoding layer, a decoding layer, and a softmax layer, where the encoding layer comprises a convolutional layer, a BatchNorm layer, and a ReLU activation layer.
The encoding layer performs convolution operations on the image to be processed stage by stage to complete the encoding and extract features. Specifically, the convolutional layer convolves the image to be processed: a convolution kernel slides over the image, the pixel values under the kernel are multiplied by the corresponding kernel values, and the products are summed to give the pixel value at the position under the centre of the kernel; sliding the kernel over the whole image extracts the corresponding features. During training and in actual use, the sliding step size (stride) of the convolution kernel is set to 2.
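The slide/multiply/sum operation with stride 2 described above can be written out directly. This is a sketch with a toy mean kernel, not the network's learned weights; as in most deep-learning frameworks, the operation is technically cross-correlation.

```python
import numpy as np

def conv2d(img, kernel, stride=2):
    """Valid-mode sliding window: multiply the pixels under the kernel by
    the kernel values, sum, and write the result at the output position;
    stride 2 as in the text above."""
    kh, kw = kernel.shape
    oh = (img.shape[0] - kh) // stride + 1
    ow = (img.shape[1] - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = img[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = (patch * kernel).sum()
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
feat = conv2d(img, np.ones((3, 3)) / 9.0)   # 6x6 input -> 2x2 feature map
```

The stride-2 slide halves each spatial dimension per stage, which is how the encoding reduces the image size while keeping the main content.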
In addition, the feature content and semantic information extracted by each convolution layer differ: the encoding process abstracts the image features step by step and discards relatively minor features, so the smaller the feature map extracted at a later stage, the more concentrated its content and semantic information. The multi-layer convolution operates on the image to be processed stage by stage, extracting intermediate features and finally producing an encoded feature map of fixed size. In this way the image size can be reduced while the main content information of the image (i.e. the feature map) is obtained, which reduces the amount of calculation and improves the calculation speed of the system.
A BatchNorm layer is connected after the convolution layer; it adds trainable parameters to complete the normalization of the data, which also accelerates training, removes correlation in the data and highlights the distribution differences between features. The data is then processed by the ReLU activation layer, which increases the nonlinearity of the data, mapping the current feature space into another space so that the data can be classified better, and which largely alleviates the vanishing-gradient problem of the image segmentation network during learning.
The decoding layer enlarges the features step by step, finally producing a skin feature image of size 256 × 256. Specifically, the skin feature image contains facial skin features, trunk skin features and background features, where trunk skin refers to all skin below the neck of the human body.
The skin feature image is classified according to the features it contains to obtain a skin region segmentation map. Specifically, the softmax layer predicts the region segmentation: for the features of different regions in the skin feature image, it predicts the image content of each region and gives probability values for the region being facial skin, trunk skin or background. In actual use, after the decoded skin feature image is input into the softmax layer, the layer gives the probabilities that each feature is a facial skin feature, a trunk skin feature or a background feature, producing a 256 × 256 probability map in which every pixel carries 3 probability values. For each pixel, the class with the maximum probability is selected as that pixel's feature, and combined with the image to be processed this yields the final skin region segmentation map, i.e. the image to be processed is divided into a facial skin region, a trunk skin region and a background region.
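The per-pixel decision described above, i.e. taking the class with the largest of the three probabilities, can be sketched as follows (the tiny 2 × 2 probability map and the class ordering are illustrative assumptions):

```python
import numpy as np

# Hypothetical 2x2 probability map with 3 channels per pixel:
# channel 0 = facial skin, 1 = trunk skin, 2 = background
probs = np.array([[[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]],
                  [[0.2, 0.2, 0.6], [0.5, 0.3, 0.2]]])

# For each pixel, keep the label of the channel with maximum probability.
segmentation = probs.argmax(axis=-1)
print(segmentation)
```

Each pixel of `segmentation` holds 0, 1 or 2, i.e. the facial-skin / trunk-skin / background label of that pixel.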
And determining all face skin areas in the image to be processed based on the skin area segmentation image, combining the face skin areas with the face areas, and taking the overlapped area of the face skin areas and the face areas to obtain a face area mask.
303. And carrying out key point detection on the face area to obtain a second face area containing preset key points.
When speckle and pox spots are detected, the eyebrows, mouth and eyes are easily misdetected as spots. Therefore, before the spots within the face region are processed, the eyebrow, mouth and eye regions need to be detected and removed from the face mask, so as to improve the accuracy of detecting the region to be repaired.
Specifically, the face keypoint detection process is similar to the face detection in 202. A convolution operation is performed on the facial skin mask image to extract the face keypoints, where the face keypoints are the eyes, eyebrows and mouth; the region formed by these keypoints is the second face region.
304. And subtracting the second face area from the face area mask to obtain the first face area.
And removing corresponding key point regions in the mask of the face region according to the extracted second face region to obtain the face skin mask with the key points removed, namely the first face region.
305. And performing image down-sampling processing on the first face area to obtain a reduced first face area.
Image down-sampling is performed on the first face region so that the longest edge of the image is reduced to within 250 pixels, giving the reduced first face region (250 here refers to the number of pixels). The down-sampling works as in the following example: for an M × N image, S-fold down-sampling divides the image into cells of S × S pixels, so the result has size (M/S) × (N/S); the average (or maximum) of the pixel values within each cell is then taken as the corresponding output pixel, yielding a first face region of the target size.
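The S-fold cell-averaging example above can be sketched directly (a minimal NumPy version; the 4 × 4 input and S = 2 are illustrative):

```python
import numpy as np

def downsample(image, s):
    """S-fold down-sampling: split the image into s x s cells and
    take the average of each cell as one output pixel."""
    m, n = image.shape
    return image.reshape(m // s, s, n // s, s).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
print(downsample(img, 2))  # (M/S) x (N/S) = 2 x 2 result
```

Replacing `.mean(...)` with `.max(...)` gives the maximum-value variant the text also mentions.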
In the embodiment, the skin segmentation processing is carried out on the image to be processed to determine the face skin area, and then the face area and the position of the key point of the face are combined to obtain the face skin mask with the key point removed, so that the subsequent processing can be only carried out on the mask part of the face area, and the area except the mask of the non-face area does not need to be subjected to any processing, thereby reducing the calculated amount of the subsequent processing and improving the operation speed of a neural network; and then, carrying out downsampling treatment on the face skin mask with the key points removed to obtain a reduced mask, and further reducing the calculation amount of subsequent treatment.
Referring to fig. 4, fig. 4 is a schematic flowchart illustrating a method for determining a region to be repaired according to an embodiment of the present disclosure.
401. And performing gradient calculation on the reduced first face area to respectively obtain the gradient in the x direction and the gradient in the y direction.
The gradient can be obtained by differentiation, and on an image it amounts to extracting the edges (transverse, longitudinal, diagonal, etc.); the image gradient can thus be obtained from the first derivative as described above. However, digital images are stored in matrix form: differentiating an image is akin to differentiating a discrete surface rather than the continuous lines or curves of mathematical theory, so a digital image cannot be differentiated directly and the derivative must be approximated. In the embodiment of the present application, the Scharr operator is used to perform the gradient calculation on the reduced first face region obtained in step 305, giving the gradients in the x and y directions; other gradient calculation methods may also be used and are not listed here.
Let the image function be f(x, y). The gradient at any point (x, y) of the image is a vector with both magnitude and direction; let the gradients in the x and y directions be G_x and G_y respectively. Then:

∇f(x, y) = [G_x, G_y]^T = [∂f/∂x, ∂f/∂y]^T … equation (1)

Since the image is stored in the computer as a digital image, i.e. a discrete digital signal rather than a continuous one, the derivatives of the digital image are approximated by differences, so the image gradients can be written as:

G_x = f(x, y) − f(x−1, y),  G_y = f(x, y) − f(x, y−1) … equation (2)
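A sketch of equation (2)'s backward differences (the embodiment itself uses the Scharr operator, which weights a 3 × 3 neighbourhood, but the difference approximation below is the idea the formulas express; the toy image is illustrative):

```python
import numpy as np

def gradients(f):
    """Backward differences: G_x = f(x,y)-f(x-1,y), G_y = f(x,y)-f(x,y-1)."""
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, 1:] = f[:, 1:] - f[:, :-1]   # difference along x (columns)
    gy[1:, :] = f[1:, :] - f[:-1, :]   # difference along y (rows)
    return gx, gy

f = np.array([[0., 0., 0.],
              [0., 9., 0.],
              [0., 0., 0.]])           # one bright "spot" in a flat region
gx, gy = gradients(f)
print(np.abs(gx) + np.abs(gy))         # large values only around the spot
```

The flat background produces zero gradient; only the abrupt spot produces large values, which is exactly the property step 403 exploits.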
402. And fusing the gradient in the x direction and the gradient in the y direction to obtain a gradient map of the first face area.
The gradient in the x direction and the gradient in the y direction are fused to obtain the gradient map of the first face region obtained in 304.
403. And determining a pixel point set of which the gray value is greater than a threshold value in the first face region according to the gradient map to obtain a first region to be repaired in the face region.
Where an edge exists in the image, or the content at some position changes abruptly, the gradient value is large; conversely, the smooth parts of the image have correspondingly small gradient values. Specifically, an abrupt change of edge or content is to be understood as a jump in the gray value at some position in the image (in the embodiment of the present application, a speckle or pox spot), which necessarily produces a large gradient value.
The speckle and pox spots on a face differ greatly from the surrounding skin: spots are generally dark brown and pox marks generally red, both dark pigments differing markedly from the surrounding skin color. In the digital image this appears as a large difference between the gray value of a spot and the gray value of the skin around it, and correspondingly the gradient value computed from the gray values shows a large jump compared with the gradient of the surrounding skin. In addition, pox spots are generally accompanied by swelling, i.e. they bulge, and this raised shape likewise appears in the digital image as a large jump in gradient compared with the surrounding skin. Therefore, the pixel points whose gradient change exceeds the threshold are found in the gradient map, giving the set of pixel points whose gray value is greater than the threshold in the first face region; this pixel set is the speckle and pox spot region. The threshold can be set according to user requirements.
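Selecting the pixel set whose gradient change exceeds the threshold can be sketched as follows (the gradient values and the threshold are illustrative; per the text, the threshold is user-settable):

```python
import numpy as np

grad = np.array([[0.1, 0.2, 0.1],
                 [0.2, 5.0, 4.8],      # abrupt values where a spot sits
                 [0.1, 0.3, 0.2]])
threshold = 1.0                        # illustrative user-set threshold

spot_mask = grad > threshold           # candidate speckle/pox pixels
print(np.argwhere(spot_mask))          # coordinates of the pixel set
```

The printed coordinates are the candidate spot region passed on to step 501 for verification and repair.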
It should be noted that the Scharr operator is not designed arbitrarily but derives from mathematical derivative theory; the specific derivation is not detailed here.
The method for deriving the image and determining the speckle-pox points through the neural network can accurately determine the speckle-pox points in the mask, and the whole process is automatically completed by the neural network without manual participation, so that the method is high in efficiency and high in accuracy.
Referring to fig. 5, fig. 5 is a schematic flow chart of a speckle-pox spot repairing method according to an embodiment of the present application.
501. And combining the first area to be repaired with the image to be processed to obtain an updated first area to be repaired.
Since the speckle and pox spots obtained in 403 may include falsely detected spots, the false detections need to be removed before the spots are repaired. The first region to be repaired obtained in 403 is combined with the image to be processed obtained in 201 to locate all speckle and pox spots in the image to be processed, giving the updated first region to be repaired. The pixel values of the points in the updated first region to be repaired are compared with the pixel values in its surrounding area: if the difference between a spot's pixel value and the surrounding skin is large, the spot is regarded as a false detection; otherwise, if the difference is not large, it is regarded as a stable spot. Finally, all false spots are removed and all stable spots are retained, giving the optimized first region to be repaired.
It should be noted that all operations in 501 are performed in the image to be processed, and therefore the resulting speckle pox spot may include eyes, eyebrows, and mouth.
502. And obtaining a first mask of the area to be repaired according to the image to be processed, the second face area and the face area mask.
Since the stable speckle and pox spots obtained above may include the eyes, eyebrows and mouth, these need to be removed before the spots are repaired. The image to be processed, the face region mask obtained in 302 and the second face region obtained in 303 are combined, and the common area of the three is taken to obtain the first to-be-repaired-region mask.
503. And reducing the first region to be repaired and the first to-be-repaired-region mask by a predetermined number of stages to obtain a minimum image to be repaired.
After the first to-be-repaired-region mask is obtained through the previous steps, directly performing the spot repair on it would generate a huge amount of calculation. By reducing the mask pyramid-wise to a target dimension, the requirements of subsequent processing are met while the amount of calculation is greatly reduced.
The image pyramid is a multi-scale representation of an image, an effective though conceptually simple structure for interpreting an image at multiple resolutions. Here the pyramid is a series of images of progressively lower resolution derived from the image being reduced: the image is reduced by down-sampling layer by layer until a termination condition is reached.
The specific method for obtaining the (N+1)-th pyramid layer from the N-th layer by image down-sampling is as follows: first a convolution with a step size of 2 is performed on the layer-N image (the convolution operation here can refer to the convolution operation in the encoding process), and then all even columns and even rows are removed; the resulting image is the layer-(N+1) image. Obviously, the size of the layer-(N+1) image is only one quarter of the layer-N image.
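One pyramid-reduction layer can be sketched as follows. This is an illustrative approximation: the text's stride-2 convolution followed by row/column removal is stood in for by a stride-1 blur with a small averaging kernel followed by keeping every other row and column, which yields the same one-quarter size:

```python
import numpy as np

def pyr_down(image):
    """One pyramid layer: blur, then keep every other row and column,
    so the output has one quarter of the input's pixels."""
    kernel = np.ones((2, 2)) / 4.0            # illustrative smoothing kernel
    h, w = image.shape
    blurred = np.zeros((h - 1, w - 1))
    for i in range(h - 1):
        for j in range(w - 1):
            blurred[i, j] = np.sum(image[i:i + 2, j:j + 2] * kernel)
    return blurred[::2, ::2]                  # drop every other row/column

img = np.arange(64, dtype=float).reshape(8, 8)
print(pyr_down(img).shape)  # (4, 4): one quarter of the pixels
```

Iterating `pyr_down` produces the layer-by-layer reduction described, stopping before the image would shrink to 3 × 3 or below.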
The down-sampling is iterated to obtain the minimum image to be repaired. In short, image down-sampling reduces the size of the image to be processed, but it also loses information of the image. It should be noted that the minimum image size is kept greater than 3 × 3.
The position corresponding to each speckle or pox spot is found in the reduced first to-be-repaired-region mask. For each spot, m points whose pixel values are most similar to the spot's are searched for in its vicinity by the PatchMatch algorithm, the average of the m pixel values is calculated, and the average replaces the original pixel value of the spot, giving the repaired minimum image, where m is a positive integer; optionally, m is 9.
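The replace-with-average repair of a single spot pixel can be sketched as follows (a greatly simplified stand-in for the PatchMatch search: it ranks candidate pixels by value distance only; m = 9 follows the text, while the toy image and the mask rule are illustrative):

```python
import numpy as np

def repair_pixel(image, spot_mask, y, x, m=9):
    """Replace a spot pixel with the mean of the m non-spot pixels whose
    values are closest to it (simplified stand-in for the PatchMatch search)."""
    candidates = image[~spot_mask]                        # skin pixels only
    order = np.argsort(np.abs(candidates - image[y, x]))  # closest values first
    return candidates[order[:m]].mean()

img = np.full((5, 5), 128.0)           # uniform "skin"
img[2, 2] = 40.0                       # one dark speckle/pox spot
mask = img < 100                       # illustrative spot mask
img[2, 2] = repair_pixel(img, mask, 2, 2, m=9)
print(img[2, 2])                       # 128.0: the spot blends into the skin
```

A real PatchMatch implementation compares whole patches and restricts the search to the spot's vicinity; the averaging-and-replace step shown here is the part the text describes.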
504. And amplifying the repaired minimum image step by step to obtain an enlarged image.
Pyramid enlargement is the inverse of pyramid reduction and similar in principle, except that it is realized by up-sampling the image layer by layer. The specific method for obtaining the (N+1)-th pyramid layer from the N-th layer by image up-sampling is as follows: the image is first expanded to twice its size in each direction, with the rows and columns that need to be supplemented filled with zeros; then a convolution is performed with the same kernel as used in the image down-sampling to obtain approximate values for the newly added pixels. The image thus obtained is the enlarged image.
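One pyramid-enlargement layer (zero-fill the new rows and columns, then convolve so the inserted pixels get approximate values) can be sketched as follows; the 2 × 2 summing kernel is an illustrative stand-in for the kernel used in down-sampling:

```python
import numpy as np

def pyr_up(image):
    """Double each dimension: insert zero rows and columns, then convolve
    so the newly inserted pixels receive approximate values."""
    h, w = image.shape
    up = np.zeros((2 * h, 2 * w))
    up[::2, ::2] = image               # original pixels; zeros in the new slots
    padded = np.pad(up, 1)
    out = np.zeros_like(up)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # 2x2 sum pulls each inserted pixel's value from its neighbours
            out[i, j] = padded[i:i + 2, j:j + 2].sum()
    return out

print(pyr_up(np.ones((2, 2))).shape)  # (4, 4): twice the size in each direction
```

Applying `pyr_up` repeatedly gives the stage-by-stage enlargement used in 504 and 505.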
The repaired minimum image is enlarged by pyramid enlargement to obtain the enlarged image. It should be noted that the number of pyramid enlargement layers is greater than or equal to 1; that is, the repaired minimum image needs to be enlarged M times, yielding M enlarged images in sequence, where M is greater than or equal to 1 and the enlarged image obtained at each layer is different.
505. And repairing the image of each enlarged image stage by stage until the image size of the enlarged image is the same as that of the image to be processed, and obtaining the final repaired area.
The enlarged image is repaired by the PatchMatch algorithm, the repaired enlarged image is pyramid-enlarged to obtain the first enlarged image, the first enlarged image is repaired through the PatchMatch search process, and the enlargement continues; this stage-by-stage enlargement and repair is repeated until the repaired area obtained after enlargement has the same size as the image to be processed.
It should be noted that the range searched by the algorithm does not include the stable speckle and pox spot regions. By repairing every individual pixel of every spot, the method completes the repair of all stable speckle and pox spots.
506. And replacing the corresponding area in the image to be processed with the repair area to obtain the repaired image.
And replacing the corresponding area (namely the speckle point area) in the image to be processed by using the repairing area, namely completing the repairing of all speckle points in the image to be processed and obtaining the repaired image.
In the embodiment of the disclosure, the speckle and pox points of the image can be automatically repaired through the neural network, so that the repaired image is real and natural.
In this embodiment, the first to-be-repaired-region mask is reduced layer by layer, which further reduces the amount of calculation and improves processing efficiency and speed. The image is then enlarged and repaired layer by layer, the pixel value of each speckle or pox spot being replaced by the average pixel value of the surrounding skin to obtain a repaired spot mask, and finally the repaired mask replaces the spots in the image to be processed to complete the repair. As a result the transition from the centre of each spot to its edge is smooth, the difference between the repaired spot and its surroundings is very small, and the overall effect after repair is very natural.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an image restoration apparatus according to an embodiment of the present application, where the apparatus 1000 includes: a first acquisition unit 11, a segmentation unit 12, a second acquisition unit 13, a repair unit 14, a detection unit 15, and a third acquisition unit 16, wherein:
a first obtaining unit 11, configured to obtain a face region of an image to be processed;
the segmentation unit 12 is configured to segment a face region to obtain a first face region of the face region;
a second obtaining unit 13, configured to obtain a pixel point set in the first face region, where a gray value is greater than a threshold, to obtain a first region to be repaired of the face region;
and the repairing unit 14 is configured to repair the first area to be repaired according to the first face area to obtain a repaired image.
The detection unit 15 is used for carrying out face detection on the image to be processed;
a third obtaining unit 16, configured to obtain the face region in the image to be processed in response to the result of face detection.
Further, the dividing unit 12 includes: an obtaining subunit 121, configured to obtain a face region mask according to the image to be processed and the face region; a detection subunit 122, configured to perform keypoint detection on the face region to obtain a second face region containing predetermined keypoints, where the predetermined keypoints include at least one of eyes, eyebrows and a mouth; and a processing subunit 123, configured to subtract the second face region from the face region mask to obtain the first face region.
Further, the obtaining subunit 121 is further configured to: extracting the features of the face region to obtain a skin feature image; distinguishing the skin characteristic image according to the characteristics in the skin characteristic image to obtain a skin area segmentation map; and determining the face region mask based on the skin region segmentation map and the face region.
Further, the second acquiring unit 13 includes: a calculating subunit 131, configured to perform gradient calculation on the first face region to obtain a gradient map of the first face region; and the determining subunit 132 is configured to determine, according to the gradient map, a pixel point set in the first face region whose gray value is greater than the threshold, to obtain a first region to be repaired in the face region.
Further, the calculating subunit 131 is further configured to: performing image down-sampling processing on the first face area to obtain a reduced first face area; performing gradient calculation on the reduced first face area to respectively obtain a gradient in the x direction and a gradient in the y direction; and fusing the gradient in the x direction and the gradient in the y direction to obtain a gradient map of the first face area.
Further, the determining subunit 132 is further configured to: finding out pixel points with gradient value changes larger than a set threshold value from the gradient map to obtain a pixel point set with the gray value larger than the threshold value in the first face region; and obtaining a first region to be repaired in the face region according to the pixel point set.
Further, the determining subunit 132 is further configured to: combining the first area to be repaired with the image to be processed to obtain an updated first area to be repaired; comparing the updated first area to be repaired with the peripheral area of the updated first area to be repaired to obtain an optimized first area to be repaired; and repairing the optimized first area to be repaired according to the first face area to obtain a repaired image.
Further, the determining subunit 132 is further configured to: and obtaining a first mask of the area to be repaired according to the image to be processed, the second face area and the face area mask.
Further, the repair unit 14 includes: a reducing subunit 141, configured to reduce the first region to be repaired and the first mask of the region to be repaired by a predetermined number of stages to obtain a minimum image to be repaired; a repairing subunit 142, configured to repair the minimum image to be repaired according to the first face region to obtain a repaired minimum image; an amplifying subunit 143, configured to enlarge the repaired minimum image stage by stage to obtain an enlarged image; the amplifying subunit 143 is further configured to perform image restoration on each stage of the enlarged image step by step until the image size of the enlarged image is the same as the size of the image to be processed, so as to obtain a final restored area; and a replacing subunit 144, configured to replace the corresponding area in the image to be processed with the repair area to obtain a repaired image.
Further, the repair subunit 142 is further configured to: performing block matching search on the area to be repaired of the minimum graph to be repaired, wherein the search range is the peripheral area of the area to be repaired of the minimum graph to be repaired; calculating the average value of the pixels with the preset number according to the preset number of the pixels to be repaired covered by the image block; taking the average value as the pixel value of the pixel point to be repaired to obtain the repaired pixel point; and obtaining a repaired minimum image according to each repaired pixel point.
According to the image repairing method provided by the application, the repair of the speckle and pox spots in an image can be realized automatically. Reducing the image to be processed layer by layer greatly reduces the amount of calculation and improves the processing speed; enlarging and repairing the image layer by layer finally gives the repaired spot mask, so that the transition from the centre of each spot to its edge is smoother and less abrupt. Since the method replaces each spot's pixel value with the average of the surrounding pixel values, the repaired spot differs very little from its surroundings and the overall effect after repair is very natural.
Fig. 7 is a schematic diagram of a hardware structure of an image restoration apparatus according to an embodiment of the present disclosure. The repair device 2000 includes a processor 21 and may further include an input device 22, an output device 23, and a memory 24. The input device 22, the output device 23, the memory 24 and the processor 21 are connected to each other via a bus.
The memory includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a compact disc read-only memory (CD-ROM), and is used for storing instructions and data.
The input means are for inputting data and/or signals and the output means are for outputting data and/or signals. The output means and the input means may be separate devices or may be an integral device.
The processor may include one or more processors, for example, one or more Central Processing Units (CPUs), and in the case of one CPU, the CPU may be a single-core CPU or a multi-core CPU.
The memory is used to store program codes and data of the network device.
The processor is used for calling the program codes and data in the memory and executing the steps in the method embodiment. For details, reference may be made to the description in the method embodiments, which are not repeated herein.
It will be appreciated that fig. 7 only shows a simplified design of the image restoration apparatus. In practical applications, the image restoration devices may further include necessary other components, including but not limited to any number of input/output devices, processors, controllers, memories, etc., and all image restoration devices that can implement the embodiments of the present application are within the scope of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the division of the unit is only one logical function division, and other division may be implemented in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. The shown or discussed mutual coupling, direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The procedures or functions according to the embodiments of the present application are wholly or partially generated when the computer program instructions are loaded and executed on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on or transmitted over a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, fiber optic, digital Subscriber Line (DSL)), or wirelessly (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that includes one or more available media. The usable medium may be a read-only memory (ROM), or a Random Access Memory (RAM), or a magnetic medium, such as a floppy disk, a hard disk, a magnetic tape, a magnetic disk, or an optical medium, such as a Digital Versatile Disk (DVD), or a semiconductor medium, such as a Solid State Disk (SSD).

Claims (16)

1. An image restoration method, comprising:
acquiring a face area of an image to be processed;
acquiring a face area mask according to the image to be processed and the face area;
performing key point detection on the face region to obtain a second face region containing predetermined key points, wherein the predetermined key points comprise at least one of eyes, eyebrows and a mouth;
subtracting the second face area from the face area mask to obtain a first face area;
acquiring a pixel point set of which the gray value is greater than a threshold value in the first face region to obtain a first region to be repaired of the face region;
obtaining a first mask of the area to be repaired according to the image to be processed, the second face area and the face area mask;
reducing the first area to be repaired and the first mask of the area to be repaired by a predetermined number of stages to obtain a minimum image to be repaired;
repairing the minimum image to be repaired according to the first face area to obtain a repaired minimum image;
gradually amplifying the repaired minimum image to obtain an enlarged image;
performing image restoration on each stage of enlarged image step by step until the image size of the enlarged image is the same as that of the image to be processed, and obtaining a final restoration area;
and replacing the corresponding area in the image to be processed with the final repair area to obtain a repaired image.
2. The method according to claim 1, before acquiring the face region of the image to be processed, further comprising:
carrying out face detection on the image to be processed;
and responding to the result of the face detection, and acquiring the face area of the image to be processed.
3. The method according to claim 1, wherein the obtaining a face region mask according to the image to be processed and the face region comprises:
extracting the features of the face region to obtain a skin feature image;
distinguishing the skin characteristic image according to the characteristics in the skin characteristic image to obtain a skin area segmentation image;
and determining the mask of the face region based on the skin region segmentation map and the face region.
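Claim 3 does not fix a particular segmentation technique; the face region mask could come from a learned segmenter or from a simple per-pixel skin heuristic. As a toy illustration only, the function below applies a widely cited RGB skin rule; the thresholds are the classic heuristic values for uniform daylight, not values from the patent.

```python
def skin_mask(pixels):
    """Per-pixel RGB skin heuristic: True where a pixel looks like skin.

    pixels is a 2-D grid of (r, g, b) tuples. The rule (R>95, G>40, B>20,
    max-min>15, |R-G|>15, R>G, R>B) is a common daylight skin heuristic.
    """
    def is_skin(r, g, b):
        return (r > 95 and g > 40 and b > 20 and
                max(r, g, b) - min(r, g, b) > 15 and
                abs(r - g) > 15 and r > g and r > b)
    return [[is_skin(*p) for p in row] for row in pixels]
```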
4. The method according to claim 3, wherein the obtaining a set of pixel points in the first face region whose gray scale value is greater than a threshold value to obtain a first region to be repaired of the face region includes:
performing gradient calculation on the first face area to obtain a gradient map of the first face area;
and determining a pixel point set of which the gray value is greater than the threshold value in the first face region according to the gradient map to obtain a first region to be repaired in the face region.
5. The method according to claim 4, wherein the performing a gradient calculation on the first face region to obtain a gradient map of the first face region comprises:
performing image down-sampling processing on the first face area to obtain a reduced first face area;
performing gradient calculation on the reduced first face area to respectively obtain a gradient in the x direction and a gradient in the y direction;
and fusing the gradient in the x direction and the gradient in the y direction to obtain a gradient map of the first face area.
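Claim 5's gradient step can be sketched with simple finite differences. Here the x and y gradients are fused into a magnitude map with sqrt(gx² + gy²); the patent does not specify the fusion, so the L2 magnitude is an assumption (|gx| + |gy| would work similarly).

```python
import math

def gradient_map(img):
    # Forward-difference gradients in x and y, fused as L2 magnitude.
    h, w = len(img), len(img[0])
    grad = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx = img[y][x + 1] - img[y][x] if x + 1 < w else 0.0
            gy = img[y + 1][x] - img[y][x] if y + 1 < h else 0.0
            grad[y][x] = math.hypot(gx, gy)
    return grad
```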
6. The method according to claim 4 or 5, wherein the determining, according to the gradient map, a set of pixel points in the first face region whose gray value is greater than the threshold value to obtain a first region to be repaired in the face region includes:
finding, from the gradient map, pixel points whose gradient value change is greater than a set threshold, to obtain the pixel point set in the first face region whose gray value is greater than the threshold;
and obtaining a first region to be repaired in the face region according to the pixel point set.
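Claim 6 then reduces to a threshold test on the gradient map: the coordinates of pixels whose gradient response exceeds the threshold form the candidate pixel point set. A minimal sketch, where `threshold` is whatever value an implementation tunes (the patent does not disclose one):

```python
def pixel_point_set(grad, threshold):
    # Collect coordinates of pixels whose gradient response exceeds the
    # threshold; these form the first region to be repaired.
    return [(y, x) for y, row in enumerate(grad)
            for x, g in enumerate(row) if g > threshold]
```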
7. The method according to claim 1, wherein after obtaining the first region to be repaired in the face region according to the pixel point set, the method further comprises:
combining the first area to be repaired with the image to be processed to obtain an updated first area to be repaired;
comparing the updated first area to be repaired with the peripheral area of the updated first area to be repaired to obtain an optimized first area to be repaired;
and repairing the optimized first area to be repaired according to the first face area to obtain a repaired image.
8. The method according to claim 1, wherein the repairing the minimum image to be repaired according to the first face region to obtain a repaired minimum image comprises:
performing block matching search on the area to be repaired of the minimum image to be repaired, wherein the search range is the peripheral area of the area to be repaired of the minimum image to be repaired;
calculating, for each pixel point to be repaired covered by a matched image block, the average value of a preset number of pixels;
taking the average value as the pixel value of the pixel point to be repaired to obtain the repaired pixel point;
and obtaining a repaired minimum image according to each repaired pixel point.
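Claim 8's repair step, read literally, searches the periphery of the damaged area for matching blocks and replaces each pixel to be repaired with an average. The block-matching metric is not disclosed, so the stand-in below simply averages the valid (unmasked) pixels inside a peripheral search window around each damaged pixel; `radius` is an illustrative parameter, not from the patent.

```python
def repair_by_average(img, mask, radius=2):
    # For each pixel to repair, average the unmasked pixels in its
    # (2*radius+1)^2 peripheral window and use that as the new value.
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))
                    if not mask[ny][nx]]
            if vals:
                out[y][x] = sum(vals) / len(vals)
    return out
```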
9. An image restoration apparatus, comprising:
the first acquisition unit is used for acquiring a face area of an image to be processed;
a segmentation unit comprising: the device comprises an acquisition subunit, a detection subunit and a processing subunit;
the acquiring subunit is configured to acquire a face region mask according to the image to be processed and the face region;
the detection subunit is configured to perform key point detection on the face region to obtain a second face region including predetermined key points, where the predetermined key points include at least one of eyes, eyebrows and a mouth;
the processing subunit is configured to subtract the second face region from the face region mask to obtain a first face region;
the second obtaining unit is used for obtaining a pixel point set of which the gray value is greater than a threshold value in the first face region to obtain a first region to be repaired of the face region;
the second obtaining unit further comprises a determining subunit, and the determining subunit is configured to obtain a mask of the first region to be repaired according to the image to be processed, the second face region and the face region mask;
the repair unit comprises a reduction subunit, a repair subunit, an amplification subunit and a replacement subunit;
the reduction subunit is used for reducing the first area to be repaired and the mask of the first area to be repaired by a preset number of stages to obtain a minimum image to be repaired;
the repairing subunit is used for repairing the minimum image to be repaired according to the first face area to obtain a repaired minimum image;
the amplifying subunit is used for amplifying the repaired minimum image step by step to obtain an enlarged image;
the amplifying subunit is further used for performing image restoration on the enlarged image of each stage until the image size of the enlarged image is the same as that of the image to be processed, to obtain a final repair area;
and the replacing subunit is used for replacing the corresponding area in the image to be processed with the final repair area to obtain a repaired image.
10. The apparatus of claim 9, further comprising:
the detection unit is used for carrying out face detection on the image to be processed;
and the third acquisition unit is used for responding to the result of the face detection and acquiring the face area in the image to be processed.
11. The apparatus of claim 9, wherein the obtaining subunit is further configured to:
extracting the features of the face region to obtain a skin feature image;
segmenting the skin feature image according to the features therein to obtain a skin region segmentation map;
and determining the face region mask based on the skin region segmentation map and the face region.
12. The apparatus of claim 11, wherein the second obtaining unit comprises:
the calculating subunit is configured to perform gradient calculation on the first face region to obtain a gradient map of the first face region;
and the determining subunit is configured to determine, according to the gradient map, a pixel point set in the first face region, where the gray value is greater than the threshold, to obtain a first region to be repaired in the face region.
13. The apparatus of claim 12, wherein the computing subunit is further configured to:
performing image down-sampling processing on the first face area to obtain a reduced first face area;
performing gradient calculation on the reduced first face area to respectively obtain a gradient in the x direction and a gradient in the y direction;
and fusing the gradient in the x direction and the gradient in the y direction to obtain a gradient map of the first face area.
14. The apparatus according to claim 12 or 13, wherein the determining subunit is further configured to:
finding, from the gradient map, pixel points whose gradient value change is greater than a set threshold, to obtain the pixel point set in the first face region whose gray value is greater than the threshold;
and obtaining a first region to be repaired in the face region according to the pixel point set.
15. The apparatus of claim 9, wherein the determining subunit is further configured to:
combining the first area to be repaired with the image to be processed to obtain an updated first area to be repaired;
comparing the updated first area to be repaired with the peripheral area of the updated first area to be repaired to obtain an optimized first area to be repaired;
and repairing the optimized first area to be repaired according to the first face area to obtain a repaired image.
16. The apparatus of claim 9, wherein the repair subunit is further configured to:
performing block matching search on the area to be repaired of the minimum image to be repaired, wherein the search range is the peripheral area of the area to be repaired of the minimum image to be repaired;
calculating, for each pixel point to be repaired covered by a matched image block, the average value of a preset number of pixels;
taking the average value as the pixel value of the pixel point to be repaired to obtain the repaired pixel point;
and obtaining a repaired minimum image according to each repaired pixel point.
CN201811148733.4A 2018-09-29 2018-09-29 Image restoration method and device Active CN109389562B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811148733.4A CN109389562B (en) 2018-09-29 2018-09-29 Image restoration method and device

Publications (2)

Publication Number Publication Date
CN109389562A CN109389562A (en) 2019-02-26
CN109389562B true CN109389562B (en) 2022-11-08

Family

ID=65419130

Country Status (1)

Country Link
CN (1) CN109389562B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112037162B (en) * 2019-05-17 2022-08-02 荣耀终端有限公司 Facial acne detection method and equipment
CN110211063B (en) * 2019-05-20 2021-06-08 腾讯科技(深圳)有限公司 Image processing method, device, electronic equipment and system
CN110349108B (en) * 2019-07-10 2022-07-26 北京字节跳动网络技术有限公司 Method, apparatus, electronic device, and storage medium for processing image
CN110826401B (en) * 2019-09-26 2023-12-26 广州视觉风科技有限公司 Human body limb language identification method and system
CN110706179B (en) * 2019-09-30 2023-11-10 维沃移动通信有限公司 Image processing method and electronic equipment
CN111583154B (en) * 2020-05-12 2023-09-26 Oppo广东移动通信有限公司 Image processing method, skin beautifying model training method and related device
CN111738934B (en) * 2020-05-15 2024-04-02 西安工程大学 Automatic red eye repairing method based on MTCNN
CN111738958B (en) * 2020-06-28 2023-08-22 字节跳动有限公司 Picture restoration method and device, electronic equipment and computer readable medium
CN111831193A (en) * 2020-07-27 2020-10-27 北京思特奇信息技术股份有限公司 Automatic skin changing method, device, electronic equipment and storage medium
CN112418054A (en) * 2020-11-18 2021-02-26 北京字跳网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN112488942A (en) * 2020-12-02 2021-03-12 北京字跳网络技术有限公司 Method, device, equipment and computer readable medium for repairing image
CN113592732B (en) * 2021-07-19 2023-03-24 安徽省赛达科技有限责任公司 Image processing method based on big data and intelligent security
CN113516604B (en) * 2021-09-14 2021-11-16 成都数联云算科技有限公司 Image restoration method
CN114418897B (en) * 2022-03-10 2022-07-19 深圳市一心视觉科技有限公司 Eye spot image restoration method and device, terminal equipment and storage medium
CN117501326A (en) * 2022-05-23 2024-02-02 京东方科技集团股份有限公司 Image processing method and device, electronic equipment and computer readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100458848C (en) * 2003-03-20 2009-02-04 欧姆龙株式会社 Image processing device
CN103440633A (en) * 2013-09-06 2013-12-11 厦门美图网科技有限公司 Digital image automatic speckle-removing method
CN103927718A (en) * 2014-04-04 2014-07-16 北京金山网络科技有限公司 Picture processing method and device
CN107392166A (en) * 2017-07-31 2017-11-24 北京小米移动软件有限公司 Skin color detection method, device and computer-readable recording medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant