WO2023225774A1 - Image processing method and apparatus, electronic device, computer-readable storage medium - Google Patents

Image processing method and apparatus, electronic device, computer-readable storage medium

Info

Publication number
WO2023225774A1
Authority
WO
WIPO (PCT)
Prior art keywords
area
image
defective
defect
skin
Prior art date
Application number
PCT/CN2022/094361
Other languages
English (en)
French (fr)
Inventor
Wu Yanhong
Gao Yan
Original Assignee
BOE Technology Group Co., Ltd.
Priority date
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd.
Priority to CN202280001363.XA (published as CN117501326A)
Priority to PCT/CN2022/094361 (WO2023225774A1)
Publication of WO2023225774A1

  • The present disclosure belongs to the field of computer technology, and specifically relates to an image processing method and apparatus, an electronic device, and a non-transitory computer-readable storage medium.
  • The present disclosure aims to solve at least one of the technical problems existing in the related art, and provides an image processing method and apparatus, an electronic device, and a non-transitory computer-readable storage medium.
  • embodiments of the present disclosure provide an image processing method, which includes: performing face detection on a first image to be processed, and determining the face area and face key point information in the first image; performing skin detection on the first image according to the face area, and determining the skin area in the first image; performing defect detection on the skin area according to the face key point information, and determining the defective area in the skin area; and, if a defective area exists in the skin area, performing defect removal processing on the defective area in the first image to obtain a processed second image.
  • performing skin detection on the first image according to the face area and determining the skin area in the first image includes: converting the first image into YCrCb space to obtain a third image; determining the target detection value for skin detection based on the pixel values of the pixels in the face area in the third image; determining at least one candidate skin area from the third image, where the pixel values of the pixels in the candidate skin area satisfy a first limiting condition based on the target detection value; and determining, among the at least one candidate skin area, a candidate skin area connected to the face area as the skin area.
  • the target detection value includes: a first mean and a first standard deviation of the Cr-channel pixel values of the pixels in the face area in the third image, and a second mean and a second standard deviation of the Cb-channel pixel values of the pixels in the face area in the third image.
  • performing defect detection on the skin area according to the face key point information and determining the defective area in the skin area includes: determining multiple binary images corresponding to the first image based on the grayscale image of the first image and multiple preset grayscale thresholds; determining a second limiting condition for the defective area based on the face key point information; performing defect extraction on each binary image based on the second limiting condition to obtain candidate defective areas in each binary image; and classifying the candidate defective areas in the binary images according to the positions of their center pixels to determine the defective area in the grayscale image, where the defective area includes a center point position and an area size.
  • the defect extraction is performed on each binary image according to the second limiting condition to obtain candidate defective areas in each binary image, including: extracting connected areas in each binary image, and selecting, from the connected areas, areas that satisfy the second limiting condition as the candidate defective areas in the binary image.
  • the second limiting condition includes at least one of the following: the color of the connected area is a preset color; the size of the connected area is within a preset size interval; the roundness of the connected area is greater than or equal to a roundness threshold; the convexity of the connected area is greater than or equal to a convexity threshold; the eccentricity of the connected area is less than or equal to an eccentricity threshold; and the connected area is within the skin area and outside the facial features area; wherein the facial features area includes at least one of an eyebrow area, an eye area, a nostril area, a mouth area, and an ear area, and the preset color includes black or white.
  • determining the second limiting condition for the defective area based on the face key point information includes: determining the facial features area in the first image based on the face key point information; and, if the facial features area includes an eye area, determining the preset size interval based on the size of the eye area.
  • classifying the candidate defective areas in each binary image according to the positions of their center pixels and determining the defective area in the grayscale image includes: determining at least one area grouping according to the positions of the center pixels of the candidate defective areas in the binary images, where the candidate defective areas in an area grouping belong to different binary images, the distance between the center pixels of the candidate defective areas in the grouping is less than or equal to a distance threshold, and the number of candidate defective areas in the grouping is greater than or equal to a quantity threshold; determining the center point position of the defective area corresponding to the grouping according to the positions of the center pixels of the candidate defective areas in the grouping; and determining the area size of the defective area corresponding to the grouping based on the sizes of the candidate defective areas in the grouping.
  • determining the center point position of the defective area corresponding to the area grouping according to the positions of the center pixels of the candidate defective areas in the grouping includes: determining a weight for each candidate defective area according to its inertia rate; and determining the weighted sum of the positions of the center pixels of the candidate defective areas, using these weights, as the center point position of the defective area corresponding to the grouping.
  • performing defect removal processing on the defective area in the first image to obtain a processed second image includes: for any defective area, determining the defect frame corresponding to the defective area and multiple adjacent frames of the defect frame, where each adjacent frame has the same size as the defect frame and lies within the skin area; performing gradient filtering and gradient calculation on the defect frame and the adjacent frames respectively to obtain their average gradient values; determining a target frame from the defect frame and the adjacent frames according to the average gradient values; when the target frame is an adjacent frame, replacing the area image of the defect frame with the area image of the target frame; and obtaining the second image when all area images of defect frames in the first image have been replaced.
  • performing gradient filtering and gradient calculation on the defect frame and multiple adjacent frames to obtain their average gradient values includes: for any region frame among the defect frame and the adjacent frames, performing horizontal and vertical gradient filtering on the region image of the frame to obtain a horizontal filter map and a vertical filter map; determining the gradient map of the region frame from the horizontal and vertical filter maps; and determining the average value of the points in the gradient map as the average gradient value of the region frame.
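  • As a rough sketch of this step (the patent does not fix a particular gradient filter, so simple forward differences stand in for the horizontal and vertical gradient filtering; the function names are illustrative, and choosing the frame with the minimum average gradient is an assumption about how the target frame is selected):

```python
import numpy as np

def average_gradient(patch):
    """Mean gradient magnitude of one region frame.

    Forward differences stand in for the horizontal/vertical gradient
    filters; any pair of directional filters would serve the same role.
    """
    p = patch.astype(float)
    gx = np.abs(np.diff(p, axis=1))  # horizontal filter map
    gy = np.abs(np.diff(p, axis=0))  # vertical filter map
    return gx.mean() + gy.mean()     # average over the gradient maps

def pick_target_frame(defect_patch, neighbor_patches):
    """Pick the smoothest frame (lowest average gradient) as the target.

    Returns 0 for the defective frame itself, i > 0 for the i-th neighbour.
    """
    frames = [defect_patch] + list(neighbor_patches)
    grads = [average_gradient(f) for f in frames]
    return int(np.argmin(grads))
```

A flat skin patch then wins over a textured or blemished one, so the defect frame's image is replaced by the smoothest adjacent frame.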
  • before performing face detection on the first image to be processed and determining the face area and face key point information in the first image, the method further includes: displaying a defect removal control for the first image;
  • performing face detection on the first image to be processed and determining the face area and face key point information in the first image includes: in response to the defect removal control being triggered, performing face detection on the first image to determine the face area and face key point information in the first image.
  • an image processing device which includes:
  • a face detection module, configured to perform face detection on the first image to be processed and determine the face area and face key point information in the first image; a skin detection module, configured to perform skin detection on the first image based on the face area and determine the skin area in the first image; a defect detection module, configured to perform defect detection on the skin area based on the face key point information and determine the defective area in the skin area; and a defect removal module, configured to perform defect removal processing on the defective area in the first image to obtain a processed second image when a defective area exists in the skin area.
  • embodiments of the present disclosure provide an electronic device, including: one or more processors; and a memory for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the above image processing method.
  • embodiments of the present disclosure provide a non-transitory computer-readable storage medium on which a computer program is stored, wherein the computer program implements the steps in the above image processing method when executed by a processor.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure.
  • FIG. 2 is a flowchart of some steps of an image processing method according to an embodiment of the present disclosure.
  • FIG. 3 is a schematic diagram of skin detection according to related art and embodiments of the present disclosure.
  • FIG. 4 is a flowchart of some steps of an image processing method according to an embodiment of the present disclosure.
  • FIG. 5 is a flowchart of some steps of an image processing method according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic diagram of a defective frame and adjacent frames according to an embodiment of the present disclosure.
  • FIG. 7 is a block diagram of an image processing device according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • the main principle of skin beautification is to perform operations such as skin resurfacing and whitening on the entire image to achieve an overall beautification effect.
  • there are often blemishes such as moles, acne, and pimples on the human face, which are generally not completely removed when beautifying the entire image; and if the degree of skin resurfacing is increased, texture is easily lost, resulting in unnatural distortion and poor processing effects.
  • in the embodiments of the present disclosure, skin detection can be guided based on face detection, and defects in the skin area can be detected based on face key point information and then removed, which can improve the accuracy of skin detection and defect detection, thereby improving the effect of defect removal and the beautification effect of the image.
  • the image processing method according to the embodiment of the present disclosure can be executed by an electronic device such as a terminal device or a server.
  • the terminal device can be a vehicle-mounted device, a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (Personal Digital Assistant, PDA), a handheld device, a computing device, a wearable device, etc.
  • the method can be implemented by the processor calling computer-readable program instructions stored in the memory. Alternatively, the method may be performed via a server.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure.
  • the image processing method includes:
  • Step S11: perform face detection on the first image to be processed, and determine the face area and face key point information in the first image;
  • Step S12: perform skin detection on the first image according to the face area to determine the skin area in the first image;
  • Step S13: perform defect detection on the skin area based on the face key point information to determine the defective area in the skin area;
  • Step S14: if there is a defective area in the skin area, perform defect removal processing on the defective area in the first image to obtain a processed second image.
  • the first image to be processed can be an image including a human face, such as a selfie image, a group photo image, etc. collected by a camera of an electronic device such as a smartphone; the first image can also be an image obtained through other methods.
  • a defect removal function for images can be provided in the electronic device; for example, a defect removal control is displayed in an interface for beautifying the image. If the defect removal control is triggered (for example, the user clicks on it), the defect removal function is enabled and the first image to be processed is processed.
  • face detection can be performed on the first image in step S11 to locate the face frame (i.e., the face area) in the first image; and face key point extraction can be performed on the face area to obtain the face key point information in the first image, such as the location information of 68 facial key points.
  • detection methods in the related art, such as open source facial feature recognition libraries, can be used to implement face detection and face key point extraction. This disclosure does not limit the specific methods of face detection and face key point extraction.
  • in step S12, the target detection value for skin detection can be determined based on the pixel values of the pixels in the face area, such as the mean and/or standard deviation of those pixel values; a first limiting condition based on the target detection value can then be set, and the entire first image searched, treating the pixels that meet the first limiting condition as possible skin points, thereby obtaining at least one possible skin area (called a candidate skin area); further, the candidate skin area connected to the human face is used as the skin area in the first image.
  • defects in the skin area can be detected in step S13.
  • multiple grayscale thresholds can be preset, and the grayscale image of the first image is converted into multiple binary images, such as 20 binary images, to obtain a binary image set. This disclosure places no restrictions on the specific value of the grayscale threshold or the number of grayscale thresholds.
  • the second limiting condition for defect detection can be determined based on face key point information, such as the size range of the defect area (for example, in a certain proportion to the size of the eyes), the requirement that the defect area be outside the facial features area, etc.
  • the second limiting condition may also include other contents, for example, the defective area needs to meet a preset roundness threshold, convexity threshold, etc., which is not limited by this disclosure.
  • connected regions in each binary image can be extracted by detecting the boundaries in each binary image, and regions that meet the second limiting condition are selected from the connected regions as areas that may be defects in the binary image, called candidate defect areas. In this way, the candidate defect areas in each binary image in the binary image set can be obtained.
  • the candidate defect areas in the binary images can be classified according to the positions of the center pixels of the candidate defect areas in all binary images. For example, the distances between the center pixels of candidate defect areas in different binary images can be calculated. If the distance between center pixels is less than or equal to a preset threshold, and the candidate defect area exists in all or most of the binary images, the candidate defect area can be considered an actual defective area.
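  • The grouping just described can be sketched as follows (a simplified greedy grouping; the function name and the default thresholds are illustrative, not taken from the patent):

```python
import numpy as np

def group_candidates(centers_per_image, dist_thr=5.0, count_thr=8):
    """Group candidate-defect centers that recur across binary images.

    centers_per_image -- one list of (x, y) centers per binary image.
    A center is kept as a real defect if candidates within `dist_thr`
    of it come from at least `count_thr` distinct binary images.
    """
    all_pts = [(np.array(c, dtype=float), i)
               for i, cs in enumerate(centers_per_image) for c in cs]
    used = [False] * len(all_pts)
    defects = []
    for i, (p, _) in enumerate(all_pts):
        if used[i]:
            continue
        # All candidates (across every binary image) near this center
        group = [j for j, (q, _) in enumerate(all_pts)
                 if np.linalg.norm(p - q) <= dist_thr]
        images = {all_pts[j][1] for j in group}
        if len(images) >= count_thr:
            for j in group:
                used[j] = True
            # Defect center: mean of the grouped candidate centers
            defects.append(tuple(np.mean([all_pts[j][0] for j in group], axis=0)))
    return defects
```

A blob that only shows up at one or two thresholds is thus discarded as noise, while a mole that survives most binarization thresholds is kept.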
  • for each grouping, the center point position and area size of the corresponding defective area can be determined from the center point positions and sizes of the grouped candidate defect areas, for example, as the cluster center of the center point positions and the mean of the area sizes.
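  • A minimal sketch of the weighted-sum computation of a grouping's center point, assuming the weights are the normalised inertia rates (the text only says the weights are derived from the inertia rates, so this normalisation is an assumption):

```python
import numpy as np

def weighted_center(centers, inertia_rates):
    """Center point of a defect grouping as a weighted sum of the
    candidate centers, with weights proportional to the inertia rates."""
    w = np.asarray(inertia_rates, dtype=float)
    w = w / w.sum()                      # weights sum to 1
    return tuple(w @ np.asarray(centers, dtype=float))
```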
  • the defective area in the skin area of the first image can be obtained.
  • step S13 if no defective area is detected in step S13, subsequent processing may not be performed, the unprocessed first image may be returned, and/or "no defective area detected" may be prompted. If a defective area is detected in step S13, that is, there is a defective area in the skin area, then in step S14, the defective area in the first image can be removed.
  • each defective area is repaired separately to obtain a repaired image, which is called the second image.
  • skin detection can be guided based on face detection, and defects in the skin area can be detected based on face key point information and then removed, improving the accuracy of skin detection and defect detection, thereby improving the effect of defect removal and enhancing the beautification effect of the image.
  • the electronic device can be provided with a defect removal function for images, allowing the user to choose whether to turn on the function, and when the function is turned on, defects in the image will be detected and removed.
  • the method may further include: displaying a defect removal control for the first image;
  • step S11 may include: in response to the defect removal control being triggered, performing face detection on the first image to determine the face area and face key point information in the first image.
  • the defect removal control can be displayed in the interface for beautifying the image, for example, as a touchable icon. If the user clicks on the defect removal control, the defect removal function can be turned on.
  • This disclosure does not limit the specific setting method and triggering method of the defect removal control.
  • the electronic device responds to the triggered defect removal control, turns on the defect removal function, performs the face detection in step S11, and performs the subsequent processing accordingly.
  • the defect removal function can be turned on based on user triggers, improving user flexibility and thus improving user experience.
  • in step S11, face detection can be performed on the first image to be processed to determine the face area in the first image; face key points can be extracted from the face area to obtain the face key point information in the first image, such as the position coordinates of 68 facial key points.
  • detection methods in the related art, such as open source facial feature recognition libraries, and face key point extraction methods can be used to implement face detection and face key point extraction respectively. This disclosure does not limit the specific methods of face detection and face key point extraction.
  • skin detection may be performed on the first image in step S12.
  • Skin detection methods in related technologies usually include methods based on color space, based on spectral features, and based on skin color reflection models. The main steps of these detection methods are to first transform the color space and then establish a skin color model for processing.
  • the color spaces used in skin detection include RGB, YCrCb, HSV, Lab, etc.
  • the image is usually converted from the RGB color space to the corresponding color space, and then based on skin color clustering/threshold segmentation, etc., for example, the more commonly used ones are YCbCr/HSV/RGB/CIELAB color space threshold segmentation.
  • the YCrCb color space is a commonly used color model for skin color detection, where Y represents brightness, Cr represents the red chroma component, and Cb represents the blue chroma component.
  • the apparent difference in human skin color is mainly caused by chroma, and the skin colors of different people are concentrated in a relatively small region of the chroma plane.
  • the skin segmentation algorithm in the related art usually uses the Otsu algorithm to perform binary segmentation on the Cr component, which only roughly captures the clustering of skin color; the error is large, and there are problems such as incomplete detection of face areas and false detection of background pixels similar to skin color.
  • skin detection can be guided according to the face area, thereby improving the accuracy of skin detection.
  • FIG. 2 is a flowchart of some steps of an image processing method according to an embodiment of the present disclosure.
  • step S12 includes:
  • Step S121: convert the first image into YCrCb space to obtain a third image;
  • Step S122: determine the target detection value for skin detection based on the pixel values of the pixels in the face area in the third image;
  • Step S123: determine at least one candidate skin area from the third image, where the pixel values of the pixels in the candidate skin area satisfy the first limiting condition based on the target detection value;
  • Step S124: determine a candidate skin area connected to the face area among the at least one candidate skin area as the skin area.
  • step S121 color conversion can be performed on the first image (for example, an RGB image) to obtain a third image in YCrCb space.
  • This disclosure does not limit the specific method of color conversion.
  • the target detection value for skin detection may be determined based on the pixel values of the pixels in the face area in the third image.
  • the target detection value may include: the first mean Mcr and the first standard deviation Vcr of the Cr-channel pixel values of the pixels in the face area in the third image, and the second mean Mcb and the second standard deviation Vcb of the Cb-channel pixel values of those pixels.
  • the first mean and the second mean can reflect the range of pixel values in the face area (mostly skin), and the first standard deviation and the second standard deviation can reflect the difference between the pixel values in the face area.
  • the target detection value determined based on the face area will change according to different images and is a dynamic value, which can better characterize the characteristics of the skin in the corresponding image, thus improving the detection accuracy.
  • a first limiting condition based on the target detection value may be set; all pixels in the third image are traversed to determine whether each pixel satisfies the first limiting condition.
  • the first limiting condition can be expressed as: abs(Crn - Mcr) < k × Vcr && abs(Cbn - Mcb) < k × Vcb, where:
  • abs() is the absolute value function
  • abs(Cr-Mcr) is the absolute value of (Cr-Mcr);
  • (Crn, Cbn) represents the pixel values of the n-th pixel in the Cr channel and the Cb channel;
  • Mcr represents the first mean;
  • Vcr represents the first standard deviation;
  • Mcb represents the second mean;
  • Vcb represents the second standard deviation;
  • && represents logical AND;
  • k is the preset coefficient.
  • the value range of k is [1,3], for example, the value is 2 or 3.
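  • The statistics and the first limiting condition can be sketched as follows (a minimal numpy version; the array layout, the face-box tuple, and the function name are assumptions, and `k` follows the suggested range [1, 3]):

```python
import numpy as np

def skin_mask(ycrcb, face_box, k=2.0):
    """Boolean mask of candidate skin pixels (first limiting condition).

    ycrcb    -- H x W x 3 float array in Y, Cr, Cb channel order (assumed layout)
    face_box -- (top, bottom, left, right) bounds of the detected face area
    k        -- preset coefficient, typically in [1, 3]
    """
    t, b, l, r = face_box
    face = ycrcb[t:b, l:r].astype(float)
    # Target detection values: mean and std of Cr and Cb over the face area
    mcr, vcr = face[..., 1].mean(), face[..., 1].std()
    mcb, vcb = face[..., 2].mean(), face[..., 2].std()
    cr = ycrcb[..., 1].astype(float)
    cb = ycrcb[..., 2].astype(float)
    # abs(Crn - Mcr) < k*Vcr && abs(Cbn - Mcb) < k*Vcb
    return (np.abs(cr - mcr) < k * vcr) & (np.abs(cb - mcb) < k * vcb)
```

Connected components of this mask would then be the candidate skin areas, and only those connected to the face box are kept as skin.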
  • step S123 after traversing the pixels in the entire third image, at least one connected area composed of candidate skin points can be obtained as a candidate skin area. This disclosure does not limit the specific determination method of the connected area.
  • a candidate skin area connected to the face area among at least one candidate skin area is determined as a skin area. In this way, background areas similar to skin color can be removed, further improving the accuracy of skin detection.
  • the first image can also be transformed into other color spaces, such as HSV, Lab, etc.; the corresponding target detection value is determined based on the face area, and a first limiting condition based on that target detection value is set in order to achieve skin detection. This disclosure does not limit this.
  • in FIG. 3, the first row shows first images to be processed, which are group photos of people with different skin colors under different lighting; the second row and the third row show the skin detection results of the related art and of the embodiments of the present disclosure, respectively.
  • in the second row, the skin detection results of the related art include inaccurate detection, false detection, missed detection, etc.; in the third row, the skin detection results according to the embodiments of the present disclosure are relatively accurate and meet practical requirements.
  • embodiments of the present disclosure can accurately detect skin of different skin colors and under different lighting, effectively reduce distortion in the background area, and improve the accuracy of skin detection, achieving a more accurate and reliable detection effect.
  • FIG. 4 is a flowchart of step S13 of an image processing method according to an embodiment of the present disclosure.
  • step S13 may include:
  • Step S131: determine multiple binary images corresponding to the first image based on the grayscale image of the first image and multiple preset grayscale thresholds;
  • Step S132: determine the second limiting condition for the defective area based on the face key point information;
  • Step S133: perform defect extraction on each binary image according to the second limiting condition to obtain candidate defect areas in each binary image;
  • Step S134: classify the candidate defect areas in each binary image according to the positions of their center pixels, and determine the defective area in the grayscale image, where the defective area includes the center point position and area size.
  • multiple grayscale thresholds may be preset, and in step S131, the grayscale image of the first image is converted into a plurality of binary images to obtain a binary image set.
  • This disclosure places no restrictions on the specific value of the grayscale threshold or the number of grayscale thresholds.
  • the threshold range can be set to [T1, T2] and the step size is t, thereby obtaining multiple grayscale thresholds: T1, T1+t, T1+2t,..., T2.
  • T1 and T2 are values between [0, 255], such as 50 and 200 respectively; t can be a smaller value, such as 5, 10, etc. This disclosure does not limit this.
  • for example, when T1, T2, and t take values of 50, 200, and 10 respectively, 16 grayscale thresholds (50, 60, 70, ..., 190, 200) can be obtained, and accordingly 16 binary images can be obtained to form the binary image set.
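  • A minimal sketch of the multi-threshold binarization (the function name is illustrative; whether a pixel is compared with `>=` or `<` depends on whether light or dark blemishes are being sought, and `>=` is used here purely for illustration):

```python
import numpy as np

def threshold_stack(gray, t1=50, t2=200, step=10):
    """Binarize a grayscale image at every threshold T1, T1+t, ..., T2,
    returning one 0/1 image per threshold."""
    thresholds = range(t1, t2 + 1, step)
    return [(gray >= t).astype(np.uint8) for t in thresholds]
```

With the example values T1=50, T2=200, t=10, this yields the 16 binary images described above.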
  • the second limiting condition for the defect area can be determined based on the facial key point information obtained in step S11, so as to narrow the detection range of defect detection and improve the accuracy of detection.
  • step S132 may include: determining the facial features area in the first image based on the face key point information; and, if the facial features area includes an eye area, determining the preset size interval based on the size of the eye area.
  • the facial features area of the human face can be determined, including at least one of the eyebrow area, eye area, nostril area, mouth area, and ear area. Due to the angle of the face, occlusion, and lighting conditions, all or part of the facial features area may be determined. If an eye area exists in the facial features area, the size of the eye area can be further determined, such as the area of the eye area.
  • the second limiting condition may include that the size of the connected area in the image is within a preset size interval. If the eye area is determined, the preset size interval can be set from 5 pixels to 0.1 times the size of the eye area in order to remove connected areas that are too small or too large; if the eye area is not determined (such as eyes are blocked), you can directly set a moderate size range, such as 5 pixels to 0.01 times the face area size, or 5 pixels to 200 pixels. It should be understood that those skilled in the art can set the size interval of the connected area according to actual conditions, and this disclosure does not limit this.
  • the second limiting condition may also include that the connected area is within the skin area and outside the facial features area, so as to remove the connected areas outside the skin area and within the facial features area and improve detection accuracy.
  • the second limiting condition may also include other contents, such as the color of the connected area being a preset color, the roundness of the connected area being greater than or equal to the roundness threshold, the convexity of the connected area being greater than or equal to the convexity threshold, the eccentricity of the connected area being less than or equal to the eccentricity threshold, etc.
  • step S133 may include:
  • a region that satisfies the second limiting condition can be screened out from the connected region as a region that may be a defect in the binary image, and is called a candidate defect region.
  • the second limiting condition includes at least one of the following:
  • the color of the connected area is the preset color
  • the size of the connected area is within the preset size interval
  • the roundness of the connected area is greater than or equal to the roundness threshold
  • the convexity of the connected area is greater than or equal to the convexity threshold
  • the eccentricity of the connected area is less than or equal to the eccentricity threshold; the connected area is within the skin area and outside the facial features area;
  • the facial features area includes at least one of an eyebrow area, an eye area, a nostril area, a mouth area, and an ear area
  • the preset color includes black or white
  • the default color can be set to black; the size range can be set to 5 pixels to 0.1 times the size of the eye area; the roundness threshold can be set to 0.5; the convexity threshold can be set to 0.9; the eccentricity threshold is set to 0.3. It should be understood that those skilled in the art can set the specific content of the second limiting condition and the values of various thresholds and intervals therein according to the actual situation, and this disclosure does not limit this.
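The screening by the second limiting condition described above can be sketched as a simple predicate over precomputed region properties. The function name, property layout, and the roundness/convexity formulas are illustrative assumptions; the threshold values mirror the examples in the text (roundness 0.5, convexity 0.9, eccentricity 0.3, size interval from 5 pixels to 0.1 times the eye area).

```python
# Hypothetical sketch of the "second limiting condition" used to screen
# connected areas into candidate blemish areas. Property names, formulas,
# and the fallback size interval are illustrative assumptions.
import math

def passes_second_condition(area_px, perimeter_px, hull_area_px,
                            eccentricity, color, inside_skin, inside_features,
                            eye_area_px=None, face_area_px=10000):
    # Size interval: 5 px .. 0.1 * eye area if the eye area is known,
    # otherwise a fallback such as 5 px .. 0.01 * face area.
    lo = 5.0
    hi = 0.1 * eye_area_px if eye_area_px is not None else 0.01 * face_area_px
    if not (lo <= area_px <= hi):
        return False
    # Roundness (circularity): 4*pi*A / P^2, equal to 1.0 for a perfect circle.
    roundness = 4.0 * math.pi * area_px / (perimeter_px ** 2)
    if roundness < 0.5:
        return False
    # Convexity: region area over convex-hull area.
    if area_px / hull_area_px < 0.9:
        return False
    if eccentricity > 0.3:
        return False
    # Color constraint (e.g. black for moles) and location constraints.
    return color == "black" and inside_skin and not inside_features
```

A region is kept only when every enabled constraint holds; in practice only the subset of constraints chosen for the blemish type being detected would be checked.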
  • the detected blemishes can be any type of blemishes that may exist in the skin area, such as moles, acne, and pimples.
  • the second limiting condition can be set according to the type of defect actually detected. For example, when detecting acne, the preset color can be set to white, etc. This disclosure does not limit this.
  • each binary image in the binary image set is processed separately, and the candidate defect area in each binary image can be obtained.
  • in step S134, the candidate defect areas in the binary images can be classified according to the positions of the center pixels of the candidate defect areas in each binary image, and the defective areas in the grayscale image can be determined.
  • step S134 may include:
  • determine at least one region grouping according to the positions of the center pixels of the candidate defect areas in each binary image, where the candidate defect areas in a region grouping belong to different binary images, the distances between the center pixels of the candidate defect areas in the grouping are less than or equal to the distance threshold, and the number of candidate defect areas in the grouping is greater than or equal to the quantity threshold;
  • the area size of the defective area corresponding to the region grouping is determined according to the sizes of the candidate defect areas in the region grouping.
  • the distances between the center pixels of the candidate defect areas of different binary images can be calculated separately; if the distance between center pixels is less than or equal to a preset distance threshold Tb, the corresponding candidate defect areas can be considered to be the same area; if corresponding candidate defect areas exist in all or most of the binary images, this group of candidate defect areas can be considered an actual defect area, and a region grouping is obtained.
  • each area group corresponds to a defect area.
  • the candidate defect areas in the same area group belong to different binary images
  • the distance between the center pixels of the candidate defect areas in the area group is less than or equal to the distance threshold
  • the number of candidate defect areas in the area group is greater than or equal to the quantity threshold.
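The grouping of candidate areas across the binary-image set described above can be sketched as follows. The greedy first-fit matching and the data layout are illustrative assumptions, not the patent's mandated procedure; only the three stated conditions (different binary images, center distance within a threshold, group size above a count threshold) come from the text.

```python
# Sketch of grouping candidate areas across the binary-image set: candidates
# from different binary images whose center pixels are within dist_thresh of
# each other are treated as the same physical blemish; a group is kept only
# if it is supported by at least min_count binary images.
import math

def group_candidates(candidates, dist_thresh=3.0, min_count=3):
    """candidates: list of (binary_image_index, (cx, cy)) tuples."""
    groups = []
    for img_idx, center in candidates:
        placed = False
        for g in groups:
            # One candidate per binary image per group; centers must be close
            # to the group's reference center.
            if img_idx in g["images"]:
                continue
            gx, gy = g["centers"][0]
            if math.hypot(center[0] - gx, center[1] - gy) <= dist_thresh:
                g["images"].add(img_idx)
                g["centers"].append(center)
                placed = True
                break
        if not placed:
            groups.append({"images": {img_idx}, "centers": [center]})
    # Keep groups supported by enough binary images.
    return [g["centers"] for g in groups if len(g["images"]) >= min_count]
```

Each returned list of centers corresponds to one region grouping, i.e. one defective area.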
  • the center point position of the defect area corresponding to the area group can be determined based on the position of the center pixel of the candidate defect area in the area group.
  • the step of determining the center point position of the defective area corresponding to the area grouping may include:
  • the weighted sum of the positions of the center pixels of each candidate defect area is determined as the center point position of the defect area corresponding to the area grouping.
  • the inertia rate of each candidate defect area in the area group can be determined separately, and the square of the inertia rate of each candidate defect area can be normalized to obtain the weight q of each candidate defect area.
  • the meaning of the weight q is: the closer the shape of the candidate defect area in the binary image is to a circle, the more likely it is to be an actual blemish, and therefore the greater its contribution to the position of the defect area in the grayscale image.
  • the weighted sum of the positions (pixel coordinates) of the center pixels of the candidate defect areas can be determined as the center point position of the defect area corresponding to the region grouping. In this way, the accuracy of the location of defective areas can be improved.
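A minimal sketch of the weighted center-point computation described above, assuming the inertia rate (ratio of minor to major axis, closer to 1 for rounder regions) of each candidate area is already known; the function name and input layout are assumptions.

```python
# Sketch of the weighted center-point computation: each candidate area's
# weight is its inertia rate squared, normalized over the group, so that
# rounder candidates contribute more to the blemish center.
def blemish_center(centers, inertia_ratios):
    weights = [r * r for r in inertia_ratios]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize so weights sum to 1
    cx = sum(w * c[0] for w, c in zip(weights, centers))
    cy = sum(w * c[1] for w, c in zip(weights, centers))
    return (cx, cy)
```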
  • the sizes (such as area or radius) of the candidate defect regions in the region grouping can be sorted, and the middle size in the sorted order is determined as the area size of the defective area corresponding to the region grouping; alternatively, the mean of the sizes of the candidate defect areas in the region grouping can be calculated and determined as the area size of the defective area corresponding to the region grouping. This disclosure does not limit this.
  • the defective area in the grayscale image, that is, the defective area in the first image, can be obtained, thereby completing the entire process of defect detection.
  • if no defective area is detected in step S13, for example, no candidate defect area is extracted in step S133, or no defect area meeting the conditions is obtained after classification in step S134, then no subsequent processing may be performed: the unprocessed first image is returned, and/or a prompt such as "No defective area detected" is given.
  • if a defective area is detected, defect removal processing may be performed on it in step S14.
  • FIG. 5 is a flowchart of some steps of an image processing method according to an embodiment of the present disclosure.
  • step S14 may include:
  • Step S141 For any defective area, determine a defective frame corresponding to the defective area and a plurality of adjacent frames of the defective frame.
  • the size of the adjacent frames is the same as the size of the defect frame, and the adjacent frames are located within the skin area;
  • Step S142 Perform gradient filtering and gradient calculation on the defective frame and multiple adjacent frames respectively to obtain the average gradient value of the defective frame and multiple adjacent frames;
  • Step S143 Determine a target frame from the defective frame and multiple adjacent frames based on the average gradient value of the defective frame and multiple adjacent frames;
  • Step S144 when the target frame is an adjacent frame, use the area image of the target frame to replace the area image of the defective frame;
  • Step S145 When all the area images of the defective frame in the first image have been replaced, the second image is obtained.
  • defect removal processing can be performed on each defective area in the first image.
  • a defect frame corresponding to the defective area can be determined in step S141.
  • the defective area obtained in step S13 may be circular or irregular in shape, and the circumscribed rectangle of the defective area can be used as the defect frame corresponding to the defective area, thereby simplifying subsequent processing.
  • multiple adjacent frames of the defective frame can also be determined, that is, four rectangular frames adjacent to the defective frame are selected around the defective frame, including the top, bottom, left, and right.
  • the size of the adjacency box is the same as the size of the corresponding defect box, and the adjacency box is within the skin area.
  • the position of the adjacent frame is close to the position of the defect frame, and the skin color, lighting, etc. are relatively close. Using the adjacent frame to achieve defect removal can improve the smoothness of the processed image, thereby improving the processing effect.
  • Figure 6 is a schematic diagram of a defective frame and an adjacent frame according to an embodiment of the present disclosure.
  • the skin area includes multiple blemish areas, that is, the circular or approximately circular boxes in Figure 6.
  • step S142 may include:
  • for the defect frame and any region frame among the plurality of adjacent frames, perform gradient transverse filtering and gradient longitudinal filtering on the area image of the region frame to obtain the transverse filter map and longitudinal filter map of the region frame;
  • the average value of each point in the gradient map is determined as the average gradient value of the region frame.
  • Sobel gradient transverse filtering and gradient longitudinal filtering can be performed on the pixel values (for example, grayscale values) of the pixels in the region frame to obtain the transverse filter map and longitudinal filter map of the region frame.
  • the transverse filter and longitudinal filter used in gradient transverse filtering and gradient longitudinal filtering are the standard 3×3 Sobel operators, which can be expressed as follows:

Sx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],  Sy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
  • the gradient values of each point in the transverse filter map and the longitudinal filter map can be obtained to obtain the gradient map.
  • the gradient value of any pixel in the area frame in the horizontal filter map is Gx
  • the gradient value in the vertical filter map is Gy.
  • the gradient value G of the pixel can be expressed as: G = √(Gx² + Gy²).
  • the average gradient value G of each point in the gradient map can be calculated to obtain the average gradient value mean(G) of the region frame.
  • the defective frame and multiple adjacent frames are processed separately, and the average gradient value of the defective frame and multiple adjacent frames can be obtained.
  • the average gradient value represents the rate of grayscale change of the image in multiple directions and can characterize the relative clarity of the image: the smaller the average gradient value, the clearer the image and the better the image quality.
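The Sobel filtering, gradient-magnitude computation, and averaging described above can be sketched as follows. Processing only the interior pixels (skipping the one-pixel border) is an implementation assumption; the patent does not specify border handling.

```python
# Sketch of the average-gradient computation for one region frame: Sobel
# transverse/longitudinal filtering of the grayscale patch, per-pixel
# gradient magnitude G = sqrt(Gx^2 + Gy^2), then the mean over all points.
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def mean_gradient(patch):
    h, w = len(patch), len(patch[0])
    total, count = 0.0, 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = gy = 0.0
            for dy in range(3):
                for dx in range(3):
                    v = patch[y + dy - 1][x + dx - 1]
                    gx += SOBEL_X[dy][dx] * v
                    gy += SOBEL_Y[dy][dx] * v
            total += math.hypot(gx, gy)  # G = sqrt(Gx^2 + Gy^2)
            count += 1
    return total / count if count else 0.0
```

A uniform patch gives an average gradient of 0; patches containing a blemish score higher.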
  • step S143 based on the average gradient values of the defective frame and multiple adjacent frames, the frame with the smallest average gradient value can be used as the target frame, and the target frame has the best image quality.
  • step S144 if the target frame is an adjacent frame, the area image of the first image in the target frame is used to replace the area image of the defective frame to implement the defect removal process of the defective frame.
  • if the target frame is the defect frame itself, it means that the quality of the adjacent frames is worse than that of the defect frame, and the adjacent frames cannot be used to remove the defect; in this case, image frames can be selected again, and the area image of a re-selected frame is used to implement the defect removal processing of the defect frame. This disclosure does not limit the specific selection positions when selecting image frames again.
  • each defect frame in the first image can be processed separately through steps S141-S144.
  • step S145 if the replacement of the regional images of each defect frame in the first image is completed, a processed second image is obtained, and the defect removal process is completed.
  • This neighborhood compensation processing method is used to remove defects in the image, which can improve the smoothness of the processed image and achieve better defect removal effects.
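The overall neighborhood-compensation step can be sketched as follows. For brevity, this sketch scores boxes with a simple horizontal finite-difference gradient in place of the full Sobel pipeline described above, and clips neighbor boxes to the image bounds instead of checking a skin mask; both are simplifying assumptions.

```python
# Sketch of neighborhood compensation on a grayscale image (list of rows):
# build the same-size boxes above/below/left/right of the defect box, score
# each box by its average gradient, and copy the smoothest neighbor's pixels
# over the defect box.
def avg_grad(img, x, y, w, h):
    # Simple stand-in score: mean absolute horizontal difference.
    total, count = 0.0, 0
    for yy in range(y, y + h):
        for xx in range(x, x + w - 1):
            total += abs(img[yy][xx + 1] - img[yy][xx])
            count += 1
    return total / count if count else 0.0

def remove_blemish(img, x, y, w, h):
    H, W = len(img), len(img[0])
    boxes = [(x, y)]  # the defect box itself, then its four neighbors
    for nx, ny in [(x, y - h), (x, y + h), (x - w, y), (x + w, y)]:
        if 0 <= nx <= W - w and 0 <= ny <= H - h:
            boxes.append((nx, ny))
    bx, by = min(boxes, key=lambda b: avg_grad(img, b[0], b[1], w, h))
    if (bx, by) != (x, y):  # only replace when a neighbor scores better
        for dy in range(h):
            for dx in range(w):
                img[y + dy][x + dx] = img[by + dy][bx + dx]
    return img
```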
  • the image processing method can guide correct and reasonable skin detection based on face detection; it performs defect detection on the skin area while improving detection precision and reducing time consumption; and it uses neighborhood compensation to remove defects, accurately removing blemishes in portraits and beautifying the skin.
  • dynamic thresholds can be used for skin detection on different faces, so that skin of different skin colors and skin under different lighting can be accurately detected. Compared with skin detection based on absolute thresholds, the present disclosure can achieve more reliable detection results.
  • the detector for defect detection can be adaptively initialized according to the face key point information; for example, the parameters of the detector are limited according to the size of the eye area (that is, the second limiting condition), which reduces the detection range of defect detection and improves detection precision.
  • detected defects can be removed through neighborhood compensation.
  • the effect of facial defect removal is obvious, and the quality of portrait skin is significantly improved, achieving a beautifying effect that removes blemishes while maintaining skin texture.
  • Embodiments according to the present disclosure can be combined with light microdermabrasion to significantly improve the beautification effect on the human face.
  • an image processing device is also provided.
  • Figure 7 is a block diagram of an image processing device according to an embodiment of the present disclosure. As shown in Figure 7, the device includes:
  • the face detection module 71 is used to perform face detection on the first image to be processed, and determine the face area and face key point information in the first image;
  • Skin detection module 72 configured to perform skin detection on the first image according to the face area, and determine the skin area in the first image
  • the defect detection module 73 is used to perform defect detection on the skin area based on the facial key point information and determine the defect area in the skin area;
  • the defect removal module 74 is configured to perform defect removal processing on the defective area in the first image to obtain a processed second image when a defective area exists in the skin area.
  • the skin detection module 72 is used for:
  • the target detection value includes: the first mean and the first standard deviation of the pixel values of the Cr channel of the pixels in the face area in the third image, and the The second mean and second standard deviation of the pixel values of the Cb channel for the pixels in the face area.
  • the defect detection module 73 is used for:
  • according to the grayscale image of the first image and multiple preset grayscale thresholds, determine multiple binary images corresponding to the first image; based on the face key point information, determine a second limiting condition for the defective area; according to the second limiting condition, perform defect extraction on each of the binary images to obtain candidate defect areas in each of the binary images; according to the positions of the center pixels of the candidate defect areas in the binary images, classify the candidate defect areas in the binary images and determine the defective area in the grayscale image.
  • the defective area includes the center point position and area size.
  • the defect extraction is performed on each of the binary images according to the preset second limiting condition to obtain candidate defect areas in each of the binary images, including:
  • the second limiting condition includes at least one of the following: the color of the connected area is a preset color, the size of the connected area is within a preset size interval, the roundness of the connected area is greater than or equal to the roundness threshold, the convexity of the connected area is greater than or equal to the convexity threshold, the eccentricity of the connected area is less than or equal to the eccentricity threshold, and the connected area is within the skin area and outside the facial features area; wherein the facial features area includes at least one of the eyebrow area, eye area, nostril area, mouth area, and ear area, and the preset color includes black or white.
  • determining the second limiting condition for the defect area based on the face key point information includes: determining the facial features area in the first image based on the face key point information; and, if the facial features area includes an eye area, determining the preset size interval based on the size of the eye area.
  • classifying the candidate defect areas in the binary images according to the positions of the center pixels of the candidate defect areas in each binary image, and determining the defective area in the grayscale image, includes: determining at least one region grouping according to the positions of the center pixels of the candidate defect areas in the binary images, where the candidate defect areas in a region grouping belong to different binary images, the distances between the center pixels of the candidate defect areas in the grouping are less than or equal to the distance threshold, and the number of candidate defect areas in the grouping is greater than or equal to the quantity threshold; determining the center point position of the defective area corresponding to the region grouping according to the positions of the center pixels of the candidate defect areas in the grouping; and determining the area size of the defective area corresponding to the region grouping according to the sizes of the candidate defect areas in the grouping.
  • determining the center point position of the defective area corresponding to the area grouping according to the position of the center pixel of the candidate defective area in the area grouping includes:
  • according to the inertia rate of each candidate defect area in the region grouping, the weight of each candidate defect area is determined respectively; according to the weights of the candidate defect areas, the weighted sum of the positions of the center pixels of the candidate defect areas is determined as the center point position of the defective area corresponding to the region grouping.
  • the defect removal module 74 is used to:
  • for any defective area, determine a defect frame corresponding to the defective area and multiple adjacent frames of the defect frame, where the size of the adjacent frames is the same as the size of the defect frame and the adjacent frames are located within the skin area; perform gradient filtering and gradient calculation on the defect frame and the multiple adjacent frames respectively to obtain the average gradient values of the defect frame and the multiple adjacent frames; determine a target frame from the defect frame and the multiple adjacent frames according to their average gradient values; when the target frame is an adjacent frame, replace the area image of the defect frame with the area image of the target frame; and when all area images of defect frames in the first image have been replaced, obtain the second image.
  • the step of performing gradient filtering and gradient calculation on the defective frame and multiple adjacent frames to obtain the average gradient value of the defective frame and multiple adjacent frames includes:
  • for the defect frame and any region frame among the multiple adjacent frames, perform gradient transverse filtering and gradient longitudinal filtering on the area image of the region frame to obtain the transverse filter map and longitudinal filter map of the region frame; determine the gradient map of the region frame according to the transverse filter map and the longitudinal filter map; and determine the average value of the points in the gradient map as the average gradient value of the region frame.
  • in some possible implementations, the device further includes: a control display module for displaying a defect removal control for the first image; wherein the face detection module is used for: in response to the defect removal control being triggered, performing face detection on the first image, and determining the face area and face key point information in the first image.
  • FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • an embodiment of the present disclosure provides an electronic device including: one or more processors 101 , a memory 102 , and one or more I/O interfaces 103 .
  • One or more programs are stored on the memory 102.
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the image processing method as in any of the above embodiments; one or more I/O interfaces 103 are connected between the processors and the memory, and are configured to realize information exchange between the processors and the memory.
  • the processor 101 is a device with data processing capabilities, including but not limited to a central processing unit (CPU), etc.
  • the memory 102 is a device with data storage capabilities, including but not limited to random access memory (RAM, such as SDRAM, DDR, etc.), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), and flash memory (FLASH);
  • the I/O interface (read-write interface) 103 is connected between the processor 101 and the memory 102 and can realize information interaction between the processor 101 and the memory 102; it includes but is not limited to a data bus (Bus), etc.
  • the processor 101, memory 102, and I/O interface 103 are connected to each other and, in turn, to other components of the computing device via bus 104.
  • a non-transitory computer-readable storage medium stores a computer program, wherein when the program is executed by the processor, the steps in the image processing method in any of the above embodiments are implemented.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a machine-readable storage medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communications component, and/or installed from removable media.
  • the computer-readable storage medium shown in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable storage medium may be transmitted using any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or can be implemented using a combination of specialized hardware and computer instructions.
  • the circuits or sub-circuits described in the embodiments of the present disclosure may be implemented in software or hardware.
  • the described circuit or sub-circuit can also be provided in a processor.
  • a processor including: a receiving circuit and a processing circuit.
  • the processing module includes a writing sub-circuit and a reading sub-circuit.
  • the names of these circuits or sub-circuits do not constitute a limitation on the circuit or sub-circuit itself under certain circumstances. For example, a receiving circuit can also be described as "a circuit that receives video signals".

Landscapes

  • Image Processing (AREA)

Abstract

An image processing method and device, an electronic device, and a computer-readable storage medium, belonging to the field of computer technology. The image processing method includes: performing face detection on a first image to be processed, and determining the face area and face key point information in the first image (S11); performing skin detection on the first image according to the face area, and determining the skin area in the first image (S12); performing defect detection on the skin area according to the face key point information, and determining the defective area in the skin area (S13); when a defective area exists in the skin area, performing defect removal processing on the defective area in the first image to obtain a processed second image (S14).

Description

Image processing method and device, electronic device, computer-readable storage medium — Technical Field
The present disclosure belongs to the field of computer technology, and specifically relates to an image processing method and device, an electronic device, and a non-transitory computer-readable storage medium.
Background
At present, common beautification operations apply skin smoothing and whitening to the whole image. However, the face area may contain blemishes such as moles, acne, and pimples, which generally cannot be removed cleanly when the whole image is beautified; and if the degree of skin smoothing is increased, texture is easily lost, causing unnatural distortion.
Summary
The present disclosure aims to solve at least one of the technical problems existing in the related art, and provides an image processing method and device, an electronic device, and a non-transitory computer-readable storage medium.
In a first aspect, embodiments of the present disclosure provide an image processing method, which includes: performing face detection on a first image to be processed, and determining the face area and face key point information in the first image; performing skin detection on the first image according to the face area, and determining the skin area in the first image; performing defect detection on the skin area according to the face key point information, and determining the defective area in the skin area; and, when a defective area exists in the skin area, performing defect removal processing on the defective area in the first image to obtain a processed second image.
In some possible implementations, performing skin detection on the first image according to the face area and determining the skin area in the first image includes: converting the first image into YCrCb space to obtain a third image; determining a target detection value for skin detection according to the pixel values of the pixels of the face area in the third image; determining at least one candidate skin region from the third image, the pixel values of the pixels in the candidate skin region satisfying a first limiting condition based on the target detection value; and determining, among the at least one candidate skin region, the candidate skin region connected with the face area as the skin area.
In some possible implementations, the target detection value includes: the first mean and first standard deviation of the pixel values of the Cr channel of the pixels of the face area in the third image, and the second mean and second standard deviation of the pixel values of the Cb channel of the pixels of the face area in the third image.
In some possible implementations, performing defect detection on the skin area according to the face key point information and determining the defective area in the skin area includes: determining, according to the grayscale image of the first image and multiple preset grayscale thresholds, multiple binary images corresponding to the first image; determining, according to the face key point information, a second limiting condition for the defective area; performing defect extraction on each binary image according to the second limiting condition to obtain candidate defect areas in each binary image; and classifying the candidate defect areas in the binary images according to the positions of the center pixels of the candidate defect areas in the binary images, and determining the defective area in the grayscale image, the defective area including a center point position and an area size.
In some possible implementations, performing defect extraction on each binary image according to the preset second limiting condition to obtain the candidate defect areas in each binary image includes:
for any binary image, extracting the connected areas in the binary image; and determining the connected areas satisfying the second limiting condition as candidate defect areas in the binary image; wherein the second limiting condition includes at least one of the following: the color of the connected area is a preset color, the size of the connected area is within a preset size interval, the roundness of the connected area is greater than or equal to a roundness threshold, the convexity of the connected area is greater than or equal to a convexity threshold, the eccentricity of the connected area is less than or equal to an eccentricity threshold, and the connected area is within the skin area and outside the facial features area; wherein the facial features area includes at least one of an eyebrow area, an eye area, a nostril area, a mouth area, and an ear area, and the preset color includes black or white.
In some possible implementations, determining the second limiting condition for the defective area according to the face key point information includes: determining the facial features area in the first image according to the face key point information; and, when the facial features area includes an eye area, determining the preset size interval according to the size of the eye area.
In some possible implementations, classifying the candidate defect areas in the binary images according to the positions of the center pixels of the candidate defect areas in the binary images and determining the defective area in the grayscale image includes: determining at least one region grouping according to the positions of the center pixels of the candidate defect areas in the binary images, where the candidate defect areas in a region grouping belong to different binary images, the distances between the center pixels of the candidate defect areas in the region grouping are less than or equal to a distance threshold, and the number of candidate defect areas in the region grouping is greater than or equal to a quantity threshold; determining the center point position of the defective area corresponding to the region grouping according to the positions of the center pixels of the candidate defect areas in the region grouping; and determining the area size of the defective area corresponding to the region grouping according to the sizes of the candidate defect areas in the region grouping.
In some possible implementations, determining the center point position of the defective area corresponding to the region grouping according to the positions of the center pixels of the candidate defect areas in the region grouping includes: determining the weight of each candidate defect area according to the inertia rate of each candidate defect area in the region grouping; and determining, according to the weights of the candidate defect areas, the weighted sum of the positions of the center pixels of the candidate defect areas as the center point position of the defective area corresponding to the region grouping.
In some possible implementations, performing defect removal processing on the defective area in the first image to obtain the processed second image includes: for any defective area, determining a defect frame corresponding to the defective area and multiple adjacent frames of the defect frame, the size of the adjacent frames being the same as the size of the defect frame and the adjacent frames being within the skin area; performing gradient filtering and gradient calculation on the defect frame and the multiple adjacent frames respectively to obtain the average gradient values of the defect frame and the multiple adjacent frames; determining a target frame from the defect frame and the multiple adjacent frames according to their average gradient values; when the target frame is an adjacent frame, replacing the area image of the defect frame with the area image of the target frame; and, when all area images of defect frames in the first image have been replaced, obtaining the second image.
In some possible implementations, performing gradient filtering and gradient calculation on the defect frame and the multiple adjacent frames respectively to obtain the average gradient values of the defect frame and the multiple adjacent frames includes:
for the defect frame and any region frame among the multiple adjacent frames, performing gradient transverse filtering and gradient longitudinal filtering on the area image of the region frame to obtain a transverse filter map and a longitudinal filter map of the region frame; determining a gradient map of the region frame according to the transverse filter map and the longitudinal filter map; and determining the average value of the points in the gradient map as the average gradient value of the region frame.
In some possible implementations, before performing face detection on the first image to be processed and determining the face area and face key point information in the first image, the method further includes: displaying a defect removal control for the first image;
wherein performing face detection on the first image to be processed and determining the face area and face key point information in the first image includes: in response to the defect removal control being triggered, performing face detection on the first image, and determining the face area and face key point information in the first image.
In a second aspect, embodiments of the present disclosure provide an image processing device, which includes:
a face detection module, configured to perform face detection on a first image to be processed and determine the face area and face key point information in the first image; a skin detection module, configured to perform skin detection on the first image according to the face area and determine the skin area in the first image; a defect detection module, configured to perform defect detection on the skin area according to the face key point information and determine the defective area in the skin area; and a defect removal module, configured to, when a defective area exists in the skin area, perform defect removal processing on the defective area in the first image to obtain a processed second image.
In a third aspect, embodiments of the present disclosure provide an electronic device, including: one or more processors; and a memory for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the above image processing method.
In a fourth aspect, embodiments of the present disclosure provide a non-transitory computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps in the above image processing method.
Brief Description of the Drawings
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of some steps of an image processing method according to an embodiment of the present disclosure.
Fig. 3 is a schematic diagram of skin detection according to the related art and an embodiment of the present disclosure.
Fig. 4 is a flowchart of some steps of an image processing method according to an embodiment of the present disclosure.
Fig. 5 is a flowchart of some steps of an image processing method according to an embodiment of the present disclosure.
Fig. 6 is a schematic diagram of a defect frame and adjacent frames according to an embodiment of the present disclosure.
Fig. 7 is a block diagram of an image processing device according to an embodiment of the present disclosure.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
To enable those skilled in the art to better understand the technical solutions of the present disclosure, the present disclosure is described in further detail below in conjunction with the accompanying drawings and specific embodiments.
Unless otherwise defined, technical or scientific terms used in the present disclosure shall have the ordinary meaning understood by a person of ordinary skill in the field to which the present disclosure belongs. The terms "first", "second", and similar words used in the present disclosure do not denote any order, quantity, or importance, but are merely used to distinguish different components. Likewise, words such as "a", "an", or "the" do not denote a limitation of quantity, but rather denote the presence of at least one. Words such as "comprise" or "include" mean that the element or item preceding the word covers the elements or items listed after the word and their equivalents, without excluding other elements or items. Words such as "connected" or "coupled" are not limited to physical or mechanical connections, but may include electrical connections, whether direct or indirect. "Upper", "lower", "left", "right", and the like are used only to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationship may change accordingly.
In order to take satisfactory photos, people usually need to process photos with software that has photo-retouching functions. As retouching software becomes more widespread, users' requirements for its beautification functions keep rising: they expect the beautified result to be close to their real appearance yet better than reality, and are particularly sensitive to common skin-beautification effects.
The main principle of skin beautification is to apply smoothing and whitening operations to the whole image to achieve an overall beautification effect. However, the face area may contain blemishes such as moles, acne, and pimples, which generally cannot be removed cleanly when the whole image is beautified; and if the degree of smoothing is increased, texture is easily lost, causing unnatural distortion and a poor processing result.
The image processing method according to the embodiments of the present disclosure can guide skin detection based on face detection, perform defect detection on the skin area according to face key point information, and then remove defects in the skin area, which can improve the accuracy of skin detection and the precision of defect detection, thereby improving the defect removal effect and the beautification effect of the image.
The image processing method according to the embodiments of the present disclosure may be executed by an electronic device such as a terminal device or a server. The terminal device may be a vehicle-mounted device, user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a wearable device, etc. The method may be implemented by a processor invoking computer-readable program instructions stored in a memory. Alternatively, the method may be executed by a server.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present disclosure. The image processing method includes:
Step S11: performing face detection on a first image to be processed, and determining the face area and face key point information in the first image;
Step S12: performing skin detection on the first image according to the face area, and determining the skin area in the first image;
Step S13: performing defect detection on the skin area according to the face key point information, and determining the defective area in the skin area;
Step S14: when a defective area exists in the skin area, performing defect removal processing on the defective area in the first image to obtain a processed second image.
For example, the first image to be processed may be an image including a human face, such as a selfie or group photo captured by the camera of an electronic device such as a smartphone; the first image may also be an image obtained in other ways. The present disclosure does not limit the specific source of the first image.
In some possible implementations, a defect removal function for images may be provided in the electronic device; for example, a defect removal control is displayed in the image beautification interface. If the defect removal control is triggered (for example, the user clicks it), the defect removal function is enabled and the first image to be processed is processed.
In some possible implementations, in step S11, face detection may be performed on the first image to locate the face box (i.e., the face area) in the first image; face key points may then be extracted from the face area to obtain the face key point information in the first image, for example, the position information of 68 face key points. Face detection and face key point extraction may be implemented using detection methods in the related art (for example, using an open-source facial feature recognition library). The present disclosure does not limit the specific methods of face detection and face key point extraction.
In some possible implementations, in step S12, a target detection value for skin detection may be determined according to the pixel values of the pixels in the face area, such as the mean and/or standard deviation of the pixel values in the face area; a first limiting condition based on the target detection value may be set, the whole first image is searched, and pixels in the first image that satisfy the first limiting condition are taken as possible skin points, thereby obtaining at least one possible skin region (called a candidate skin region); then, the candidate skin region containing the face is taken as the skin area in the first image.
By using dynamic limiting conditions for skin detection, skin of different skin colors and under different lighting can be detected accurately. Compared with skin detection based on absolute thresholds, the embodiments of the present disclosure can achieve more accurate and reliable detection results.
In some possible implementations, after the skin area is obtained, defect detection may be performed on the skin area in step S13. Multiple grayscale thresholds may be preset, and the grayscale image of the first image is converted into multiple binary images, for example 20 binary images, to obtain a binary image set. The present disclosure does not limit the specific values of the grayscale thresholds or their number.
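A minimal sketch of building such a binary image set from the grayscale image; the threshold values, the "darker than threshold" polarity (suited to dark blemishes such as moles), and the nested-list image layout are illustrative assumptions.

```python
# Sketch of building the binary-image set: the grayscale image is thresholded
# at several gray levels; pixels darker than the threshold become 1
# (candidate blemish pixels for dark defects such as moles).
def binarize_set(gray, thresholds):
    images = []
    for t in thresholds:
        images.append([[1 if px < t else 0 for px in row] for row in gray])
    return images

# e.g. binarize_set(gray, range(40, 240, 10)) would yield 20 binary images
```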
In some possible implementations, a second limiting condition for defect detection may be determined according to the face key point information, for example, the size range of the defective area (e.g., proportional to the eye size) and the requirement that the defective area should be outside the facial features area. The second limiting condition may also include other contents, for example, the defective area needs to satisfy preset roundness and convexity thresholds; the present disclosure does not limit this.
In some possible implementations, the connected areas in each binary image can be extracted by detecting the boundaries of the binary image, and areas satisfying the second limiting condition are screened out from the connected areas as areas that may be defects in that binary image, called candidate defect areas. In this way, the candidate defect areas in each binary image in the binary image set can be obtained.
In some possible implementations, the candidate defect areas in the binary images may be classified according to the positions of the center pixels of the candidate defect areas in all binary images. For example, the distances between the center pixels of candidate defect areas in different binary images can be calculated; if the distance between center pixels is less than or equal to a preset threshold and the candidate defect area exists in all or most of the binary images, the candidate defect area can be considered an actual defect area.
In some possible implementations, for any defect area, its center point position and area size can be determined according to the center point positions and area sizes of the corresponding group of candidate defect areas, for example, the cluster center of the center point positions of the group and the median of their area sizes.
By performing the above processing on each defect area, the defect areas in the skin area of the first image can be obtained.
In some possible implementations, if no defect area is detected in step S13, no subsequent processing may be performed, and the unprocessed first image is returned and/or a prompt such as "No defect area detected" is given. If a defect area is detected in step S13, that is, a defect area exists in the skin area, defect removal processing may be performed on the defect area in the first image in step S14.
For example, multiple areas adjacent to the defect area can be selected, a target area for repairing the defect is selected from the multiple areas, and the area image of the defect area is replaced by the area image of the target area. In this way, each defect area is repaired separately to obtain the repaired image, called the second image.
According to the embodiments of the present disclosure, skin detection can be guided by face detection, defect detection can be performed on the skin area according to the face key point information, and defects in the skin area can then be removed, which can improve the accuracy of skin detection and the precision of defect detection, thereby improving the defect removal effect and the beautification effect of the image.
The image processing method of the embodiments of the present disclosure is described in detail below.
As mentioned above, a defect removal function for images may be provided in the electronic device, so that the user can choose whether to enable the function; when the function is enabled, defect detection and removal are performed on the image.
In some possible implementations, before step S11, the method may further include: displaying a defect removal control for the first image.
In this case, step S11 may include: in response to the defect removal control being triggered, performing face detection on the first image, and determining the face area and face key point information in the first image.
That is, the defect removal control may be displayed in the image beautification interface, for example as a touchable icon; if the user clicks the defect removal control, the defect removal function is enabled. The present disclosure does not limit the specific setting and triggering methods of the defect removal control.
In some possible implementations, if the user triggers the defect removal control in the interface, the electronic device, in response to the triggered control, enables the defect removal function, performs the face detection of step S11, and performs subsequent processing accordingly.
In this way, the defect removal function can be enabled based on user triggering, improving the flexibility of use and thus the user experience.
在一些可能的实现方式中，可在步骤S11中对待处理的第一图像进行人脸检测，确定第一图像中的人脸区域；并对人脸区域进行人脸关键点提取，得到第一图像中的人脸关键点信息，例如68个人脸关键点的位置坐标。其中，可采用相关技术中的检测方式（例如使用开源的人脸特征识别库）和人脸关键点提取方式，分别实现人脸检测和人脸关键点提取。本公开对人脸检测及人脸关键点提取的具体方式不做限制。
在一些可能的实现方式中,可在步骤S12中对第一图像进行皮肤检测。
相关技术中的皮肤检测方式通常有基于颜色空间、基于光谱特征及基于肤色反射模型等方式。这些检测方式的主要步骤都是先进行颜色空间的变换，再建立肤色模型进行处理。皮肤检测中的颜色空间有RGB、YCrCb、HSV和Lab等，通常在处理时将图像从RGB颜色空间变换成相应的颜色空间，再基于肤色聚类/阈值分割等方式进行处理，例如比较常用的有YCbCr/HSV/RGB/CIELAB颜色空间阈值分割。
其中，YCrCb颜色空间是一种常用于肤色检测的色彩模型，其中Y代表亮度，Cr代表光源中的红色分量，Cb代表光源中的蓝色分量。人的肤色在外观上的色差是由色度引起的，不同人的肤色分布集中在较小的区域内。相关技术中的皮肤分割算法，通常对Cr分量使用Otsu算法进行二值分割，可以大致算出肤色的聚类结果。但由于使用的是固定的经验阈值，误差很大，存在人脸区域检测不全、误检与肤色相近的背景像素等问题。
而根据本公开的实施例，能够根据人脸区域来引导皮肤检测，从而提高皮肤检测的精度。
图2为本公开的实施例的图像处理方法的部分步骤的流程图。在一些可能的实现方式中,如图2所示,步骤S12包括:
步骤S121,将所述第一图像转换到YCrCb空间,得到第三图像;
步骤S122,根据所述第三图像中人脸区域的像素点的像素值,确定用于皮肤检测的目标检测值;
步骤S123,从所述第三图像中确定出至少一个候选皮肤区域,所述候选皮肤区域中像素点的像素值满足基于所述目标检测值的第一限定条件;
步骤S124,将至少一个所述候选皮肤区域中与所述人脸区域连通的候选皮肤区域,确定为所述皮肤区域。
举例来说,可在步骤S121中对第一图像(例如为RGB图像)进行色彩转换,得到YCrCb空间的第三图像。本公开对色彩转换的具体方式不做限制。
在一些可能的实现方式中,在步骤S122中,可根据第三图像中人脸区域的像素点的像素值,确定用于皮肤检测的目标检测值。其中,目标检测值可包括:所述第三图像中人脸区域的像素点在Cr通道的像素值的第一均值Mcr和第一标准差Vcr,以及所述第三图像中人脸区域的像素点在Cb通道的像素值的第二均值Mcb和第二标准差Vcb。
其中,第一均值和第二均值可体现人脸区域(大部分为皮肤)的像素值所处的范围,第一标准差和第二标准差可体现人脸区域的像素值之间的差异。基于人脸区域确定出的目标检测值会根据图像的不同而变化,为动态值,更能够表征对应图像中皮肤的特点,从而提高检测精度。
在一些可能的实现方式中,可设置有基于目标检测值的第一限定条件;遍历第三图像中的所有像素点,判断各个像素点是否满足第一限定条件。其中,第一限定条件可表示为:
abs(Crn-Mcr)≤k*Vcr&&abs(Cbn-Mcb)≤k*Vcb    (1)
公式(1)中，abs()为绝对值函数，abs(Crn-Mcr)即为(Crn-Mcr)的绝对值；(Crn,Cbn)表示第n个像素点在Cr通道和Cb通道的像素值；Mcr表示第一均值；Vcr表示第一标准差；Mcb表示第二均值；Vcb表示第二标准差；&&表示逻辑与；k为预设的系数，根据3 sigma原则，k取值区间为[1,3]，例如取值为2或3。本领域技术人员可根据实际情况设定k的具体取值，本公开对此不做限制。
在一些可能的实现方式中,对于第三图像中的任意像素点,如果该像素点的Cr通道和Cb通道的像素值同时满足公式(1),则该像素点满足第一限定条件,将该像素点作为可能是皮肤的点(称为候选皮肤点)。这样,在步骤S123中,遍历整个第三图像中的像素点后,可得到候选皮肤点组成的至少一个连通区域,作为候选皮肤区域。本公开对连通区域的具体确定方式不做限制。
在一些可能的实现方式中,在步骤S124中,将至少一个候选皮肤区域中与人脸区域连通的候选皮肤区域,确定为皮肤区域。这样,能够去除与肤色相近的背景区域,进一步提高皮肤检测的精度。
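作为示意，步骤S122-S123中基于目标检测值的动态阈值判定可用如下Python代码草拟（假设第三图像为通道顺序Y、Cr、Cb的NumPy数组，人脸框坐标已知；函数与变量名仅为说明而设，并非本公开的限定实现）：

```python
import numpy as np

def skin_mask(ycrcb, face_box, k=2.0):
    """按公式(1)对整幅YCrCb图像做动态阈值判定, 得到候选皮肤点掩码。
    ycrcb: (H, W, 3)数组, 通道顺序为Y、Cr、Cb; face_box: (x1, y1, x2, y2)。"""
    x1, y1, x2, y2 = face_box
    face = ycrcb[y1:y2, x1:x2].astype(np.float64)
    mcr, vcr = face[..., 1].mean(), face[..., 1].std()  # 第一均值Mcr/第一标准差Vcr
    mcb, vcb = face[..., 2].mean(), face[..., 2].std()  # 第二均值Mcb/第二标准差Vcb
    cr = ycrcb[..., 1].astype(np.float64)
    cb = ycrcb[..., 2].astype(np.float64)
    # 公式(1): abs(Crn-Mcr)≤k*Vcr 且 abs(Cbn-Mcb)≤k*Vcb
    return (np.abs(cr - mcr) <= k * vcr) & (np.abs(cb - mcb) <= k * vcb)
```

得到掩码后，还需提取其中的连通区域，并保留与人脸区域连通的部分作为皮肤区域（对应步骤S123-S124）。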
应当理解,根据本公开的实施例也可以将第一图像变换到其他颜色空间,例如HSV、Lab等,基于人脸区域确定对应的目标检测值,并设定基于目标检测值的第一限定条件,以便实现皮肤检测,本公开对此不作限制。
图3为根据相关技术和本公开实施例的皮肤检测的示意图。如图3所示,第一行为待处理的第一图像,分别为不同肤色/不同光照下人员的合影;第二行和第三行分别为相关技术的皮肤检测结果和本公开实施例的皮肤检测结果。第二行中,根据相关技术的皮肤检测结果存在检测不准确、误检、漏检等情况;第三行中,根据本公开实施例的皮肤检测结果较为准确,满足实用要求。
可见,相比于根据绝对阈值进行皮肤检测的方式,本公开的实施例能够准确检测不同肤色/不同光照下的皮肤,有效减少背景区域的失真,提高皮肤检测的精度,取得更准确可靠的检测效果。
在得到皮肤区域后，可在步骤S13中进行瑕疵检测。图4为本公开的实施例的图像处理方法的部分步骤的流程图。在一些可能的实现方式中，如图4所示，步骤S13可包括：
步骤S131,根据所述第一图像的灰度图像以及预设的多个灰度阈值,确定与所述第一图像对应的多个二值图像;
步骤S132,根据所述人脸关键点信息,确定针对瑕疵区域的第二限定条件;
步骤S133,根据所述第二限定条件,对各个所述二值图像分别进行瑕疵提取,得到各个所述二值图像中的候选瑕疵区域;
步骤S134,根据各个所述二值图像中的候选瑕疵区域的中心像素点的位置,对各个所述二值图像中的候选瑕疵区域进行分类,确定出所述灰度图像中的瑕疵区域,所述瑕疵区域包括中心点位置和区域尺寸。
举例来说,可预设有多个灰度阈值,在步骤S131中将第一图像的灰度图像转换为多个二值图像,得到二值图像集合。本公开对灰度阈值的具体取值及灰度阈值的数量均不做限制。
在一些可能的实现方式中,可设置阈值范围为[T1,T2],步长为t,从而得到多个灰度阈值:T1,T1+t,T1+2t,……,T2。其中T1和T2为[0,255]之间的数值,例如分别取值为50和200;t可为一个较小的数值,例如取值为5、10等。本公开对此不做限制。
例如,在T1、T2、t分别取值为50、200和10的情况下,可得到16个灰度阈值50、60、70、80、……、190、200。基于这些灰度阈值对第一图像的灰度图像进行转换,可得到16个二值图像,组成二值图像集合。
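上述多阈值二值化过程可用如下代码草拟（此处以灰度值低于阈值的像素作为候选暗色瑕疵点，适用于黑痣等暗色瑕疵；检测白色瑕疵时可反转比较方向，该取法仅为假设性示意）：

```python
import numpy as np

def binarize_stack(gray, t1=50, t2=200, step=10):
    """用[t1, t2]内步长为step的多个灰度阈值, 将灰度图转换为一组二值图像。"""
    thresholds = list(range(t1, t2 + 1, step))
    # 每个阈值得到一幅二值图: 灰度值低于阈值的像素记为1(候选暗色瑕疵点)
    return thresholds, [(gray < t).astype(np.uint8) for t in thresholds]
```

例如在默认参数下，可得到正文所述的16个灰度阈值及对应的16幅二值图像。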
在一些可能的实现方式中,在步骤S132中,可根据步骤S11中得到的人脸关键点信息,确定针对瑕疵区域的第二限定条件,以缩小瑕疵检测的检测范围,提高检测的准确度。
其中,步骤S132可包括:根据所述人脸关键点信息,确定所述第一图像中的五官区域;在所述五官区域中包括眼睛区域的情况下,根据所述眼睛区域的尺寸,确定所述预设的尺寸区间。
在一些可能的实现方式中,根据人脸关键点信息,可确定出人脸的五官区域,包括眉毛区域、眼睛区域、鼻孔区域、嘴巴区域、耳朵区域中的至少一种。由于人脸角度、遮挡、光照的情况,可能会确定出全部或部分的五官区域。如果五官区域中存在眼睛区域,则可进一步确定出眼睛区域的尺寸,例如眼睛区域的面积。
在一些可能的实现方式中,第二限定条件可包括图像中连通区域的尺寸处于预设的尺寸区间内。如果确定出了眼睛区域,则可将该预设的尺寸区间设定为5个像素点至眼睛区域尺寸的0.1倍,以便去除过小或过大的连通区域;如果未确定出眼睛区域(比如眼睛被遮挡),则可直接设定一个适中的尺寸区间,例如5个像素点至人脸区域尺寸的0.01倍,或者5个像素点至200个像素点。应当理解,本领域技术人员可根据实际情况设定连通区域的尺寸区间,本公开对此不做限制。
在一些可能的实现方式中,第二限定条件还可包括连通区域在皮肤区域之内且在五官区域之外,以便去除皮肤区域之外和五官区域之内的连通区域,提高检测精度。
在一些可能的实现方式中,第二限定条件还可包括其他内容,例如连通区域的颜色为预设颜色、连通区域的圆度大于或等于圆度阈值、连通区域的凸度大于或等于凸度阈值、连通区域的偏心率小于或等于偏心率阈值等。这样,能够去除颜色、形状明显不符合要求的连通区域,进一步提高瑕疵检测的准确性。应当理解,本领域技术人员可根据实际情况设定第二限定条件,本公开对此不做限制。
在一些可能的实现方式中,可在步骤S133中对各个二值图像分别进行瑕疵区域提取。其中,步骤S133可包括:
针对任一二值图像,提取所述二值图像中的连通区域;将满足所述第二限定条件的连通区域,确定为所述二值图像中的候选瑕疵区域;
也就是说，对于二值图像集合中的任意一个二值图像，通过检测该二值图像中的边界的方式，提取出边界所围成的不同的连通区域。可通过相关技术中的处理方式实现连通区域的提取，本公开对此不做限制。
在一些可能的实现方式中,可从连通区域中筛选出满足第二限定条件的区域,作为该二值图像中可能为瑕疵的区域,称为候选瑕疵区域。
在一些可能的实现方式中,第二限定条件包括以下至少一种:
连通区域的颜色为预设颜色、连通区域的尺寸处于预设的尺寸区间内、连通区域的圆度大于或等于圆度阈值、连通区域的凸度大于或等于凸度阈值、连通区域的偏心率小于或等于偏心率阈值、连通区域在所述皮肤区域之内且在五官区域之外;
其中,所述五官区域包括眉毛区域、眼睛区域、鼻孔区域、嘴巴区域、耳朵区域中的至少一种,所述预设颜色包括黑色或白色。
在一些可能的实现方式中,可例如将预设颜色设定为黑色;尺寸区间设定为5个像素点至眼睛区域尺寸的0.1倍;圆度阈值设定为0.5;凸度阈值设定为0.9;偏心率阈值设定为0.3。应当理解,本领域技术人员可根据实际情况设定第二限定条件的具体内容及其中各项阈值、区间的取值,本公开对此不做限制。
在一些可能的实现方式中,所检测的瑕疵可以为皮肤区域可能存在的黑痣、粉刺、痘痘等任意类型的瑕疵。可根据实际检测的瑕疵的类型来设定第二限定条件,例如可在检测粉刺时将预设颜色设定为白色等,本公开对此不做限制。
这样,对二值图像集合中的各个二值图像分别进行处理,即可得到各个二值图像中的候选瑕疵区域。
通过设置限定条件,能够筛选出人脸区域中处于预设颜色、尺寸适中、更接近圆形的瑕疵,并保护五官区域,从而去除明显不属于瑕疵的连通区域,减少噪声,提高瑕疵检测的精度。
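步骤S133中连通区域的提取与尺寸筛选可按如下方式草拟（此处用纯Python的广度优先搜索提取4连通区域，仅示意尺寸条件；圆度、凸度、偏心率及五官区域等其余条件在实际实现中可类似地追加判断）：

```python
import numpy as np
from collections import deque

def candidate_regions(binary, min_size=5, max_size=200):
    """提取二值图中的连通区域, 并按第二限定条件中的尺寸区间筛选,
    返回每个候选瑕疵区域的像素坐标列表。"""
    h, w = binary.shape
    seen = np.zeros((h, w), dtype=bool)
    regions = []
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                q, pixels = deque([(sy, sx)]), []
                seen[sy, sx] = True
                while q:  # BFS提取一个4连通区域
                    y, x = q.popleft()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if min_size <= len(pixels) <= max_size:  # 尺寸处于预设区间内
                    regions.append(pixels)
    return regions
```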
在一些可能的实现方式中,可在步骤S134中,根据各个二值图像中的候选瑕疵区域的中心像素点的位置,对各个二值图像中的候选瑕疵区域进行分类,确定出灰度图像中的瑕疵区域。其中,步骤S134可包括:
根据各个所述二值图像中的候选瑕疵区域的中心像素点的位置,确定至少一个区域分组,所述区域分组中的候选瑕疵区域属于不同的二值图像、所述区域分组中的候选瑕疵区域的中心像素点之间的距离小于或等于距离阈值,且所述区域分组中的候选瑕疵区域的数量大于或等于数量阈值;
根据所述区域分组中候选瑕疵区域的中心像素点的位置,确定与所述区域分组对应的瑕疵区域的中心点位置;
根据所述区域分组中候选瑕疵区域的尺寸,确定与所述区域分组对应的瑕疵区域的区域尺寸。
举例来说,可分别计算不同二值图像的候选瑕疵区域的中心像素点之间的距离;如果中心像素点之间的距离小于或等于预设的距离阈值Tb,则可认为对应的候选瑕疵区域为同一个区域;如果全部或大部分的二值图像中均存在该对应的候选瑕疵区域,则可认为这一组候选瑕疵区域为实际的瑕疵区域,得到一个区域分组。
这样,对所有二值图像的候选瑕疵区域进行处理后,可得到至少一个区域分组,每个区域分组对应于一个瑕疵区域。其中,同一区域分组中候选瑕疵区域属于不同的二值图像,该区域分组中的候选瑕疵区域的中心像素点之间的距离小于或等于距离阈值,且该区域分组中的候选瑕疵区域的数量大于或等于数量阈值。本公开对距离阈值和数量阈值的具体取值不做限制。
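上述按中心距离分组的过程可草拟如下（简化版：以组内首个候选的中心作为比较基准，且未显式校验各候选来自不同的二值图像，仅为流程示意）：

```python
def group_candidates(centers_per_image, dist_thresh=3.0, count_thresh=None):
    """centers_per_image: 各二值图像的候选瑕疵中心列表[[(x, y), ...], ...]。
    中心距离小于等于dist_thresh的候选归入同组; 组内候选数达到count_thresh
    (默认取二值图像数量的一半)时, 认为对应一个实际的瑕疵区域。"""
    if count_thresh is None:
        count_thresh = max(1, len(centers_per_image) // 2)
    groups = []
    for centers in centers_per_image:
        for c in centers:
            for g in groups:
                gx, gy = g[0]  # 以组内首个候选的中心作为该组的基准
                if ((c[0] - gx) ** 2 + (c[1] - gy) ** 2) ** 0.5 <= dist_thresh:
                    g.append(c)
                    break
            else:
                groups.append([c])
    return [g for g in groups if len(g) >= count_thresh]
```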
在一些可能的实现方式中,针对任意一个区域分组,可根据该区域分组中候选瑕疵区域的中心像素点的位置,确定与该区域分组对应的瑕疵区域的中心点位置。其中,确定与该区域分组对应的瑕疵区域的中心点位置的步骤可包括:
根据所述区域分组中的各个候选瑕疵区域的惯性率,分别确定各个所述候选瑕疵区域的权值;
根据各个所述候选瑕疵区域的权值,将各个所述候选瑕疵区域的中心像素点的位置的加权和,确定为与所述区域分组对应的瑕疵区域的中心点位置。
也就是说，可分别确定该区域分组中的各个候选瑕疵区域的惯性率，将各个候选瑕疵区域的惯性率的平方进行归一化处理，得到各个候选瑕疵区域的权值q。其中，权值q的含义是：二值图像中候选瑕疵区域的形状越接近圆形，越接近期望检测的瑕疵，因此对灰度图像中瑕疵区域位置的贡献就越大。
在一些可能的实现方式中,根据各个候选瑕疵区域的权值,可将各个候选瑕疵区域的中心像素点的位置(像素点坐标)的加权和,确定为与该区域分组对应的瑕疵区域的中心点位置。通过这种方式,能够提高瑕疵区域位置的准确性。
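按惯性率平方归一化求加权中心的计算可草拟如下（惯性率越接近1表示区域越接近圆形，权值越大；函数名仅为说明而设）：

```python
def fused_center(centers, inertia_ratios):
    """将各候选瑕疵区域惯性率的平方归一化为权值, 对中心像素点位置加权求和,
    得到该区域分组对应的瑕疵区域的中心点位置。"""
    weights = [r * r for r in inertia_ratios]
    total = sum(weights)
    weights = [w / total for w in weights]  # 归一化, 权值之和为1
    cx = sum(w * c[0] for w, c in zip(weights, centers))
    cy = sum(w * c[1] for w, c in zip(weights, centers))
    return cx, cy
```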
在一些可能的实现方式中,针对任意一个区域分组,可对该区域分组中各个候选瑕疵区域的尺寸(例如面积或半径)进行排序,将处于排序中间的尺寸确定为与该区域分组对应的瑕疵区域的区域尺寸;还可以求取该区域分组中各个候选瑕疵区域的尺寸的均值,将该均值确定为与该区域分组对应的瑕疵区域的区域尺寸。本公开对此不做限制。
这样,对所有区域分组分别进行处理后,可得到灰度图像中的瑕疵区域,也即第一图像中的瑕疵区域,从而完成瑕疵检测的整个过程。
通过这种方式,能够确定出较为准确的瑕疵区域,提高瑕疵检测的精度。
在一些可能的实现方式中,如果步骤S13中未检测到瑕疵区域,比如步骤S133中未提取到候选瑕疵区域,或者步骤S134中分类后未得到满足条件的瑕疵区域,则可不进行后续处理,返回未处理的第一图像,和/或提示“未检测到瑕疵区域”等。
在一些实施例中,如果步骤S13中检测到瑕疵区域,也即皮肤区域中存在瑕疵区域,则可在步骤S14中进行瑕疵去除。
图5为本公开的实施例的图像处理方法的部分步骤的流程图。在一些可能的实现方式中,如图5所示,步骤S14可包括:
步骤S141,针对任一瑕疵区域,确定与所述瑕疵区域对应的瑕疵框,以及所述瑕疵框的多个邻接框,所述邻接框的尺寸与所述瑕疵框的尺寸相同,且所述邻接框在所述皮肤区域内;
步骤S142,分别对所述瑕疵框及多个所述邻接框进行梯度滤波及梯度求取,得到所述瑕疵框及多个所述邻接框的平均梯度值;
步骤S143,根据所述瑕疵框及多个所述邻接框的平均梯度值,从所述瑕疵框及多个所述邻接框中确定出目标框;
步骤S144,在所述目标框为邻接框的情况下,采用所述目标框的区域图像替换所述瑕疵框的区域图像;
步骤S145,在所述第一图像中的瑕疵框的区域图像均完成替换的情况下,得到所述第二图像。
举例来说,可对第一图像中的各个瑕疵区域分别进行瑕疵去除处理。针对任意一个瑕疵区域,可在步骤S141中确定与该瑕疵区域对应的瑕疵框。步骤S13中得到的瑕疵区域可能为圆形或不规则的图形,可将瑕疵区域的外接矩形作为与该瑕疵区域对应的瑕疵框,从而简化后续处理的难度。
在一些可能的实现方式中,还可确定该瑕疵框的多个邻接框,也即在该瑕疵框周围选择上下左右的四个、与该瑕疵框邻接的矩形框。邻接框的尺寸与对应的瑕疵框的尺寸相同,且邻接框在皮肤区域内。邻接框的位置与瑕疵框的位置较近,皮肤颜色、光照等均较为接近,采用邻接框来实现瑕疵去除,能够提升处理后的图像的平滑程度,从而提高处理效果。
图6为本公开的实施例的瑕疵框及邻接框的示意图。如图6所示,皮肤区域中包括多个瑕疵区域,即图6中圆形或近似于圆形的框。对于某个瑕疵区域61,将其外接矩形作为瑕疵框;选取与该瑕疵框邻接的四个矩形区域,作为邻接框(也可称为邻域)。
在一些可能的实现方式中,可在步骤S142中分别对所述瑕疵框及多个所述邻接框进行梯度滤波及梯度求取。其中,步骤S142可包括:
针对所述瑕疵框及多个所述邻接框中的任一区域框,对所述区域框的区域图像进行梯度横向滤波和梯度纵向滤波,得到所述区域框的横向滤波图和纵向滤波图;
根据所述横向滤波图和所述纵向滤波图,确定所述区域框的梯度图;
将所述梯度图中各个点的平均值,确定为所述区域框的平均梯度值。
也就是说,对于瑕疵框及多个邻接框中的任意一个区域框,可对区域框中的像素点的像素值(例如灰度值)分别进行Sobel梯度横向滤波和梯度纵向滤波,得到该区域框的横向滤波图和纵向滤波图。
其中,梯度横向滤波和梯度纵向滤波所采用的横向滤波器和纵向滤波器的Sobel算子,可表示如下:
X横向滤波器:
[-1  0  +1]
[-2  0  +2]
[-1  0  +1]
Y纵向滤波器:
[-1  -2  -1]
[ 0   0   0]
[+1  +2  +1]
应当理解,本领域技术人员可以采用其它任意类型的滤波器实现梯度横向滤波和梯度纵向滤波,本公开对此不作限制。
在一些可能的实现方式中,可求取横向滤波图和纵向滤波图中各个点的梯度值,得到梯度图。设区域框中任意一个像素点在横向滤波图中的梯度值为Gx,在纵向滤波图中的梯度值为Gy,则该像素点的梯度值G可表示为:
G=√(Gx²+Gy²)
这样,对区域框中的所有像素点分别进行处理,即可得到区域框的梯度图。
在一些可能的实现方式中,可求取梯度图中各个点的梯度值G的平均值,得到该区域框的平均梯度值mean(G)。
通过这种方式，分别对瑕疵框及多个邻接框进行处理，即可得到瑕疵框及多个邻接框的平均梯度值。其中，平均梯度值用于表示图像多维方向上密度变化的速率，能够表征区域图像的平滑程度。平均梯度值越小，表示区域内的灰度变化越平缓，越不含瑕疵等突变细节，区域图像质量越好。
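步骤S142的梯度滤波与平均梯度求取可用纯NumPy草拟如下（Sobel算子取标准3×3模板；为避免边界填充，输出梯度图比输入各少2行2列，仅为示意）：

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)

def _filter2d(img, kernel):
    """3x3相关滤波(不翻转核), 输出比输入各少2行2列, 从而无需边界填充。"""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return out

def mean_gradient(patch):
    """对区域框图像分别做横向/纵向Sobel滤波, 求梯度图后取平均, 即mean(G)。"""
    gx = _filter2d(patch.astype(np.float64), SOBEL_X)
    gy = _filter2d(patch.astype(np.float64), SOBEL_Y)
    g = np.sqrt(gx ** 2 + gy ** 2)  # G=√(Gx²+Gy²)
    return g.mean()
```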
在一些可能的实现方式中,在步骤S143中,根据瑕疵框及多个邻接框的平均梯度值,可将平均梯度值最小的框作为目标框,该目标框的图像质量最好。
在一些可能的实现方式中,在步骤S144中,如果目标框为邻接框,则采用第一图像在该目标框中的区域图像替换瑕疵框的区域图像,实现该瑕疵框的瑕疵去除处理。
在一些可能的实现方式中,如果目标框为瑕疵框本身,则表示邻接框的质量比瑕疵框的质量差,无法用邻接框进行瑕疵去除。该情况下,可进一步向外选取,例如选取与各个邻接框邻接的图像框,对各个图像框进行梯度滤波及梯度求取,得到平均梯度值;选取平均梯度值最小的图像框的区域图像替换瑕疵框的区域图像,实现该瑕疵框的瑕疵去除处理。本公开对再次选取图像框时的具体选取位置不作限制。
在一些可能的实现方式中,对于第一图像中的各个瑕疵框,可通过步骤S141-S144分别进行处理。在步骤S145中,如果第一图像中的各个瑕疵框的区域图像均完成替换,则得到处理后的第二图像,完成瑕疵去除的处理过程。
通过这种邻域补偿的处理方式来实现图像中的瑕疵去除,能够提升处理后的图像的平滑程度,取得比较好的瑕疵去除效果。
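整个邻域补偿流程（步骤S141-S144）可草拟如下（仅处理图像边界，未校验邻接框是否落在皮肤区域内，也未处理目标框为瑕疵框本身时继续外扩的情形；mean_grad_fn为计算平均梯度值的函数，由调用方传入，均为示意性假设）：

```python
import numpy as np

def remove_blemish(image, box, mean_grad_fn):
    """邻域补偿示意: 在瑕疵框上下左右取同尺寸邻接框,
    用平均梯度值最小的区域图像替换瑕疵框的区域图像。box为(x, y, w, h)。"""
    x, y, w, h = box
    candidates = [(x, y)]  # 瑕疵框本身
    for nx, ny in ((x, y - h), (x, y + h), (x - w, y), (x + w, y)):  # 上下左右
        if 0 <= nx and 0 <= ny and ny + h <= image.shape[0] and nx + w <= image.shape[1]:
            candidates.append((nx, ny))
    grads = [mean_grad_fn(image[cy:cy + h, cx:cx + w]) for cx, cy in candidates]
    bx, by = candidates[int(np.argmin(grads))]
    if (bx, by) != (x, y):  # 目标框为邻接框时才替换
        image[y:y + h, x:x + w] = image[by:by + h, bx:bx + w]
    return image
```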
根据本公开实施例的图像处理方法,能够根据人脸检测引导正确合理的皮肤检测;针对皮肤区域进行瑕疵检测,同时提高检测精度并降低耗时;并且使用邻域补偿的方式去除瑕疵,准确去除人像中的瑕疵,美化皮肤。
根据本公开实施例的图像处理方法，能够对不同的人脸使用动态阈值进行皮肤检测，能够准确检测不同肤色/不同光照下的皮肤；相比根据绝对阈值进行检测的方法，本公开的实施例能取得更可靠的检测效果。
根据本公开实施例的图像处理方法,能够根据人脸关键点信息,自适应地初始化用于瑕疵检测的检测器,例如根据眼睛区域的尺寸来限定检测器的参数,也即第二限定条件,从而缩小瑕疵检测的检测范围,提高检测的准确度。
根据本公开实施例的图像处理方法，能够通过邻域补偿的方式，对检测出的瑕疵进行去除，人脸瑕疵去除的效果明显，人像皮肤质量明显提升，能够达到一种既保持皮肤纹理，又去除瑕疵的美化效果。根据本公开的实施例能够与轻度磨皮相结合，显著提升对人脸的美化效果。
根据本公开的实施例,还提供了一种图像处理装置。图7为本公开的实施例的图像处理装置的框图。如图7所示,该装置包括:
人脸检测模块71,用于对待处理的第一图像进行人脸检测,确定所述第一图像中的人脸区域及人脸关键点信息;
皮肤检测模块72,用于根据所述人脸区域,对所述第一图像进行皮肤检测,确定所述第一图像中的皮肤区域;
瑕疵检测模块73,用于根据所述人脸关键点信息,对所述皮肤区域进行瑕疵检测,确定所述皮肤区域中的瑕疵区域;
瑕疵去除模块74,用于在所述皮肤区域中存在瑕疵区域的情况下,对所述第一图像中的瑕疵区域进行瑕疵去除处理,得到处理后的第二图像。
在一些可能的实现方式中,皮肤检测模块72,用于:
将所述第一图像转换到YCrCb空间,得到第三图像;根据所述第三图像中人脸区域的像素点的像素值,确定用于皮肤检测的目标检测值;从所述第三图像中确定出至少一个候选皮肤区域,所述候选皮肤区域中像素点的像素值满足基于所述目标检测值的第一限定条件;将至少一个所述候选皮肤区域中与所述人脸区域连通的候选皮肤区域,确定为所述皮肤区域。
在一些可能的实现方式中,所述目标检测值包括:所述第三图像中人脸区域的像素点在Cr通道的像素值的第一均值和第一标准差,以及所述第三图像中人脸区域的像素点在Cb通道的像素值的第二均值和第二标准差。
在一些可能的实现方式中,瑕疵检测模块73,用于:
根据所述第一图像的灰度图像以及预设的多个灰度阈值，确定与所述第一图像对应的多个二值图像；根据所述人脸关键点信息，确定针对瑕疵区域的第二限定条件；根据所述第二限定条件，对各个所述二值图像分别进行瑕疵提取，得到各个所述二值图像中的候选瑕疵区域；根据各个所述二值图像中的候选瑕疵区域的中心像素点的位置，对各个所述二值图像中的候选瑕疵区域进行分类，确定出所述灰度图像中的瑕疵区域，所述瑕疵区域包括中心点位置和区域尺寸。
在一些可能的实现方式中，所述根据所述第二限定条件，对各个所述二值图像分别进行瑕疵提取，得到各个所述二值图像中的候选瑕疵区域，包括：
针对任一二值图像,提取所述二值图像中的连通区域;将满足所述第二限定条件的连通区域,确定为所述二值图像中的候选瑕疵区域;其中,所述第二限定条件包括以下至少一种:连通区域的颜色为预设颜色、连通区域的尺寸处于预设的尺寸区间内、连通区域的圆度大于或等于圆度阈值、连通区域的凸度大于或等于凸度阈值、连通区域的偏心率小于或等于偏心率阈值、连通区域在所述皮肤区域之内且在五官区域之外;其中,所述五官区域包括眉毛区域、眼睛区域、鼻孔区域、嘴巴区域、耳朵区域中的至少一种,所述预设颜色包括黑色或白色。
在一些可能的实现方式中,所述根据所述人脸关键点信息,确定针对瑕疵区域的第二限定条件,包括:根据所述人脸关键点信息,确定所述第一图像中的所述五官区域;在所述五官区域中包括眼睛区域的情况下,根据所述眼睛区域的尺寸,确定所述预设的尺寸区间。
在一些可能的实现方式中,根据各个所述二值图像中的候选瑕疵区域的中心像素点的位置,对各个所述二值图像中的候选瑕疵区域进行分类,确定出所述灰度图像中的瑕疵区域,包括:根据各个所述二值图像中的候选瑕疵区域的中心像素点的位置,确定至少一个区域分组,所述区域分组中的候选瑕疵区域属于不同的二值图像、所述区域分组中的候选瑕疵区域的中心像素点之间的距离小于或等于距离阈值,且所述区域分组中的候选瑕疵区域的数量大于或等于数量阈值;根据所述区域分组中候选瑕疵区域的中心像素点的位置,确定与所述区域分组对应的瑕疵区域的中心点位置;根据所述区域分组中候选瑕疵区域的尺寸,确定与所述区域分组对应的瑕疵区域的区域尺寸。
在一些可能的实现方式中,根据所述区域分组中候选瑕疵区域的中心像素点的位置,确定与所述区域分组对应的瑕疵区域的中心点位置,包括:
根据所述区域分组中的各个候选瑕疵区域的惯性率,分别确定各个所述候选瑕疵区域的权值;根据各个所述候选瑕疵区域的权值,将各个所述候选瑕疵区域的中心像素点的位置的加权和,确定为与所述区域分组对应的瑕疵区域的中心点位置。
在一些可能的实现方式中,瑕疵去除模块74,用于:
针对任一瑕疵区域,确定与所述瑕疵区域对应的瑕疵框,以及所述瑕疵框的多个邻接框,所述邻接框的尺寸与所述瑕疵框的尺寸相同,且所述邻接框在所述皮肤区域内;分别对所述瑕疵框及多个所述邻接框进行梯度滤波及梯度求取,得到所述瑕疵框及多个所述邻接框的平均梯度值;根据所述瑕疵框及多个所述邻接框的平均梯度值,从所述瑕疵框及多个所述邻接框中确定出目标框;在所述目标框为邻接框的情况下,采用所述目标框的区域图像替换所述瑕疵框的区域图像;在所述第一图像中的瑕疵框的区域图像均完成替换的情况下,得到所述第二图像。
在一些可能的实现方式中,所述分别对所述瑕疵框及多个所述邻接框进行梯度滤波及梯度求取,得到所述瑕疵框及多个所述邻接框的平均梯度值,包括:
针对所述瑕疵框及多个所述邻接框中的任一区域框,对所述区域框的区域图像进行梯度横向滤波和梯度纵向滤波,得到所述区域框的横向滤波图和纵向滤波图;根据所述横向滤波图和所述纵向滤波图,确定所述区域框的梯度图;将所述梯度图中各个点的平均值,确定为所述区域框的平均梯度值。
在一些可能的实现方式中,在所述人脸检测模块之前,所述装置还包括:控件显示模块,用于显示针对所述第一图像的瑕疵去除控件;其中,所述人脸检测模块用于:响应于所述瑕疵去除控件被触发,对所述第一图像进行人脸检测,确定所述第一图像中的人脸区域及人脸关键点信息。
图8为本公开实施例的一种电子设备的结构示意图。如图8所示，本公开实施例提供一种电子设备，包括：一个或多个处理器101、存储器102、一个或多个I/O接口103。存储器102上存储有一个或多个程序，当该一个或多个程序被该一个或多个处理器执行，使得该一个或多个处理器实现如上述实施例中任一的图像处理方法；一个或多个I/O接口103连接在处理器与存储器之间，配置为实现处理器与存储器的信息交互。
其中,处理器101为具有数据处理能力的器件,其包括但不限于中央处理器(CPU)等;存储器102为具有数据存储能力的器件,其包括但不限于随机存取存储器(RAM,更具体如SDRAM、DDR等)、只读存储器(ROM)、带电可擦可编程只读存储器(EEPROM)、闪存(FLASH);I/O接口(读写接口)103连接在处理器101与存储器102间,能实现处理器101与存储器102的信息交互,其包括但不限于数据总线(Bus)等。
在一些实施例中,处理器101、存储器102和I/O接口103通过总线104相互连接,进而与计算设备的其它组件连接。
根据本公开的实施例，还提供一种非瞬态计算机可读存储介质。该非瞬态计算机可读存储介质上存储有计算机程序，其中，该程序被处理器执行时实现如上述实施例中任一的图像处理方法中的步骤。
特别地,根据本公开实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在机器可读存储介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分从网络上被下载和安装,和/或从可拆卸介质被安装。在该计算机程序被中央处理单元(CPU)执行时,执行本公开的***中限定的上述功能。
需要说明的是，本公开所示的计算机可读存储介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的***、装置或器件，或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于：具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中，计算机可读存储介质可以是任何包含或存储程序的有形介质，该程序可以被指令执行***、装置或者器件使用或者与其结合使用。而在本公开中，计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号，其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式，包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质，该计算机可读介质可以发送、传播或者传输用于由指令执行***、装置或者器件使用或者与其结合使用的程序。计算机可读存储介质上包含的程序代码可以用任何适当的介质传输，包括但不限于：无线、电线、光缆、RF等等，或者上述的任意合适的组合。
附图中的流程图和框图,图示了按照本公开各种实施例的***、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,前述模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的***来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的电路或子电路可以通过软件的方式实现，也可以通过硬件的方式来实现。所描述的电路或子电路也可以设置在处理器中，例如，可以描述为：一种处理器，包括：接收电路和处理电路，该处理电路包括写入子电路和读取子电路。其中，这些电路或子电路的名称在某种情况下并不构成对该电路或子电路本身的限定，例如，接收电路还可以被描述为"接收视频信号的电路"。
可以理解的是，以上实施方式仅仅是为了说明本公开的原理而采用的示例性实施方式，然而本公开并不局限于此。对于本领域内的普通技术人员而言，在不脱离本公开的精神和实质的情况下，可以做出各种变型和改进，这些变型和改进也视为本公开的保护范围。

Claims (14)

  1. 一种图像处理方法,包括:
    对待处理的第一图像进行人脸检测,确定所述第一图像中的人脸区域及人脸关键点信息;
    根据所述人脸区域,对所述第一图像进行皮肤检测,确定所述第一图像中的皮肤区域;
    根据所述人脸关键点信息,对所述皮肤区域进行瑕疵检测,确定所述皮肤区域中的瑕疵区域;
    在所述皮肤区域中存在瑕疵区域的情况下,对所述第一图像中的瑕疵区域进行瑕疵去除处理,得到处理后的第二图像。
  2. 根据权利要求1所述的方法,其中,所述根据所述人脸区域,对所述第一图像进行皮肤检测,确定所述第一图像中的皮肤区域,包括:
    将所述第一图像转换到YCrCb空间,得到第三图像;
    根据所述第三图像中人脸区域的像素点的像素值,确定用于皮肤检测的目标检测值;
    从所述第三图像中确定出至少一个候选皮肤区域,所述候选皮肤区域中像素点的像素值满足基于所述目标检测值的第一限定条件;
    将至少一个所述候选皮肤区域中与所述人脸区域连通的候选皮肤区域,确定为所述皮肤区域。
  3. 根据权利要求2所述的方法,其中,所述目标检测值包括:所述第三图像中人脸区域的像素点在Cr通道的像素值的第一均值和第一标准差,以及所述第三图像中人脸区域的像素点在Cb通道的像素值的第二均值和第二标准差。
  4. 根据权利要求1所述的方法,其中,所述根据所述人脸关键点信息,对所述皮肤区域进行瑕疵检测,确定所述皮肤区域中的瑕疵区域,包括:
    根据所述第一图像的灰度图像以及预设的多个灰度阈值,确定与所述第一图像对应的多个二值图像;
    根据所述人脸关键点信息,确定针对瑕疵区域的第二限定条件;
    根据所述第二限定条件,对各个所述二值图像分别进行瑕疵提取,得到各个所述二值图像中的候选瑕疵区域;
    根据各个所述二值图像中的候选瑕疵区域的中心像素点的位置,对各个所述二值图像中的候选瑕疵区域进行分类,确定出所述灰度图像中的瑕疵区域,所述瑕疵区域包括中心点位置和区域尺寸。
  5. 根据权利要求4所述的方法，其中，所述根据所述第二限定条件，对各个所述二值图像分别进行瑕疵提取，得到各个所述二值图像中的候选瑕疵区域，包括：
    针对任一二值图像,提取所述二值图像中的连通区域;
    将满足所述第二限定条件的连通区域,确定为所述二值图像中的候选瑕疵区域;
    其中,所述第二限定条件包括以下至少一种:
    连通区域的颜色为预设颜色、连通区域的尺寸处于预设的尺寸区间内、连通区域的圆度大于或等于圆度阈值、连通区域的凸度大于或等于凸度阈值、连通区域的偏心率小于或等于偏心率阈值、连通区域在所述皮肤区域之内且在五官区域之外;
    其中,所述五官区域包括眉毛区域、眼睛区域、鼻孔区域、嘴巴区域、耳朵区域中的至少一种,所述预设颜色包括黑色或白色。
  6. 根据权利要求5所述的方法,其中,所述根据所述人脸关键点信息,确定针对瑕疵区域的第二限定条件,包括:
    根据所述人脸关键点信息,确定所述第一图像中的所述五官区域;
    在所述五官区域中包括眼睛区域的情况下,根据所述眼睛区域的尺寸,确定所述预设的尺寸区间。
  7. 根据权利要求4所述的方法,其中,根据各个所述二值图像中的候选瑕疵区域的中心像素点的位置,对各个所述二值图像中的候选瑕疵区域进行分类,确定出所述灰度图像中的瑕疵区域,包括:
    根据各个所述二值图像中的候选瑕疵区域的中心像素点的位置,确定至少一个区域分组,所述区域分组中的候选瑕疵区域属于不同的二值图像、所述区域分组中的候选瑕疵区域的中心像素点之间的距离小于或等于距离阈值,且所述区域分组中的候选瑕疵区域的数量大于或等于数量阈值;
    根据所述区域分组中候选瑕疵区域的中心像素点的位置,确定与所述区域分组对应的瑕疵区域的中心点位置;
    根据所述区域分组中候选瑕疵区域的尺寸,确定与所述区域分组对应的瑕疵区域的区域尺寸。
  8. 根据权利要求7所述的方法,其中,根据所述区域分组中候选瑕疵区域的中心像素点的位置,确定与所述区域分组对应的瑕疵区域的中心点位置,包括:
    根据所述区域分组中的各个候选瑕疵区域的惯性率,分别确定各个所述候选瑕疵区域的权值;
    根据各个所述候选瑕疵区域的权值,将各个所述候选瑕疵区域的中心像素点的位置的加权和,确定为与所述区域分组对应的瑕疵区域的中心点位置。
  9. 根据权利要求1-8任一所述的方法,其中,所述对所述第一图像中的瑕疵区域进行瑕疵去除处理,得到处理后的第二图像,包括:
    针对任一瑕疵区域,确定与所述瑕疵区域对应的瑕疵框,以及所述瑕疵框的多个邻接框,所述邻接框的尺寸与所述瑕疵框的尺寸相同,且所述邻接框在所述皮肤区域内;
    分别对所述瑕疵框及多个所述邻接框进行梯度滤波及梯度求取,得到所述瑕疵框及多个所述邻接框的平均梯度值;
    根据所述瑕疵框及多个所述邻接框的平均梯度值,从所述瑕疵框及多个所述邻接框中确定出目标框;
    在所述目标框为邻接框的情况下,采用所述目标框的区域图像替换所述瑕疵框的区域图像;
    在所述第一图像中的瑕疵框的区域图像均完成替换的情况下,得到所述第二图像。
  10. 根据权利要求9所述的方法,其中,所述分别对所述瑕疵框及多个所述邻接框进行梯度滤波及梯度求取,得到所述瑕疵框及多个所述邻接框的平均梯度值,包括:
    针对所述瑕疵框及多个所述邻接框中的任一区域框,对所述区域框的区域图像进行梯度横向滤波和梯度纵向滤波,得到所述区域框的横向滤波图和纵向滤波图;
    根据所述横向滤波图和所述纵向滤波图,确定所述区域框的梯度图;
    将所述梯度图中各个点的平均值,确定为所述区域框的平均梯度值。
  11. 根据权利要求1所述的方法,其中,在所述对待处理的第一图像进行人脸检测,确定所述第一图像中的人脸区域及人脸关键点信息之前,所述方法还包括:
    显示针对所述第一图像的瑕疵去除控件;
    其中,所述对待处理的第一图像进行人脸检测,确定所述第一图像中的人脸区域及人脸关键点信息,包括:
    响应于所述瑕疵去除控件被触发,对所述第一图像进行人脸检测,确定所述第一图像中的人脸区域及人脸关键点信息。
  12. 一种图像处理装置,包括:
    人脸检测模块,用于对待处理的第一图像进行人脸检测,确定所述第一图像中的人脸区域及人脸关键点信息;
    皮肤检测模块,用于根据所述人脸区域,对所述第一图像进行皮肤检测,确定所述第一图像中的皮肤区域;
    瑕疵检测模块，用于根据所述人脸关键点信息，对所述皮肤区域进行瑕疵检测，确定所述皮肤区域中的瑕疵区域；
    瑕疵去除模块,用于在所述皮肤区域中存在瑕疵区域的情况下,对所述第一图像中的瑕疵区域进行瑕疵去除处理,得到处理后的第二图像。
  13. 一种电子设备,包括:
    一个或多个处理器;
    存储器,用于存储一个或多个程序;
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求1至11中任一所述的图像处理方法。
  14. 一种非瞬态计算机可读存储介质,其上存储有计算机程序,其中,所述计算机程序在被处理器执行时实现如权利要求1至11中任一所述的图像处理方法中的步骤。
PCT/CN2022/094361 2022-05-23 2022-05-23 图像处理方法及装置、电子设备、计算机可读存储介质 WO2023225774A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202280001363.XA CN117501326A (zh) 2022-05-23 2022-05-23 图像处理方法及装置、电子设备、计算机可读存储介质
PCT/CN2022/094361 WO2023225774A1 (zh) 2022-05-23 2022-05-23 图像处理方法及装置、电子设备、计算机可读存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/094361 WO2023225774A1 (zh) 2022-05-23 2022-05-23 图像处理方法及装置、电子设备、计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2023225774A1 true WO2023225774A1 (zh) 2023-11-30

Family

ID=88918099

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/094361 WO2023225774A1 (zh) 2022-05-23 2022-05-23 图像处理方法及装置、电子设备、计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN117501326A (zh)
WO (1) WO2023225774A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117952986B (zh) * 2024-03-27 2024-05-28 远景睿泰动力技术(上海)有限公司 瑕疵检测方法、装置、电子设备及存储介质

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104318262A (zh) * 2014-09-12 2015-01-28 上海明穆电子科技有限公司 通过人脸照片更换皮肤的方法及***
CN107103298A (zh) * 2017-04-21 2017-08-29 桂林电子科技大学 基于图像处理的引体向上计数***及计数方法
CN109003236A (zh) * 2018-06-29 2018-12-14 上海本趣网络科技有限公司 一种基于人脸色调与光影分离的自适应磨皮方法及***
CN109325468A (zh) * 2018-10-18 2019-02-12 广州智颜科技有限公司 一种图像处理方法、装置、计算机设备和存储介质
CN109389562A (zh) * 2018-09-29 2019-02-26 深圳市商汤科技有限公司 图像修复方法及装置
CN110533648A (zh) * 2019-08-28 2019-12-03 上海复硕正态企业管理咨询有限公司 一种黑头识别处理方法及***
KR20200055884A (ko) * 2018-11-14 2020-05-22 이시은 피부 진단용 이미지 데이터 처리 방법 및 이를 이용하는 피부 진단용 예약 방법
US20200311386A1 (en) * 2019-03-25 2020-10-01 Samsung Electronics Co., Ltd. Method and electronic device for processing facial images
WO2021016896A1 (zh) * 2019-07-30 2021-02-04 深圳市大疆创新科技有限公司 图像处理方法、***、设备、可移动平台和存储介质


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邱佳梁等 (QIU, JIALIANG ET AL.): "肤色纹理保留实时人脸美化算法 (Fast Facial Beautification Algorithm Based on Skin Texture Preserving)", 计算机辅助设计与图形学学报 (JOURNAL OF COMPUTER-AIDED DESIGN & COMPUTER GRAPHICS), vol. 30, no. 2, 28 February 2018 (2018-02-28), XP093053600, DOI: 10.3724/SP.J.1089.2018.16281 *

Also Published As

Publication number Publication date
CN117501326A (zh) 2024-02-02

Similar Documents

Publication Publication Date Title
WO2021147387A1 (zh) 屏幕划痕碎裂检测方法及设备
CN112381775B (zh) 一种图像篡改检测方法、终端设备及存储介质
WO2019114036A1 (zh) 人脸检测方法及装置、计算机装置和计算机可读存储介质
RU2541353C2 (ru) Автоматическая съемка документа с заданными пропорциями
US8849062B2 (en) Eye defect detection in international standards organization images
CN102667810B (zh) 数字图像中的面部识别
US8000526B2 (en) Detecting redeye defects in digital images
JP2020536457A (ja) 画像処理方法および装置、電子機器、ならびにコンピュータ可読記憶媒体
CN108090511B (zh) 图像分类方法、装置、电子设备及可读存储介质
CN108509902B (zh) 一种驾驶员行车过程中手持电话通话行为检测方法
WO2021147386A1 (zh) 屏幕划痕碎裂检测方法及设备
WO2015149475A1 (zh) 图片处理方法及装置
CN107862663A (zh) 图像处理方法、装置、可读存储介质和计算机设备
CA2867365A1 (en) Method, system and computer storage medium for face detection
US20100302272A1 (en) Enhancing Images Using Known Characteristics of Image Subjects
JP2016521890A (ja) 文書バウンダリ検知方法
JP5779089B2 (ja) エッジ検出装置、エッジ検出プログラム、およびエッジ検出方法
CN112862832B (zh) 一种基于同心圆分割定位的脏污检测方法
CN116580028B (zh) 一种物体表面缺陷检测方法、装置、设备及存储介质
WO2023225774A1 (zh) 图像处理方法及装置、电子设备、计算机可读存储介质
CN113609984A (zh) 一种指针式仪表读数识别方法、装置及电子设备
CN110599553B (zh) 一种基于YCbCr的肤色提取及检测方法
CN113537211A (zh) 一种基于非对称iou的深度学习车牌框定位方法
KR20130064556A (ko) 다중 검출 방식을 이용한 얼굴 검출 장치 및 방법
JP4599110B2 (ja) 画像処理装置及びその方法、撮像装置、プログラム

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 202280001363.X

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 18027986

Country of ref document: US