CN113128372B - Blackhead identification method and blackhead identification device based on image processing and terminal equipment - Google Patents

Blackhead identification method and blackhead identification device based on image processing and terminal equipment

Info

Publication number
CN113128372B
CN113128372B (application CN202110362210.5A)
Authority
CN
China
Prior art keywords
image
target
threshold
gray
blackhead
Prior art date
Legal status
Active
Application number
CN202110362210.5A
Other languages
Chinese (zh)
Other versions
CN113128372A
Inventor
乔峤
Current Assignee
Xi'an Rongzhifu Technology Co ltd
Original Assignee
Xi'an Rongzhifu Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xi'an Rongzhifu Technology Co ltd
Priority to CN202110362210.5A
Publication of CN113128372A
Application granted
Publication of CN113128372B
Legal status: Active

Classifications

    • G06V 40/168 — Human faces: feature extraction; face representation
    • G06V 40/172 — Human faces: classification, e.g. identification
    • G06T 5/10 — Image enhancement or restoration using non-spatial domain filtering
    • G06T 5/20, G06T 5/30 — Image enhancement or restoration using local operators; erosion or dilatation, e.g. thinning
    • G06T 5/40 — Image enhancement or restoration using histogram techniques
    • G06T 5/70 — Denoising; smoothing
    • G06T 5/73 — Deblurring; sharpening
    • G06T 5/90 — Dynamic range modification of images or parts thereof
    • G06T 7/13 — Image analysis: edge detection
    • G06T 7/136 — Image analysis: segmentation; edge detection involving thresholding
    • G06T 7/90 — Image analysis: determination of colour characteristics
    • G06T 2207/20036 — Morphological image processing
    • G06T 2207/20092, G06T 2207/20104 — Interactive image processing based on input by user; interactive definition of region of interest [ROI]
    • G06T 2207/20112, G06T 2207/20132 — Image segmentation details; image cropping
    • G06T 2207/30196, G06T 2207/30201 — Subject of image: human being; face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The embodiments of the invention disclose a blackhead identification method, a blackhead identification device and a terminal device based on image processing, which improve the accuracy with which the blackhead identification device identifies blackheads. The method of the embodiments of the invention comprises the following steps: acquiring an image to be identified and performing gray-scale processing on it to obtain a first image; performing homomorphic filtering and local histogram equalization on the first image to obtain a second image; performing threshold segmentation on the second image to obtain an initial gray threshold; adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image; and identifying the total number and total area of blackheads from the target image.

Description

Blackhead identification method and blackhead identification device based on image processing and terminal equipment
Technical Field
The present invention relates to the field of terminal device applications, and in particular, to a blackhead recognition method, a blackhead recognition device and a terminal device based on image processing.
Background
At present, facial skin problems are widespread, and blackheads appearing on the face are among the most common. Moreover, the severity of blackheads varies with factors such as a person's sex, age and region.
In the field of beauty and cosmetics, a terminal device can automatically capture a picture of the user's face, automatically detect the number of blackheads and, combined with other skin features of the face, provide skin-care advice for the user; for example, the terminal device can recommend suitable skin-care products and foods to improve the user's facial skin. In the prior art, the method for detecting the number of blackheads generally illuminates the skin with two kinds of light, ordinary white light and ultraviolet light, extracts the highlighted targets under the ultraviolet light to obtain a target bright image, generates a blackhead area map from the target bright image, and then uses the blackhead area map as a mask to mark the blackhead regions in the white-light image, thereby identifying the blackheads.
However, the prior-art blackhead identification method requires complex equipment and is affected by many interfering factors, so the accuracy with which the terminal device identifies blackheads is low.
Disclosure of Invention
The embodiment of the invention provides a blackhead identification method, a blackhead identification device and terminal equipment based on image processing, which are used for improving the accuracy of blackhead identification by the blackhead identification device.
A first aspect of the present invention provides a blackhead identifying method based on image processing, which may include:
acquiring an image to be identified, and carrying out gray processing on the image to be identified to obtain a first image;
Homomorphic filtering and local histogram equalization processing are carried out on the first image, and a second image is obtained;
Threshold segmentation is carried out on the second image, and an initial gray threshold is obtained;
The initial gray threshold is adjusted to obtain a target gray threshold, and the second image is processed according to the target gray threshold to obtain a target image;
From the target image, the total number and total area of blackheads are identified.
Optionally, the acquiring an image to be identified, and performing gray processing on the image to be identified to obtain a first image, including:
Acquiring an image to be identified; determining face feature points in the image to be identified through a preset algorithm; determining a target area image according to the face feature points; and carrying out gray scale processing on the target area image to obtain a first image.
Optionally, performing threshold segmentation on the second image to obtain an initial gray threshold, adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image includes the following steps: obtaining an initial gray threshold for the second image by using the Otsu algorithm; obtaining a target gray threshold according to the initial gray threshold and a preset gray difference value; and obtaining a target image according to a preset function formula; wherein the preset function formula is bin = threshold(gray_h, η), with η = thr − Δt; bin represents the target image; threshold(·) represents the threshold function; gray_h represents the second image; thr represents the initial gray threshold; Δt represents the preset gray difference value; and η represents the target gray threshold.
Optionally, the obtaining the target image according to the preset function formula includes: obtaining a third image according to a preset function formula; removing noise on the third image to obtain a fourth image; and cutting the fourth image to obtain a target image.
Optionally, the identifying the total number and total area of the blackheads from the target image includes: determining the area of each blackhead from the target image; determining, among the blackheads, those whose area is smaller than a preset area threshold as first blackheads; and determining the number of the first blackheads and the total area of the first blackheads.
Optionally, the method further comprises: normalizing the total number of blackheads to obtain a target total number; normalizing the total area of the blackheads to obtain a target total area; and obtaining a target value according to a first formula; wherein the target value is used to characterize the severity of the blackheads; the first formula is B = λC + (1 − λ)D; B represents the target value; C represents the target total number; D represents the target total area; and λ represents the severity coefficient.
Optionally, the method further comprises: when the target value is smaller than a first preset value, generating and outputting a first skin quality score according to a second formula; when the target value is greater than or equal to the first preset value and smaller than a second preset value, generating and outputting a second skin quality score according to a third formula; and when the target value is greater than or equal to the second preset value, generating and outputting a third skin quality score according to a fourth formula; wherein the second formula is E1 = 100 − 10B/F; the third formula is E2 = 90 − 10(B − F)/(G − F); the fourth formula is E3 = 80 − 10(B − G)/(1 − G); E1 represents the first skin quality score; F represents the first preset value; E2 represents the second skin quality score; G represents the second preset value; and E3 represents the third skin quality score.
A second aspect of an embodiment of the present invention provides a blackhead identifying device, which may include:
The acquisition module is used for acquiring an image to be identified, and carrying out gray processing on the image to be identified to obtain a first image;
the processing module is used for carrying out homomorphic filtering and local histogram equalization processing on the first image to obtain a second image; threshold segmentation is carried out on the second image, and an initial gray threshold is obtained; the initial gray threshold is adjusted to obtain a target gray threshold, and the second image is processed according to the target gray threshold to obtain a target image;
and the identification module is used for identifying the total number and the total area of the blackheads from the target image.
Alternatively, in some embodiments of the invention,
The acquisition module is specifically used for acquiring an image to be identified;
the processing module is specifically used for determining face feature points in the image to be identified through a preset algorithm; determining a target area image according to the face feature points;
The acquisition module is also used for carrying out gray processing on the target area image to obtain a first image.
Optionally, the processing module is specifically configured to obtain an initial gray threshold for the second image by using the Otsu algorithm; obtain a target gray threshold according to the initial gray threshold and a preset gray difference value; and obtain a target image according to a preset function formula; wherein the preset function formula is bin = threshold(gray_h, η), with η = thr − Δt; bin represents the target image; threshold(·) represents the threshold function; gray_h represents the second image; thr represents the initial gray threshold; Δt represents the preset gray difference value; and η represents the target gray threshold.
Optionally, the processing module is specifically configured to obtain a third image according to a preset function formula; removing noise on the third image to obtain a fourth image; and cutting the fourth image to obtain a target image.
Optionally, the processing module is specifically configured to determine an area of each black head from the target image; determining black heads with the area smaller than a preset area threshold value in each black head as first black heads;
the identification module is specifically configured to determine the number of the first blackheads and the total area of the first blackheads.
Optionally, the processing module is further configured to normalize the total number of blackheads to obtain a target total number; normalize the total area of the blackheads to obtain a target total area; and obtain a target value according to a first formula; wherein the target value is used to characterize the severity of the blackheads; the first formula is B = λC + (1 − λ)D; B represents the target value; C represents the target total number; D represents the target total area; and λ represents the severity coefficient.
Optionally, the processing module is further configured to: when the target value is smaller than a first preset value, generate and output a first skin quality score according to a second formula; when the target value is greater than or equal to the first preset value and smaller than a second preset value, generate and output a second skin quality score according to a third formula; and when the target value is greater than or equal to the second preset value, generate and output a third skin quality score according to a fourth formula; wherein the second formula is E1 = 100 − 10B/F; the third formula is E2 = 90 − 10(B − F)/(G − F); the fourth formula is E3 = 80 − 10(B − G)/(1 − G); E1 represents the first skin quality score; F represents the first preset value; E2 represents the second skin quality score; G represents the second preset value; and E3 represents the third skin quality score.
A third aspect of the embodiment of the present invention provides a blackhead identifying device, which may include:
a memory storing executable program code;
and a processor coupled to the memory;
The processor invokes the executable program code stored in the memory, which when executed by the processor causes the processor to implement the method according to the first aspect of the embodiment of the present invention.
In still another aspect, an embodiment of the present invention provides a terminal device, which may include a blackhead identifying device according to the second aspect or the third aspect of the embodiment of the present invention.
In yet another aspect, an embodiment of the present invention provides a computer readable storage medium having executable program code stored thereon, the executable program code implementing the method according to the first aspect of the embodiment of the present invention when executed by a processor.
In yet another aspect, embodiments of the present invention disclose a computer program product which, when run on a computer, causes the computer to perform any of the methods disclosed in the first aspect of the embodiments of the present invention.
In yet another aspect, an embodiment of the present invention discloses an application publishing platform, which is configured to publish a computer program product, where the computer program product, when run on a computer, causes the computer to perform any one of the methods disclosed in the first aspect of the embodiment of the present invention.
From the above technical solutions, the embodiment of the present invention has the following advantages:
In the embodiment of the invention, an image to be identified is obtained, and gray processing is carried out on the image to be identified to obtain a first image; homomorphic filtering and local histogram equalization processing are carried out on the first image, and a second image is obtained; threshold segmentation is carried out on the second image, and an initial gray threshold is obtained; the initial gray threshold is adjusted to obtain a target gray threshold, and the second image is processed according to the target gray threshold to obtain a target image; and identifying the total number and the total area of the blackheads from the target image. The black head recognition device carries out gray level processing on the image to be recognized, and carries out noise point removal processing according to the adjusted initial gray level threshold value to obtain a target image; the blackhead recognition device recognizes blackheads in the target image. The method improves the accuracy of the blackhead recognition device in recognizing the blackhead.
Drawings
In order to describe the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments and the prior art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort.
FIG. 1 is a schematic diagram of an embodiment of a black head recognition method based on image processing according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of another embodiment of a black head recognition method based on image processing according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an embodiment of a blackhead recognition device for recognizing and scoring blackheads according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an embodiment of marking and scoring blackheads by a blackhead identification device according to an embodiment of the present invention;
FIG. 5 is a schematic view of an embodiment of a blackhead recognition device according to the present invention;
FIG. 6 is a schematic diagram of another embodiment of a blackhead identification device according to the present invention;
Fig. 7 is a schematic diagram of an embodiment of a terminal device in an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a blackhead identification method, a blackhead identification device and terminal equipment based on image processing, which are used for improving the accuracy of blackhead identification by the blackhead identification device.
In order that those skilled in the art may better understand the present invention, the technical solutions in the embodiments of the present invention are described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that the terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present invention are used for distinguishing between different objects and not necessarily for describing a particular sequential or chronological order. The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, the execution body in the embodiment of the present invention may be a blackhead recognition device or a terminal device. The following embodiment takes a blackhead recognition device as an example to further describe the technical scheme of the invention.
As shown in fig. 1, an embodiment of a blackhead recognition method based on image processing in an embodiment of the present invention is shown, which may include:
101. And acquiring an image to be identified, and carrying out gray processing on the image to be identified to obtain a first image.
It should be noted that, the image to be recognized may be an image of the face of the user, or may be an image of a specific portion (for example, nose portion) of the face of the user; the image to be identified can be obtained through shooting by a camera or other shooting devices, and the method is not particularly limited herein.
Gray-scale processing means that the blackhead identification device processes the three color components of the image to be identified, namely red (R), green (G) and blue (B). The gray-scale processing may include, but is not limited to, the following four methods: the component method, the maximum method, the average method and the weighted average method.
It is understood that the component method means that the blackhead identification device takes R, G and B of the image to be identified as three target color components, for example: R is determined as a first target color component with gray level N; G is determined as a second target color component with gray level P; and B is determined as a third target color component with gray level Q. The maximum method means that the device takes the color component with the largest luminance value among R, G and B as the maximum target color component, whose gray level is M. The average method means that the device averages the three luminance values of R, G and B to obtain a fourth target color component, whose gray value is the average gray value of R, G and B. The weighted average method means that the device computes a weighted average of the three luminance values of R, G and B with different weights to obtain a fifth target color component, whose gray value is the weighted average gray value H of R, G and B.
Here N, P, Q, M and H each represent gray values distinct from the original R, G and B values; N, P, Q, M and H may be the same or different, which is not specifically limited here.
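As an illustration of the weighted average method, the following minimal sketch converts a color image to gray with the common luminance weights 0.299/0.587/0.114; the function name, the weights and the use of OpenCV are assumptions for illustration, since the embodiment does not fix specific values.

```python
import cv2
import numpy as np

def to_gray_weighted(image_bgr):
    # Weighted average method: H = w_r*R + w_g*G + w_b*B per pixel.
    # The weights below are the usual luminance coefficients (assumed values).
    b, g, r = cv2.split(image_bgr.astype(np.float64))
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(gray, 0, 255).astype(np.uint8)
```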
Optionally, the black head recognition device acquires an image to be recognized, and performs gray processing on the image to be recognized to obtain a first image, which may include, but is not limited to, the following implementation manners:
Implementation 1: the blackhead recognition device acquires an image to be recognized; the blackhead recognition device determines face feature points in the image to be recognized through a preset algorithm; the blackhead recognition device determines a target area image according to the face feature points; the blackhead recognition device carries out gray processing on the target area image to obtain a first image.
It should be noted that the preset algorithm may be at least one of the cross-platform computer vision library OpenCV (Open Source Computer Vision Library), an edge detection algorithm, the Sobel operator and an active contour model. The face feature points can be extracted with the preset algorithm. The target area image may be a specific part of the user's face.
Optionally, determining the target area image from the face feature points by the blackhead identification device may include: the face feature points comprise a first feature point, a second feature point, a third feature point and a fourth feature point; the blackhead identification device takes the ordinate of the first feature point as the upper boundary, the ordinate of the second feature point as the lower boundary, the abscissa of the third feature point as the left boundary and the abscissa of the fourth feature point as the right boundary, and determines the target area image from the upper, lower, left and right boundaries.
For example, the first feature point is the 28th feature point of the OpenCV function library, the second feature point is the 29th, the third feature point is the 32nd and the fourth feature point is the 34th. The blackhead identification device can then determine the target area image from the ordinate of the 28th feature point, the ordinate of the 29th feature point, the abscissa of the 32nd feature point and the abscissa of the 34th feature point. The target area image may be the user's nose region.
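A minimal sketch of cropping the target (nose) area from four landmark coordinates as described above; the layout of the landmark array and the 0-based indices 27, 28, 31 and 33 (corresponding to the 28th, 29th, 32nd and 34th points) are assumptions for illustration.

```python
def nose_region(image, landmarks):
    # landmarks: sequence of (x, y) facial feature points; indices are illustrative.
    top = int(landmarks[27][1])      # ordinate of the first feature point  -> upper boundary
    bottom = int(landmarks[28][1])   # ordinate of the second feature point -> lower boundary
    left = int(landmarks[31][0])     # abscissa of the third feature point  -> left boundary
    right = int(landmarks[33][0])    # abscissa of the fourth feature point -> right boundary
    return image[top:bottom, left:right]
```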
Implementation 2: the blackhead recognition device detects the distance between a user and the blackhead recognition device; when the distance is within a preset distance range, the blackhead recognition device acquires an image to be recognized, and carries out gray processing on the image to be recognized to obtain a first image.
The preset distance range is a section constructed by the first distance threshold and the second distance threshold. The distance is within a preset distance range, that is, the distance is greater than the first distance threshold and less than or equal to the second distance threshold.
Illustratively, assume the first distance threshold is 10 centimeters (cm), the second distance threshold is 25 cm, and the preset distance range is (10 cm, 25 cm). The blackhead identification device detects that the distance between the user and the device is 18 cm, which lies within the preset distance range (10 cm, 25 cm), and at this point the device acquires the image to be identified.
Implementation 3: the black head recognition device detects the current ambient brightness value; when the current ambient brightness value is within a preset brightness range, the blackhead recognition device acquires an image to be recognized, and carries out gray processing on the image to be recognized to obtain a first image.
The preset luminance range is a section in which the first luminance threshold value and the second luminance threshold value are constructed. The current ambient brightness value is within a preset brightness range, i.e. the current ambient brightness value is greater than the first brightness threshold and less than or equal to the second brightness threshold.
Illustratively, assume the first luminance threshold is 120 candelas per square meter (cd/m²), the second luminance threshold is 150 cd/m², and the preset luminance range is (120 cd/m², 150 cd/m²). The blackhead identification device detects that the current ambient luminance is 136 cd/m², which lies within the preset luminance range (120 cd/m², 150 cd/m²), and at this point the device acquires the image to be identified.
It can be understood that the image to be identified obtained by the blackhead identifying device in the preset distance range or the preset brightness range is clearer, so that the gray processing of the image to be identified is facilitated.
102. And carrying out homomorphic filtering and local histogram equalization on the first image to obtain a second image.
It should be noted that homomorphic filtering means that the homomorphic filter can enhance the contrast of the first image in the frequency domain, and at the same time, compress the brightness range of the first image. The homomorphism filter can reduce low frequencies and increase high frequencies, thereby reducing illumination variation in the first image and sharpening edge details of the first image. The homomorphic filtering is based on the principle of illumination reflection imaging in the process of acquiring a first image by the black head identification device, adjusts the gray scale range of the first image, eliminates the problem of uneven illumination on the first image, and effectively enhances the gray scale value of a target area image, wherein the first image comprises the target area image.
The implementation of homomorphic filtering comprises: the homomorphic filter takes the logarithm of the first image and then performs a Fourier transform to obtain a first target image; the homomorphic filter filters the first target image to obtain a gray amplitude range; and the homomorphic filter performs an inverse Fourier transform on the gray amplitude range and takes the exponent to obtain the second image.
Optionally, the homomorphic filter filters the first target image to obtain a gray scale range, which may include: the homomorphic filter utilizes a filtering function formula to the first target image to obtain a gray scale amplitude range.
Wherein the filter function formula is H = (γ_H − γ_L)·[1 − 1/e^(cX)] + γ_L, with X = Y²/Z²;
H represents the filter function; γ_H represents the first filtering threshold; γ_L represents the second filtering threshold; c represents the slope of the transition from low frequency to high frequency; X represents the frequency ratio; Y represents the input frequency; Z represents the cutoff frequency.
Typically, γ_H > 1 (e.g., γ_H = 2) and γ_L < 1 (e.g., γ_L = 0.5).
Illustratively, c = 4 and Z = 10.
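A sketch of the homomorphic filtering step under the filter function above, using NumPy FFTs. The parameter defaults follow the example values (γ_H = 2, γ_L = 0.5, c = 4, Z = 10); the function name and the final normalization back to an 8-bit gray image are assumptions.

```python
import cv2
import numpy as np

def homomorphic_filter(gray, gamma_h=2.0, gamma_l=0.5, c=4.0, z=10.0):
    # Take the logarithm of the first image, then its Fourier transform.
    log_img = np.log1p(gray.astype(np.float64))
    spec = np.fft.fftshift(np.fft.fft2(log_img))
    # Build H = (gamma_h - gamma_l) * (1 - 1/e^(c*X)) + gamma_l with X = Y^2 / Z^2,
    # where Y is the distance from the spectrum centre (input frequency) and Z the cutoff.
    rows, cols = gray.shape
    u = np.arange(rows) - rows / 2.0
    v = np.arange(cols) - cols / 2.0
    vv, uu = np.meshgrid(v, u)
    x = (uu ** 2 + vv ** 2) / (z ** 2)
    h = (gamma_h - gamma_l) * (1.0 - np.exp(-c * x)) + gamma_l
    # Inverse transform and take the exponent to return to the spatial domain.
    out = np.expm1(np.real(np.fft.ifft2(np.fft.ifftshift(h * spec))))
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```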
Local histogram equalization involves both smoothing the first image (image smoothing) and sharpening it (image sharpening). Image smoothing is a low-frequency-enhancing spatial filtering technique: it can blur the first image and eliminate its noise, and usually uses a simple averaging method, i.e. computing the average luminance of neighbouring pixels. The neighbourhood size is directly related to the smoothing result: the larger the neighbourhood, the stronger the smoothing, but also the greater the loss of edge information in the first image, which makes the output second image blurrier. The blackhead identification device therefore needs to set a suitable neighbourhood size to keep the second image sharp. Image sharpening, by contrast, is the inverse equalization technique, a high-frequency-enhancing spatial filtering technique: by enhancing the high-frequency components it reduces blur in the first image, i.e. it strengthens detail edges and contours and increases the gray contrast, yielding a clearer second image, but it also amplifies noise. Therefore, the blackhead identification device combines image smoothing and image sharpening when performing local histogram equalization on the first image to obtain the second image.
Specifically, the blackhead identification device divides the first image into small regions called tiles and performs histogram equalization on each tile. Because the histogram of each tile is concentrated in a small gray range, any noise present in the first image would be amplified by the equalization, so the device limits the contrast to avoid this: for each tile, if the pixel count of some gray value in the histogram exceeds the contrast upper limit, the excess pixels are distributed evenly over the other gray values. After this histogram reconstruction the device performs histogram equalization, and finally the tile boundaries are stitched together by bilinear interpolation. Typically, the tile size is 8×8 pixels and the contrast upper limit is 3 cd/m².
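OpenCV's contrast-limited adaptive histogram equalization (CLAHE) implements the tile-based, contrast-limited equalization described above; a minimal sketch follows. The 8×8 tile size and clip limit of 3 follow the values in the text, while the function name is an assumption.

```python
import cv2

def local_hist_equalize(gray):
    # Contrast-limited adaptive histogram equalization: 8x8 tiles, clip limit 3.
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    return clahe.apply(gray)   # gray: 8-bit single-channel image
```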
103. And carrying out threshold segmentation on the second image to obtain an initial gray threshold.
The threshold division means that the black head recognition device classifies the pixels of the second image by setting different gradation thresholds.
Optionally, the black head identifying device performs threshold segmentation on the second image to obtain an initial gray threshold, which may include: the blackhead recognition device obtains an initial gray threshold value for the second image by using an Otsu algorithm.
It should be noted that the Otsu algorithm may be an Otsu function formula.
Wherein the Otsu function formula is thr = otsu(gray_h);
thr represents the initial gray threshold; otsu(·) represents the Otsu function; and gray_h represents the second image.
104. And adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image.
Optionally, the black head recognition device adjusts the initial gray threshold to obtain a target gray threshold, and processes the second image according to the target gray threshold to obtain a target image, which may include: the blackhead recognition device obtains a target gray threshold according to the initial gray threshold and a preset gray difference value; the blackhead recognition device obtains a target image according to a preset function formula;
Wherein the preset function formula is bin = threshold(gray_h, η), with η = thr − Δt;
bin represents the target image; threshold(·) represents the threshold function; gray_h represents the second image; thr represents the initial gray threshold; Δt represents the preset gray difference value; and η represents the target gray threshold.
The threshold function means that the blackhead identification device binarizes the pixel values of the second image by traversing it, so that the target image contains only two color components. The preset gray difference value is used to adjust the initial gray threshold; it can be set before the blackhead identification device leaves the factory or customized by the user as needed, and is not specifically limited here.
It can be understood that the preset function formula differs from existing threshold segmentation methods: it is applicable to all images, and the target image obtained with it contains relatively little noise, which makes it easier to identify the blackheads on the target image effectively.
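A sketch combining the Otsu step with the preset function formula: thr is the Otsu threshold, η = thr − Δt, and the second image is binarized at η. The default Δt value and the use of inverted binarization (so that the darker blackhead pixels become foreground) are assumptions.

```python
import cv2

def segment_blackheads(gray_h, delta_t=10):
    # thr: initial gray threshold obtained with the Otsu algorithm.
    thr, _ = cv2.threshold(gray_h, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # eta: target gray threshold from thr and the preset gray difference value.
    eta = thr - delta_t
    # bin_img: binarized target image; dark (blackhead) pixels become foreground here.
    _, bin_img = cv2.threshold(gray_h, eta, 255, cv2.THRESH_BINARY_INV)
    return thr, eta, bin_img
```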
Optionally, the black head recognition device obtains the target image according to a preset function formula, and the method may include: the blackhead recognition device obtains a third image according to a preset function formula; the blackhead recognition device removes noise on the third image to obtain a fourth image; and the black head recognition device cuts the fourth image to obtain a target image.
Optionally, the black head recognition device removes noise on the third image to obtain a fourth image, which may include, but is not limited to, the following implementation manners:
Implementation 1: the blackhead recognition device removes noise on the third image through low-pass filtering, and a fourth image is obtained.
It should be noted that low-pass filtering keeps the low-frequency components and removes the high-frequency components, which filters out the noise in the third image: the image becomes smoother and its texture details are blurred, so the noise is removed effectively.
Implementation 2: the blackhead recognition device removes noise on the third image through morphological processing, and a fourth image is obtained.
It is noted that the morphological processing may include dilation and/or erosion. Erosion followed by dilation is an opening operation, which can separate the target regions in the third image and eliminate some irrelevant regions; dilation followed by erosion is a closing operation, which yields contours corresponding to the target regions in the third image. Either operation can remove noise from the third image, so the resulting fourth image is clearer.
It will be appreciated that both morphological processing and low pass filtering can remove noise on the third image. Since the morphological processing is more concise and clear, the blackhead recognition device typically uses morphological processing to remove noise from the third image.
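A sketch of noise removal by morphological processing; the 3×3 elliptical structuring element and the choice of an opening operation are assumptions.

```python
import cv2

def remove_noise(third_image):
    # Opening (erosion then dilation) removes small isolated noise spots;
    # cv2.MORPH_CLOSE could be used instead to recover blackhead contours.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(third_image, cv2.MORPH_OPEN, kernel)
```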
Optionally, the cropping of the fourth image by the black head recognition device to obtain the target image may include: the black head recognition device determines the width to be cut of four edges on the fourth image; and the black head recognition device cuts the fourth image according to the width to be cut to obtain a target image.
Wherein the target image may comprise a target region, i.e. a region of interest (ROI).
It should be noted that the widths to be cut for the four sides of the fourth image may be the same or different. The width to be cut can be set before the blackhead identification device leaves the factory or customized by the user, and is not specifically limited here. For example, the factory-set width to be cut for all four sides is 0.5 cm.
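A sketch of trimming the four edges of the fourth image to obtain the ROI; expressing the width to be cut in pixels, and the particular margin value, are assumptions, since the text gives the width in centimetres.

```python
def crop_edges(fourth_image, margin=15):
    # margin: width to be cut from each of the four edges, in pixels (assumed value).
    h, w = fourth_image.shape[:2]
    return fourth_image[margin:h - margin, margin:w - margin]
```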
105. And identifying the total number and the total area of the blackheads from the target image.
Optionally, the black head identifying device identifies the total number and the total area of the black heads from the target image, and may include: the black head recognition device determines the area of each black head from the target image; the black head identification device determines black heads with the area smaller than a preset area threshold value in each black head as first black heads; the black head identifying device determines the number of the first black heads and the total area of the first black heads.
The area of the first black head refers to the number of first pixel points corresponding to the first black head; the preset area threshold is a preset pixel number threshold. The area of the first black head is smaller than a preset area threshold, namely the number of the pixels corresponding to the first black head is smaller than a preset pixel number threshold.
For example, assume the preset pixel-count threshold is 10. From the target image, the blackhead identification device determines that a first target blackhead has 9 pixels, which is fewer than 10; a second target blackhead has 13 pixels, which is more than 10; a third target blackhead has 7 pixels, which is fewer than 10; and a fourth target blackhead has 10 pixels, which equals 10. The device then takes the first target blackhead as the first blackhead, the third target blackhead as the second blackhead and the fourth target blackhead as the third blackhead; that is, the device determines that there are three blackheads in the target image, with a total area of 26 pixels.
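A sketch of counting blackheads and their total area with connected-component analysis. Treating each foreground blob as a candidate blackhead and the strict "smaller than" comparison follow the description above; the connectivity value and function name are assumptions.

```python
import cv2

def count_blackheads(target_image, area_threshold=10):
    # target_image: binary image with candidate blackheads as foreground.
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(target_image, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]               # skip label 0 (background)
    kept = [a for a in areas if a < area_threshold]   # first blackheads only
    return len(kept), int(sum(kept))
```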
In the embodiment of the invention, an image to be identified is obtained, and gray processing is carried out on the image to be identified to obtain a first image; homomorphic filtering and local histogram equalization processing are carried out on the first image, and a second image is obtained; threshold segmentation is carried out on the second image, and an initial gray threshold is obtained; the initial gray threshold is adjusted to obtain a target gray threshold, and the second image is processed according to the target gray threshold to obtain a target image; and identifying the total number and the total area of the blackheads from the target image. The black head recognition device carries out gray level processing on the image to be recognized, and carries out noise point removal processing according to the adjusted initial gray level threshold value to obtain a target image; the blackhead recognition device recognizes blackheads in the target image. The method improves the accuracy of the blackhead recognition device in recognizing the blackhead.
As shown in fig. 2, another embodiment of the blackhead identification method based on image processing in the embodiment of the present invention is shown, which may include:
201. and acquiring an image to be identified, and carrying out gray processing on the image to be identified to obtain a first image.
202. And carrying out homomorphic filtering and local histogram equalization on the first image to obtain a second image.
203. And carrying out threshold segmentation on the second image to obtain an initial gray threshold.
204. And adjusting the initial gray threshold to obtain a target gray threshold, and processing the second image according to the target gray threshold to obtain a target image.
205. And identifying the total number and the total area of the blackheads from the target image.
It should be noted that steps 201 to 205 are similar to steps 101 to 105 shown in fig. 1 in this embodiment, and will not be described here again.
206. And normalizing the total number of the blackheads to obtain a target total number.
The normalization process converts the total number of blackheads to obtain a target total number, which is a scalar quantity.
207. And carrying out normalization processing on the total area of the black head to obtain a target total area.
208. And obtaining the target value according to a first formula.
Wherein the target value is used to characterize the severity of the blackhead.
The first formula is b=λc+ (1- λ) D;
B represents the target value; c represents the total number of targets; d represents the target total area; λ represents a severity coefficient, and λ is used to adjust the influence proportion of the target total area of the blackhead on the severity of the blackhead.
The first set of experimental data shows that, in general, when λ = 0.6 the obtained target value, i.e. the blackhead severity, is more accurate.
The severity of the blackhead refers to a ratio of a target total area of the blackhead to a target area. The greater the ratio, the greater the severity of the blackheads, and conversely, the lesser the severity of the blackheads.
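A sketch of the first formula B = λC + (1 − λ)D with λ = 0.6. Normalizing the raw count and area by assumed reference maxima is illustrative only, since the embodiment states that both totals are normalized but does not fix how.

```python
def blackhead_severity(total_count, total_area, ref_count, ref_area, lam=0.6):
    # C, D: normalized target total number and target total area (clamped to [0, 1]).
    c = min(total_count / float(ref_count), 1.0)
    d = min(total_area / float(ref_area), 1.0)
    # B = lam * C + (1 - lam) * D, the target value characterizing blackhead severity.
    return lam * c + (1.0 - lam) * d
```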
For example, as shown in fig. 3, an embodiment of identifying and scoring a blackhead by the blackhead identifying device according to the embodiment of the present invention is shown, which may include: an image 301 to be identified, a target area image 302, a first image 303, a second image 304, a third image 305, a fourth image 306, a target image 307, and a scoring image 308.
It should be noted that the image 301 to be recognized and the target area image 302 are colored; the blackhead recognition device processes the image 301 to be recognized according to a preset algorithm to obtain a target area image 302; the black head recognition device performs gray processing on the target area image 302 to obtain a first image 303; the blackhead recognition device performs homomorphic filtering and local histogram equalization on the first image 303 to obtain a second image 304; the blackhead recognition device processes the second image 304 by using an Otsu algorithm and a preset function formula to obtain a third image 305; the black head recognition device performs morphological processing on the third image 305 to obtain a fourth image 306; the black head recognition device cuts the fourth image 306 to obtain a target image 307; the blackhead recognition device recognizes and score evaluates blackheads on the target image 307 to obtain a scoring image 308.
The scoring image 308 may include the positions at which the blackhead identification device has marked the identified blackheads, and may also include a score (Grades). It will be appreciated that the score can be used to characterize the user's skin quality; for example, Grades is 72 points.
It will be appreciated that the process of cropping the fourth image 306 by the black head identifying device may be considered as a process of extracting the ROI by the black head identifying device. Optionally, after extracting the ROI, the black head recognition device may reject "false black heads" on the ROI, where the "false black heads" may be black heads with the number of pixels being greater than or equal to a preset threshold of the number of pixels; the blackhead recognition device obtains a target image 307 after eliminating false blackheads; the blackhead recognition device recognizes blackheads on the target image 307, obtains the number and the area of the blackheads, and performs score evaluation of skin quality.
Optionally, after step 208, the method may further include: when the target value is smaller than a first preset value, the blackhead identification device generates and outputs a first skin quality score according to a second formula; when the target value is greater than or equal to the first preset value and smaller than a second preset value, it generates and outputs a second skin quality score according to a third formula; and when the target value is greater than or equal to the second preset value, it generates and outputs a third skin quality score according to a fourth formula.
Wherein the second formula is E1 = 100 − 10B/F; the third formula is E2 = 90 − 10(B − F)/(G − F);
the fourth formula is E3 = 80 − 10(B − G)/(1 − G);
E1 represents the first skin quality score; F represents the first preset value; E2 represents the second skin quality score; G represents the second preset value; and E3 represents the third skin quality score.
The second set of experimental data shows that, in general, when F = 0.27 and G = 0.40, the obtained skin quality score is accurate.
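A sketch of the piecewise skin-quality score built from the second, third and fourth formulas, using the example values F = 0.27 and G = 0.40 as defaults; the function name is an assumption.

```python
def skin_quality_score(b, f=0.27, g=0.40):
    # b: target value (blackhead severity); f, g: first and second preset values.
    if b < f:
        return 100 - 10 * b / f                # E1 = 100 - 10*B/F
    if b < g:
        return 90 - 10 * (b - f) / (g - f)     # E2 = 90 - 10*(B-F)/(G-F)
    return 80 - 10 * (b - g) / (1 - g)         # E3 = 80 - 10*(B-G)/(1-G)
```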
Optionally, after step 208, the method may further include: when the target value is smaller than the first preset value, the blackhead identification device marks the blackheads and generates and outputs the first skin quality score according to the second formula; when the target value is greater than or equal to the first preset value and smaller than the second preset value, it marks the blackheads and generates and outputs the second skin quality score according to the third formula; and when the target value is greater than or equal to the second preset value, it marks the blackheads and generates and outputs the third skin quality score according to the fourth formula.
It can be understood that the black head identification device marks the black heads, namely, the black head identification device marks the black heads in the target area, so that a user can know the number and the positions of the black heads through the target image.
For example, as shown in fig. 4, an embodiment of marking and scoring a blackhead by the blackhead identification device according to the embodiment of the present invention is shown, which may include: (1) the original nose tip; (2) black header annotating results and scores.
It will be appreciated that if the blackhead identification device takes the original nose tip as the target area, then (1) the original nose tip represents the image containing the original nose tip, i.e. the target area image; and (2) the blackhead labeling result and score represent the device's labeling of the number and positions of the blackheads, with a generated skin quality score of 74.
Optionally, the skin quality score may be the first skin quality score, the second skin quality score or the third skin quality score. After the blackhead identification device generates and outputs the skin quality score, the method may further include: generating and outputting a skin quality evaluation result and suggestion based on the skin quality score.
The skin quality evaluation result means that the blackhead recognition device evaluates blackheads of the user with respect to the skin quality score; the skin quality suggestion refers to that the blackhead identification device provides a targeted suggestion for the user how to remove blackheads according to the skin quality score and the skin quality evaluation result.
It can be understood that the blackhead recognition device outputs the skin quality evaluation result and the advice, so that the user can conveniently make corresponding actions according to the targeted advice to improve the skin quality of the user.
In the embodiment of the invention, an image to be identified is obtained, and gray processing is carried out on the image to be identified to obtain a first image; homomorphic filtering and local histogram equalization processing are carried out on the first image, and a second image is obtained; threshold segmentation is carried out on the second image, and an initial gray threshold is obtained; the initial gray threshold is adjusted to obtain a target gray threshold, and the second image is processed according to the target gray threshold to obtain a target image; identifying and obtaining the total number and the total area of blackheads from the target image; normalizing the total number of blackheads to obtain a target total number; carrying out the normalization processing on the total area of the blackheads to obtain a target total area; according to a first formula, a target value is obtained, wherein the target value is used for representing the severity of the blackhead.
The blackhead identification device converts the image to be identified to grayscale and removes noise according to the adjusted initial gray threshold to obtain the target image; it then identifies the blackheads in the target image, obtains their total number and total area, and outputs a target value characterizing the severity of the blackheads based on that total number and total area. By removing noise from the image to be identified several times, the method not only improves the accuracy with which the blackhead identification device identifies blackheads but also allows the user to grasp the severity of their blackheads in time.
As shown in fig. 5, which is a schematic diagram of an embodiment of a blackhead recognition device according to an embodiment of the present invention, the blackhead recognition device may include:
The acquiring module 501 is configured to acquire an image to be identified, and perform gray processing on the image to be identified to obtain a first image;
The processing module 502 is configured to perform homomorphic filtering and local histogram equalization processing on the first image to obtain a second image; threshold segmentation is carried out on the second image, and an initial gray threshold is obtained; the initial gray threshold is adjusted to obtain a target gray threshold, and the second image is processed according to the target gray threshold to obtain a target image;
and the identifying module 503 is used for identifying the total number and the total area of the blackheads from the target image.
Alternatively, in some embodiments of the invention,
The acquiring module 501 is specifically configured to acquire an image to be identified;
the processing module 502 is specifically configured to determine, through a preset algorithm, a face feature point in the image to be identified; determining a target area image according to the face feature points;
the obtaining module 501 is further configured to perform gray-scale processing on the target area image to obtain a first image.
Alternatively, in some embodiments of the invention,
The processing module 502 is specifically configured to obtain an initial gray threshold for the second image by using the Otsu algorithm; obtain a target gray threshold according to the initial gray threshold and a preset gray difference value; and obtain a target image according to a preset function formula; wherein the preset function formula is bin = threshold(gray_h, η), with η = thr − Δt; bin represents the target image; threshold(·) represents the threshold function; gray_h represents the second image; thr represents the initial gray threshold; Δt represents the preset gray difference value; and η represents the target gray threshold.
Optionally, in some embodiments of the invention,
The processing module 502 is specifically configured to obtain a third image according to a preset function formula; removing noise on the third image to obtain a fourth image; and cutting the fourth image to obtain a target image.
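One possible reading of the third-image / fourth-image / target-image chain, under assumed parameters (a 3x3 elliptical kernel and a fixed crop margin): morphological opening removes isolated noise points, and cropping discards border artifacts left by the filtering.

    import cv2

    def denoise_and_crop(third_image, margin=5):
        # Opening removes foreground specks smaller than the structuring element.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
        fourth_image = cv2.morphologyEx(third_image, cv2.MORPH_OPEN, kernel)
        # Crop a fixed margin so border artifacts are not counted as blackheads later.
        h, w = fourth_image.shape[:2]
        return fourth_image[margin:h - margin, margin:w - margin]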
Optionally, in some embodiments of the invention,
the processing module 502 is specifically configured to determine the area of each blackhead from the target image, and determine the blackheads whose area is smaller than a preset area threshold as first blackheads;
The identifying module 503 is specifically configured to determine the number of the first blackheads and the total area of the first blackheads.
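One way to obtain the number and total area of the first blackheads is connected-component analysis on the binary target image, as sketched below; the preset area threshold max_area = 120 pixels is an assumed value.

    import cv2

    def measure_blackheads(target_image, max_area=120):
        # Each connected foreground region of the binary target image is one candidate blackhead.
        num_labels, _, stats, _ = cv2.connectedComponentsWithStats(target_image, connectivity=8)
        count, total_area = 0, 0
        for label in range(1, num_labels):      # label 0 is the background
            area = stats[label, cv2.CC_STAT_AREA]
            if area < max_area:                 # keep only regions below the preset area threshold
                count += 1                      # contributes to the number of first blackheads
                total_area += area              # contributes to the total blackhead area in pixels
        return count, total_area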
Optionally, in some embodiments of the invention,
the processing module 502 is further configured to normalize the total number of blackheads to obtain a target total number; normalize the total area of the blackheads to obtain a target total area; and obtain a target value according to a first formula; wherein the target value is used for representing the severity of the blackheads; the first formula is B = λC + (1 - λ)D; B represents the target value; C represents the target total number; D represents the target total area; and λ represents a severity coefficient.
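A sketch of the first formula B = λC + (1 - λ)D. The normalization bounds max_count and max_area_px used to map the raw totals into [0, 1], and the severity coefficient lam = 0.5, are assumptions; the embodiment does not fix them here.

    def blackhead_severity(count, total_area, max_count=200, max_area_px=5000, lam=0.5):
        # Normalize the raw totals to [0, 1]; the upper bounds are illustrative only.
        c = min(count / max_count, 1.0)          # target total number C
        d = min(total_area / max_area_px, 1.0)   # target total area D
        # First formula: B = lambda * C + (1 - lambda) * D
        return lam * c + (1.0 - lam) * d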
Optionally, in some embodiments of the invention,
the processing module 502 is further configured to generate and output a first skin quality score according to a second formula when the target value is less than a first preset value; generate and output a second skin quality score according to a third formula when the target value is greater than or equal to the first preset value and less than a second preset value; and generate and output a third skin quality score according to a fourth formula when the target value is greater than or equal to the second preset value; wherein the second formula is E1 = 100 - 10B/F; the third formula is E2 = 90 - 10(B - F)/(G - F); the fourth formula is E3 = 80 - 10(B - G)/(1 - G); E1 represents the first skin quality score; F represents the first preset value; E2 represents the second skin quality score; G represents the second preset value; and E3 represents the third skin quality score.
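The three scoring formulas can be applied piecewise as shown below; the preset values F = 0.3 and G = 0.7 are placeholders chosen only so the example runs, not values taken from the embodiment.

    def skin_quality_score(b, f=0.3, g=0.7):
        # Maps the severity value B to a score; smaller B (fewer/smaller blackheads) scores higher.
        if b < f:
            return 100 - 10 * b / f              # second formula, scores in (90, 100]
        if b < g:
            return 90 - 10 * (b - f) / (g - f)   # third formula, scores in (80, 90]
        return 80 - 10 * (b - g) / (1 - g)       # fourth formula, scores in [70, 80]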
Fig. 6 is a schematic diagram of another embodiment of the blackhead identification device according to the embodiment of the present invention; as shown in fig. 6, the blackhead identification device may include: a processor 601 and a memory 602;
the processor 601 has the following functions:
acquiring an image to be identified, and carrying out gray processing on the image to be identified to obtain a first image;
Homomorphic filtering and local histogram equalization processing are carried out on the first image, and a second image is obtained;
Threshold segmentation is carried out on the second image, and an initial gray threshold is obtained;
The initial gray threshold is adjusted to obtain a target gray threshold, and the second image is processed according to the target gray threshold to obtain a target image;
From the target image, the total number and total area of blackheads are identified.
Optionally, the processor 601 also has the following functions:
Acquiring an image to be identified; determining face feature points in the image to be identified through a preset algorithm; determining a target area image according to the face feature points; and carrying out gray scale processing on the target area image to obtain a first image.
Optionally, the processor 601 also has the following functions:
obtaining an initial gray threshold for the second image by using an Otsu algorithm; obtaining a target gray threshold according to the initial gray threshold and a preset gray difference value; and obtaining a target image according to a preset function formula; wherein the preset function formula is bin = Threshold(gray_h, η), with η = thr - Δt; bin represents the target image; Threshold(·) represents the threshold function; gray_h represents the second image; thr represents the initial gray threshold; Δt represents the preset gray difference value; and η represents the target gray threshold.
Optionally, the processor 601 also has the following functions:
obtaining a third image according to a preset function formula; removing noise on the third image to obtain a fourth image; and cutting the fourth image to obtain a target image.
Optionally, the processor 601 also has the following functions:
Determining the area of each blackhead from the target image; determining the blackheads whose area is smaller than a preset area threshold as first blackheads; and determining the number of the first blackheads and the total area of the first blackheads.
Optionally, the processor 601 also has the following functions:
Normalizing the total number of blackheads to obtain a target total number; normalizing the total area of the blackheads to obtain a target total area; and obtaining a target value according to a first formula; wherein the target value is used for representing the severity of the blackheads; the first formula is B = λC + (1 - λ)D; B represents the target value; C represents the target total number; D represents the target total area; and λ represents a severity coefficient.
Optionally, the processor 601 also has the following functions:
when the target value is less than a first preset value, generating and outputting a first skin quality score according to a second formula; when the target value is greater than or equal to the first preset value and less than a second preset value, generating and outputting a second skin quality score according to a third formula; when the target value is greater than or equal to the second preset value, generating and outputting a third skin quality score according to a fourth formula; wherein the second formula is E1 = 100 - 10B/F; the third formula is E2 = 90 - 10(B - F)/(G - F); the fourth formula is E3 = 80 - 10(B - G)/(1 - G); E1 represents the first skin quality score; F represents the first preset value; E2 represents the second skin quality score; G represents the second preset value; and E3 represents the third skin quality score.
The memory 602 has the following functions:
the processing procedure and the processing result of the processor 601 are stored.
Fig. 7 is a schematic diagram of another embodiment of the terminal device in the embodiment of the present invention; as shown in fig. 7, this embodiment may include the blackhead identification device shown in fig. 5 or fig. 6.
It is understood that the terminal device in fig. 7 may include general hand-held, screen-equipped electronic terminal devices such as a mobile phone, a smart phone, a portable terminal, a personal digital assistant (PDA), a portable multimedia player (PMP) device, a notebook computer, a note pad, a wireless broadband (WiBro) terminal, a tablet personal computer (tablet PC), a smart PC, a point-of-sale (POS) terminal, a car computer, and the like.
The terminal device may also comprise a wearable device. The wearable device may be worn directly on the user, or may be a portable electronic device integrated into the user's clothing or accessories. A wearable device is not only a hardware device; through software support, data interaction and cloud interaction it can also realize powerful intelligent functions, such as computing, positioning and alarm functions, and can be connected with mobile phones and various other terminals. Wearable devices may include, but are not limited to, wrist-supported types (e.g., watches and wrist bands), foot-supported types (e.g., shoes, socks, or other leg-worn products), head-supported types (e.g., glasses, helmets, and headbands), as well as smart apparel, school bags, crutches, accessories, and other non-mainstream product forms.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the flows or functions according to the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid state disk (SSD)), etc.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in the present invention, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A blackhead identification method based on image processing, characterized by comprising the following steps:
acquiring an image to be identified, and carrying out gray processing on the image to be identified to obtain a first image;
Homomorphic filtering and local histogram equalization processing are carried out on the first image, and a second image is obtained;
threshold segmentation is carried out on the second image, and an initial gray threshold is obtained;
The initial gray threshold is adjusted to obtain a target gray threshold, and the second image is processed according to the target gray threshold to obtain a target image;
identifying and obtaining the total number and the total area of blackheads from the target image;
Threshold segmentation is carried out on the second image, and an initial gray threshold is obtained; the initial gray threshold is adjusted to obtain a target gray threshold, and the second image is processed according to the target gray threshold to obtain a target image, which comprises the following steps:
obtaining an initial gray threshold value by using an Otsu algorithm for the second image;
obtaining a target gray threshold according to the initial gray threshold and a preset gray difference value;
Obtaining a target image according to a preset function formula;
wherein the preset function formula is bin = Threshold(gray_h, η), with η = thr - Δt;
bin represents the target image; Threshold(·) represents the threshold function; gray_h represents the second image; thr represents the initial gray threshold; Δt represents the preset gray difference value; η represents the target gray threshold; and the threshold function binarizes the pixel values of the second image by traversing the second image.
2. The method according to claim 1, wherein the acquiring the image to be identified and performing gray-scale processing on the image to be identified to obtain the first image includes:
Acquiring an image to be identified;
determining face feature points in the image to be identified through a preset algorithm;
Determining a target area image according to the face feature points;
and carrying out gray processing on the target area image to obtain a first image.
3. The method of claim 1, wherein obtaining the target image according to a predetermined function formula comprises:
Obtaining a third image according to a preset function formula;
Removing noise on the third image to obtain a fourth image;
And cutting the fourth image to obtain a target image.
4. The method of claim 1, wherein identifying the total number and total area of blackheads from the target image comprises:
Determining the area of each blackhead from the target image;
Determining the blackheads whose area is smaller than a preset area threshold as first blackheads;
The number of the first blackheads and the total area of the first blackheads are determined.
5. The method according to any one of claims 1-4, further comprising:
Normalizing the total number of blackheads to obtain a target total number;
normalizing the total area of the blackheads to obtain a target total area;
Obtaining a target value according to a first formula;
wherein the target value is used to characterize the severity of the blackhead;
The first formula is B = λC + (1 - λ)D;
B represents the target value; C represents the target total number; D represents the target total area; and λ represents a severity coefficient.
6. The method of claim 5, wherein the method further comprises:
When the target value is smaller than a first preset value, generating and outputting a first skin quality score according to a second formula;
when the target value is greater than or equal to the first preset value and smaller than a second preset value, generating and outputting a second skin quality score according to a third formula;
when the target value is greater than or equal to the second preset value, generating and outputting a third skin quality score according to a fourth formula;
wherein the second formula is E1 = 100 - 10B/F;
the third formula is E2 = 90 - 10(B - F)/(G - F);
the fourth formula is E3 = 80 - 10(B - G)/(1 - G);
E1 represents the first skin quality score; F represents the first preset value; E2 represents the second skin quality score; G represents the second preset value; and E3 represents the third skin quality score.
7. A blackhead identification device, characterized by comprising:
the acquisition module is used for acquiring an image to be identified, and carrying out gray processing on the image to be identified to obtain a first image;
The processing module is used for carrying out homomorphic filtering and local histogram equalization processing on the first image to obtain a second image; threshold segmentation is carried out on the second image, and an initial gray threshold is obtained; the initial gray threshold is adjusted to obtain a target gray threshold, and the second image is processed according to the target gray threshold to obtain a target image;
The identification module is used for identifying the total number and the total area of the blackheads from the target image;
The processing module is specifically configured to obtain an initial gray threshold for the second image by using an Otsu algorithm; obtain a target gray threshold according to the initial gray threshold and a preset gray difference value; and obtain a target image according to a preset function formula; wherein the preset function formula is bin = Threshold(gray_h, η), with η = thr - Δt; bin represents the target image; Threshold(·) represents the threshold function; gray_h represents the second image; thr represents the initial gray threshold; Δt represents the preset gray difference value; η represents the target gray threshold; and the threshold function binarizes the pixel values of the second image by traversing the second image.
8. A blackhead identification device, characterized by comprising:
a memory storing executable program code;
and a processor coupled to the memory;
the processor invoking the executable program code stored in the memory, which when executed by the processor, causes the processor to implement the method of any of claims 1-6.
9. A computer readable storage medium having stored thereon executable program code, which when executed by a processor, implements the method according to any of claims 1-6.
CN202110362210.5A 2021-04-02 2021-04-02 Blackhead identification method and blackhead identification device based on image processing and terminal equipment Active CN113128372B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110362210.5A CN113128372B (en) 2021-04-02 2021-04-02 Blackhead identification method and blackhead identification device based on image processing and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110362210.5A CN113128372B (en) 2021-04-02 2021-04-02 Blackhead identification method and blackhead identification device based on image processing and terminal equipment

Publications (2)

Publication Number Publication Date
CN113128372A CN113128372A (en) 2021-07-16
CN113128372B true CN113128372B (en) 2024-05-07

Family

ID=76774747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110362210.5A Active CN113128372B (en) 2021-04-02 2021-04-02 Blackhead identification method and blackhead identification device based on image processing and terminal equipment

Country Status (1)

Country Link
CN (1) CN113128372B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113128376B (en) * 2021-04-02 2024-05-14 西安融智芙科技有限责任公司 Wrinkle identification method and device based on image processing and terminal equipment


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3398159B1 (en) * 2015-12-31 2021-05-19 Shanghai United Imaging Healthcare Co., Ltd. Methods and systems for image processing

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139027A (en) * 2015-08-05 2015-12-09 北京天诚盛业科技有限公司 Capsule head defect detection method and apparatus
CN106846276A (en) * 2017-02-06 2017-06-13 上海兴芯微电子科技有限公司 A kind of image enchancing method and device
CN108550131A (en) * 2018-04-12 2018-09-18 浙江理工大学 Feature based merges the SAR image vehicle checking method of sparse representation model
CN109033954A (en) * 2018-06-15 2018-12-18 西安科技大学 A kind of aerial hand-written discrimination system and method based on machine vision
CN109285171A (en) * 2018-09-21 2019-01-29 国网甘肃省电力公司电力科学研究院 A kind of insulator hydrophobicity image segmentation device and method
CN110084791A (en) * 2019-04-18 2019-08-02 天津大学 A kind of early blight of tomato based on image procossing and late blight automatic testing method
CN110321896A (en) * 2019-04-30 2019-10-11 深圳市四季宏胜科技有限公司 Blackhead recognition methods, device and computer readable storage medium
CN110533648A (en) * 2019-08-28 2019-12-03 上海复硕正态企业管理咨询有限公司 A kind of blackhead identifying processing method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Feature extraction and analysis on X-ray image of Xinjiang Kazak Esophageal cancer by using gray-level histograms";Fang Yan;《2013 IEEE International Conference on Medical Imaging Physics and Engineering》;第61-65页 *
"图像稀疏表示的K-SVD算法及其在人脸识别中的应用研究";张迪;《中国优秀硕士学位论文全文数据库 信息科技辑》(2021年第01期);第I138-1393页 *

Also Published As

Publication number Publication date
CN113128372A (en) 2021-07-16

Similar Documents

Publication Publication Date Title
Thanh et al. A skin lesion segmentation method for dermoscopic images based on adaptive thresholding with normalization of color models
RU2711050C2 (en) Image and attribute quality, image enhancement and identification of features for identification by vessels and faces and combining information on eye vessels with information on faces and / or parts of faces for biometric systems
Esmaeili et al. Automatic detection of exudates and optic disk in retinal images using curvelet transform
CN113128376B (en) Wrinkle identification method and device based on image processing and terminal equipment
CN107093168A (en) Processing method, the device and system of skin area image
CN107172354B (en) Video processing method and device, electronic equipment and storage medium
Qureshi et al. Detection of glaucoma based on cup-to-disc ratio using fundus images
CN108369644B (en) Method for quantitatively detecting human face raised line, intelligent terminal and storage medium
JP2007272435A (en) Face feature extraction device and face feature extraction method
US20180228426A1 (en) Image Processing System and Method
US10229498B2 (en) Image processing device, image processing method, and computer-readable recording medium
JPWO2017061106A1 (en) Information processing apparatus, image processing system, image processing method, and program
EP3073415A1 (en) Image processing apparatus and image processing method
CN113569708A (en) Living body recognition method, living body recognition device, electronic apparatus, and storage medium
CN113128372B (en) Blackhead identification method and blackhead identification device based on image processing and terminal equipment
Bhardwaj et al. Automated optical disc segmentation and blood vessel extraction for fundus images using ophthalmic image processing
CN112070741B (en) Rice chalkiness degree detecting system based on image salient region extracting method
CN112712054B (en) Face wrinkle detection method
CN107358224B (en) Method for detecting outer iris outline in cataract surgery
Fathy et al. Benchmarking of pre-processing methods employed in facial image analysis
CN113128374B (en) Sensitive skin detection method and sensitive skin detection device based on image processing
CN111860079B (en) Living body image detection method and device and electronic equipment
Shahril et al. Pre-processing Technique for Wireless Capsule Endoscopy Image Enhancement.
CN113128377B (en) Black eye recognition method, black eye recognition device and terminal based on image processing
CN111311610A (en) Image segmentation method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant