CN113080874B - Multi-angle cross validation intelligent skin measuring system - Google Patents

Multi-angle cross validation intelligent skin measuring system

Info

Publication number
CN113080874B
CN113080874B (application CN202110415106.8A)
Authority
CN
China
Prior art keywords
image
processing
face
mpi
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110415106.8A
Other languages
Chinese (zh)
Other versions
CN113080874A (en)
Inventor
舒哲
黄鹏升
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Medical Technology Research Institute Co ltd
Original Assignee
Beijing Medical Technology Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Medical Technology Research Institute Co ltd filed Critical Beijing Medical Technology Research Institute Co ltd
Priority to CN202110415106.8A priority Critical patent/CN113080874B/en
Publication of CN113080874A publication Critical patent/CN113080874A/en
Application granted granted Critical
Publication of CN113080874B publication Critical patent/CN113080874B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A: Human Necessities
    • A61: Medical or Veterinary Science; Hygiene
    • A61B: Diagnosis; Surgery; Identification
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0033: Features or image-related aspects of imaging apparatus classified in A61B 5/00, e.g. for MRI, optical tomography or impedance tomography apparatus; arrangements of imaging apparatus in a room
    • A61B 5/004: Features or image-related aspects of imaging apparatus classified in A61B 5/00 adapted for image acquisition of a particular organ or body part
    • A61B 5/0059: Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/44: Detecting, measuring or recording for evaluating the integumentary system, e.g. skin, hair or nails
    • A61B 5/441: Skin evaluation, e.g. for skin disorder diagnosis
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06T: Image Data Processing or Generation, in General
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G06T 5/80: Geometric correction
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30088: Skin; Dermal
    • G06T 2207/30196: Human being; Person
    • G06T 2207/30201: Face

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Public Health (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Signal Processing (AREA)
  • Dermatology (AREA)
  • Quality & Reliability (AREA)
  • Psychiatry (AREA)
  • Physiology (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a multi-angle cross-validation intelligent skin measuring system comprising an image acquisition device, an image processing device and a result display device. The image acquisition device acquires face images of a person to be examined for skin quality from a plurality of preset angles; the image processing device performs image recognition processing on each face image to obtain a processing result, and derives a facial skin quality detection result for the person from the plurality of processing results; the result display device visually displays the facial skin quality detection result. Compared with the prior art, the skin quality detection result obtained in this way is more accurate: judging from face images taken at several different angles not only yields a more comprehensive set of defect types, but also allows the images to cross-validate one another, which effectively improves the accuracy of the skin quality detection result.

Description

Multi-angle cross validation intelligent skin measuring system
Technical Field
The invention relates to the field of skin measurement, in particular to an intelligent skin measurement system with multi-angle cross validation.
Background
With the development of image recognition technology, skin quality is commonly detected in the prior art by acquiring a face image of the person to be examined and then processing that image. However, the facial skin quality is typically judged from a single angle only, for example by acquiring a frontal face image of the person and processing it. Because the face is not a regular plane, the accuracy of a skin quality detection result obtained in this single-angle manner is not high enough.
Disclosure of Invention
In view of the above problems, the present invention aims to provide a multi-angle cross-validation intelligent skin measuring system.
The invention provides a multi-angle cross-validation intelligent skin measuring system, which comprises an image acquisition device, an image processing device and a result display device;
the image acquisition device is used for acquiring face images of a person to be subjected to skin quality detection from a plurality of preset angles and sending the acquired face images to the image processing device;
the image processing device is used for respectively carrying out image recognition processing on each face image to obtain a processing result and obtaining a face skin quality detection result of the person based on a plurality of processing results;
and the result display device is used for visually displaying the face skin quality detection result.
Preferably, the image acquisition device comprises a slide rail, a moving device, a connecting rod, a vertical support rod and a camera; the camera is connected with the moving device; the moving device is slidably connected with the slide rail; the connecting rod is connected with the slide rail.
Preferably, the image acquisition device further comprises a telescopic rod, a base and universal wheels; one end of the telescopic rod is connected with the vertical support rod and the other end with the base; the upper surface of the base is connected with the telescopic rod, and its lower surface is connected with the universal wheels.
Preferably, the image processing device comprises one or more of a desktop computer, a notebook computer, a smart phone and a tablet computer.
Preferably, performing image recognition processing on each face image to obtain a processing result comprises:
for each face image, performing the following recognition processing:
carrying out distortion correction processing on the face image to obtain a corrected image;
carrying out noise reduction processing on the corrected image to obtain a noise-reduced image;
and taking the obtained plurality of noise-reduced images as the processing result.
Preferably, obtaining the facial skin quality detection result of the person based on the plurality of processing results comprises:
recording the set of all corrected images as ceU, selecting a main processing image mpI and auxiliary processing images from ceU according to a preset selection rule, and recording the set of all auxiliary processing images as assiU;
stitching mpI with each auxiliary processing image in assiU to obtain one stitched image per auxiliary processing image, and recording the set of all stitched images as csU;
carrying out fine-grained detection on mpI to obtain the defect types present in mpI, and storing these defect types in a set shU;
placing mpI and the stitched images in csU in the same coordinate system; for the i-th defect type s_i in shU, s_i is further judged in the following way:
determining the set aimU of pixel points corresponding to s_i in mpI;
determining, for the j-th stitched image contained in csU, the set tgU_j corresponding to aimU, where j ∈ [1, numcsU] and numcsU represents the total number of elements contained in csU;
obtaining, in the j-th stitched image, the minimum circumscribed rectangle miZ_j of the pixel points in tgU_j;
carrying out fine-grained detection on the pixel points contained in miZ_j to judge whether defect type s_i exists among them; if so, storing the j-th stitched image into a judgment set judU;
calculating the proportion parameter cofsh(s_i) of s_i:
cofsh(s_i) = numjudU / numcsU
where cofsh(s_i) denotes the proportion parameter of s_i and numjudU denotes the total number of elements contained in judU;
if cofsh(s_i) is larger than a preset ratio threshold, s_i belongs to the defect types present in the facial skin of said person and is stored into a set resU; if cofsh(s_i) is less than or equal to the preset ratio threshold, s_i does not belong to the defect types present in the facial skin of said person;
and taking the elements contained in resU as the facial skin quality detection result of the person.
Compared with the prior art, the invention has the advantages that:
the prior art generally acquires a single-angle face image and then judges the defect type contained in the face image, but this judgment method is easily affected by the quality of the face image, for example, if the quality of a certain face image is too low, it is easy to obtain wrong skin quality. The invention obtains face images from a plurality of preset angles, and determines a main processing image and an auxiliary processing image after the images are subjected to a series of processing; and respectively carrying out auxiliary judgment on each defect type in the main processing image by using the auxiliary processing image so as to obtain a face skin quality detection result of the person to be subjected to skin quality detection. Compared with the prior art, the skin quality detection result obtained by the method is more accurate. This application judges through the face image of many different angles, and the defect type that not only obtains is more comprehensive, can carry out cross validation moreover between many face images to effectively improve the accuracy of skin quality testing result.
Drawings
The invention is further illustrated by the accompanying drawings, but the embodiments shown in the drawings do not limit the invention in any way; a person skilled in the art can derive further drawings from the following figures without inventive effort.
Fig. 1 is a diagram of an exemplary embodiment of a multi-angle cross-validation intelligent skin measuring system according to the present invention.
Fig. 2 is a top view of an exemplary embodiment of the image acquisition device according to the present invention.
Fig. 3 is a front view of an exemplary embodiment of the image acquisition device according to the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In an embodiment shown in fig. 1, the invention provides a multi-angle cross-validation intelligent skin measurement system, which includes an image acquisition device, an image processing device and a result display device;
the image acquisition device is used for acquiring face images of a person to be subjected to skin quality detection from a plurality of preset angles and sending the acquired face images to the image processing device;
the image processing device is used for respectively carrying out image recognition processing on each face image to obtain a processing result and obtaining a face skin quality detection result of the person based on a plurality of processing results;
and the result display device is used for visually displaying the face skin quality detection result.
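To make the three-device architecture concrete, the following is a minimal Python sketch of the data flow it describes (acquire from preset angles, process each image and fuse the results, display). It is not the patented implementation; the class names, the example preset angles and the callable signatures are illustrative assumptions.

```python
# Minimal sketch of the acquisition -> processing -> display pipeline.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Set

import numpy as np


@dataclass
class ImageAcquisitionDevice:
    """Captures one face image per preset angle (capture_fn stands in for the camera)."""
    capture_fn: Callable[[float], np.ndarray]
    preset_angles: List[float] = field(default_factory=lambda: [-45.0, 0.0, 45.0])

    def acquire(self) -> Dict[float, np.ndarray]:
        return {angle: self.capture_fn(angle) for angle in self.preset_angles}


@dataclass
class ImageProcessingDevice:
    """Recognizes defects in each face image, then fuses the per-angle results."""
    recognize_fn: Callable[[np.ndarray], Set[str]]
    fuse_fn: Callable[[List[Set[str]]], Set[str]]

    def process(self, images: Dict[float, np.ndarray]) -> Set[str]:
        per_angle_results = [self.recognize_fn(img) for img in images.values()]
        return self.fuse_fn(per_angle_results)


class ResultDisplayDevice:
    """Visual display of the facial skin quality detection result (console stub)."""
    def show(self, result: Set[str]) -> None:
        print("Detected facial skin defect types:", sorted(result) or "none")
```

A skin-measuring pipeline would wire these together as display.show(processor.process(acquirer.acquire())).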
In one embodiment, as shown in fig. 2, the image acquisition device includes a slide rail 1, a moving device 2, a connecting rod 3, a vertical support rod 4 and a camera 5; the camera 5 is connected with the moving device 2; the moving device 2 is slidably connected with the slide rail 1; the connecting rod 3 is connected with the slide rail 1.
In one embodiment, as shown in fig. 3, the image acquisition device further includes a telescopic rod 6, a base 7 and universal wheels 8; one end of the telescopic rod 6 is connected with the vertical support rod 4 and the other end with the base 7; the upper surface of the base 7 is connected with the telescopic rod 6, and its lower surface is connected with the universal wheels 8.
In one embodiment, the telescopic rod 6 is an electro-hydraulic telescopic rod.
The image acquisition device is used as follows:
the height of the vertical support rod 4 is adjusted according to the height of the person, which moves the slide rail 1, the moving device 2, the connecting rod 3 and the camera 5 to the target height; the camera 5 then acquires a face image of the person;
when the shooting angle needs to be changed, the moving device 2 is pushed along the slide rail 1 until the camera 5 reaches the preset angle, and the camera 5 then acquires another face image of the person.
The image acquisition device is adjustable in both height and angle, so it adapts well to persons of different stature undergoing skin quality detection. In addition, the universal wheels make the device easy to move.
In one embodiment, the image processing device comprises one or more of a desktop computer, a notebook computer, a smart phone, and a tablet computer.
In one embodiment, the result display device comprises one or more of a desktop computer display screen, a laptop computer display screen, a smartphone display screen, and a tablet computer display screen.
In one embodiment, performing the image recognition processing on each face image to obtain the processing result includes:
for each face image, performing the following recognition processing:
carrying out distortion correction processing on the face image to obtain a corrected image;
carrying out noise reduction processing on the corrected image to obtain a noise-reduced image;
and taking the obtained plurality of noise-reduced images as the processing result.
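As an illustration of this two-step pre-processing (distortion correction, then noise reduction), here is a hedged Python/OpenCV sketch. The camera matrix and distortion coefficients are placeholder assumptions, since the patent does not specify the camera model, and the bilateral filter is just one of the noise reduction options listed further below.

```python
# Sketch only: distortion correction followed by noise reduction.
import cv2
import numpy as np


def preprocess_face_image(face_img: np.ndarray,
                          camera_matrix: np.ndarray,
                          dist_coeffs: np.ndarray) -> np.ndarray:
    # distortion correction -> "corrected image"
    corrected = cv2.undistort(face_img, camera_matrix, dist_coeffs)
    # noise reduction -> "noise-reduced image" (bilateral filter chosen arbitrarily)
    return cv2.bilateralFilter(corrected, 7, 50, 50)


# Illustrative camera parameters only; real values would come from calibration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
```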
The training process of the neural network model is as follows:
acquiring sample images from a sample library, and extracting the feature information contained in each sample image using the same feature extraction algorithm that is used to obtain the feature information contained in the face images;
inputting the feature information into a pre-established neural network model for training, to obtain the pre-trained neural network model.
The neural network model includes a support vector machine classification model.
The feature information extraction algorithm may be an HOG feature extraction algorithm or an LBP feature extraction algorithm.
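A minimal sketch of that training procedure follows, assuming a labeled sample library of equally sized grayscale images; it pairs scikit-image's HOG extractor with a scikit-learn support vector classifier standing in for the "neural network model including a support vector machine classification model" named above.

```python
# Sketch: HOG features + SVM classifier trained on a labeled sample library.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC


def extract_features(gray_image: np.ndarray) -> np.ndarray:
    # The same extractor must be reused at inference time on the face images.
    return hog(gray_image, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)


def train_skin_classifier(sample_images, sample_labels) -> SVC:
    # Sample images are assumed to share one size so the HOG vectors align.
    features = np.stack([extract_features(img) for img in sample_images])
    model = SVC(kernel="rbf", probability=True)
    model.fit(features, sample_labels)
    return model
```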
In one embodiment, performing noise reduction processing on the corrected image to obtain a noise-reduced image includes:
converting the face image into a grayscale image;
acquiring the edge pixel points in the grayscale image;
performing noise reduction on the grayscale image in an iterative manner:
in the 1st iteration, the grayscale image is taken as the 1st processed image, denoted cP_1, and the processing order index of each pixel point in cP_1 is calculated:
Figure BDA0003025570090000051
where gr(u_1) represents the pixel value of pixel point u_1 in cP_1, xidx(u_1) represents the processing order index of u_1, alNum(u_1) represents the number of pixel points in the 8-neighborhood of u_1 that have already undergone noise reduction, v_1, v_2 and v_3 are preset weight parameters, zs(u_1) is a judgment parameter (zs(u_1) = 0 if u_1 has already undergone noise reduction, otherwise zs(u_1) = 1), and blNum(u_1) represents the total number of edge pixel points contained in the 8-neighborhood of u_1;
all pixel points of cP_1 whose judgment parameter equals 1 are stored in a set dU_1; the pixel points in dU_1 are sorted by processing order index from large to small to obtain a set gU_1; a preset noise reduction algorithm is applied to the first numdU_1 × bm pixel points of gU_1 (numdU_1 being the number of elements in dU_1), yielding the 2nd processed image cP_2, where bm is a preset control coefficient and bm ∈ [0, 0.1];
in the n-th iteration, the processing order index of each pixel point in the n-th processed image cP_n, obtained in the (n-1)-th iteration, is calculated:
Figure BDA0003025570090000052
where gr(u_n) represents the pixel value of pixel point u_n in cP_n, xidx(u_n) represents the processing order index of u_n, alNum(u_n) represents the number of pixel points in the 8-neighborhood of u_n that have already undergone noise reduction, v_1, v_2 and v_3 are preset weight parameters, zs(u_n) is a judgment parameter (zs(u_n) = 0 if u_n has already undergone noise reduction, otherwise zs(u_n) = 1), and blNum(u_n) represents the total number of edge pixel points contained in the 8-neighborhood of u_n;
all pixel points of cP_n whose judgment parameter equals 1 are stored in a set dU_n; the pixel points in dU_n are sorted by processing order index from large to small to obtain a set gU_n; the preset noise reduction algorithm is applied to the first numdU_n × bm pixel points of gU_n (numdU_n being the number of elements in dU_n), yielding the (n+1)-th processed image cP_(n+1), where bm is the preset control coefficient and bm ∈ [0, 0.1];
The conditions for ending the iteration are:
if the number of iterations s is larger than a preset iteration count threshold, the iteration ends and the s-th processed image cP_s, obtained in the (s-1)-th iteration, is taken as the noise-reduced image;
if the number of iterations s is less than or equal to the preset iteration count threshold, the following judgment is made:
if, in the s-th iteration, the number of elements in dU_s is 0, the iteration ends and the s-th processed image cP_s, obtained in the (s-1)-th iteration, is taken as the noise-reduced image.
In the above embodiment, when noise reduction is performed on a face image, the pixel points are not processed from beginning to end as in a conventional noise reduction scheme. Instead, an iterative scheme is used: in each iteration, the judgment parameter and the processing order index of the pixel points in the current processed image are calculated, the pixel points whose judgment parameter is 1 are sorted by processing order index from large to small, and a preset proportion of them is selected for noise reduction. In this way, the pixel values of already-denoised pixel points and of edge pixel points are used preferentially when denoising the remaining pixel points, which greatly improves the accuracy of the noise reduction and helps the noise-reduced image retain more of the detail of the original image. A conventional processing order, by contrast, is less favorable for retaining detail in the face image, because noise reduction is a smoothing process.
In one embodiment, the noise reduction algorithm includes one of a bilateral filtering algorithm, a non-local mean filtering algorithm, and a median filtering algorithm.
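The following Python sketch illustrates the iterative, priority-ordered scheme described above, using a median filter as the preset noise reduction algorithm. Because the exact processing order formula is given only as a formula image, the weighted combination of alNum, blNum and the pixel value below (weights v1, v2, v3) is an assumed placeholder, not the patented formula.

```python
# Sketch of the priority-ordered iterative denoising (assumed xidx formula).
import numpy as np
from scipy.ndimage import median_filter, uniform_filter
from skimage.feature import canny


def neighbor_count(mask: np.ndarray) -> np.ndarray:
    """Number of True pixels in each pixel's 8-neighborhood."""
    m = mask.astype(np.float64)
    return uniform_filter(m, size=3) * 9.0 - m


def iterative_denoise(gray: np.ndarray, v=(1.0, 1.0, 0.01),
                      bm: float = 0.05, max_iters: int = 50) -> np.ndarray:
    img = gray.astype(np.float64)                  # working copy, cP_n
    edges = canny(gray / 255.0)                    # edge pixel points
    denoised_mask = np.zeros(gray.shape, bool)     # zs(u) == 0 where already processed
    smoothed = median_filter(img, size=3)          # preset noise reduction algorithm
    v1, v2, v3 = v
    for _ in range(max_iters):                     # preset iteration count threshold
        dU = np.flatnonzero(~denoised_mask)        # pixels with judgment parameter 1
        if dU.size == 0:
            break
        alNum = neighbor_count(denoised_mask)      # already-denoised neighbors
        blNum = neighbor_count(edges)              # edge-pixel neighbors
        xidx = v1 * alNum + v2 * blNum + v3 * img  # assumed processing order index
        order = dU[np.argsort(xidx.ravel()[dU])[::-1]]   # gU: sorted large to small
        chosen = order[:max(1, int(bm * dU.size))]       # first numdU * bm points
        img.ravel()[chosen] = smoothed.ravel()[chosen]   # denoise only the chosen points
        denoised_mask.ravel()[chosen] = True
    return img.astype(gray.dtype)
```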
In one embodiment, obtaining the facial skin quality detection result of the person based on the plurality of processing results comprises:
recording the set of all corrected images as ceU, selecting a main processing image mpI and auxiliary processing images from ceU according to a preset selection rule, and recording the set of all auxiliary processing images as assiU;
stitching mpI with each auxiliary processing image in assiU to obtain one stitched image per auxiliary processing image, and recording the set of all stitched images as csU;
carrying out fine-grained detection on mpI to obtain the defect types present in mpI, and storing these defect types in a set shU;
placing mpI and the stitched images in csU in the same coordinate system; for the i-th defect type s_i in shU, s_i is further judged in the following way:
determining the set aimU of pixel points corresponding to s_i in mpI;
determining, for the j-th stitched image contained in csU, the set tgU_j corresponding to aimU, where j ∈ [1, numcsU] and numcsU represents the total number of elements contained in csU;
obtaining, in the j-th stitched image, the minimum circumscribed rectangle miZ_j of the pixel points in tgU_j;
carrying out fine-grained detection on the pixel points contained in miZ_j to judge whether defect type s_i exists among them; if so, storing the j-th stitched image into a judgment set judU;
calculating the proportion parameter cofsh(s_i) of s_i:
cofsh(s_i) = numjudU / numcsU
where cofsh(s_i) denotes the proportion parameter of s_i and numjudU denotes the total number of elements contained in judU;
if cofsh(s_i) is larger than a preset ratio threshold, s_i belongs to the defect types present in the facial skin of said person and is stored into a set resU; if cofsh(s_i) is less than or equal to the preset ratio threshold, s_i does not belong to the defect types present in the facial skin of said person;
and taking the elements contained in resU as the facial skin quality detection result of the person.
The defect types found in the main processing image are identified from a single angle only, so on their own they are not sufficiently reliable. In the above embodiment, the main processing image is therefore stitched with each auxiliary processing image to obtain the overlapping portion between them; based on this overlap, the set tgU_j corresponding to aimU is located in each stitched image, and fine-grained detection is performed on the pixel points inside the minimum circumscribed rectangle of tgU_j to judge whether the stitched image contains the same defect type. In this way the auxiliary processing images cross-validate the defect types present in the main processing image, which effectively improves the accuracy of the invention.
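A hedged sketch of this cross-validation step is shown below. The fine-grained defect detector is abstracted as a callable, and the stitched images are assumed to already share mpI's coordinate system, so a pixel position in mpI indexes the same face location in every stitched image; both are assumptions beyond what the text specifies.

```python
# Sketch: cross-validating mpI's defect types against the stitched images.
from typing import Callable, Dict, List, Set, Tuple

import numpy as np

Pixel = Tuple[int, int]  # (row, col)


def cross_validate_defects(
    defect_pixels: Dict[str, Set[Pixel]],          # s_i -> aimU (pixels of s_i in mpI)
    stitched_images: List[np.ndarray],             # csU, aligned with mpI
    detect_fn: Callable[[np.ndarray, str], bool],  # fine-grained detection on a patch
    ratio_threshold: float = 0.5,                  # preset ratio threshold
) -> Set[str]:
    resU: Set[str] = set()
    for s_i, aimU in defect_pixels.items():
        if not aimU:
            continue
        rows = [r for r, _ in aimU]
        cols = [c for _, c in aimU]
        judU = 0                                   # stitched images confirming s_i
        for stitched in stitched_images:
            # miZ_j: minimum circumscribed rectangle of the corresponding pixels
            patch = stitched[min(rows):max(rows) + 1, min(cols):max(cols) + 1]
            if patch.size and detect_fn(patch, s_i):
                judU += 1
        cofsh = judU / len(stitched_images) if stitched_images else 0.0
        if cofsh > ratio_threshold:                # keep s_i only if enough views agree
            resU.add(s_i)
    return resU
```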
In one embodiment, the image stitching of mpI with an auxiliary processing image comprises the following steps:
storing all pixel points of mpI in a set U_mpI and all pixel points of the auxiliary processing image in a set U_assI;
storing the intersection of U_mpI and U_assI, i.e. the pixel points of the overlapping part between mpI and the auxiliary processing image, in a set U_db;
taking, within U_mpI, the complement U_pb of U_db;
forming the stitched image from the pixel points in U_assI and U_pb.
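The set algebra above can be sketched directly with boolean masks over aligned, same-sized images; the validity masks used to find the overlap are an assumption, since the patent specifies only the set operations.

```python
# Sketch: stitching = all of U_assI plus the non-overlapping part U_pb of mpI.
import numpy as np


def stitch(mpI: np.ndarray, assI: np.ndarray,
           mpI_valid: np.ndarray, assI_valid: np.ndarray) -> np.ndarray:
    overlap = mpI_valid & assI_valid          # U_db: pixels valid in both views
    mpI_only = mpI_valid & ~overlap           # U_pb: complement of U_db within U_mpI
    stitched = np.zeros_like(mpI)
    stitched[assI_valid] = assI[assI_valid]   # all pixel points of U_assI
    stitched[mpI_only] = mpI[mpI_only]        # plus the U_pb pixel points
    return stitched
```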
In one embodiment, selecting the main processing image mpI and the auxiliary processing images from ceU according to the preset selection rule includes:
selecting the corrected image corresponding to the person's frontal view from ceU as the main processing image mpI;
taking each corrected image in ceU whose visual overlap degree with the main processing image mpI is greater than a preset visual overlap threshold as an auxiliary processing image;
the visual overlap degree is determined as follows:
for the main processing image mpI and any corrected image fixI in ceU other than the main processing image, the visual overlap degree between the two is determined as follows:
performing the image stitching operation on mpI and fixI, and determining the overlapping area between mpI and fixI;
calculating the ratio of the overlapping area to the area of fixI, and taking this ratio as the visual overlap degree.
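As an illustration, the selection rule and visual overlap computation could be sketched as follows; the overlap-area helper, the frontal-view index and the overlap threshold are assumed inputs that the patent leaves as preset values.

```python
# Sketch: pick the frontal view as mpI, keep views with enough visual overlap.
from typing import Callable, List, Tuple

import numpy as np


def select_main_and_aux(
    ceU: List[np.ndarray],                 # all corrected images
    front_index: int,                      # which element is the frontal view
    overlap_area_fn: Callable[[np.ndarray, np.ndarray], float],
    overlap_threshold: float = 0.3,        # preset visual overlap threshold
) -> Tuple[np.ndarray, List[np.ndarray]]:
    mpI = ceU[front_index]
    assiU: List[np.ndarray] = []
    for k, fixI in enumerate(ceU):
        if k == front_index:
            continue
        # visual overlap degree = overlap area / area of fixI
        degree = overlap_area_fn(mpI, fixI) / float(fixI.shape[0] * fixI.shape[1])
        if degree > overlap_threshold:
            assiU.append(fixI)
    return mpI, assiU
```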
In another embodiment, the obtaining a facial skin quality detection result of the person based on a plurality of processing results comprises:
the preset defect type set is recorded as shU and the total number of face images as facNum; the total number of processing results is therefore also facNum;
for the i-th defect type s_i in shU, whether s_i belongs to the defect types present in the facial skin of the person is judged in the following way:
calculating the total score val(s_i) of s_i:
val(s_i) = Σ_k val_k (summed over the elements of hU(s_i))
where hU(s_i) denotes the set of processing results, among the facNum processing results, that contain s_i, and val_k denotes the weight parameter of the k-th element of hU(s_i);
if val(s_i) is greater than a preset score threshold, s_i belongs to the defect types present in the facial skin of the person and is stored into a set resU; if val(s_i) is less than or equal to the preset score threshold, s_i does not belong to the defect types present in the facial skin of the person;
and taking the elements contained in the resU as the detection result of the facial skin quality of the person.
In one embodiment, the weight parameter of the k-th element is obtained in the following way:
recording the set of the facNum face images as facU;
if the k-th element of hU(s_i) is the processing result obtained from the u-th face image in facU, then val_k = val_u, where val_u is a preset weight parameter of the u-th element of facU,
Figure BDA0003025570090000091
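Since the per-view weight val_u is defined only through the formula image above, the following sketch treats the weights as given inputs (for example, the frontal view could carry a larger weight) and shows only the weighted-vote fusion rule itself.

```python
# Sketch: weighted vote over the per-angle processing results.
from typing import List, Set


def fuse_by_weighted_vote(
    per_view_defects: List[Set[str]],       # the facNum processing results
    view_weights: List[float],              # val_u for each face image (assumed given)
    score_threshold: float,                 # preset score threshold
) -> Set[str]:
    resU: Set[str] = set()
    all_types = set().union(*per_view_defects) if per_view_defects else set()
    for s_i in all_types:
        # val(s_i): sum of weights of the views whose result contains s_i
        val = sum(w for defects, w in zip(per_view_defects, view_weights)
                  if s_i in defects)
        if val > score_threshold:
            resU.add(s_i)
    return resU


# Example: three views, frontal view weighted highest (illustrative values only)
# fuse_by_weighted_vote([{"blackhead"}, {"blackhead", "mole"}, {"mole"}],
#                       [0.5, 0.25, 0.25], score_threshold=0.5)
```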
in one embodiment, the defect types include blackheads, scars, moles.
Compared with the prior art, the invention has the advantages that:
the prior art generally acquires a single-angle face image and then judges the defect type contained in the face image, but this judgment method is easily affected by the quality of the face image, for example, if the quality of a certain face image is too low, it is easy to obtain wrong skin quality. The method acquires the face images from a plurality of preset angles, then respectively judges the defect types contained in the acquired face images, and judges whether the proportion of the frequency of each preset defect type appearing in the face images to the total number of the face images meets the preset judgment condition or not, thereby acquiring the face skin quality detection result of the person to be subjected to the skin quality detection. Compared with the prior art, the skin quality detection result obtained by the method is more accurate. This application judges through the face image of many different angles, and the defect type that not only obtains is more comprehensive, can carry out cross validation moreover between many face images to effectively improve the accuracy of skin quality testing result.
While embodiments of the invention have been shown and described, it will be understood by those skilled in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (4)

1. An intelligent skin measuring system with multi-angle cross validation is characterized by comprising an image acquisition device, an image processing device and a result display device;
the image acquisition device is used for acquiring face images of a person to be subjected to skin quality detection from a plurality of preset angles and sending the acquired face images to the image processing device;
the image processing device is used for respectively carrying out image recognition processing on each face image to obtain a processing result and obtaining a face skin quality detection result of the person based on a plurality of processing results;
the result display device is used for visually displaying the face skin quality detection result;
wherein performing the image recognition processing on each face image to obtain a processing result comprises:
for each face image, performing the following recognition processing:
carrying out distortion correction processing on the face image to obtain a corrected image;
carrying out noise reduction processing on the corrected image to obtain a noise-reduced image;
taking the obtained plurality of noise-reduced images as the processing result;
wherein obtaining the facial skin quality detection result of the person based on the plurality of processing results comprises:
recording the set of all corrected images as ceU, selecting a main processing image mpI and auxiliary processing images from ceU according to a preset selection rule, and recording the set of all auxiliary processing images as assiU;
stitching mpI with each auxiliary processing image in assiU to obtain one stitched image per auxiliary processing image, and recording the set of all stitched images as csU;
carrying out fine-grained detection on mpI to obtain the defect types present in mpI, and storing these defect types in a set shU;
placing mpI and the stitched images in csU in the same coordinate system; for the i-th defect type s_i in shU, s_i is further judged in the following way:
determining the set aimU of pixel points corresponding to s_i in mpI;
determining, for the j-th stitched image contained in csU, the set tgU_j corresponding to aimU, where j ∈ [1, numcsU] and numcsU represents the total number of elements contained in csU;
obtaining, in the j-th stitched image, the minimum circumscribed rectangle miZ_j of the pixel points in tgU_j;
carrying out fine-grained detection on the pixel points contained in miZ_j to judge whether defect type s_i exists among them; if so, storing the j-th stitched image into a judgment set judU;
calculating the proportion parameter cofsh(s_i) of s_i:
cofsh(s_i) = numjudU / numcsU
where cofsh(s_i) denotes the proportion parameter of s_i and numjudU represents the total number of elements contained in judU;
if cofsh(s_i) is larger than a preset ratio threshold, s_i belongs to the defect types present in the facial skin of said person and is stored into a set resU; if cofsh(s_i) is less than or equal to the preset ratio threshold, s_i does not belong to the defect types present in the facial skin of said person;
and taking the elements contained in resU as the facial skin quality detection result of the person.
2. The multi-angle cross-validation intelligent skin test system according to claim 1, wherein the image acquisition device comprises a slide rail, a moving device, a connecting rod, a vertical support rod and a camera;
the camera is connected with the moving device;
the moving device is connected with the sliding rail in a sliding manner;
the connecting rod is connected with the sliding rail.
3. The multi-angle cross-validation intelligent skin test system according to claim 2, wherein the image acquisition device further comprises a telescopic rod, a base and universal wheels;
one end of the telescopic rod is connected with the vertical support rod, and the other end is connected with the base;
the upper surface of the base is connected with the telescopic rod, and the lower surface of the base is connected with the universal wheels.
4. The multi-angle cross-validation intelligent skin test system according to claim 1, wherein the image processing device comprises one or more of a desktop computer, a laptop computer, a smart phone, and a tablet computer.
CN202110415106.8A 2021-04-17 2021-04-17 Multi-angle cross validation intelligent skin measuring system Active CN113080874B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110415106.8A CN113080874B (en) 2021-04-17 2021-04-17 Multi-angle cross validation intelligent skin measuring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110415106.8A CN113080874B (en) 2021-04-17 2021-04-17 Multi-angle cross validation intelligent skin measuring system

Publications (2)

Publication Number Publication Date
CN113080874A CN113080874A (en) 2021-07-09
CN113080874B true CN113080874B (en) 2023-02-07

Family

ID=76678866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110415106.8A Active CN113080874B (en) 2021-04-17 2021-04-17 Multi-angle cross validation intelligent skin measuring system

Country Status (1)

Country Link
CN (1) CN113080874B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549434B (en) * 2022-02-09 2022-11-08 南宁市第二人民医院 Skin quality detection device based on cloud calculates

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103827916A (en) * 2011-09-22 2014-05-28 富士胶片株式会社 Wrinkle detection method, wrinkle detection device and wrinkle detection program, as well as wrinkle evaluation method, wrinkle evaluation device and wrinkle evaluation program
JP2016148933A (en) * 2015-02-10 2016-08-18 キヤノン株式会社 Image processing system and image processing method
CN108932493A (en) * 2018-06-29 2018-12-04 东北大学 A kind of facial skin quality evaluation method
CN111860250A (en) * 2020-07-14 2020-10-30 中南民族大学 Image identification method and device based on character fine-grained features
CN112382384A (en) * 2020-11-10 2021-02-19 中国科学院自动化研究所 Training method and diagnosis system for Turner syndrome diagnosis model and related equipment
CN112598576A (en) * 2020-12-24 2021-04-02 中标慧安信息技术股份有限公司 Safety verification method and system based on face recognition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097034B (en) * 2019-05-15 2022-10-11 广州纳丽生物科技有限公司 Intelligent face health degree identification and evaluation method
CN111524080A (en) * 2020-04-22 2020-08-11 杭州夭灵夭智能科技有限公司 Face skin feature identification method, terminal and computer equipment

Also Published As

Publication number Publication date
CN113080874A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
EP3826317B1 (en) Method and device for identifying key time point of video, computer apparatus and storage medium
US7965893B2 (en) Method, apparatus and storage medium for detecting cardio, thoracic and diaphragm borders
CN107610087B (en) Tongue coating automatic segmentation method based on deep learning
CN117078672B (en) Intelligent detection method for mobile phone screen defects based on computer vision
CN116664559B (en) Machine vision-based memory bank damage rapid detection method
CN108537751B (en) Thyroid ultrasound image automatic segmentation method based on radial basis function neural network
CN110728234A (en) Driver face recognition method, system, device and medium
EP2079054A1 (en) Detection of blobs in images
US8090151B2 (en) Face feature point detection apparatus and method of the same
CN112862824A (en) Novel coronavirus pneumonia focus detection method, system, device and storage medium
CN106611416B (en) Method and device for segmenting lung in medical image
CN110705468B (en) Eye movement range identification method and system based on image analysis
CN109376740A (en) A kind of water gauge reading detection method based on video
WO2015131468A1 (en) Method and system for estimating fingerprint pose
US20210200990A1 (en) Method for extracting image of face detection and device thereof
KR101034117B1 (en) A method and apparatus for recognizing the object using an interested area designation and outline picture in image
CN111369523A (en) Method, system, device and medium for detecting cell stacking in microscopic image
CN111860587A (en) Method for detecting small target of picture
CN112883824A (en) Finger vein feature recognition device for intelligent blood sampling and recognition method thereof
CN111916206A (en) CT image auxiliary diagnosis system based on cascade connection
CN113080874B (en) Multi-angle cross validation intelligent skin measuring system
CN108876776B (en) Classification model generation method, fundus image classification method and device
CN112991159B (en) Face illumination quality evaluation method, system, server and computer readable medium
CN116523916B (en) Product surface defect detection method and device, electronic equipment and storage medium
JP2004188202A (en) Automatic analysis method of digital radiograph of chest part

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant