CN112906585B - Intelligent hairdressing auxiliary system, method and readable medium based on machine learning - Google Patents


Info

Publication number
CN112906585B
Authority
CN
China
Prior art keywords
module
hair
hairstyle
user
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110214748.1A
Other languages
Chinese (zh)
Other versions
CN112906585A (en)
Inventor
商楚苘
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202110214748.1A priority Critical patent/CN112906585B/en
Publication of CN112906585A publication Critical patent/CN112906585A/en
Application granted granted Critical
Publication of CN112906585B publication Critical patent/CN112906585B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • A HUMAN NECESSITIES
    • A45 HAND OR TRAVELLING ARTICLES
    • A45D HAIRDRESSING OR SHAVING EQUIPMENT; EQUIPMENT FOR COSMETICS OR COSMETIC TREATMENTS, e.g. FOR MANICURING OR PEDICURING
    • A45D44/00 Other cosmetic or toiletry articles, e.g. for hairdressers' rooms
    • A45D44/005 Other cosmetic or toiletry articles, e.g. for hairdressers' rooms for selecting or displaying personal cosmetic colours or hairstyle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Geometry (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a machine-learning-based intelligent haircut assistance system. The system uses computer vision and machine learning to capture the user's state, then applies visual intervention and adjustment to the haircut process in combination with a perception and positioning device, with voice and visual prompts along the way, forming an intelligent workflow for assisted haircutting. The key points of the technical scheme comprise: a hairstyle library, a hairstyle preprocessing module, a real-time hairstyle camera module, a model building module, a hairstyle building module, a display module, a prompt module, a positioning module and an interaction module. The whole neural network generates pictures of face-hairstyle matching frames, on which the training module performs deep learning; the real-time hairstyle camera module obtains the user's head information, and the hairstyle building module generates hair cutting data. The system can reduce the communication gap between the user and the hairdresser, makes hairstyle design intelligent and the process visual, and helps improve the user's satisfaction with the haircut.

Description

Intelligent hairdressing auxiliary system, method and readable medium based on machine learning
Technical Field
The invention relates to the technical field of intelligent hairdressing assistance, in particular to an intelligent hairdressing assistance system, method and readable medium based on machine learning.
Background
At present, when users go to a barbershop for a haircut, they usually describe their desired hairstyle to the stylist verbally. Because spoken communication is not intuitive, the requirements and understanding of the two parties often diverge, so that the hairstyle the stylist designs deviates from what the user originally wanted. To reduce this information gap between the user and the hair stylist, a real-time, visual and convenient hairstyling aid is needed.
Disclosure of Invention
In view of the shortcomings of the prior art, an object of the present invention is to provide a machine-learning-based intelligent haircut assistance system that reduces the communication gap between the user and the barber, makes hairstyle design intelligent and the process visual, and helps improve user satisfaction.
To achieve this object, the invention provides the following technical scheme: an intelligent hairdressing auxiliary system based on machine learning comprises: a hairstyle library, a hairstyle preprocessing module, a real-time hairstyle camera module, a model building module, a hairstyle building module, a display module, a prompting module, a positioning module and an interaction module;
the hairstyle library is used for storing various pictures containing head hairstyles taken from the network, and for receiving pictures containing head hairstyles imported by the user;
the hairstyle preprocessing module is connected with the hairstyle library, and comprises a whole neural network, a human face area neural network, a hairstyle area neural network and a training module, wherein the whole neural network is used for acquiring a head hairstyle picture in the hairstyle library to generate a picture of a frame matched with a human face hairstyle;
the face area neural network is connected with the all-neural network and is used for receiving the picture of the matching frame matched with the face hairstyle and generating the picture containing the face;
the hairstyle regional neural network is connected with the all-neural network and is used for receiving the picture of the matching frame of the hairstyle of the human face and generating the picture containing the hairstyle;
the training module performs deep learning training according to the pictures containing the faces and the pictures containing the hairstyles;
the interactive module is connected with the hairstyle preprocessing module and is used for leading the preferred hairstyle information and the preferred object information into the hairstyle preprocessing module by a user; the hairstyle preprocessing module carries out pointing deep learning network inference according to the information of the preferred hairstyle, and reversely acquires the hairstyle information of the specified object in the hairstyle library according to the information of the preferred object and carries out pointing deep learning network inference;
the real-time hair style camera module is used for acquiring the head information of a user;
the model building module is respectively connected with the hair style preprocessing module and the real-time hair style shooting module, the model building module is used for generating a real-time user model according to the head information of a user, and the model building module is used for obtaining an ideal hair style model based on the real-time user model matching;
the interaction module is connected with the model building module and is used for receiving user instructions so as to revise or determine an ideal hair style model;
the hairstyle building module is connected with the model building module and used for generating a combined model according to the real-time user model and the ideal hairstyle model and comparing hairstyle characteristic information in the combined model with hairstyle characteristic information in the real-time user model to generate hair cutting data;
the interactive module is connected with the hair style building module and is used for receiving a user instruction so as to revise or determine the combined model;
the display module is connected with the hairstyle building module and at least comprises a haircut auxiliary area for displaying the combined model and the real-time user model and/or an equipment prompt area for displaying the cutting equipment;
the interactive module is connected with the display module and is used for receiving a user instruction so as to select the cutting equipment displayed in the equipment prompt area;
the prompting module is respectively connected with the hairstyle building module and the display module, generates cutting information according to the hair cutting data, and feeds the cutting information back to a user through the hair cutting auxiliary area and/or the equipment prompting area;
the positioning module is respectively connected with the hairstyle building module and the prompting module, the positioning module is used for monitoring the displacement of the head of a user and feeding the displacement back to the hairstyle building module and the prompting module, the hairstyle building module changes the angle condition of the real-time user model in real time according to the displacement, and the prompting module changes the prompting information according to the displacement.
The invention is further configured to: the real-time hair style camera shooting module comprises a five sense organs acquisition sub-module, a hair quality acquisition sub-module, a chromaticity comparison sub-module, a hair style shielding partition sub-module, a hair style shielding detection sub-module and a fine segmentation sub-module;
the facial feature acquisition submodule is used for acquiring the skin chromaticity information of the face of the user, a face contour line, two-eye positions, a nose tip position, an eyebrow position and a mouth position;
the hair quality acquisition submodule is used for acquiring the hair volume information of a user, the thickness information of hair, the volume information of hair and the hair chromaticity information;
the chromaticity comparison submodule is respectively connected with the five sense organs acquisition submodule and the hair quality acquisition submodule and is used for comparing the hair chromaticity with the skin chromaticity to acquire a hairstyle area, a face area and a coincidence area;
the hairstyle occlusion partition submodule is connected with the chromaticity comparison submodule and is used for dividing the area above the mouth counterclockwise into 18 sectors, each spanning 10° and having a different diameter, taking the central area of the mouth position as the circle center, and then dividing each sector evenly into 10 blocks from outside to inside along its radial direction;
the hair style shielding detection submodule is connected with the hair style shielding partition submodule and is used for carrying out primary detection on all blocks from outside to inside, comparing RGB values in the blocks with hair chromaticity and generating a comparison value, and when the comparison value is greater than or equal to 0.8, the block is a coincidence area; when the comparison value is less than 0.8, the block is a face area;
the fine segmentation sub-module is connected with the hairstyle shielding sub-module and is used for detecting the coincidence area obtained by the primary detection again;
the fine segmentation submodule specifically functions as follows:
step Z1, dividing the superposed area obtained by the preliminary detection into a plurality of fine cutting blocks;
step Z2, define X = {x_ij : x_ij ∈ S} as the set of pixel points of the fine segmentation block S. Let I_x = (R_x, G_x, B_x)^T denote the color vector of pixel point x, and L_X = (l_1, l_2, …, l_m)^T denote the label vector, where m denotes the number of pixel points in the block S, l_x = 1 denotes that x belongs to hair, and l_x = 0 denotes that x belongs to skin. The probability that a pixel is hair is denoted P(l_x = 1), and the following energy function is defined: E(L) = C(L) + α·B(L);
where α balances the importance of the two terms, C(L) represents the prediction-probability term over the pixel points, and B(L) is a smoothing term describing the penalty cost when labels of adjacent pixels differ;
step Z3, the final label vector L_X of the pixels in any fine segmentation block S is obtained when the energy function reaches its minimum value. The first term C(L) of the energy function is defined in terms of n_x, the number of pixels in the fine segmentation block S, and c(l_x), the prediction probability that the hair-color model assigns to pixel point x; the smoothing term B(L) is defined in terms of n_p, the number of neighboring points of pixel point p, and σ, the average smoothness of the image. [The explicit formulas for C(L), B(L) and σ are given only as equation images in the original publication and are not reproduced here.]
and neighboring points of each pixel point are selected using a neighborhood system, with an equalization treatment applied because the fine segmentation blocks differ in shape and size.
The invention is further configured to: the real-time hair style camera module also comprises a face type judging submodule which is respectively connected with the five sense organs obtaining submodule and the fine segmentation submodule and is used for obtaining final face type data after hair style shielding processing;
the face type judging submodule comprises the following concrete implementation steps:
step Q1: positioning the human face, and marking the basic facial feature information of the mouth, eyes, nose and the like of the human body correspondingly;
step Q2: the feature point acquisition and extraction steps are as follows: 1) extracting the distance between the central points of the two eyes; 2) extracting the inclination angle of the right eye; 3) positioning the center points of eyes and mouth in the face image and marking; 4) all extracted feature information is marked out in a rectangular frame;
step Q3: extracting the face, namely accurately positioning a face region, storing the face locally, and extracting when the face is matched with a hairstyle later;
step Q4: and (3) face recognition, namely dividing the face into a plurality of areas according to the marks of the feature points of each area of the face, and analyzing the information of five sense organs and the information of the face type.
The invention is further configured to: the step Q4 specifically includes:
step Q41, dividing the face into three characteristic line segments along the horizontal direction of the face, wherein the three characteristic line segments are respectively a left eye central vertical line QR, a right eye central vertical line UV and a mouth central vertical line ST;
step Q42, dividing the face into ten characteristic line segments in the vertical direction of the face, wherein the ten characteristic line segments are respectively a forehead horizontal line YZ, an eyebrow horizontal line AB, an upper eyelid horizontal line EF, an eye center horizontal line WX, a lower eyelid horizontal line GH, a widest horizontal line M1M2 of the face, a lower nose horizontal line IJ, an upper lip horizontal line KL, a lip center horizontal line MN and a lower lip horizontal line OP;
and step Q43, calculating and analyzing the distance and the proportional relation among the characteristic lines.
The invention is further configured to: the hairstyle building module comprises a comparison submodule, a formation submodule and an input submodule;
the comparison submodule is connected with the model construction module and is used for acquiring the hair length information of each dimension in the real-time user model and the ideal hair style model, and generating positive hair length data when the hair length information of the dimension in the real-time user model is larger than the hair length information of the same dimension in the ideal hair style model;
generating negative hair length data when the dimensional hair length information in the real-time user model is smaller than the same dimensional hair length information in the ideal hairstyle model;
the input sub-module is used for inputting and storing the hair growing speed and the average hair cutting interval length of the user;
the formation submodule is respectively connected with the comparison submodule and the input submodule and is used for receiving the positive hair length data, the negative hair length data, the user hair growth speed data and the average hair cutting interval length, and when the negative hair length data does not exist, the hair cutting data are directly generated;
and generating formation data when negative hair length data exist.
The invention is further configured to: the nurturing submodule comprises a hairstyle growing unit, a hairstyle simulation unit and a hairstyle storage unit;
the hairstyle growing unit is used for comparing the product of the user's hair growth speed and the average hair-cutting interval length with the negative hair length data, and when the product of the hair growth speed and the average interval length is greater than or equal to the negative hair length data, recording it as one-time formation data;
when the product of the hair growth speed and the average interval length is less than the negative hair length data, recording it as multi-time formation data;
the hairstyle simulation unit is connected with the hairstyle growing unit and used for simulating a hairstyle forming model in the forming process;
the hairstyle storage unit is connected with the hairstyle simulation unit and used for storing formation data to form a hairstyle formation model;
the display module is connected with the hairstyle simulation unit and used for calling and checking the hairstyle formation model.
The invention is further configured to: the hairstyle preprocessing module further comprises an intention model, wherein the intention model is: f(A) = β1×S + β2×R + β3×O + β4×P + β5×Q;
where f(A) represents the intention model, S represents skin color, R represents face shape, O represents hair length, P represents hair volume, Q represents curl, β1, β2, β3, β4 and β5 represent the corresponding weight values, and β1 + β2 + β3 + β4 + β5 = 1;
the training module is connected with the intention model and carries out deep learning network inference according to the intention direction of the intention model;
the interaction module is connected with the intention model and is used for receiving user instructions so as to change the weight values β1, β2, …, β5.
The invention is further configured to: the positioning module comprises a left eye positioning module and a right eye positioning module, the left eye positioning module is used for capturing a left eye central point of a user and generating a first positioning coordinate, and the right eye positioning module is used for capturing a right eye central point of the user and generating a second positioning coordinate;
and the positioning module generates a horizontal positioning line according to the first positioning coordinate and the second positioning coordinate.
The invention is further configured to: the positioning module comprises a left ear wearing device and a right ear wearing device, the left ear wearing device and the right ear wearing device respectively comprise a horizontal displacement induction module, a vertical displacement induction module and an angular displacement induction module, pressure sensors are arranged on the left ear wearing device and the right ear wearing device, and when the pressure sensors are triggered, the corresponding wearing devices are in a dormant state;
the left ear wearing device and the right ear wearing device respectively comprise mounting shells used for being mounted on auricles, the mounting shells are arranged in an arc structure, and the cross sections of the mounting shells are arranged in a concave structure;
the mounting shell comprises a mounting part, a first limiting part and a second limiting part, wherein the first limiting part and the second limiting part are made of elastic materials;
the first limiting part is internally provided with a counterweight channel which extends along the length direction of the mounting shell, a counterweight groove which is communicated with the counterweight channel and a counterweight block which is connected in the counterweight channel in a sliding way, the counterweight groove is provided with a plurality of counterweight grooves which are arranged along the extension direction of the counterweight channel, and the counterweight groove is arranged at the lower side of the counterweight channel;
the first limiting part is provided with a sliding channel in which an electromagnet assembly is slidably connected, and when the electromagnet assembly is energized, the counterweight block moves along with the electromagnet assembly.
The invention is further configured to: the cutting information comprises cutting position information, cutting direction information, cutting length information, cutting angle information and cutting equipment orientation information, the cutting position information, the cutting direction information, the cutting length information and the cutting angle information are fed back to a user through the auxiliary haircut area, and the cutting angle information and the cutting equipment orientation information are fed back to the user through the equipment prompt area.
An intelligent hairdressing auxiliary method based on machine learning comprises the following steps:
s1, inputting an ideal hair style picture or an ideal star hair style picture to the hair style preprocessing module by the user through the interaction module;
s2, positioning the head information of the user through a positioning module;
s3, acquiring the head information of the user through the real-time hair style camera module;
s4, generating an ideal hairstyle model by the model building module, and confirming or revising by the user through the interaction module;
s5, generating a combined model by the hairstyle building module, and confirming or revising by the user through the interaction module;
s6, the haircut auxiliary area displays the real-time user model and feeds back cutting information, and the equipment prompting area displays the cutting equipment and feeds back equipment operation information.
The invention is further configured to: the step S1 further includes:
S11, for the intention model f(A) = β1×S + β2×R + β3×O + β4×P + β5×Q, the user inputs the weight value β1 of S, the weight value β2 of R, the weight value β3 of O, the weight value β4 of P and the weight value β5 of Q;
where f(A) represents the intention model, S represents skin color, R represents face shape, O represents hair length, P represents hair volume, Q represents curl, β1, β2, β3, β4 and β5 represent the corresponding weight values, and β1 + β2 + β3 + β4 + β5 = 1.
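By way of illustration, the following Python sketch shows one possible orchestration of steps S1 to S6. It is a minimal sketch only: all module objects, method names and signatures are hypothetical, since the publication specifies behavior rather than an API.

```python
# Hypothetical orchestration of steps S1-S6; every module interface below is
# illustrative only and not defined by the publication.
def assisted_haircut_session(preprocessor, locator, camera,
                             model_builder, style_builder, display, interact):
    # S1: the user supplies an ideal hairstyle (or star) picture via the interaction module
    preprocessor.ingest(interact.get_preference_pictures())

    # S2: the positioning module locates the user's head
    pose = locator.locate_head()

    # S3: the real-time hairstyle camera module captures the head information
    head_info = camera.capture(pose)

    # S4: the model building module proposes an ideal hairstyle model; the user confirms or revises
    ideal = interact.confirm_or_revise(model_builder.match_ideal_model(head_info, preprocessor))

    # S5: the hairstyle building module forms the combined model; the user confirms or revises
    combined = interact.confirm_or_revise(style_builder.combine(head_info, ideal))

    # S6: cutting information goes to the haircut assistance area,
    #     equipment operation information to the equipment prompt area
    cutting_data = style_builder.diff(combined, head_info)
    display.show_assist_area(head_info, combined, cutting_data)
    display.show_equipment_area(cutting_data)
```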
In conclusion, the invention has the following beneficial effects: hairstyle combinations can be learned through a deep neural network, and the matching degree of the network is improved by the user's added preference data (favorite hairstyles and/or favorite stars) together with intention data on skin color, face shape, hair length and the like, so that the hairstyle best suited to the user is recommended intelligently. Hair cutting data are generated from the difference between the recommended hairstyle and the current hairstyle, and prompt information is displayed through the display module, so the user can conveniently perform the cutting operation according to the prompts, achieving an assisted-haircut effect. In addition, the system monitors deviations of the user's head in real time and corrects the prompt information promptly, achieving an intelligent and visual assisted-haircut effect.
Drawings
FIG. 1 is a functional block diagram of a hair styling system;
FIG. 2 is a functional block diagram of a hair styling module;
FIG. 3 is a functional block diagram of a development submodule;
FIG. 4 is a functional block diagram of a positioning module;
FIG. 5 is a logical block diagram of a method of operation of a hair styling system;
FIG. 6 is a schematic block diagram of a real-time hair style camera module;
FIG. 7 is a schematic diagram of a neural network;
FIG. 8 is a perspective view of the left ear wearing device;
fig. 9 is a partial sectional view of the first stopper portion.
Reference numerals: 1. a hair style library; 2. a hairstyle preprocessing module; 21. an all-neural network; 22. a face area neural network; 23. a hair style regional neural network; 24. a training module; 25. an intent model; 3. a real-time hair style camera module; 31. a five sense organs acquisition submodule; 32. a hair quality obtaining submodule; 33. a chroma comparison submodule; 34. a hair style shielding sub-module; 35. a hairstyle occlusion detection submodule; 36. finely dividing the submodule; 37. a face shape judging submodule; 4. a model building module; 5. a hairstyle building module; 51. a comparison submodule; 52. cultivating a submodule; 521. a hair-style growing unit; 522. a hair style simulation unit; 523. a hair style storage unit; 53. an input sub-module; 6. a display module; 7. a prompt module; 8. a positioning module; 81. a left ear applicator; 82. a right ear applicator; 83. a pressure sensor; 84. installing a shell; 841. an installation part; 842. a first limiting part; 843. a second limiting part; 85. a counterweight passage; 86. a counterweight groove; 87. a counterweight block; 88. a slipping channel; 89. an electromagnet assembly; 9. and an interaction module.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. In which like parts are designated by like reference numerals. It should be noted that the terms "front," "back," "left," "right," "upper" and "lower" used in the following description refer to directions in the drawings, and the terms "bottom" and "top," "inner" and "outer" refer to directions toward and away from, respectively, the geometric center of a particular component.
Referring to fig. 1 to 9, in order to achieve the above object, the present invention provides the following technical solutions: an intelligent hairdressing auxiliary system based on machine learning comprises: the system comprises a hairstyle library 1, a hairstyle preprocessing module 2, a real-time hairstyle camera module 3, a model building module 4, a hairstyle building module 5, a display module 6, a prompt module 7, a positioning module 8 and an interaction module 9;
the hair style library 1 is used for storing various pictures containing head hair styles on a network and allowing a user to automatically import the pictures containing the head hair styles;
the hairstyle preprocessing module 2 is connected with the hairstyle library 1, the hairstyle preprocessing module 2 comprises a whole neural network 21, a human face regional neural network 22, a hairstyle regional neural network 23 and a training module 24, and the whole neural network 21 is used for obtaining a picture of a head hairstyle in the hairstyle library 1 to generate a picture of a frame matched with the human face hairstyle;
the face area neural network 22 is connected with the all-neural network 21 and is used for receiving the picture of the matching frame matched with the face hairstyle and generating a picture containing the face;
the hair style area neural network 23 is connected with the whole neural network 21 and is used for receiving the picture of the frame matched with the hair style of the human face and generating a picture containing the hair style;
the training module 24 performs deep learning training according to the pictures containing the faces and the pictures containing the hairstyles;
the interactive module 9 is connected with the hairstyle preprocessing module 2, and the interactive module 9 is used for leading preferred hairstyle information and preferred object information into the hairstyle preprocessing module 2 by a user; the hairstyle preprocessing module 2 carries out pointing deep learning network inference according to the preferred hairstyle information, and the hairstyle preprocessing module 2 reversely obtains the hairstyle information of the specified object in the hairstyle library 1 according to the preferred object information and carries out pointing deep learning network inference;
the real-time hair style camera module 3 is used for acquiring the head information of the user;
the model building module 4 is respectively connected with the hair style preprocessing module 2 and the real-time hair style camera module 3, the model building module 4 is used for generating a real-time user model according to the head information of the user, and the model building module 4 is used for obtaining an ideal hair style model based on the real-time user model matching;
the interaction module 9 is connected with the model building module 4, and the interaction module 9 is used for receiving user instructions so as to revise or determine an ideal hair style model;
the hairstyle building module 5 is connected with the model building module 4 and is used for generating a combined model according to the real-time user model and the ideal hairstyle model and comparing hairstyle characteristic information in the combined model with hairstyle characteristic information in the real-time user model to generate hair cutting data;
the interactive module 9 is connected with the hair style building module 5, and the interactive module 9 is used for receiving a user instruction so as to revise or determine the combined model;
the display module 6 is connected with the hairstyle building module 5, and the display module 6 comprises a haircut auxiliary area for displaying the combined model and the real-time user model and an equipment prompt area for displaying the cutting equipment;
the interactive module 9 is connected with the display module 6, and the interactive module 9 is used for receiving a user instruction so as to select the cutting equipment displayed in the equipment prompt area;
the prompting module 7 is respectively connected with the hairstyle building module 5 and the display module 6, generates cutting position information, cutting direction information, cutting length information, cutting angle information and cutting equipment orientation information according to the hair cutting data, feeds the cutting position information, the cutting direction information, the cutting length information and the cutting angle information back to a user through the hair cutting auxiliary area, and feeds the cutting angle information and the cutting equipment orientation information back to the client through the equipment prompting area;
the positioning module 8 is respectively connected with the hairstyle building module 5 and the prompt module 7, the positioning module 8 is used for monitoring the displacement of the head of the user and feeding the displacement back to the hairstyle building module 5 and the prompt module 7, the hairstyle building module 5 changes the angle condition of the real-time user model according to the displacement in real time, and the prompt module 7 changes prompt information according to the displacement.
The hairstyle library 1 is connected to the Internet and acquires popular hairstyles from the network in real time, so that the head hairstyle pictures in the hairstyle library 1 are updated automatically; in addition, the user can import favorite hairstyle pictures and pictures of themselves, thereby expanding the hairstyle pool.
Neural networks come in many varieties, such as convolutional neural networks, recurrent neural networks, deep belief networks, generative adversarial networks and long short-term memory networks; a convolutional neural network is taken as the example below.
The whole neural network 21 serves as the first-stage neural network; its judgment threshold is set relatively loosely, and it is used to eliminate non-hairstyle pictures that the hairstyle library 1 obtained from the network. The face area neural network 22 and the hairstyle area neural network 23 serve as the second-stage neural networks, improving resolution and pixel accuracy and precisely capturing the hairstyle information and face information in the hairstyle library 1.
The advantage of the whole neural network 21 is that it can receive a picture of a hairstyle of any size without requiring all training images and test images to be of the same size and resolution.
At the last fully connected layer, the training module 24 combines the face picture generated by the face area neural network 22 with the hairstyle picture generated by the hairstyle area neural network 23. The training module 24 is connected with an early rejection classifier and a data routing layer; the early rejection classifier, itself a small neural network, can be configured with unfavorable hairstyle-face combinations and eliminates them during training, thereby both classifying hairstyles and optimizing the training images.
The face area neural network 22 classifies according to the face shape (face length, face width, face proportion, facial proportion, etc.), skin color and jewelry condition; the hair style area neural network 23 classifies according to the condition of hair length, hair color, hair curl, etc., then performs matching combination for each subclass, performs autonomous learning, and generates a matching value according to the degree of similarity by comparing with favorite hair style information input by the user.
The face area neural network 22 obtains a face set {(I_n, L_n)}, n = 1, …, N, where N is the total number of training samples, I_n represents the n-th image and L_n represents the face data corresponding to the n-th image. The hairstyle area neural network 23 obtains a hairstyle set {(Z_m, V_m)}, m = 1, …, M, where M is the total number of training samples, Z_m represents the m-th image and V_m represents the hairstyle data corresponding to the m-th image. The combination set is {(I_n × Z_m, L_n × V_m)}; disliked combinations in this set are then eliminated through the intention model and the early rejection classifier.
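As a concrete reading of how the face set and hairstyle set are crossed into a combination set, the sketch below forms the Cartesian product of the two sets and drops pairs flagged by an early-rejection predicate. The predicate shown is an illustrative stand-in for the early rejection classifier, and the data fields are assumptions, not taken from the publication.

```python
from itertools import product

def build_combination_set(face_set, style_set, reject):
    """face_set: list of (I_n, L_n) pairs; style_set: list of (Z_m, V_m) pairs.
    Returns every cross-pairing not flagged by the early-rejection predicate."""
    combos = []
    for (face_img, face_data), (style_img, style_data) in product(face_set, style_set):
        if not reject(face_data, style_data):  # stand-in for the early rejection classifier
            combos.append(((face_img, style_img), (face_data, style_data)))
    return combos

# Illustrative rule only: reject pairings of round faces with very short styles.
combos = build_combination_set(
    face_set=[("I1", {"shape": "round"}), ("I2", {"shape": "long"})],
    style_set=[("Z1", {"length": "short"}), ("Z2", {"length": "medium"})],
    reject=lambda f, s: f["shape"] == "round" and s["length"] == "short",
)
print(len(combos))  # 3 of the 4 cross-pairs survive
```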
The user can input the favorite hairstyle information or the favorite star type through the interaction module 9, when the input hairstyle information is input, the training module 24 conducts guiding matching according to the favorite hairstyle condition and the face condition, generates matching degree according to the matching similarity, and finally generates a plurality of recommended hairstyles according to the matching degree;
when the input is a favorite star type, the training module 24 sends a search instruction to the hair style library 1 to obtain the historical hair styles of the star, and performs sequencing according to time, and the training module 24 performs guide matching according to the historical hair styles of the star and generates a plurality of groups of recommended hair styles according to matching degree and time sequencing;
in addition, the hair style preprocessing module 2 further includes an intentional model 25, wherein the intentional model 25 is as follows: f (a) ═ β 1 × S + β 2 × R + β 3 × O + β 4 × P + β 5 × Q;
wherein f (a) represents an intention model, S represents skin color, and R represents facial form; o represents hair length; p represents the volume of hair, Q represents the curl, β 1, β 2, β 3, β 4, β 5 represent the corresponding specific gravity values, respectively and β 1+ β 2+ β 3+ β 4+ β 5 is 1;
the training module 24 is connected with the intention model 25, and the training module 24 carries out deep learning network inference according to the intention direction of the intention model 25;
The interaction module 9 is connected with the intention model 25 and is used by the user to change the weight values β1, β2, …, β5.
The user can fill in the values of β1, β2, β3, β4 and β5 according to their degree of preference, and the training module 24 searches for the corresponding face shape and hairstyle according to the filled-in values, improving the matching accuracy.
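A minimal sketch of evaluating the intention model follows. It assumes each of S, R, O, P and Q has already been quantified as a normalized match score in [0, 1], which the publication does not specify; only the weighted sum and the constraint that the weights sum to 1 come from the text.

```python
def intention_score(features, weights):
    """f(A) = b1*S + b2*R + b3*O + b4*P + b5*Q. `features` holds assumed
    normalized match scores for S (skin color), R (face shape), O (hair length),
    P (hair volume) and Q (curl); `weights` are the user-entered values."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weight values must sum to 1"
    return sum(weights[k] * features[k] for k in ("S", "R", "O", "P", "Q"))

# A user who cares mostly about face-shape fit and hair length:
weights = {"S": 0.1, "R": 0.4, "O": 0.3, "P": 0.1, "Q": 0.1}
features = {"S": 0.8, "R": 0.6, "O": 0.9, "P": 0.5, "Q": 0.2}  # assumed scores
print(intention_score(features, weights))  # 0.66
```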
The system can recommend an ideal hairstyle model to the user according to the user head information acquired by the real-time hairstyle camera module 3. If satisfied, the user confirms it through the interaction module 9; if not, the user can select another hairstyle model by matching degree, or re-match by revising the parameters.
The hairstyle building module 5 combines the selected ideal hairstyle model with the user's real-time user model to generate a combined model. When the user is satisfied with the combined model, it can be confirmed through the interaction module 9; if the user finds the ideal hairstyle unsatisfactory once combined with their own appearance, the user can make manual corrections (increasing or reducing local hair length) or return to the previous layer and select another ideal hairstyle model.
The hairstyle building module 5 performs a calculation based on the confirmed combined model and the real-time user model. A hair length difference is bound to exist between the combined model and the original real-time user model; the hairstyle building module 5 obtains the hair length difference in each dimension to generate hair length data, then converts the hair length data into optimal hair cutting data and transmits it to the prompt module 7, which decomposes the hair cutting data and prompts it to the user step by step through the display module 6;
the display module 6 includes, for example, an equipment prompt area located at the upper left corner, and the cutting equipment is displayed in the equipment prompt area, and the cutting equipment can prompt an angle according to information sent by the prompt module, so as to prompt a user how to correctly hold the cutting equipment, and how to correctly place the angle and the edge of the cutting equipment.
The display module 6 also has a haircut assistance area in the middle, which has several display modes: 1. a dual-screen display, showing the combined model and the real-time user model side by side; 2. a switching mode, in which the user switches among the real-time user model, the combined model and the ideal hairstyle model through the interaction module 9; 3. a superposition mode, in which the combined model is overlaid on the real-time user model as a phantom outline. In addition, the haircut assistance area receives the prompt information from the prompt module 7 and generates a cutting line; the user performs the cutting operation according to the cutting line and the prompted cutting direction, cutting length, cutting angle and so on, and the system records the cutting operation in real time.
The positioning module 8 can be a hardware device worn at the user's auricle that moves with the swinging of the user's head. During a haircut the head may move unpredictably; horizontal and vertical shifts do not affect the haircut much, but angular rotation greatly increases the difficulty for a novice, who then cannot cut most effectively according to the prompt information. The positioning module 8 therefore reflects the head's movement accurately and promptly, and the prompt information is updated quickly through a reasonable system conversion.
The positioning module 8 can also be implemented as a software positioning method: it comprises a left eye positioning module and a right eye positioning module, the left eye positioning module being used for capturing the center point of the user's left eye and generating a first positioning coordinate, and the right eye positioning module being used for capturing the center point of the user's right eye and generating a second positioning coordinate;
the positioning module 8 generates a horizontal positioning line according to the first positioning coordinate and the second positioning coordinate.
And judging the displacement of the user head in real time according to the horizontal positioning line.
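The horizontal positioning line can be reduced to a tilt angle, as the sketch below illustrates. The image coordinate convention (x growing rightward, y growing downward) is an assumption; the publication does not fix one.

```python
import math

def head_tilt_degrees(first_coord, second_coord):
    """first_coord / second_coord: the (x, y) positioning coordinates of the left
    and right eye centers. Returns the tilt of the horizontal positioning line,
    assuming image coordinates (x grows rightward, y grows downward)."""
    dx = second_coord[0] - first_coord[0]
    dy = second_coord[1] - first_coord[1]
    return math.degrees(math.atan2(dy, dx))

# A slight head roll shows up directly as the line's angle:
print(head_tilt_degrees((100, 120), (160, 125)))  # ~4.76 degrees
```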
And the real-time hairstyle camera module 3 is converted into a camera mode, so that the haircut progress is obtained in real time, and when haircut errors occur, data remediation is carried out.
In summary, the invention has the advantage that hairstyle combinations can be learned through a deep neural network, and the matching degree of the network is improved through the favorite hairstyles and/or favorite star styles added by the user together with the user's intention data on skin color, face shape, hair length and the like, so that the hairstyle best suited to the user is recommended intelligently. Hair cutting data are generated according to the difference between that hairstyle and the current one, and prompt information is displayed through the display module, so the user can conveniently perform the cutting operation according to the prompts, achieving an assisted-haircut effect. In addition, the system monitors deviations of the user's head in real time, achieving an intelligent and visual assisted-haircut effect.
Users can perform haircut operations by themselves according to their own needs, reducing trips to the barbershop; the haircut process is monitored in real time with timely remediation, and head displacement is monitored at all times, reducing the difficulty of haircutting and making it easy for beginners to get started.
The real-time hair style camera module 3 comprises a five sense organs obtaining submodule 31, a hair quality obtaining submodule 32, a chromaticity comparison submodule 33, a hair style shielding partition submodule 34, a hair style shielding detection submodule 35 and a fine segmentation submodule 36.
The five sense organs acquisition submodule 31 is used for acquiring the skin chromaticity information of the face of the user, the face contour line, the positions of two eyes, the position of a nose tip, the position of an eyebrow and the position of a mouth;
the hair quality obtaining sub-module 32 is used for obtaining the user hair volume information, the hair thickness information, the hair volume information and the hair chroma information;
the chromaticity comparison submodule 33 is respectively connected with the facial feature acquisition submodule 31 and the hair quality acquisition submodule 32, and is used for comparing the hair chromaticity with the skin chromaticity to acquire a hairstyle area, a face area and a coincidence area;
the hair style occlusion sub-module 34 is connected to the chroma ratio sub-module 33, and gives an input image and a hair style area, with a center area of the mouth position as a centerThe area above the mouth is divided into 18 sectors each having a different diameter (hereinafter referred to as "sector") at every 10 °, and the counter-clockwise direction is denoted as "S 1 ~S 18 . For any sector area S i The image data are divided into 10 blocks (hereinafter referred to as blocks) according to the average radial length, and the blocks are marked as S from far to near in sequence according to the distance from the center of the circle i,1 ~S i,10
The hair style occlusion detection submodule 35 is connected to the hair style occlusion sub-module 34, and is configured to perform preliminary detection on all blocks from outside to inside, compare RGB values in the blocks with hair chromaticity, and generate a comparison value, where when the comparison value is greater than or equal to 0.8, the block is a coincidence area; when the comparison value is less than 0.8, the block is a face area;
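The sector-and-block partition and the preliminary block classification can be sketched as follows. The geometric partition follows the text (18 sectors of 10°, 10 radial blocks numbered from outside to inside); the color comparison metric is an assumption, since the publication fixes only the 0.8 cut-off, not how the comparison value is computed.

```python
import math

def block_index(px, py, mouth_center, max_radius):
    """Map a pixel above the mouth to its (sector, block) indices: 18 sectors of
    10 degrees counted counterclockwise (S_1..S_18 -> 0..17) and 10 radial
    blocks numbered from outside to inside (S_i,1..S_i,10 -> 0..9)."""
    dx = px - mouth_center[0]
    dy = mouth_center[1] - py                 # flip y: image y grows downward
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    r = math.hypot(dx, dy)
    if angle >= 180.0 or r > max_radius:
        return None                           # outside the half-disc above the mouth
    sector = int(angle // 10.0)
    ring = min(int(r / (max_radius / 10.0)), 9)
    return sector, 9 - ring                   # 0 = outermost block, 9 = innermost

def classify_block(mean_rgb, hair_rgb, threshold=0.8):
    """Preliminary detection: the comparison value is assumed here to be
    1 - normalized Euclidean RGB distance; the publication fixes only the 0.8 cut-off."""
    comparison = 1.0 - math.dist(mean_rgb, hair_rgb) / math.dist((0, 0, 0), (255, 255, 255))
    return "coincidence area" if comparison >= threshold else "face area"

print(block_index(120, 80, mouth_center=(100, 140), max_radius=100))  # (7, 3)
print(classify_block((60, 40, 30), hair_rgb=(50, 35, 25)))            # coincidence area
```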
the fine segmentation submodule 36 is connected with the hair style shielding submodule and is used for detecting the overlapping area obtained by the primary detection again;
the detailed partitioning sub-module 36 functions specifically as:
step Z1, dividing the superposed area obtained by the preliminary detection into a plurality of fine cutting blocks;
step Z2, define X = {x_ij : x_ij ∈ S} as the set of pixel points of the fine segmentation block S. Let I_x = (R_x, G_x, B_x)^T denote the color vector of pixel point x, and L_X = (l_1, l_2, …, l_m)^T denote the label vector, where m denotes the number of pixel points in the block S, l_x = 1 denotes that x belongs to hair, and l_x = 0 denotes that x belongs to skin. The probability that a pixel is hair is denoted P(l_x = 1), and the following energy function is defined: E(L) = C(L) + α·B(L);
where α balances the importance of the two terms, C(L) represents the prediction-probability term over the pixel points, and B(L) is a smoothing term describing the penalty cost when labels of adjacent pixels differ;
step Z3, the final label vector L_X of the pixels in any fine segmentation block S is obtained when the energy function reaches its minimum value. The first term C(L) of the energy function is defined in terms of n_x, the number of pixels in the fine segmentation block S, and c(l_x), the prediction probability that the hair-color model assigns to pixel point x; the smoothing term B(L) is defined in terms of n_p, the number of neighboring points of pixel point p, and σ, the average smoothness of the image. [The explicit formulas for C(L), B(L) and σ are given only as equation images in the original publication and are not reproduced here.]
and neighboring points of each pixel point are selected using a neighborhood system, with an equalization treatment applied because the fine segmentation blocks differ in shape and size.
Directly minimizing the energy function has high complexity and a very large computation load, so an approximate solution method is presented. First, the pixels in a fine segmentation block S are sorted from small to large by the hair probability predicted by the ANN; then the initial labels of all pixels in the block are set to 1, and pixel labels are set to 0 one by one in order of increasing predicted hair probability; finally the value of the energy function is computed for each labeling, and the label vector that minimizes the energy function is selected as the optimal solution.
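A minimal sketch of this approximate minimization follows. Because the explicit formulas for C(L) and B(L) survive only as equation images, standard stand-ins are used: C(L) averages the per-pixel cost of the assigned label against the predicted hair probability, and B(L) counts label disagreements between neighboring pixels.

```python
def energy(labels, probs, neighbors, alpha):
    """E(L) = C(L) + alpha * B(L), with stand-in terms: C averages the cost of
    each assigned label against the predicted hair probability; B counts
    neighboring pixel pairs whose labels disagree."""
    c = sum((1.0 - p) if l == 1 else p for l, p in zip(labels, probs)) / len(labels)
    b = sum(1 for i, j in neighbors if labels[i] != labels[j])
    return c + alpha * b

def approx_min_energy(probs, neighbors, alpha=0.05):
    """Approximate minimization as described: start from the all-hair labeling,
    peel off pixels in order of increasing predicted hair probability, and keep
    the labeling whose energy is lowest."""
    labels = [1] * len(probs)
    best_e, best_labels = energy(labels, probs, neighbors, alpha), list(labels)
    for i in sorted(range(len(probs)), key=lambda i: probs[i]):  # least hair-like first
        labels[i] = 0
        e = energy(labels, probs, neighbors, alpha)
        if e < best_e:
            best_e, best_labels = e, list(labels)
    return best_e, best_labels

# Five pixels in a row; neighbor pairs chain them together.
probs = [0.9, 0.8, 0.6, 0.2, 0.1]
print(approx_min_energy(probs, [(0, 1), (1, 2), (2, 3), (3, 4)]))  # (0.25, [1, 1, 1, 0, 0])
```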
Finally, through this detailed calculation, the coincidence region is resolved and the whole face structure is obtained. This design can determine the face shape both for users without bangs and for users with bangs, so its application range is wide. In addition it can collect face data and gather the user's bangs data quickly, without needing to shoot separate pictures with and without bangs, reducing the load on the system.
The real-time hair style camera module 3 further comprises a face shape judging submodule 37, and the face shape judging submodule 37 is respectively connected with the facial feature obtaining submodule 31 and the fine segmentation submodule 36 and is used for obtaining final face shape data after hair style shielding processing;
the face shape judging submodule 37 includes:
step Q1: positioning the human face, and marking the basic facial feature information of the mouth, eyes, nose and the like of the human body correspondingly;
step Q2: the method comprises the following steps of characteristic point acquisition and extraction: 1) extracting the distance between the central points of the two eyes; 2) extracting the inclination angle of the right eye; 3) positioning the center points of eyes and mouth in the face image and marking; 4) all extracted feature information is marked out in a rectangular frame;
step Q3: extracting a face, namely accurately positioning a face region, storing the face locally, and extracting when the face is matched with a hairstyle later;
step Q4: and (3) face recognition, namely dividing the face into a plurality of areas according to the marks of the feature points of each area of the face, and analyzing the information of five sense organs and the information of the face type.
The step Q4 specifically includes:
step Q41, dividing the face into three characteristic line segments along the horizontal direction of the face, wherein the three characteristic line segments are respectively a left eye central vertical line QR, a right eye central vertical line UV and a mouth central vertical line ST;
step Q42, dividing the face into ten characteristic line segments in the vertical direction of the face, wherein the ten characteristic line segments are respectively a forehead horizontal line YZ, an eyebrow horizontal line AB, an upper eyelid horizontal line EF, an eye center horizontal line WX, a lower eyelid horizontal line GH, a widest horizontal line M1M2 of the face, a lower nose horizontal line IJ, an upper lip horizontal line KL, a lip center horizontal line MN and a lower lip horizontal line OP;
and step Q43, calculating and analyzing the distance and the proportional relation among the characteristic lines.
The first step of the distance comparison is the comparison between the mouth center vertical line ST and the face widest horizontal line M1M2 (the mouth center vertical line ST is in fact the longest vertical line of the face).
When the mouth center vertical line ST / the face widest horizontal line M1M2 is 1.333 or more:
1. when the face satisfies a first ratio condition (given only as an equation image in the original and not reproduced here), the system judges a rectangular face;
2. when the face satisfies a second ratio condition (likewise given only as an equation image), the system judges a diamond face;
3. otherwise the system judges a long face.
When the mouth center vertical line ST / the face widest horizontal line M1M2 is less than 1.333:
1. when the face satisfies a third ratio condition involving the value 0.89 (the formulas are given only as equation images), the system judges a square face;
2. when the face satisfies a fourth ratio condition (likewise given only as equation images), the system judges a diamond face;
3. otherwise the system judges a round face.
When the upper court is larger than the middle court and larger than the lower court, the system judges an upper-court face type;
when the lower court is larger than the middle court and larger than the upper court, the system judges a lower-court face type;
when the middle court is larger than the upper court and larger than the lower court, and the distance between the two eyes is smaller than 2/5 of the face width, the system judges an inner face type;
when the middle court is larger than the upper court and larger than the lower court, and the distance between the two eyes is 2/5 of the face width or more, the system judges an outer face type. Based on the judged face type, the system can recommend a preferred hairstyle from the hairstyle library 1.
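As a concrete illustration (ours, not the patent's), the decision logic above can be sketched in Python. The measurement unit is assumed consistent, and only the thresholds visible in the translated text (1.333 and 2/5 of the face width) are implemented, since the finer rectangular/diamond/square ratio conditions were lost with the formula images:

# Illustrative sketch only: face-type decision logic of steps Q41-Q43 and
# the three-court rules above, under assumed consistent units.
from dataclasses import dataclass

@dataclass
class FaceMeasurements:
    st: float            # mouth center vertical line ST (longest vertical line)
    m1m2: float          # widest horizontal line of the face M1M2
    upper_court: float   # forehead zone height
    middle_court: float  # mid-face zone height
    lower_court: float   # lower-face zone height
    eye_distance: float  # distance between the two eye center points
    face_width: float

def aspect_family(m: FaceMeasurements) -> str:
    """Split on ST / M1M2 at 1.333 as stated in the text; the finer
    distinctions need the ratio conditions lost with the formula images."""
    if m.st / m.m1m2 >= 1.333:
        return "long family (rectangular / diamond / long face)"
    return "short family (square / diamond / round face)"

def court_type(m: FaceMeasurements) -> str:
    """Three-court comparison plus the 2/5-face-width eye-distance rule."""
    if m.upper_court > m.middle_court and m.upper_court > m.lower_court:
        return "upper face type"
    if m.lower_court > m.middle_court and m.lower_court > m.upper_court:
        return "lower face type"
    # Middle court dominant: the eye distance decides inner vs. outer type.
    if m.eye_distance < (2 / 5) * m.face_width:
        return "inner face type"
    return "outer face type"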
The hairstyle building module 5 comprises a comparison submodule 51, a cultivation submodule 52, and an input submodule 53;
the comparison submodule 51 is connected with the model building module 4 and is used for acquiring the hair length information of each dimension in the real-time user model and in the ideal hairstyle model; when the hair length of a dimension in the real-time user model is greater than the hair length of the same dimension in the ideal hairstyle model, positive hair length data is generated;
when the hair length of a dimension in the real-time user model is smaller than the hair length of the same dimension in the ideal hairstyle model, negative hair length data is generated;
the input submodule 53 is used for inputting and storing the user's hair growth rate and the average haircut interval;
the cultivation submodule 52 is connected with the comparison submodule 51 and the input submodule 53, respectively, and is configured to receive the positive and negative hair length data, the user's hair growth rate, and the average haircut interval; when no negative hair length data exists, haircut data is generated directly;
when negative hair length data exists, cultivation data is generated.
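For concreteness, a minimal sketch of how the comparison submodule 51 could derive positive and negative hair length data, assuming each model exposes a hair length per named dimension (the dimension names and centimeter units are our assumptions):

# Hypothetical sketch of comparison submodule 51: derive positive and
# negative hair length data per dimension. Dimension names are invented.
def compare_hair_lengths(real_time: dict, ideal: dict):
    """Return (positive, negative) hair length data in cm per dimension.

    positive: current hair longer than the ideal (can be cut immediately);
    negative: current hair shorter than the ideal (must be grown out).
    """
    positive, negative = {}, {}
    for dim, ideal_len in ideal.items():
        diff = real_time.get(dim, 0.0) - ideal_len
        if diff > 0:
            positive[dim] = diff
        elif diff < 0:
            negative[dim] = -diff  # stored as a positive magnitude
    return positive, negative

# Example: the fringe is 2 cm too long, the crown 1.5 cm too short.
pos, neg = compare_hair_lengths({"fringe": 8.0, "crown": 4.5},
                                {"fringe": 6.0, "crown": 6.0})
print(pos, neg)  # {'fringe': 2.0} {'crown': 1.5}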
The cultivation submodule 52 comprises a hairstyle growth unit 521, a hairstyle simulation unit 522, and a hairstyle storage unit 523;
the hairstyle growth unit 521 compares the product of the user's hair growth rate and the average haircut interval with the negative hair length data: when the product is less than or equal to the negative hair length data, it is recorded as single-cultivation data;
when the product is greater than the negative hair length data, it is recorded as multi-cultivation data;
the hairstyle simulation unit 522 is connected with the hairstyle growth unit 521 and is used for simulating the hairstyle cultivation model during the cultivation process;
the hairstyle storage unit 523 is connected with the hairstyle simulation unit 522 and is used for storing the cultivation data and the cultivation model;
the display module 6 is connected with the hairstyle simulation unit 522 for invoking and viewing the hairstyle cultivation model.
The cultivation submodule 52 is designed mainly so that the user can, through several rounds of growing and cutting, gradually trim toward the desired hairstyle.
An ideal hairstyle is often inconsistent with the current hairstyle: the current hairstyle may be locally too long in some areas and locally too short in others. The cultivation submodule 52 therefore distinguishes the over-long areas from the over-short areas and treats them differently, trimming the over-short areas lightly and the over-long areas heavily, so that the ideal hairstyle takes shape after several trims.
By multiplying the user-entered hair growth rate by the average haircut interval, the system can estimate after how many cycles the ideal hairstyle will be reached, and the user can preview the intermediate changes after the initial haircut; if unsatisfied, the plan can be corrected in time through the system.
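A back-of-the-envelope sketch of this estimate follows. The ceil-based session count is our generalization of the single- versus multi-cultivation distinction (reading one cultivation as one growth interval covering the deficit), and the units are assumed:

import math

# Sketch under assumed units: growth rate in cm/day, haircut interval in
# days, so their product is the length regained between two salon visits.
def cultivation_sessions(deficit_cm: float,
                         growth_rate_cm_per_day: float,
                         interval_days: float) -> int:
    growth_per_interval = growth_rate_cm_per_day * interval_days
    return max(1, math.ceil(deficit_cm / growth_per_interval))

# Hair regrows about 1.2 cm per 30-day interval at 0.04 cm/day; a 3 cm
# deficit therefore needs 3 grow-and-trim cycles.
print(cultivation_sessions(3.0, 0.04, 30))  # 3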
The positioning module 8 comprises a left-eye positioning module and a right-eye positioning module; the left-eye positioning module is used for capturing the center point of the user's left eye and generating a first positioning coordinate, and the right-eye positioning module is used for capturing the center point of the user's right eye and generating a second positioning coordinate;
the positioning module 8 generates a horizontal positioning line according to the first positioning coordinate and the second positioning coordinate.
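As an illustration, assuming image-pixel coordinates (x to the right, y downward), the horizontal positioning line can be summarized by the roll angle and midpoint of the segment joining the two eye centers:

import math

# Sketch: derive the horizontal positioning line of positioning module 8
# from the first and second positioning coordinates (eye center points).
def horizontal_positioning_line(left_eye, right_eye):
    """Return (roll_angle_deg, midpoint) of the line through both eyes."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    roll = math.degrees(math.atan2(dy, dx))  # 0 deg = perfectly level head
    midpoint = ((left_eye[0] + right_eye[0]) / 2.0,
                (left_eye[1] + right_eye[1]) / 2.0)
    return roll, midpoint

print(horizontal_positioning_line((100.0, 200.0), (180.0, 204.0)))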
The positioning module 8 comprises a left ear wearing device 81 and a right ear wearing device 82, each of which contains a horizontal displacement sensing module, a vertical displacement sensing module, and an angular displacement sensing module; a pressure sensor 83 is arranged on each of the left ear wearing device 81 and the right ear wearing device 82, and when a pressure sensor 83 is triggered, the corresponding wearing device enters a dormant state;
during haircutting, especially when trimming the temples, the ears must be pushed aside, which inevitably displaces the positioning module 8. With the pressure sensor 83, when the operator pushes an ear by hand, the touched wearing device enters the dormant state while the other wearing device continues to sense the displacement of the user's head, so the haircut is neither affected nor delayed.
Each of the left ear wearing device 81 and the right ear wearing device 82 includes a mounting housing 84 for mounting on the auricle; the mounting housing 84 has an arc-shaped structure, and its cross section has a concave structure;
the mounting housing 84 includes a mounting portion 841 and a first limiting portion 842 and a second limiting portion 843 made of elastic material; the first limiting portion 842 is mounted on the side of the mounting portion 841 near the cheek, and the second limiting portion 843 is mounted on the side of the mounting portion 841 near the back of the head;
the first limiting portion 842 is provided with a counterweight channel 85 extending along the length direction of the mounting housing 84, counterweight grooves 86 communicating with the counterweight channel 85, and a counterweight block 87 slidably connected in the counterweight channel 85; a plurality of counterweight grooves 86 are arranged along the extension direction of the counterweight channel 85, on the lower side of the channel;
a sliding channel 88 is provided on the first limiting portion 842, an electromagnet assembly 89 is slidably connected in the sliding channel 88, and when the electromagnet assembly 89 is energized, the counterweight block 87 moves together with the electromagnet assembly 89.
The left ear wearing device 81 and the right ear wearing device 82 are held firmly on the auricle by the first limiting portion 842 and the second limiting portion 843; since the auricle is cartilage, the user feels no discomfort even when the devices are attached firmly for a short time.
With this design of the counterweight block 87 and the electromagnet assembly 89, when the external button of the electromagnet assembly 89 is pressed, the assembly is energized and generates a magnetic force that attracts the counterweight block 87 in the counterweight channel 85; the counterweight block 87 then moves with the electromagnet assembly 89, and when it reaches the designated position, the electromagnet assembly 89 is de-energized so that the counterweight block 87 slides into a counterweight groove 86 and is held in place.
This design has two benefits: 1. the user can adjust comfort by repositioning the counterweight block 87; 2. by changing the counterweight position, the left ear wearing device 81 and the right ear wearing device 82 are evenly loaded and sit more stably when worn.
An intelligent hairdressing auxiliary method based on machine learning comprises the following steps:
s1, the user inputs an ideal hairstyle picture or a picture of a desired celebrity hairstyle into the hairstyle preprocessing module 2 through the interaction module 9;
s2, positioning the head information of the user through the positioning module 8;
s3, acquiring the head information of the user through the real-time hair style camera module 3;
s4, the model building module 4 generates an ideal hairstyle model, and the user confirms or revises the ideal hairstyle model through the interaction module 9;
s5, the hairstyle building module 5 generates a combined model, and the user confirms or revises the combined model through the interaction module 9;
s6, the haircut auxiliary area displays the real-time user model and indicates the cutting position information, cutting direction information, cutting length information, and cutting angle information, while the equipment prompt area displays the cutting equipment and indicates the cutting angle information and cutting equipment orientation information.
Step S1 further includes:
s11, for the intention model F(A) = β1×S + β2×R + β3×O + β4×P + β5×Q, the user inputs the specific gravity value β1 of S, the specific gravity value β2 of R, the specific gravity value β3 of O, the specific gravity value β4 of P, and the specific gravity value β5 of Q;
wherein F(A) represents the intention model, S represents the skin color, R represents the face shape, O represents the hair length, P represents the hair volume, Q represents the curl degree, β1, β2, β3, β4, β5 represent the corresponding specific gravity values, and β1 + β2 + β3 + β4 + β5 = 1.
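A minimal sketch of evaluating this intention model, assuming each attribute has already been scored on a 0 to 1 scale (the patent does not specify the scoring):

# Sketch: evaluate F(A) = β1*S + β2*R + β3*O + β4*P + β5*Q, checking that
# the specific gravity values sum to 1. Attribute scores are assumptions.
def intention_score(features: dict, weights: dict) -> float:
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("specific gravity values must sum to 1")
    return sum(weights[k] * features[k] for k in ("S", "R", "O", "P", "Q"))

score = intention_score(
    {"S": 0.7, "R": 0.5, "O": 0.9, "P": 0.4, "Q": 0.2},  # skin color, face shape, hair length, hair volume, curl
    {"S": 0.1, "R": 0.3, "O": 0.3, "P": 0.2, "Q": 0.1},  # β1..β5
)
print(round(score, 3))  # 0.59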
A computer-readable medium, in which a computer program is stored, said program being adapted to be executed by a processor to carry out the above-mentioned method.
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above-mentioned embodiments, and all technical solutions belonging to the idea of the present invention belong to the protection scope of the present invention. It should be noted that modifications and embellishments within the scope of the invention may occur to those skilled in the art without departing from the principle of the invention, and are considered to be within the scope of the invention.

Claims (13)

1. An intelligent hairdressing auxiliary system based on machine learning, characterized by comprising: a hairstyle library (1), a hairstyle preprocessing module (2), a real-time hairstyle camera module (3), a model building module (4), a hairstyle building module (5), a display module (6), a prompting module (7), a positioning module (8), and an interaction module (9);
the hairstyle library (1) is used for storing various types of pictures containing head hairstyles on a network and providing the pictures containing the head hairstyles for a user to import by himself;
the hairstyle preprocessing module (2) is connected with the hairstyle library (1), the hairstyle preprocessing module (2) comprises a whole neural network (21), a human face regional neural network (22), a hairstyle regional neural network (23) and a training module (24), and the whole neural network (21) is used for obtaining a picture of the head hairstyle in the hairstyle library (1) to generate a picture matched with the human face hairstyle;
the face area neural network (22) is connected with the whole neural network (21) and is used for receiving the picture matched with the hairstyle of the face and generating a picture containing the face;
the hair style area neural network (23) is connected with the whole neural network (21) and is used for receiving the picture matched with the hair style of the human face and generating a picture containing the hair style;
the training module (24) performs deep learning training according to the pictures containing the faces and the pictures containing the hairstyles;
the interaction module (9) is connected with the hairstyle preprocessing module (2) and is used for the user to import preferred hairstyle information and preferred object information into the hairstyle preprocessing module (2); the hairstyle preprocessing module (2) carries out directed deep-learning network inference according to the preferred hairstyle information, and reversely retrieves the hairstyle information of the specified object from the hairstyle library (1) according to the preferred object information and carries out directed deep-learning network inference;
the real-time hair style camera module (3) is used for acquiring head information of a user;
the model building module (4) is respectively connected with the hairstyle preprocessing module (2) and the real-time hairstyle camera module (3), the model building module (4) is used for generating a real-time user model according to the head information of a user, and the model building module (4) is matched based on the real-time user model to obtain an ideal hairstyle model;
the interaction module (9) is connected with the model building module (4), and the interaction module (9) is used for receiving user instructions so as to revise or determine an ideal hairstyle model;
the hairstyle building module (5) is connected with the model building module (4) and is used for generating a combined model according to the real-time user model and the ideal hairstyle model and comparing hairstyle characteristic information in the combined model with hairstyle characteristic information in the real-time user model to generate hair cutting data;
the interaction module (9) is connected with the hair style building module (5), and the interaction module (9) is used for receiving a user instruction so as to revise or determine the combined model;
the display module (6) is connected with the hairstyle building module (5), and the display module (6) at least comprises a haircut auxiliary area for displaying the combined model and the real-time user model and/or an equipment prompt area for displaying the cutting equipment;
the interaction module (9) is connected with the display module (6), and the interaction module (9) is used for receiving a user instruction so as to select the cutting equipment displayed in the equipment prompt area;
the prompting module (7) is connected with the hairstyle building module (5) and the display module (6), respectively; it generates cutting information according to the hair cutting data and feeds the cutting information back to the user through the haircut auxiliary area and/or the equipment prompt area;
the positioning module (8) is connected with the hairstyle building module (5) and the prompting module (7), respectively; the positioning module (8) is used for monitoring the displacement of the user's head and feeding the displacement back to the hairstyle building module (5) and the prompting module (7); the hairstyle building module (5) changes the angle of the real-time user model in real time according to the displacement, and the prompting module (7) changes the prompt information according to the displacement.
2. The intelligent hairdressing auxiliary system based on machine learning of claim 1, wherein: the real-time hairstyle camera module (3) comprises a facial feature acquisition submodule (31), a hair quality acquisition submodule (32), a chromaticity comparison submodule (33), a hairstyle shielding partition submodule (34), a hairstyle shielding detection submodule (35), and a fine segmentation submodule (36);
the facial feature acquisition submodule (31) is used for acquiring the skin chromaticity information of the user's face, the face contour line, the positions of the two eyes, the nose tip position, the eyebrow position, and the mouth position;
the hair quality acquisition submodule (32) is used for acquiring the user's hair volume information, hair thickness information, curl information, and hair chromaticity information;
the chromaticity comparison submodule (33) is connected with the facial feature acquisition submodule (31) and the hair quality acquisition submodule (32), respectively, and is used for comparing the hair chromaticity with the skin chromaticity to obtain the hairstyle region, the face region, and the coincidence region;
the hairstyle shielding partition submodule (34) is connected with the chromaticity comparison submodule (33) and is used for dividing the region above the mouth, taking the mouth center region as the circle center, counterclockwise into 18 sectors of 10 degrees each with different diameters, and then dividing each sector evenly into 10 blocks from outside to inside along its radial direction;
the hairstyle shielding detection submodule (35) is connected with the hairstyle shielding partition submodule (34) and is used for performing a preliminary detection on all blocks from outside to inside, comparing the RGB values in each block with the hair chromaticity and generating a comparison value; when the comparison value is greater than or equal to 0.8, the block is a coincidence region; when the comparison value is less than 0.8, the block is a face region;
the fine segmentation submodule (36) is connected with the hairstyle shielding partition submodule (34) and is used for re-detecting the coincidence regions obtained by the preliminary detection;
the fine segmentation submodule (36) is specifically used for: step Z1, dividing the coincidence region obtained by the preliminary detection into a plurality of fine segmentation blocks;
step Z2, letting X = {x_ij : x_ij ∈ S} denote the pixel points of a fine segmentation block S, with I_x = (R_x, G_x, B_x)^T denoting the color vector of a pixel point x; L_X = (l_1, l_2, …, l_m)^T denotes the label vector, where m is the number of pixel points in the fine segmentation block S, l_x = 1 denotes that x belongs to hair, and l_x = 0 denotes that x belongs to skin; the probability that a pixel point is hair is denoted P(l_x = 1), and the following energy function is defined: E(L) = C(L) + αB(L);
wherein α is used to weigh the importance of the two terms; C(L) represents the prediction-probability term for the pixel points; B(L) is a smoothing term describing the penalty cost when the labels of adjacent pixel points differ;
step Z3, the final label vector L_X of the pixel points in any fine segmentation block S is obtained when the energy function reaches its minimum value; the first term C(L) in the energy function is defined as:
[formula given as image FDA0003724168640000041 in the original]
wherein n_x represents the number of pixel points in the fine segmentation block S, and c(l_x) represents the prediction probability of the hair-color model for pixel point x;
in the energy function, B(L) is defined as:
[formulas given as images FDA0003724168640000042 and FDA0003724168640000043 in the original]
wherein n_p represents the number of adjacent points of pixel point p, and σ represents the average smoothness of the image:
[formula given as image FDA0003724168640000044 in the original]
the adjacent points of a pixel point are selected using a neighborhood system, and equalization processing is performed because the fine segmentation blocks differ in shape and size (an illustrative sketch of this energy minimization appears after the claims).
3. The intelligent hairdressing auxiliary system based on machine learning as claimed in claim 2, wherein: the real-time hairstyle camera module (3) further comprises a face shape judging submodule (37), which is connected with the facial feature acquisition submodule (31) and the fine segmentation submodule (36), respectively, and is used for obtaining the final face shape data after hairstyle shielding processing;
the face shape judging submodule (37) performs the following steps:
step Q1: face positioning: locate the human face and correspondingly mark the basic facial feature information such as the mouth, eyes, and nose;
step Q2: feature point acquisition and extraction: 1) extract the distance between the center points of the two eyes; 2) extract the inclination angle of the right eye; 3) locate and mark the center points of the eyes and the mouth in the face image; 4) mark all extracted feature information with rectangular frames;
step Q3: face extraction: accurately position the face region and store it locally, to be retrieved later when matching hairstyles;
step Q4: face recognition: divide the face into several regions according to the marked feature points of each region, and analyze the facial feature information and the face shape information.
4. The intelligent hairdressing auxiliary system based on machine learning as claimed in claim 3, wherein the step Q4 specifically includes:
step Q41, along the horizontal direction of the face, mark three characteristic line segments: the left-eye center vertical line QR, the right-eye center vertical line UV, and the mouth center vertical line ST;
step Q42, along the vertical direction of the face, mark ten characteristic line segments: the forehead horizontal line YZ, the eyebrow horizontal line AB, the upper eyelid horizontal line EF, the eye center horizontal line WX, the lower eyelid horizontal line GH, the widest horizontal line of the face M1M2, the nose base horizontal line IJ, the upper lip horizontal line KL, the lip center horizontal line MN, and the lower lip horizontal line OP;
step Q43, calculate and analyze the distances and proportional relations among the characteristic lines.
5. The intelligent hairdressing auxiliary system based on machine learning of claim 1, wherein: the hairstyle building module (5) comprises a comparison submodule (51), a cultivation submodule (52), and an input submodule (53);
the comparison submodule (51) is connected with the model building module (4) and is used for acquiring the hair length information of each dimension in the real-time user model and in the ideal hairstyle model; when the hair length of a dimension in the real-time user model is greater than the hair length of the same dimension in the ideal hairstyle model, positive hair length data is generated;
when the hair length of a dimension in the real-time user model is smaller than the hair length of the same dimension in the ideal hairstyle model, negative hair length data is generated;
the input submodule (53) is used for inputting and storing the user's hair growth rate and the average haircut interval;
the cultivation submodule (52) is connected with the comparison submodule (51) and the input submodule (53), respectively, and is used for receiving the positive and negative hair length data, the user's hair growth rate, and the average haircut interval; when no negative hair length data exists, haircut data is generated directly;
when negative hair length data exists, cultivation data is generated.
6. The intelligent hairdressing auxiliary system based on machine learning of claim 5, wherein: the cultivation submodule (52) comprises a hairstyle growth unit (521), a hairstyle simulation unit (522), and a hairstyle storage unit (523);
the hairstyle growth unit (521) compares the product of the user's hair growth rate and the average haircut interval with the negative hair length data: when the product is less than or equal to the negative hair length data, it is recorded as single-cultivation data;
when the product is greater than the negative hair length data, it is recorded as multi-cultivation data;
the hairstyle simulation unit (522) is connected with the hairstyle growth unit (521) and is used for simulating the hairstyle cultivation model during the cultivation process;
the hairstyle storage unit (523) is connected with the hairstyle simulation unit (522) and is used for storing the cultivation data to form the hairstyle cultivation model;
the display module (6) is connected with the hairstyle simulation unit (522) and is used for invoking and viewing the hairstyle cultivation model.
7. The intelligent hairdressing auxiliary system based on machine learning of claim 1, wherein: the hairstyle preprocessing module (2) further comprises an intention model (25), where the intention model (25) is: F(A) = β1×S + β2×R + β3×O + β4×P + β5×Q;
wherein F(A) represents the intention model (25), S represents the skin color, R represents the face shape, O represents the hair length, P represents the hair volume, Q represents the curl degree, β1, β2, β3, β4, β5 represent the corresponding specific gravity values, and β1 + β2 + β3 + β4 + β5 = 1;
the training module (24) is connected with the intention model (25) and performs deep learning according to the intention direction of the intention model (25);
the interaction module (9) is connected with the intention model (25) and is used for receiving user instructions to change the specific gravity values β1, β2, …, β5.
8. The intelligent hairdressing auxiliary system based on machine learning of claim 1, wherein: the positioning module (8) comprises a left-eye positioning module and a right-eye positioning module; the left-eye positioning module is used for capturing the center point of the user's left eye and generating a first positioning coordinate, and the right-eye positioning module is used for capturing the center point of the user's right eye and generating a second positioning coordinate;
and the positioning module (8) generates a horizontal positioning line according to the first positioning coordinate and the second positioning coordinate.
9. The intelligent hairdressing auxiliary system based on machine learning of claim 1, wherein: the positioning module (8) comprises a left ear wearing device (81) and a right ear wearing device (82), each of which contains a horizontal displacement sensing module, a vertical displacement sensing module, and an angular displacement sensing module; a pressure sensor (83) is arranged on each of the left ear wearing device (81) and the right ear wearing device (82), and when a pressure sensor (83) is triggered, the corresponding wearing device enters a dormant state;
each of the left ear wearing device (81) and the right ear wearing device (82) comprises a mounting housing (84) for mounting on the auricle; the mounting housing (84) has an arc-shaped structure, and its cross section has a concave structure;
the mounting housing (84) comprises a mounting portion (841) and a first limiting portion (842) and a second limiting portion (843) made of elastic material; the first limiting portion (842) is mounted on the side of the mounting portion (841) near the cheek, and the second limiting portion (843) is mounted on the side of the mounting portion (841) near the back of the head;
the first limiting portion (842) is provided with a counterweight channel (85) extending along the length direction of the mounting housing (84), counterweight grooves (86) communicating with the counterweight channel (85), and a counterweight block (87) slidably connected in the counterweight channel (85); a plurality of counterweight grooves (86) are arranged along the extension direction of the counterweight channel (85), on the lower side of the channel;
a sliding channel (88) is provided on the first limiting portion (842), an electromagnet assembly (89) is slidably connected in the sliding channel (88), and when the electromagnet assembly (89) is energized, the counterweight block (87) moves together with the electromagnet assembly (89).
10. The intelligent hairdressing auxiliary system based on machine learning of claim 1, wherein: the cutting information comprises cutting position information, cutting direction information, cutting length information, cutting angle information, and cutting equipment orientation information; the cutting position information, cutting direction information, cutting length information, and cutting angle information are fed back to the user through the haircut auxiliary area, and the cutting angle information and cutting equipment orientation information are fed back to the user through the equipment prompt area.
11. An intelligent hairdressing auxiliary method based on machine learning, applied to the intelligent hairdressing auxiliary system based on machine learning according to any one of claims 1-10, characterized by comprising the following steps:
s1, the user inputs an ideal hairstyle picture or a picture of a desired celebrity hairstyle into the hairstyle preprocessing module (2) through the interaction module (9);
s2, positioning the head information of the user through a positioning module (8);
s3, acquiring the head information of the user through the real-time hairstyle camera module (3);
s4, generating an ideal hairstyle model by the model building module (4), and confirming or revising by a user through the interaction module (9);
s5, the hairstyle building module (5) generates a combined model, and the user confirms or revises the combined model through the interaction module (9);
s6, the haircut auxiliary area displays the real-time user model and feeds back the cutting information, and the equipment prompt area displays the cutting equipment and feeds back the equipment operation information.
12. The intelligent hairdressing auxiliary method based on machine learning as claimed in claim 11, wherein said step S1 further comprises:
s11, for the intention model (25) F(A) = β1×S + β2×R + β3×O + β4×P + β5×Q, the user inputs the specific gravity value β1 of S, the specific gravity value β2 of R, the specific gravity value β3 of O, the specific gravity value β4 of P, and the specific gravity value β5 of Q;
wherein F(A) represents the intention model, S represents the skin color, R represents the face shape, O represents the hair length, P represents the hair volume, Q represents the curl degree, β1, β2, β3, β4, β5 represent the corresponding specific gravity values, and β1 + β2 + β3 + β4 + β5 = 1.
13. A computer-readable medium, in which a computer program is stored which is adapted to be executed by a processor to carry out the method of claim 11 or 12.
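The following sketch, referenced from claim 2, illustrates the kind of energy E(L) = C(L) + αB(L) used in the fine segmentation. Since the exact C(L) and B(L) formulas appear only as images in the original, a standard stand-in is used here: a negative log-likelihood data term and a 4-neighborhood label-disagreement smoothness term. In practice such an energy is minimized with graph cuts rather than evaluated by brute force.

import numpy as np

# Hedged stand-in for the claim-2 energy E(L) = C(L) + alpha * B(L):
# C penalizes labels that disagree with the per-pixel hair probability
# P(l_x = 1); B penalizes label changes between 4-connected neighbors.
def energy(labels: np.ndarray, p_hair: np.ndarray, alpha: float) -> float:
    eps = 1e-9
    # Data term: negative log-likelihood of each pixel's label (assumption).
    c = -np.sum(labels * np.log(p_hair + eps)
                + (1 - labels) * np.log(1 - p_hair + eps))
    # Smoothness term: count horizontal and vertical label disagreements.
    b = (np.sum(labels[:, 1:] != labels[:, :-1])
         + np.sum(labels[1:, :] != labels[:-1, :]))
    return float(c + alpha * b)

# Evaluate the energy of a simple threshold labeling of a 2x2 block.
p = np.array([[0.9, 0.8],
              [0.2, 0.1]])            # hair probability per pixel
labels = (p >= 0.5).astype(int)       # 1 = hair, 0 = skin
print(energy(labels, p, alpha=0.5))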
CN202110214748.1A 2021-02-25 2021-02-25 Intelligent hairdressing auxiliary system, method and readable medium based on machine learning Active CN112906585B (en)

Priority Applications (1)

Application CN202110214748.1A, priority and filing date 2021-02-25: CN112906585B (en), Intelligent hairdressing auxiliary system, method and readable medium based on machine learning

Publications (2)

CN112906585A (en), published 2021-06-04
CN112906585B (en), granted 2022-08-23

Family

ID: 76108418
Country status (1): CN

Families Citing this family (2)

CN113628350A, priority 2021-09-10, published 2021-11-09, 广州帕克西软件开发有限公司: Intelligent hair dyeing and testing method and device
CN117389676B, priority 2023-12-13, granted 2024-02-13, 成都白泽智汇科技有限公司: Intelligent hairstyle adaptive display method based on display interface

Citations (4)

CN108185624A, priority 2018-02-09, published 2018-06-22, 武汉技兴科技有限公司: Method and device for intelligently trimming a human hairstyle
CN109885704A, priority 2019-02-21, published 2019-06-14, 杭州数为科技有限公司: Intelligent hairstyle care method and system based on hairstyle recognition
CN111280771A, priority 2018-12-09, published 2020-06-16, 贾成保: Intelligent hair clipper, intelligent hair cutting equipment and intelligent hair cutting method
CN112015934A, priority 2020-08-27, published 2020-12-01, 华南理工大学: Intelligent hairstyle recommendation method, device and system based on neural network and Unity

Family Cites Families (3)

US20180349979A1, priority 2017-06-01, published 2018-12-06, The Gillette Company LLC: Method for providing a customized product recommendation
CN108389077B, priority 2018-02-11, granted 2022-04-05, Oppo广东移动通信有限公司: Electronic device, information recommendation method and related product
US11172873B2, priority 2018-05-17, granted 2021-11-16, The Procter & Gamble Company: Systems and methods for hair analysis


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant