WO2022257456A1 - Hair information recognition method, apparatus and device, and storage medium - Google Patents


Info

Publication number
WO2022257456A1
WO2022257456A1 (application PCT/CN2022/071475)
Authority
WO
WIPO (PCT)
Prior art keywords
hair
information
face
face image
image
Prior art date
Application number
PCT/CN2022/071475
Other languages
English (en)
Chinese (zh)
Inventor
朱磊
朱运
张霖
俞丽娟
朱艳乔
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2022257456A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Definitions

  • The present application relates to the field of artificial intelligence, and in particular to a hair information recognition method, apparatus, device, and storage medium.
  • The popularity of mobile terminals provides users with great convenience; for example, because smart terminals are portable, users take pictures with mobile phones instead of cameras. When taking or processing pictures, facial information recognition can be carried out on the image. As part of the whole face, hair plays an important role in a person's overall image, and in practical applications the identified hair detail information can serve as a reference. For example, when monitoring hair health, the hairline height and hair volume in the hair detail information can serve as a reference for the hair health of the user to be identified. In the insurance industry, by identifying the length of a person's bangs in a picture, the claim settlement strategy can be adjusted for customers whose bangs are long enough to block the line of sight and thus raise the risk of accidents while driving.
  • Hair recognition methods in the prior art mainly recognize the hair region through the pixel difference between the human face and the hair.
  • However, such methods mainly recognize the hair region as a whole, and it is difficult for them to accurately and effectively identify many hair details.
  • Accordingly, the present application provides a hair information recognition method, apparatus, device, and storage medium, which are used to solve the technical problem of low accuracy in hair detail information recognition in existing hair recognition methods.
  • The first aspect of the present application provides a hair information recognition method, comprising: acquiring a face image to be recognized; inputting the face image into a pre-trained face key point recognition model for face feature recognition to obtain the face key points of the face image and their coordinate information; inputting the face image into a pre-trained contour recognition model to obtain at least one contour figure of the face image; determining the hair contour figure of the face image from the contour figures according to the face key points, and calculating the coordinate information of the hair contour figure according to the coordinate information of the face key points; obtaining a hair recognition request input by the user and parsing the hair recognition task identifier carried in the hair recognition request, wherein the hair recognition task identifier includes a hair detail recognition identifier and/or a hairstyle recognition identifier; if the hair recognition task identifier includes the hair detail recognition identifier, calculating the hair detail information of the face image according to the coordinate information of the face key points and the coordinate information of the hair contour figure; and if the hair recognition task identifier includes the hairstyle recognition identifier, extracting the hair image features in the face image according to the hair contour figure and recognizing the hairstyle information of the face image according to the hair image features.
  • The second aspect of the present application provides a hair information recognition device, including a memory, a processor, and computer-readable instructions that are stored in the memory and executable on the processor. When the processor executes the computer-readable instructions, the following steps are realized: acquiring a face image to be recognized; inputting the face image into a pre-trained face key point recognition model for face feature recognition to obtain the face key points of the face image and their coordinate information; inputting the face image into a pre-trained contour recognition model to obtain at least one contour figure of the face image; determining the hair contour figure of the face image from the contour figures according to the face key points, and calculating the coordinate information of the hair contour figure according to the coordinate information of the face key points; obtaining the hair recognition request input by the user and parsing the hair recognition task identifier carried in the hair recognition request, wherein the hair recognition task identifier includes a hair detail recognition identifier and/or a hairstyle recognition identifier; if the hair recognition task identifier includes the hair detail recognition identifier, calculating the hair detail information of the face image according to the coordinate information of the face key points and the coordinate information of the hair contour figure; and if the hair recognition task identifier includes the hairstyle recognition identifier, extracting the hair image features in the face image according to the hair contour figure and recognizing the hairstyle information of the face image according to the hair image features.
  • The third aspect of the present application provides a computer-readable storage medium in which computer instructions are stored. When the computer instructions are run on a computer, the computer is made to perform the following steps: acquiring a face image to be recognized; inputting the face image into a pre-trained face key point recognition model for face feature recognition to obtain the face key points of the face image and their coordinate information; inputting the face image into a pre-trained contour recognition model to obtain at least one contour figure of the face image; determining the hair contour figure of the face image from the contour figures according to the face key points, and calculating the coordinate information of the hair contour figure according to the coordinate information of the face key points; obtaining the hair recognition request input by the user and parsing the hair recognition task identifier carried in the hair recognition request, wherein the hair recognition task identifier includes a hair detail recognition identifier and/or a hairstyle recognition identifier; if the hair recognition task identifier includes the hair detail recognition identifier, calculating the hair detail information of the face image according to the coordinate information of the face key points and the coordinate information of the hair contour figure; and if the hair recognition task identifier includes the hairstyle recognition identifier, extracting the hair image features in the face image according to the hair contour figure and recognizing the hairstyle information of the face image according to the hair image features.
  • The fourth aspect of the present application provides a hair information recognition apparatus, comprising: a first model input module for inputting the face image into a pre-trained face key point recognition model for face feature recognition to obtain the face key points of the face image and their coordinate information; a second model input module for inputting the face image into a pre-trained contour recognition model to obtain at least one contour figure of the face image; a hair contour determination module for determining the hair contour figure of the face image from the contour figures according to the face key points and calculating the coordinate information of the hair contour figure according to the coordinate information of the face key points; a task parsing module for obtaining the hair recognition request input by the user and parsing the hair recognition task identifier carried in the hair recognition request, wherein the hair recognition task identifier includes a hair detail recognition identifier and/or a hairstyle recognition identifier; a hair detail recognition module for calculating, when the hair recognition task identifier includes the hair detail recognition identifier, the hair detail information of the face image according to the coordinate information of the face key points and the coordinate information of the hair contour figure; and a hairstyle recognition module for extracting, when the hair recognition task identifier includes the hairstyle recognition identifier, the hair image features in the face image according to the hair contour figure and recognizing the hairstyle information of the face image according to the hair image features.
  • In the technical solution provided by this application, the face image to be recognized is obtained; the face key points of the face image and their coordinate information, together with the hair contour figure of the face image, are obtained, and the coordinate information of the hair contour figure is calculated according to the coordinate information of the face key points; the hair recognition request input by the user is obtained, and the corresponding hair recognition task is selected according to the request, wherein the hair recognition task includes hair detail recognition and hairstyle recognition; if the hair recognition task is hair detail recognition, the hair detail information of the face image is calculated according to the coordinate information of the face key points and the coordinate information of the hair contour figure; if the hair recognition task is hairstyle recognition, the hair image features in the face image are extracted according to the hair contour figure, and the hairstyle information of the face image is recognized according to the hair image features.
  • This method recognizes the face image, obtains the hair contour and the face key points, defines the hair information according to the application scenario, and calculates the hair information. It can not only identify the overall hair region but also accurately and effectively identify a wide range of hair details, improving the recognition accuracy for face images.
  • Fig. 1 is a schematic diagram of a first embodiment of the hair information recognition method in the embodiments of the present application;
  • Fig. 2 is a schematic diagram of a second embodiment of the hair information recognition method in the embodiments of the present application;
  • Fig. 3 is a schematic diagram of a third embodiment of the hair information recognition method in the embodiments of the present application;
  • Fig. 4 is a schematic diagram of a fourth embodiment of the hair information recognition method in the embodiments of the present application;
  • Fig. 5 is a schematic diagram of a fifth embodiment of the hair information recognition method in the embodiments of the present application;
  • Fig. 6 is a schematic diagram of an embodiment of the hair information recognition apparatus in the embodiments of the present application;
  • Fig. 7 is a schematic diagram of another embodiment of the hair information recognition apparatus in the embodiments of the present application;
  • Fig. 8 is a schematic diagram of an embodiment of the hair information recognition device in the embodiments of the present application.
  • Embodiments of the present application provide a hair information recognition method, apparatus, device, and storage medium, which are used to solve the technical problem of low accuracy in recognizing detailed hair information in existing hair recognition methods.
  • the first embodiment of the method for identifying hair information in the embodiment of the present application includes:
  • The execution subject of the present application may be a hair information recognition apparatus, or a terminal or a server, which is not specifically limited here.
  • the embodiment of the present application is described by taking the server as an execution subject as an example.
  • The above-mentioned face images can be stored in nodes of a blockchain.
  • The face image to be recognized may be a static image, such as a photo, or a video frame from a dynamic video.
  • the face in the face image may be a frontal face, or a non-frontal face with a certain deflection angle.
  • the face image to be recognized may be an image stored locally, an image acquired from a network or a server, or an image acquired by an image acquisition device, which is not limited in this embodiment.
  • The face image may be the whole of the image to be recognized, or only a part of it.
  • A face recognition method is used to recognize and crop the face from the complete image.
  • The cropped face image should contain the entire face plus part of the background area.
  • the face recognition method may use dlib, or other face recognition methods, which are not limited in this embodiment.
  • The neural network structure used by the face key point recognition model is PFLD (A Practical Facial Landmark Detector), a face key point detection model with high precision, fast speed, and small model size.
  • PFLD uses an auxiliary network to estimate the face pose of the face samples, predicting the face pose while training the key point regression; it solves the problem of data imbalance by penalizing rare samples and samples with excessively large pose angles, thereby improving prediction accuracy.
  • The neural network structure used by the contour recognition model is PSPNet (Pyramid Scene Parsing Network). PSPNet can aggregate context information from different regions, thereby improving its ability to obtain global information.
  • PSPNet uses a pre-trained ResNet model with a dilated network strategy to extract the feature map.
  • The backbone network is ResNet101, using residual connections, dilated convolution, and dimensionality-reduction convolution (first a 1*1 convolution reduces the dimension, then a 3*3 convolution is applied, and then a 1*1 convolution restores the dimension).
  • The pyramid pooling module is divided into 4 levels, with pooling kernels covering the whole, half, and small parts of the image; the pooled features are fused into a global feature, the fused global feature is concatenated with the original feature map, and finally the prediction map is generated through a convolutional layer.
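  • The pyramid pooling step described above can be sketched in plain NumPy as follows. This is an illustrative sketch, not the patented implementation: the feature map is average-pooled onto several grid sizes, each pooled map is upsampled back, and everything is concatenated with the original feature map. The level sizes (1, 2, 3, 6) follow the common PSPNet configuration and are an assumption here.

```python
import numpy as np

def adaptive_avg_pool(feat, bins):
    """Average-pool a (C, H, W) feature map onto a bins x bins grid."""
    c, h, w = feat.shape
    out = np.zeros((c, bins, bins))
    ys = np.linspace(0, h, bins + 1).astype(int)
    xs = np.linspace(0, w, bins + 1).astype(int)
    for i in range(bins):
        for j in range(bins):
            out[:, i, j] = feat[:, ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean(axis=(1, 2))
    return out

def pyramid_pooling(feat, levels=(1, 2, 3, 6)):
    """Pool at several grid sizes, upsample back (nearest-neighbour),
    and concatenate with the original feature map along channels."""
    c, h, w = feat.shape
    pooled = [feat]
    for bins in levels:
        p = adaptive_avg_pool(feat, bins)
        up = p[:, (np.arange(h) * bins // h), :][:, :, (np.arange(w) * bins // w)]
        pooled.append(up)
    return np.concatenate(pooled, axis=0)
```

In a real PSPNet the concatenated map would then pass through a convolutional layer to produce the prediction map; here only the pooling-and-fusion structure is shown.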
  • The contour recognition model recognizes multiple contour figures in the face image, such as the face contour, the hair contour, etc. By determining the left eye key point and the right eye key point among the face key points, the contour region above the left and right eye key points is determined to be the hair contour, and the hair contour figure is obtained.
  • The face key point recognition model outputs the position information of the face key points, mainly the coordinates of the key points in the face image; these coordinates can be used to calculate the position of each point in the hair contour figure.
  • Specifically, the vector between a face key point and the hair contour can be obtained from the distance between them; the vector is split into its x-axis and y-axis components, and the coordinate information of each point on the hair contour figure is calculated from the lengths of the two components and the coordinates of the corresponding face key point. Alternatively, the coordinate information of the hair contour figure can be calculated from the coordinate origin of a preset coordinate axis.
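  • A minimal sketch of this selection-and-coordinates step is given below. The function names and the "largest region above the eyes" heuristic are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def select_hair_contour(contours, left_eye, right_eye):
    """Pick the contour lying above the eye key points (smaller y in
    image coordinates); assume the largest such region is the hair."""
    eye_y = (left_eye[1] + right_eye[1]) / 2.0
    above = [c for c in contours if np.asarray(c)[:, 1].mean() < eye_y]
    return max(above, key=len) if above else None

def hair_contour_coords(contour, key_point):
    """Express each contour point via the offset vector from a face key
    point, split into x- and y-axis components, then recombined into
    absolute coordinates in the face image."""
    kx, ky = key_point
    coords = []
    for px, py in contour:
        dx, dy = px - kx, py - ky          # x- and y-axis components
        coords.append((kx + dx, ky + dy))  # absolute image coordinates
    return coords
```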
  • If the hair recognition task identifier includes the hair detail recognition identifier, the hair detail information of the face image is calculated according to the coordinate information of the face key points and the coordinate information of the hair contour figure;
  • the hair detail information mainly includes information such as hairline height, hair volume, and bangs length.
  • Through the hairline height, hair volume, and other hair detail information, the hair health of the user to be identified can be assessed for reference.
  • In the insurance industry, by identifying the length of a person's bangs in the picture, the claim settlement strategy is adjusted for customers whose bangs are too long, since overly long bangs may block the line of sight during driving and raise the risk of accidents.
  • According to different definitions of hair, different types of hair detail information can be obtained. For example, for the length of the sideburns, the vertical distance between the lowest point in the hair contour figure and the line between the two eyes is identified and divided by the calculated face length to obtain the sideburn length; this application does not limit the types of hair detail information.
  • If the hair recognition task identifier includes the hairstyle recognition identifier, the hair image features in the face image are extracted according to the hair contour figure, and the hairstyle information of the face image is recognized according to the hair image features.
  • Specifically, the face image within the range of the hair contour figure is processed by a convolutional neural network to extract the hair image features in the face image.
  • A convolutional neural network (Convolutional Neural Network, CNN for short) is a type of artificial neural network.
  • the convolutional neural network includes a convolutional layer (Convolutional Layer) and a sub-sampling layer (Pooling Layer).
  • Subsampling is also called pooling (Pooling), usually in two forms: mean subsampling (Mean Pooling) and maximum subsampling (Max Pooling).
  • The pooling operation is an effective way to reduce dimensionality and prevent overfitting; convolution and subsampling greatly simplify the complexity of the neural network and reduce its number of parameters.
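  • The two subsampling forms mentioned above can be illustrated with a small NumPy sketch (illustrative only; the 2x2 window size is an assumption):

```python
import numpy as np

def pool2x2(x, mode="max"):
    """2x2 subsampling of a (H, W) map: max pooling or mean pooling."""
    h, w = x.shape
    # group the map into non-overlapping 2x2 blocks
    blocks = x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3)) if mode == "max" else blocks.mean(axis=(1, 3))
```

For example, pooling a 2x4 map halves each spatial dimension: max pooling keeps the largest value in each 2x2 block, while mean pooling keeps the block average.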
  • a multi-task convolutional neural network is a convolutional neural network that can perform multi-task learning.
  • the multi-task convolutional neural network can have multiple outputs for the input, and each output corresponds to a task, so it can extract multiple hair image features in the face image.
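  • The one-output-per-task idea can be sketched as a shared feature vector feeding several task-specific heads. This is a hedged illustration of the structure only: the head class, task names, class counts, and random weights below are assumptions, not the patent's network.

```python
import numpy as np

rng = np.random.default_rng(0)

class MultiTaskHead:
    """A shared feature vector feeds several task-specific linear heads,
    one per hairstyle attribute (e.g. hair length, hair colour)."""

    def __init__(self, feat_dim, task_classes):
        # task_classes: e.g. {"length": 6, "colour": 5} -> one head each
        self.heads = {name: rng.normal(size=(feat_dim, n)) * 0.01
                      for name, n in task_classes.items()}

    def __call__(self, shared_feat):
        # each head produces its own prediction from the same features
        return {name: int(np.argmax(shared_feat @ w))
                for name, w in self.heads.items()}
```

In a trained network the shared features would come from the convolutional trunk and the head weights would be learned; here they are random placeholders to show the multi-output structure.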
  • In the method, the face image to be recognized is acquired; the face image is input into a pre-trained face key point recognition model for face feature recognition to obtain the face key points of the face image and their coordinate information; the face image is input into a pre-trained contour recognition model to obtain at least one contour figure of the face image; the hair contour figure of the face image is determined from the contour figures according to the face key points, and the coordinate information of the hair contour figure is calculated according to the coordinate information of the face key points; the hair recognition request input by the user is obtained, and the hair recognition task identifier carried in the hair recognition request is parsed, wherein the hair recognition task identifier includes a hair detail recognition identifier and/or a hairstyle recognition identifier; if the hair recognition task identifier includes the hair detail recognition identifier, the hair detail information of the face image is calculated according to the coordinate information of the face key points and the coordinate information of the hair contour figure; if the hair recognition task identifier includes the hairstyle recognition identifier, the hair image features in the face image are extracted according to the hair contour figure, and the hairstyle information of the face image is recognized according to the hair image features.
  • This method recognizes the face image, obtains the hair contour and the face key points, defines the hair information according to the application scenario, and calculates the hair information; it can not only identify the overall hair region but also accurately and effectively identify a wide range of hair details.
  • the second embodiment of the hair information identification method in the embodiment of the present application includes:
  • The scheme mainly uses 68 face key points, so the face key point recognition model is trained with 68-point annotations.
  • The training sample set uses the 300W data set, with a total of 600 pictures annotated with 68 key points.
  • Other key point data sets can also be used, such as the XM2VTS data set with 68 key points or the WFLW data set with 98 key points, which is not limited in this application.
  • The neural network structure used is PFLD, which can adjust the loss function according to the pose information of the face image, solving the problem of data imbalance and improving recognition accuracy; therefore, each sample needs to be marked to obtain the face pose information for each key point.
  • The PRNet model is used to mark the face pose, obtaining three pose angles (yaw, pitch, roll).
  • In the loss function, C represents the different face categories, including frontal face, profile face, head up, head down, expression, and occlusion; the category weights are adjusted according to the number of training samples, the distance term is calculated by the main branch network, and the distance is measured as the L2 distance. The loss value is calculated and compared with a preset threshold; PFLD is trained iteratively until the loss value is less than the preset threshold, yielding the face key point recognition model.
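  • A hedged sketch of such a pose-weighted landmark loss is shown below. The exact weighting used in this application is not fully reproduced here; the sketch follows the general PFLD idea of scaling the per-sample L2 landmark error by per-category weights and a (1 - cos theta) penalty over the yaw/pitch/roll angles, so rare and large-pose samples contribute more.

```python
import numpy as np

def pfld_style_loss(pred, target, pose_angles, class_weights):
    """Weighted landmark regression loss (illustrative).

    pred, target: (N, K, 2) landmark coordinates.
    pose_angles:  (N, 3) yaw/pitch/roll in degrees from the auxiliary network.
    class_weights: (N,) per-sample category weights.
    """
    # L2 distance per landmark, summed per sample
    d = np.linalg.norm(pred - target, axis=-1).sum(axis=-1)
    # (1 - cos theta) penalty over the three pose angles
    pose_penalty = (1.0 - np.cos(np.radians(pose_angles))).sum(axis=-1)
    return float(np.mean(class_weights * pose_penalty * d))
```

With zero pose angles the penalty vanishes, which matches the intent that frontal, easy samples are down-weighted relative to large-pose samples.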
  • If the hair recognition task identifier includes the hair detail recognition identifier, the hair detail information of the face image is calculated according to the coordinate information of the face key points and the coordinate information of the hair contour figure;
  • If the hair recognition task identifier includes the hairstyle recognition identifier, the hair image features in the face image are extracted according to the hair contour figure, and the hairstyle information of the face image is recognized according to the hair image features.
  • Steps 206-211 in this embodiment are similar to steps 101-106 in the first embodiment, and will not be repeated here.
  • this embodiment describes in detail the training process of the face key point recognition model.
  • The face key point model in this embodiment uses PFLD, a face key point detection model with high precision, fast speed, and small model size.
  • The model solves the problem of data imbalance by using an auxiliary network to estimate the face pose of the face samples, thereby improving the accuracy of the model's predictions.
  • the third embodiment of the hair information identification method in the embodiment of the present application includes:
  • the training sample set includes sample face images
  • the test set includes test face images
  • The corresponding labeling information can be obtained by manually labeling each pixel in the sample face images and the test face images: for example, the hair region and the non-hair region are identified manually, pixels in the hair region are labeled 1 and pixels in the non-hair region are labeled 0, and the region labeled 1 is masked to obtain a mask image. After a sample face image is input to the semantic segmentation network, a semantic segmentation image is obtained, and the (binary) cross-entropy loss over the N pixels takes the standard form L = -(1/N) * sum_i [y_i * log(p_i) + (1 - y_i) * log(1 - p_i)], where y_i is the label of pixel i in the mask image and p_i is the predicted hair probability for that pixel.
  • The intersection-over-union (IoU) and the pixel accuracy are used as the segmentation evaluation indexes of the semantic segmentation network.
  • The IoU is the overlap rate between the mask image corresponding to the test set and the semantic segmentation image obtained after inputting the test image into the trained semantic segmentation network, that is, the ratio of their intersection to their union.
  • The pixel accuracy is the ratio of correctly predicted pixels to the total number of pixels between the mask image and the semantic segmentation image. Whether the IoU and the pixel accuracy meet the preset standards is then judged.
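  • The two evaluation indexes can be computed directly from the binary mask and prediction, as in this short sketch (function name is an assumption):

```python
import numpy as np

def iou_and_pixel_accuracy(mask, pred):
    """mask, pred: binary arrays (1 = hair, 0 = non-hair).

    Returns (IoU, pixel accuracy): intersection over union of the hair
    regions, and the fraction of pixels predicted correctly.
    """
    mask, pred = np.asarray(mask, bool), np.asarray(pred, bool)
    inter = np.logical_and(mask, pred).sum()
    union = np.logical_or(mask, pred).sum()
    iou = inter / union if union else 1.0
    acc = (mask == pred).mean()
    return float(iou), float(acc)
```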
  • The hair recognition task identifier includes a hair detail recognition identifier and/or a hairstyle recognition identifier.
  • If the hair recognition task identifier includes the hair detail recognition identifier, the hair detail information of the face image is calculated according to the coordinate information of the face key points and the coordinate information of the hair contour figure;
  • If the hair recognition task identifier includes the hairstyle recognition identifier, the hair image features in the face image are extracted according to the hair contour figure, and the hairstyle information of the face image is recognized according to the hair image features.
  • Steps 306-308 in this embodiment are similar to steps 104-106 in the first embodiment, and will not be repeated here.
  • This embodiment describes the training process of the contour recognition model in detail on the basis of the previous embodiment.
  • The contour recognition model can recognize the hair contour according to the pixel values in different regions of the face image, with high recognition accuracy, making the subsequent calculation of hair detail information more precise.
  • the fourth embodiment of the hair information identification method in the embodiment of the present application includes:
  • The hair recognition task identifier includes a hair detail recognition identifier and a hairstyle recognition identifier.
  • If the hair recognition task identifier includes the hair detail recognition identifier, the face image is divided into a left area, a middle area, and a right area according to the coordinate information of the left eye key point and the right eye key point;
  • the formula is:
  • the face length is defined as follows:
  • The line connecting the left eye key point and the right eye key point is defined as between_eyes_line, and the lowest point of the hair contour figure in each area is defined as hair_bottom_point. The formula for calculating the forehead height is:
  • forehead_length = dist(hair_bottom_point, between_eyes_line)
  • the formula for defining the hairline height is:
  • The forehead height is normalized to a value between 0 and 1, so the hairline height is also a value between 0 and 1.
  • the width of the rectangle is hair_x_len
  • the height is hair_y_len
  • the hair volume area can be roughly expressed as hair_x_len*hair_y_len
  • The rectangle is constructed by obtaining the four extreme points of the hair contour figure: the uppermost, lowermost, leftmost, and rightmost points. The vertical distance between the uppermost and lowermost points is taken as the height of the rectangle, and the horizontal distance between the leftmost and rightmost points is taken as the width of the rectangle.
  • Similarly, the face rectangle is obtained from the four extreme points (uppermost, lowermost, leftmost, and rightmost) of the face key points. The hair volume information is obtained by dividing the area of the hair rectangle by the area of the face rectangle.
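  • The hair detail formulas above can be sketched as follows. The names between_eyes_line and hair_bottom_point come from the description; the choice of dividing forehead_length by the face length to normalize the hairline height is an assumption following the sideburn example, not a quoted formula.

```python
import math

def point_to_line_dist(p, a, b):
    """Distance from point p to the line through a and b (the eye line)."""
    (x1, y1), (x2, y2), (px, py) = a, b, p
    return (abs((x2 - x1) * (y1 - py) - (x1 - px) * (y2 - y1))
            / math.hypot(x2 - x1, y2 - y1))

def bbox_area(points):
    """Area of the axis-aligned rectangle enclosing a point set."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def hair_details(hair_contour, face_key_points, left_eye, right_eye, face_length):
    # lowest point of the hair contour (largest y in image coordinates)
    hair_bottom_point = max(hair_contour, key=lambda p: p[1])
    forehead_length = point_to_line_dist(hair_bottom_point, left_eye, right_eye)
    # assumption: normalized by dividing by the computed face length
    hairline_height = forehead_length / face_length
    # hair rectangle area divided by face rectangle area
    hair_volume = bbox_area(hair_contour) / bbox_area(face_key_points)
    return {"forehead_length": forehead_length,
            "hairline_height": hairline_height,
            "hair_volume": hair_volume}
```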
  • the hair recognition task identifier includes a hairstyle recognition identifier, extract the hair image features in the face image according to the hair contour graphic, and recognize the hairstyle information of the face image according to the hair image features.
  • this embodiment describes in detail how to calculate the hair detail information of the face image according to the coordinate information of the key points of the face and the coordinate information of the hair contour figure.
  • the fifth embodiment of the hair information recognition method in the embodiment of the present application includes:
  • The hair recognition task identifier includes a hair detail recognition identifier and/or a hairstyle recognition identifier.
  • If the hair recognition task identifier includes the hair detail recognition identifier, the hair detail information of the face image is calculated according to the coordinate information of the face key points and the coordinate information of the hair contour figure;
  • If the hair recognition task identifier includes the hairstyle recognition identifier, the hair image in the face image is extracted according to the hair contour figure;
  • A hair image feature is an image feature about hair extracted from the face image; an image feature represents the color, texture, shape, or spatial relationship of an image.
  • The hair image features may specifically be data extracted by the computer device from the hairstyle image that can represent the color, length, or shape of the hair; they can be regarded as a "non-image" representation or description of the hairstyle image, such as numeric values, vectors, matrices, or symbols.
  • The face image within the range of the hair contour figure can be processed through a convolutional neural network to extract the hair image features, and a multi-task convolutional neural network may be configured to extract multiple features for the different hairstyle information required.
  • The hairstyle information may include hair length and hair color. Hair lengths may include long hair, medium hair, short hair, ultra-short hair, bald, and tied hair; hair colors may include black, brown, blonde, off-white, red, etc.
  • This embodiment describes in detail the process of extracting the hair image features in the face image according to the hair contour figure and recognizing the hairstyle information of the face image according to the hair image features: the hair image in the face image is extracted through the hair contour figure; the hair image features in the hair image are extracted with a preset multi-task convolutional neural network; and at least one hairstyle recognition task is performed according to the extracted hair image features to obtain at least one piece of hairstyle information.
  • various hair image features can be extracted, and various hairstyle information can be correspondingly obtained.
  • An embodiment of the hair information recognition device in the embodiment of the present application includes:
  • the obtaining module 601 is used to obtain the face image to be recognized;
  • the first model input module 602 is used to input the face image into the pre-trained face key point recognition model for face feature recognition, and obtain the face key points of the face image and their coordinate information;
  • the second model input module 603 is used to input the face image into a pre-trained contour recognition model to obtain at least one contour graphic of the face image;
  • the hair contour determination module 604 is used to determine the hair contour graphic of the face image from the contour graphics according to the face key points, and calculate the coordinate information of the hair contour graphic according to the coordinate information of the face key points;
  • the task parsing module 605 is configured to obtain the hair recognition request input by the user, and parse the hair recognition task identifier carried in the hair recognition request, wherein the hair recognition task identifier includes a hair detail recognition identifier and/or a hairstyle recognition identifier;
  • the hair detail recognition module 606 is used to calculate the hair detail information of the face image according to the coordinate information of the face key points and the coordinate information of the hair contour graphic when the hair recognition task identifier includes the hair detail recognition identifier;
  • the above-mentioned face images can be stored in nodes of a blockchain.
  • the hair information recognition device runs the above hair information recognition method: the device obtains the face image to be recognized; inputs the face image into the pre-trained face key point recognition model and the contour recognition model respectively, obtaining the coordinate information of the face key points and the hair contour graphic of the face image; calculates the coordinate information of the hair contour graphic according to the coordinate information of the face key points; obtains the hair recognition request input by the user and selects the corresponding hair recognition task according to the request, wherein the hair recognition task includes hair detail recognition and hairstyle recognition; if the task is hair detail recognition, calculates the hair detail information of the face image according to the coordinate information of the face key points and the coordinate information of the hair contour graphic; and if the task is hairstyle recognition, extracts the hair image features from the face image and identifies the hairstyle information of the face image according to those features.
  • this method recognizes the face image, obtains the hair contour and the face key points, defines the hair information based on the application scene, and calculates that information; it can not only identify the overall hair area but also accurately and effectively identify many kinds of hair information.
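The request-handling flow summarized above (parse the hair recognition request, then run hair detail recognition and/or hairstyle recognition) can be sketched as a simple dispatch table; the identifier strings, handler bodies, and return values below are placeholders, not the patent's actual interface:

```python
# Hypothetical handlers for the two recognition tasks; the values returned
# here are dummies standing in for the real geometric/CNN computations.
def recognize_hair_details(face_image):
    return {"bangs_length": 0.25, "hairline_height": 0.3}

def recognize_hairstyle(face_image):
    return {"hair_length": "short", "hair_color": "black"}

# Map each hair recognition task identifier to its handler.
TASK_HANDLERS = {
    "hair_detail": recognize_hair_details,
    "hairstyle": recognize_hairstyle,
}

def handle_request(request, face_image):
    # A request may carry one or both identifiers ("and/or" in the text).
    results = {}
    for task_id in request["task_ids"]:
        results[task_id] = TASK_HANDLERS[task_id](face_image)
    return results

out = handle_request({"task_ids": ["hair_detail", "hairstyle"]}, face_image=None)
```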
  • the second embodiment of the hair information recognition device in the embodiment of the present application includes:
  • the obtaining module 601 is used to obtain the face image to be recognized;
  • the first model input module 602 is used to input the face image into the pre-trained face key point recognition model for face feature recognition, and obtain the face key points of the face image and their coordinate information;
  • the second model input module 603 is used to input the face image into a pre-trained contour recognition model to obtain at least one contour graphic of the face image;
  • the hair contour determination module 604 is used to determine the hair contour graphic of the face image from the contour graphics according to the face key points, and calculate the coordinate information of the hair contour graphic according to the coordinate information of the face key points;
  • the task parsing module 605 is configured to obtain the hair recognition request input by the user, and parse the hair recognition task identifier carried in the hair recognition request, wherein the hair recognition task identifier includes a hair detail recognition identifier and/or a hairstyle recognition identifier;
  • the hair detail recognition module 606 is used to calculate the hair detail information of the face image according to the coordinate information of the face key points and the coordinate information of the hair contour graphic when the hair recognition task identifier includes the hair detail recognition identifier;
  • the hair information recognition device further includes a first model training module 608, which is specifically used to: obtain a training sample set, wherein the training sample set includes sample face images; obtain the first sample face key point information of the sample face images, and mark it to obtain the first sample face pose information; input the sample face images into a preset neural network to obtain second sample face key point information and second sample face pose information; compare the first sample face key point information and first sample face pose information with the second sample face key point information and second sample face pose information, and calculate a loss function; judge whether the loss function is greater than a preset threshold; if so, adjust the parameters of the neural network according to the loss function, input the sample face images into the neural network after parameter adjustment, and repeat the model training until the loss function is not greater than the preset threshold, obtaining the face key point recognition model.
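The training loop described for module 608 — compute a loss against the labeled key-point/pose targets and, while the loss exceeds a preset threshold, adjust the parameters and train again — can be sketched with a toy one-parameter model standing in for the neural network; the data, learning rate, and threshold below are made up:

```python
# (input, target key-point value) pairs; invented training data.
samples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
threshold = 0.05   # preset loss threshold
w = 0.0            # single "network" parameter
lr = 0.05          # learning rate

def loss_fn(w):
    # Mean squared error between predictions and targets.
    return sum((w * x - y) ** 2 for x, y in samples) / len(samples)

loss = loss_fn(w)
while loss > threshold:          # judge whether the loss exceeds the threshold
    # Adjust the parameter according to the loss (gradient descent step).
    grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
    w -= lr * grad
    loss = loss_fn(w)            # re-run "training" with adjusted parameters

# Training stops once the loss is not greater than the preset threshold.
```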
  • the training sample set includes a sample face image
  • the hair information recognition device further includes a second model training module 609, which is specifically used to: obtain a preset test set, wherein the test set includes test face images; obtain the mark information of the sample face images and the test face images, and generate the mask images corresponding to the sample face images and the test face images according to the mark information; input the sample face images into a semantic segmentation network to obtain semantic segmentation images, calculate the loss value between each semantic segmentation image and the corresponding mask image through a cross-entropy loss function, and adjust the parameters of the semantic segmentation network according to that value for the next round of training, until the preset number of training rounds ends; input the test set into the trained semantic segmentation network to obtain the corresponding semantic segmentation images, and calculate a segmentation evaluation index from the mask images and the semantic segmentation images corresponding to the test set; judge whether the segmentation evaluation index reaches the preset standard; if not, adjust the cross-entropy loss function and continue training.
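The cross-entropy comparison between a semantic segmentation output and its mask image, as used by module 609, might look as follows in a minimal per-pixel form; binary hair/non-hair labels are assumed, and the array shapes and probability values are illustrative only:

```python
import math

def cross_entropy_loss(pred_probs, mask, eps=1e-7):
    # Average binary cross-entropy over all pixels: pred_probs holds the
    # predicted probability of the "hair" class, mask holds the 0/1 label.
    total = 0.0
    n = 0
    for pred_row, mask_row in zip(pred_probs, mask):
        for p, y in zip(pred_row, mask_row):
            p = min(max(p, eps), 1 - eps)   # clamp for numerical safety
            total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
            n += 1
    return total / n

pred = [[0.9, 0.2], [0.8, 0.1]]   # predicted hair probabilities (2x2 "image")
mask = [[1, 0], [1, 0]]           # ground-truth mask image
loss = cross_entropy_loss(pred, mask)
```

A lower loss means the segmentation output agrees more closely with the mask, which is the signal used to adjust the network between rounds.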
  • the face key points include left eye key points and right eye key points
  • the hair detail information includes fringe length information
  • the hair detail recognition module 606 is specifically configured to: divide the face image into a left region, a middle region and a right region according to the coordinate information of the left eye key point and the right eye key point; calculate the face length in the face image according to the coordinate information of the lowest face key point in the middle region and of the highest point in the hair contour graphic; calculate the hair lengths of the left region, the middle region and the right region according to the coordinate information of the highest and lowest points of the hair contour graphic in each region; and divide the hair length by the face length to obtain the bangs length information.
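A geometric sketch of this bangs-length computation, under assumed image coordinates (y grows downward) and made-up key-point and contour values:

```python
# Eye key-point x-coordinates split the image into three regions (assumption:
# strictly left of the left eye, strictly right of the right eye, else middle).
left_eye_x, right_eye_x = 40.0, 80.0

# Hypothetical hair-contour points as (x, y).
hair_contour = [(20, 10), (45, 5), (60, 4), (75, 6), (100, 12),
                (45, 30), (60, 25), (75, 28)]
chin_y = 120.0   # lowest face key point in the middle region

def region_of(x):
    if x < left_eye_x:
        return "left"
    if x > right_eye_x:
        return "right"
    return "middle"

regions = {"left": [], "middle": [], "right": []}
for x, y in hair_contour:
    regions[region_of(x)].append(y)

top_y = min(regions["middle"])      # highest hair point (smallest y)
face_length = chin_y - top_y        # face length per the description above

bangs = {}
for name, ys in regions.items():
    hair_length = max(ys) - min(ys)  # highest-to-lowest hair extent per region
    bangs[name] = hair_length / face_length
```

Dividing by the face length makes the bangs length a scale-free ratio, so it does not depend on the resolution of the input image.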
  • the hair detail information also includes hairline height information
  • the hair detail recognition module 606 is specifically further configured to: connect the left eye key point and the right eye key point to obtain a connecting line; calculate the distance between the lowest point of the hair contour graphic in each of the left region, the middle region and the right region and the connecting line; divide the maximum of these distances by the face length to obtain the forehead height of the face image; and subtract the forehead height from one to obtain the hairline height information of the face image.
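The hairline-height steps above (distance from each region's lowest hair point to the eye line, maximum distance divided by face length, result subtracted from one) might be sketched as follows; all coordinates are invented for illustration:

```python
import math

left_eye = (40.0, 60.0)
right_eye = (80.0, 60.0)
face_length = 116.0   # assumed, as computed in the bangs-length step

def point_line_distance(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    den = math.hypot(x2 - x1, y2 - y1)
    return num / den

# Lowest hair-contour point per region (illustrative values).
lowest_points = {"left": (30.0, 20.0), "middle": (60.0, 30.0), "right": (95.0, 25.0)}

distances = {r: point_line_distance(p, left_eye, right_eye)
             for r, p in lowest_points.items()}
forehead_height = max(distances.values()) / face_length   # scale-free ratio
hairline_height = 1.0 - forehead_height
```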
  • the hair detail recognition module 606 is specifically further configured to: construct a minimum hair region rectangle according to the coordinate information of the hair contour graphic and calculate its area, wherein the minimum hair region rectangle is the smallest rectangle containing the hair contour graphic; construct a minimum face rectangle according to the coordinate information of the face key points and calculate its area, wherein the minimum face rectangle is the smallest rectangle containing all the face key points; and divide the area of the minimum hair region rectangle by the area of the minimum face rectangle to obtain the hair volume information of the face image.
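A minimal sketch of the hair-volume ratio described above, using axis-aligned bounding rectangles over illustrative point lists:

```python
def min_bounding_rect_area(points):
    # Area of the smallest axis-aligned rectangle containing all points.
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

# Invented hair-contour points and face key points.
hair_contour_points = [(10, 5), (90, 5), (10, 50), (90, 50)]
face_key_points = [(30, 40), (70, 40), (50, 100), (30, 100)]

hair_area = min_bounding_rect_area(hair_contour_points)   # minimum hair rect
face_area = min_bounding_rect_area(face_key_points)       # minimum face rect
hair_volume = hair_area / face_area                       # hair volume ratio
```

Expressing hair volume as a ratio of the two rectangle areas again keeps the measure independent of image resolution.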
  • the hairstyle information recognition module 607 is specifically configured to: extract the hair image from the face image according to the hair contour graphic; extract the hair image features from the hair image with the preset multi-task convolutional neural network; and perform at least one hairstyle recognition task on the extracted hair image features to obtain at least one piece of hairstyle information.
  • this embodiment describes in detail the specific functions of each module and the unit composition of some modules.
  • the face image is recognized to obtain the hair contour and the face key points; hair information is defined based on the application scenario and then calculated, so that not only the overall hair area but also many kinds of hair information can be identified accurately and effectively.
  • Fig. 8 is a schematic structural diagram of a hair information recognition device provided by an embodiment of the present application.
  • the hair information recognition device 800 may differ considerably depending on configuration or performance, and may include one or more processors (central processing units, CPU) 810 (e.g., one or more processors) and memory 820, and one or more storage media 830 (e.g., one or more mass storage devices) storing application programs 833 or data 832.
  • the memory 820 and the storage medium 830 may be temporary storage or persistent storage.
  • the program stored in the storage medium 830 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations for the hair information recognition device 800 .
  • the processor 810 may be configured to communicate with the storage medium 830, and execute a series of instruction operations in the storage medium 830 on the hair information identification device 800, so as to realize the steps of the above hair information identification method.
  • the hair information identification device 800 can also include one or more power sources 840, one or more wired or wireless network interfaces 850, one or more input/output interfaces 860, and/or one or more operating systems 831, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, etc.
  • Blockchain is essentially a decentralized database: a chain of data blocks linked to one another using cryptographic methods. Each data block contains a batch of network transaction information, which is used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium.
  • the computer-readable storage medium may also be a volatile computer-readable storage medium. Instructions are stored in the computer-readable storage medium, and when the instructions are run on the computer, the computer is made to execute the steps of the method for identifying hair information.
  • if the integrated unit is realized in the form of a software function unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to the field of artificial intelligence and concerns a hair information recognition method, apparatus and device, and a storage medium. The method consists of: acquiring a face image, then inputting the face image respectively into a face key point recognition model and a contour recognition model to obtain the coordinate information of a face key point and a plurality of contour graphics; determining a hair contour graphic from the contour graphics according to the face key point, then calculating the coordinate information of the hair contour graphic according to the coordinate information of the face key point; and acquiring a hair recognition request, then selecting a corresponding hair recognition task and respectively obtaining hair detail information and hairstyle information. According to the method, the face image is recognized on the basis of deep-learning technology, the hair contour and the face key point are obtained, the hair information is defined according to the application scene and calculated, the overall hair area can be recognized, and a wide variety of hair information can be recognized accurately. In addition, the present application relates to blockchain technology, and the face image can be stored in a blockchain.
PCT/CN2022/071475 2021-06-10 2022-01-12 Procédé, appareil et dispositif de reconnaissance d'informations capillaires, et support de stockage WO2022257456A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110645384.2A CN113255561B (zh) 2021-06-10 2021-06-10 头发信息识别方法、装置、设备及存储介质
CN202110645384.2 2021-06-10

Publications (1)

Publication Number Publication Date
WO2022257456A1 true WO2022257456A1 (fr) 2022-12-15

Family

ID=77187246

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/071475 WO2022257456A1 (fr) 2021-06-10 2022-01-12 Procédé, appareil et dispositif de reconnaissance d'informations capillaires, et support de stockage

Country Status (2)

Country Link
CN (1) CN113255561B (fr)
WO (1) WO2022257456A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115984426A (zh) * 2023-03-21 2023-04-18 美众(天津)科技有限公司 发型演示图像的生成的方法、装置、终端及存储介质
CN117237397A (zh) * 2023-07-13 2023-12-15 天翼爱音乐文化科技有限公司 基于特征融合的人像分割方法、***、设备及存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255561B (zh) * 2021-06-10 2021-11-02 平安科技(深圳)有限公司 头发信息识别方法、装置、设备及存储介质

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001344278A (ja) * 2000-03-29 2001-12-14 Seiko Epson Corp 検索方法、検索装置、検索プログラムを記憶した記憶媒体、検索対象マップ作成方法、検索対象マップ作成装置、画像検索方法、画像検索装置、画像検索プログラムを記憶した記憶媒体、画像検索用データを記憶した記憶媒体、画像マップ作成方法及び画像マップ作成装置
CN107451950A (zh) * 2016-05-30 2017-12-08 北京旷视科技有限公司 人脸图像生成方法、人脸识别模型训练方法及相应装置
CN108009470A (zh) * 2017-10-20 2018-05-08 深圳市朗形网络科技有限公司 一种图像提取的方法和装置
CN108960167A (zh) * 2018-07-11 2018-12-07 腾讯科技(深圳)有限公司 发型识别方法、装置、计算机可读存储介质和计算机设备
CN110021000A (zh) * 2019-05-06 2019-07-16 厦门欢乐逛科技股份有限公司 基于图层变形的发际线修复方法及装置
CN111582278A (zh) * 2019-02-19 2020-08-25 北京嘀嘀无限科技发展有限公司 人像分割方法、装置及电子设备
CN111814573A (zh) * 2020-06-12 2020-10-23 深圳禾思众成科技有限公司 一种人脸信息的检测方法、装置、终端设备及存储介质
CN111931908A (zh) * 2020-07-23 2020-11-13 北京电子科技学院 一种基于人脸轮廓的人脸图像自动生成方法
CN112836566A (zh) * 2020-12-01 2021-05-25 北京智云视图科技有限公司 针对边缘设备的多任务神经网络人脸关键点检测方法
CN113255561A (zh) * 2021-06-10 2021-08-13 平安科技(深圳)有限公司 头发信息识别方法、装置、设备及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102121654B1 (ko) * 2018-06-29 2020-06-10 전자부품연구원 딥러닝 기반 제스처 자동 인식 방법 및 시스템
CN109829431B (zh) * 2019-01-31 2021-02-12 北京字节跳动网络技术有限公司 用于生成信息的方法和装置
US11594074B2 (en) * 2019-09-10 2023-02-28 Amarjot Singh Continuously evolving and interactive Disguised Face Identification (DFI) with facial key points using ScatterNet Hybrid Deep Learning (SHDL) network
CN112257504A (zh) * 2020-09-16 2021-01-22 深圳数联天下智能科技有限公司 脸型识别方法、脸型识别模型训练方法及相关装置


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115984426A (zh) * 2023-03-21 2023-04-18 美众(天津)科技有限公司 发型演示图像的生成的方法、装置、终端及存储介质
CN115984426B (zh) * 2023-03-21 2023-07-04 美众(天津)科技有限公司 发型演示图像的生成的方法、装置、终端及存储介质
CN117237397A (zh) * 2023-07-13 2023-12-15 天翼爱音乐文化科技有限公司 基于特征融合的人像分割方法、***、设备及存储介质
CN117237397B (zh) * 2023-07-13 2024-05-28 天翼爱音乐文化科技有限公司 基于特征融合的人像分割方法、***、设备及存储介质

Also Published As

Publication number Publication date
CN113255561A (zh) 2021-08-13
CN113255561B (zh) 2021-11-02

Similar Documents

Publication Publication Date Title
US11256905B2 (en) Face detection method and apparatus, service processing method, terminal device, and storage medium
WO2022257456A1 (fr) Procédé, appareil et dispositif de reconnaissance d'informations capillaires, et support de stockage
US11651797B2 (en) Real time video processing for changing proportions of an object in the video
WO2020063527A1 (fr) Procédé de génération de coiffures sur la base d'une récupération et déformation de multiples caractéristiques
CN109325437B (zh) 图像处理方法、装置和***
US11403874B2 (en) Virtual avatar generation method and apparatus for generating virtual avatar including user selected face property, and storage medium
WO2017193906A1 (fr) Procédé et système de traitement d'image
WO2020103700A1 (fr) Procédé de reconnaissance d'image basé sur des expressions microfaciales, appareil et dispositif associé
JP4950787B2 (ja) 画像処理装置及びその方法
CN107463865B (zh) 人脸检测模型训练方法、人脸检测方法及装置
CN108629336B (zh) 基于人脸特征点识别的颜值计算方法
US20050084140A1 (en) Multi-modal face recognition
CN110147721A (zh) 一种三维人脸识别方法、模型训练方法和装置
JPH09251534A (ja) 人物認証装置及び人物認証方法
JP5644773B2 (ja) 顔画像を照合する装置および方法
CN108446672B (zh) 一种基于由粗到细脸部形状估计的人脸对齐方法
CN108615256B (zh) 一种人脸三维重建方法及装置
CN105335719A (zh) 活体检测方法及装置
CN113570684A (zh) 图像处理方法、装置、计算机设备和存储介质
CN115050064A (zh) 人脸活体检测方法、装置、设备及介质
CN113591763B (zh) 人脸脸型的分类识别方法、装置、存储介质及计算机设备
JP2000030065A (ja) パターン認識装置及びその方法
CN114005169B (zh) 人脸关键点检测方法、装置、电子设备及存储介质
CN112101127A (zh) 人脸脸型的识别方法、装置、计算设备及计算机存储介质
CN107153806B (zh) 一种人脸检测方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22819064

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22819064

Country of ref document: EP

Kind code of ref document: A1