CN113313093B - Face identification method and system based on face part extraction and skin color editing - Google Patents


Info

Publication number
CN113313093B
CN113313093B (application CN202110861691.4A)
Authority
CN
China
Prior art keywords
face
skin color
data set
race
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110861691.4A
Other languages
Chinese (zh)
Other versions
CN113313093A (en)
Inventor
李来
王东
王月平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Moredian Technology Co ltd
Original Assignee
Hangzhou Moredian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Moredian Technology Co ltd filed Critical Hangzhou Moredian Technology Co ltd
Priority to CN202110861691.4A priority Critical patent/CN113313093B/en
Publication of CN113313093A publication Critical patent/CN113313093A/en
Application granted granted Critical
Publication of CN113313093B publication Critical patent/CN113313093B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to a face recognition method based on face part extraction and skin color editing, wherein the method comprises the following steps: performing face part segmentation extraction on a self-owned face data set through a face part segmentation model to obtain face parts and the corresponding skin color region positions; clustering the self-owned face data set according to skin color, and then acquiring the sparsely distributed rare race samples in the data set; migrating the facial skin color of the rare races onto the face parts according to the skin color region positions to generate a large number of rare race samples; and training a face recognition model based on the self-owned face data set and performing face recognition through it. The application solves the problem in the related art of low recognition rates for sparsely distributed, under-represented races, and markedly improves the recognition of minority races.

Description

Face identification method and system based on face part extraction and skin color editing
Technical Field
The present application relates to the field of face recognition technology, and in particular, to a face recognition method and system based on face component extraction and skin color editing.
Background
The growth of data scale has greatly driven the development of deep learning, from which face recognition techniques benefit. Before a face recognition model is deployed in an application scenario, a large amount of data is required to train it.
The large-scale face data sets that are currently publicly available come mainly from academic open sources. Such data sets pay insufficient attention to racial distribution and to the number of darker-skinned subjects under scene-specific conditions such as age and expression, so a face recognition model trained on them generalizes poorly and is biased toward the majority, well-represented races.
Another data source is the business scenario itself, i.e., internal data collection, which faces two major difficulties. (I) Data acquisition: sampling a large amount of valid multi-race face data across all scenes is difficult, and the racial imbalance in the existing training data cannot be corrected. (II) Data cleaning: existing models carry racial bias, so large errors are introduced when cleaning relies on a model; manual cleaning, in turn, is slow, costly, and cannot guarantee data quality.
At present, no effective solution has been proposed for the low recognition rate of related-art face recognition methods on sparsely distributed, under-represented races.
Disclosure of Invention
The embodiments of the present application provide a face recognition method, a system, and a computer-readable storage medium based on face part extraction and skin color editing, so as to at least solve the problem that face recognition methods in the related art have a low recognition rate for sparsely distributed, under-represented races.
In a first aspect, an embodiment of the present application provides a face recognition method based on face component extraction and skin color editing, where the method includes:
performing face part segmentation extraction on a self-owned face data set through a face part segmentation model to obtain face parts and the corresponding skin color region positions;
clustering the self-owned face data set according to skin color, and then acquiring the sparsely distributed rare race samples in the self-owned face data set;
migrating the facial skin color of the rare race onto the face parts according to the skin color region positions to generate a large number of rare race samples, wherein the generated rare race samples are further stored in the self-owned face data set;
and training a face recognition model based on the self-owned face data set, and performing face recognition through the face recognition model.
In some embodiments, before the face part segmentation extraction is performed on the self-owned face data set through the face part segmentation model, the method further includes:
constructing a lightweight segmentation network;
and training the lightweight segmentation network with a face part extraction data set to obtain the face part segmentation model.
In some of these embodiments, the constructing of the lightweight segmentation network comprises:
constructing the feature extraction layer of the lightweight segmentation network in a re-parameterization manner;
building a dual-branch network structure, wherein the two branches comprise a trunk branch and an auxiliary branch, the trunk branch being a semantic-information branch and the auxiliary branch a visual-information branch;
setting the trunk branch to iterate with a first loss function;
adding the first loss function to the auxiliary branch and setting the auxiliary branch to iterate with it, wherein the first loss function is a deep-supervision loss function;
and setting the lightweight segmentation network to iterate with a second loss function as the total loss function.
In some of these embodiments, the following formula is employed as the first loss function:
loss_i = -(1/N) Σ_{j=1}^{N} log P_{ij}
wherein loss is the first loss function, N is the product of the width and height of the input face image, i is the index of the loss function, j is the index of a pixel in the image, and P is the predicted value at that pixel.
In some of these embodiments, the following formula is employed as the second loss function:
L(X, Y) = l_p(X, Y) + α Σ_{i=1}^{K} l_a(X_i, Y_i)
wherein L is the second loss function, X is the input to the lightweight segmentation network, Y is the output of the lightweight segmentation network, l_p is the loss function of the trunk branch, l_a is the loss function of the auxiliary branch, K is the number of auxiliary branches, and α is a balance parameter controlling the relative contribution of the trunk-branch and auxiliary-branch loss functions.
In some embodiments, the migrating of the facial skin color of the rare race onto the face parts according to the skin color region positions to generate a large number of rare race samples comprises:
simulating the illumination distribution of faces in real environments through a color-space dithering method so as to enhance the authenticity and diversity of the skin color, wherein the color-space dithering method is realized through the following formula:
X_i' = X_i × r_i × ε
wherein X is the face region, i is the index over the color-space channels, r is the skin-tone adjustment ratio, and ε is a random factor simulating illumination.
In some embodiments, the clustering of the self-owned face data set according to skin color comprises:
setting N clustering centers according to the race categories in the self-owned face data set, wherein each clustering center represents the skin color of one race;
configuring a skin color label for each face image in the self-owned face data set through an unsupervised clustering algorithm;
and distributing the face images to the corresponding clustering centers according to the skin color labels for clustering.
In some embodiments, the acquiring of the sparsely distributed rare race samples in the self-owned face data set comprises:
acquiring the skin color labels in the clustered self-owned face data set;
and acquiring the sparsely distributed rare race samples in the self-owned face data set according to the skin color labels.
In a second aspect, an embodiment of the present application further provides a face recognition system based on face component extraction and skin color editing, where the system includes: the device comprises a segmentation extraction module, a clustering module, a migration module and a training module.
The segmentation extraction module is used for performing face part segmentation extraction through the face part segmentation model based on a self-owned face data set to obtain face parts and the corresponding skin color region positions;
the clustering module is used for clustering the self-owned face data set according to skin color and then acquiring the sparsely distributed rare race samples in it;
the migration module is used for migrating the facial skin color of the rare race onto the face parts according to the skin color region positions to generate a large number of rare race samples, wherein the generated rare race samples are further stored in the self-owned face data set;
the training module is used for training a face recognition model based on the self-owned face data set and performing face recognition through it.
In a third aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the face recognition method based on face component extraction and skin color editing as described in the first aspect above.
Compared with the related art, the face recognition method based on face part extraction and skin color editing provided by the embodiments of the present application obtains face parts and skin color region positions through a lightweight face part extraction model, clusters the self-owned face data set to find the sparsely distributed rare race samples, migrates the skin color of those samples onto the face parts to generate a large number of additional rare race samples, and finally trains the model on the self-owned face data set thus augmented. In this embodiment, because the data set used for training has a uniform racial distribution, the trained face model recognizes the rare races without bias and with an equal recognition effect.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic diagram of an application environment of a face recognition method based on face part extraction and skin color editing according to an embodiment of the present application;
FIG. 2 is a flow chart of a face recognition method based on face part extraction and skin color editing according to an embodiment of the application;
FIG. 3 is a block diagram of a face recognition system based on face component extraction and skin color editing according to an embodiment of the present application;
FIG. 4 is a workflow diagram of a face recognition system based on face parts extraction and skin color editing according to an embodiment of the application;
FIG. 5 is a schematic diagram of the operation of a face recognition system based on face component extraction and skin color editing according to an embodiment of the present application;
fig. 6 is an internal structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described and illustrated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments provided in the present application without any inventive step are within the scope of protection of the present application.
It is obvious that the drawings in the following description are only examples or embodiments of the present application, and that it is also possible for a person skilled in the art to apply the present application to other similar contexts on the basis of these drawings without inventive effort. Moreover, it should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another.
Reference in the specification to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the specification. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of ordinary skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments without conflict.
Unless defined otherwise, technical or scientific terms used herein have the ordinary meaning understood by those of ordinary skill in the art to which this application belongs. References to "a," "an," "the," and similar words in this application do not limit number and may refer to the singular or the plural. As used in this application, the terms "including," "comprising," "having," and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, article, or apparatus that comprises a list of steps or modules (elements) is not limited to the listed steps or elements but may include other steps or elements not expressly listed or inherent to it. References to "connected," "coupled," and the like are not limited to physical or mechanical connections and may include electrical connections, whether direct or indirect. The term "plurality" means two or more. "And/or" describes an association between objects and covers three cases: for example, "A and/or B" may mean that A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the preceding and following objects. The terms "first," "second," "third," and the like merely distinguish similar objects and do not denote a particular ordering.
The face recognition method based on face part extraction and skin color editing provided by the present application can be applied in the environment shown in fig. 1, which is a schematic diagram of the application environment of the method according to an embodiment of the present application. As shown in fig. 1, a face recognition algorithm is deployed on a terminal 10. Because the model is trained on a data set made racially uniform through face part extraction and skin color migration, the face recognition model carries no racial bias and recognizes different races equally well. The terminal 10 acquires a face image of a person through a camera or a similar device, downloads comparison face data from the server 11 over a network, and performs face recognition. After recognition, it instructs external equipment to act on the recognition result; for example, in an access control scenario, when face recognition on the terminal 10 succeeds, the access control device is instructed to open the gate. It should be noted that the terminal 10 in the embodiments of the present application may be an access control recognition device, or a mobile terminal such as a smartphone or a tablet computer, and the server 11 may be a single server or a cluster of multiple servers.
Fig. 2 is a flowchart of a face recognition method based on face component extraction and skin color editing according to an embodiment of the present application, and as shown in fig. 2, the flowchart includes the following steps:
s201, face part segmentation and extraction are carried out on the own face data set through a face part segmentation model, and a face part and a corresponding skin color region position are obtained. The self-owned face data set refers to a data set for face recognition training that is acquired by a unit such as an individual, an organization, or a company and is not an open source. The face images in the self-owned face dataset are not evenly distributed due to geographical constraints, for example, if the data set owner is a Chinese company, the face images may have a large proportion of yellow people and a small distribution of white people and black people. Further, the resulting facial component may include: ear, nose, eye, mouth, skin, hair, etc. In the segmentation process of the model, the model also synchronously acquires the skin color region position corresponding to each segmentation region;
s202, clustering the own face data set according to skin colors, and then acquiring rare race samples with small distribution in the own face data set. Because the free face data set simultaneously comprises a plurality of race data with different skin colors, in the step, clustering is carried out according to the skin colors, so that the face data set is divided into a plurality of subclasses according to the race skin colors, and the face data of rare race which is distributed rarely is conveniently acquired. Specifically, clustering can be performed through an unsupervised clustering algorithm;
and S203, migrating the skin color of the human face of the scarce race to the facial component according to the position of the skin color area to generate a large number of scarce race samples, wherein the scarce race samples are continuously stored in the own human face data set. Migrating human facial skin colors of a rare ethnic group onto a facial component includes: respectively calculating the skin color adjusting proportion of the skin color of the human face of the scarce race under the color space (R, G and B) indexes, and further adjusting the skin color of the human face of the face part in a code layer according to the skin color adjusting proportion and the position of the skin color area, thereby generating a large number of scarce race samples. Through the steps, the skin color of the face data which is distributed in a small and small amount in the original face data set is migrated to the face part of the race with a large skin color, and the face data of various scenes and various rare races of a plurality of age groups can be randomly, rapidly expanded in batches based on parameters such as different scenes, time, age groups and the like in the original data set, so that the race or the skin color in the own data set is uniformly distributed;
and S204, training a face recognition model based on the own face data set, and carrying out face recognition through the face recognition model. Through the steps S201 to S203, a new self-owned face data set with uniform race or skin color distribution is obtained, and the model is retrained depending on the face data set with relatively uniform distribution, so that the obtained model has relatively equal recognition effect for different races. It should be noted that, because the core invention point of the present application lies in the expansion of the data set to achieve the human race or skin color distribution balance, and how to train the face recognition model is a common means of those skilled in the art, it has no influence on the core invention point of the present application, and therefore, it is not described in detail in this embodiment.
Through steps S201 to S204, and in contrast to the related-art approach of cleaning an original or internal data set before training, the embodiments of the present application expand the original self-owned face data set with a large number of rare-race face images via face part extraction and skin color editing, generate a new face data set with a uniform skin color or race distribution, and finally train the model on that new data set. This resolves the poor recognition of minority races by related-art face recognition models and improves the model's recognition capability.
In some embodiments, before the face part segmentation model performs face part segmentation extraction on the self-owned face data set, the model must be constructed and trained according to business requirements. This includes constructing a lightweight segmentation network and training it with a face part extraction data set to obtain the face part segmentation model. The face part extraction data set is an open-source data set dedicated to training face part segmentation models, for example the CelebAMask-HQ data set. Constructing the lightweight segmentation network comprises the following steps:
constructing a lightweight feature extraction layer (backbone) in a re-parameterization manner, to reduce the number of model parameters and the inference time;
building a dual-branch network structure, with the semantic-information branch as the trunk and the visual-information branch as the auxiliary branch, and additionally attaching a deep-supervision auxiliary loss function to the auxiliary branch. In this embodiment both branches have their own loss function and both iterate with the first loss function, which keeps reverse-gradient updates stable during training, reduces network divergence caused by large fluctuations in parameter updates, and accelerates model convergence;
and setting a second loss function as the total loss function of the lightweight segmentation network, iterated during model training.
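The re-parameterization idea for the backbone can be illustrated with a RepVGG-style kernel merge: at inference time a 3x3 conv, a parallel 1x1 conv, and an identity shortcut collapse into a single 3x3 kernel. The patent names re-parameterization but not the exact recipe, so this specific merge, and the helper name merge_branches, are assumptions for illustration.

```python
import numpy as np

def merge_branches(k3, k1, c):
    """Fold a 3x3 conv, a 1x1 conv and an identity shortcut into one 3x3 kernel.

    k3: (C, C, 3, 3) kernel; k1: (C, C, 1, 1) kernel; c: number of channels.
    The 1x1 kernel contributes only at the center tap of the 3x3 window, and
    the identity shortcut is a 1.0 at each channel's own center tap, so all
    three branches sum into a single kernel with identical output.
    """
    merged = k3.copy()
    merged[:, :, 1, 1] += k1[:, :, 0, 0]   # 1x1 kernel lands on the center tap
    idx = np.arange(c)
    merged[idx, idx, 1, 1] += 1.0          # identity = 1 at each channel's center
    return merged
```

With fewer sequential branches at inference time, the merged backbone has the lower parameter count and latency the embodiment aims for.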
In some embodiments, the first loss function and the second loss function may be characterized by the following equations 1 and 2, respectively:
equation 1:
loss_i = -(1/N) Σ_{j=1}^{N} log P_{ij}
wherein loss is the first loss function and N is the product of the width and height of the input face image; i is the index of the loss function (optionally, in this embodiment, the trunk-branch segmentation loss is weighted 2.0 and the auxiliary-branch segmentation loss 1.0); j is the index of a pixel in the image, and P is the predicted value at that pixel;
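One standard reading of these variable definitions (N pixels, predicted value P per pixel) is a per-pixel cross-entropy averaged over the image; the sketch below assumes that form, and the helper name branch_loss is hypothetical.

```python
import math

def branch_loss(pred):
    """Per-branch deep-supervision loss under a cross-entropy assumption:
    the mean of -log(P_j) over the N = width * height pixels of the input.

    `pred` is a flat list of predicted probabilities for each pixel's true
    class, so a perfect prediction (all 1.0) gives zero loss.
    """
    n = len(pred)
    return -sum(math.log(p) for p in pred) / n
```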
equation 2:
L(X, Y) = l_p(X, Y) + α Σ_{i=1}^{K} l_a(X_i, Y_i)
wherein L is the total loss of the lightweight segmentation network, i.e., the second loss function; X is the input to the network, typically an image; Y is the output of the network, typically the segmentation annotation labels of the image; l_p is the loss function of the trunk branch; l_a is the loss function of the auxiliary branch; K is the number of auxiliary branches and is optionally set to 1 to increase the training speed; and α is a balance parameter controlling the relative contribution of the trunk-branch and auxiliary-branch loss functions, optionally set to 0.5 in this embodiment.
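The combination of trunk and auxiliary losses with the balance parameter α reduces to a one-line weighted sum; the defaults below follow the optional settings described in this embodiment (α = 0.5, K = 1), and the function name total_loss is illustrative.

```python
def total_loss(trunk_loss, aux_losses, alpha=0.5):
    """Total segmentation loss L = l_p + alpha * sum_k l_a_k.

    `trunk_loss` is the trunk-branch loss l_p, `aux_losses` is a list of the
    K auxiliary-branch losses, and `alpha` balances their contributions.
    """
    return trunk_loss + alpha * sum(aux_losses)
```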
In some embodiments, to preserve the realism and diversity of the expanded rare-race face data, the illumination distribution of faces in real environments is simulated by a color-space dithering method while the rare race's facial skin color is being migrated onto the face parts. The color-space dithering method is characterized by the following formula 3:
equation 3:
X_i' = X_i × r_i × ε
where X is the face region, i is the index over the R, G and B color channels, r is the skin-tone adjustment ratio, and ε is a random factor simulating illumination. In the actual migration process, different illumination scenes are simulated by varying the random factor ε.
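Formula 3 can be sketched per pixel as below. The interval for ε is an assumption, since the patent does not specify the range of the random lighting factor, and the helper name jitter_skin is illustrative.

```python
import random

def jitter_skin(pixel, ratios, eps_range=(0.9, 1.1)):
    """Color-space dithering X_i' = X_i * r_i * eps per channel i in (R, G, B).

    `pixel` is an (R, G, B) tuple, `ratios` the per-channel skin-tone
    adjustment ratios, and a single random lighting factor eps is drawn per
    call; results are clamped to the valid 0..255 range.
    """
    eps = random.uniform(*eps_range)
    return tuple(min(255, max(0, int(x * r * eps))) for x, r in zip(pixel, ratios))
```

Drawing a fresh ε per generated sample is what varies the simulated illumination from one augmented image to the next.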
In some of these embodiments, clustering the self-owned face data set by skin color comprises: setting N clustering centers according to the race categories in the self-owned face data set, each center representing the skin color of one race; configuring a skin color label for each face image in the data set through an unsupervised clustering algorithm; and distributing the face images to the corresponding clustering centers according to the skin color labels for clustering.
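The clustering step above can be sketched with plain k-means over each face's mean skin color, the cluster index serving as the skin color label. The patent only says "unsupervised clustering algorithm", so the choice of k-means, and the helper name kmeans, are assumptions.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over per-face mean skin-color vectors.

    Each point is the average (R, G, B) of one face's skin region; the k
    centers then stand for the k race skin tones, and each face's assigned
    center index is its skin color label.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)                 # k distinct initial centers
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        labels = []
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            buckets[j].append(p)
            labels.append(j)
        # Recompute each center as the mean of its bucket (kept if empty).
        centers = [
            tuple(sum(xs) / len(b) for xs in zip(*b)) if b else centers[j]
            for j, b in enumerate(buckets)
        ]
    return labels, centers
```

Counting the labels then directly exposes which cluster, i.e. which race skin tone, is under-represented in the data set.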
Further, acquiring the sparsely distributed rare race samples in the self-owned face data set comprises: obtaining the skin color labels of the clustered self-owned face data set, and acquiring the sparsely distributed rare race samples according to those labels.
It should be noted that the steps illustrated in the above-described flow diagrams or in the flow diagrams of the figures may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is illustrated in the flow diagrams, in some cases, the steps illustrated or described may be performed in an order different than here.
The embodiment also provides a face recognition system based on face component extraction and skin color editing, which is used for implementing the above embodiments and preferred embodiments, and the description of the system is omitted. As used hereinafter, the terms "module," "unit," "subunit," and the like may implement a combination of software and/or hardware for a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 3 is a block diagram of a face recognition system based on face component extraction and skin color editing according to an embodiment of the present application, and as shown in fig. 3, the system includes: a segmentation extraction module 31, a clustering module 32, a migration module 33, and a training module 34.
The segmentation extraction module 31 is configured to perform face component segmentation extraction based on an own face data set through a face component segmentation model to obtain a face component and a corresponding skin color region position;
the clustering module 32 is configured to cluster the self-owned face data set according to skin colors, and then obtain rare race samples with small distribution in the self-owned face data set;
the migration module 33 is configured to migrate the skin color of the human face of the scarce race to the facial component according to the skin color region position to generate a large number of scarce race samples, where the scarce race samples are continuously stored in the own human face data set;
the training module 34 is configured to train a face recognition model based on the own face data set, and perform face recognition through the face recognition model.
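Taken together, the four modules form a segment-cluster-migrate-train pipeline. The following skeleton is a loose sketch only: the class name and the callable signatures are invented for illustration, since the patent specifies module responsibilities but no programming interface.

```python
class FaceDataAugPipeline:
    """Minimal skeleton mirroring the four modules described above.

    segment, cluster, migrate, and train are injected callables standing
    in for the segmentation extraction module, the clustering module,
    the migration module, and the training module respectively.
    """

    def __init__(self, segment, cluster, migrate, train):
        self.segment, self.cluster = segment, cluster
        self.migrate, self.train = migrate, train

    def run(self, dataset):
        # Segmentation extraction: face parts and skin color regions.
        parts = [self.segment(img) for img in dataset]
        # Clustering: indices of the scarce race samples.
        rare = self.cluster(dataset)
        # Migration: synthesised scarce race samples join the data set.
        dataset = dataset + [self.migrate(parts, i) for i in rare]
        # Training: fit the face recognition model on the enlarged set.
        return self.train(dataset)
```

The point of the design is that each module only needs the previous module's output, so the generated scarce race samples can simply be appended to the own face data set before training, as the migration module's description states.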
In some embodiments, fig. 4 is a workflow diagram of a face recognition system based on face part extraction and skin color editing according to an embodiment of the application. As shown in fig. 4, for the face images of races that are widely distributed and account for a large proportion of the face data set, the system extracts the face parts and the precise positions of the face skin color through the face part segmentation model; meanwhile, for the face images of races that are sparsely distributed and account for a small proportion, images are randomly selected and their face skin color RGB values are extracted; further, with the skin color randomly perturbed, the RGB values of the face skin color are migrated onto the face parts according to the precise positions of the face skin color, so that new face images are obtained; finally, model training is performed on the updated data set in which the races are uniformly distributed.
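The random-perturbation-and-migration step of this workflow can be sketched as follows. The per-channel ratio, the uniform jitter range, and the function signature are all assumptions for illustration; the source only states that a skin color adjustment proportion is computed under a color space index and perturbed by a random factor simulating illumination.

```python
import numpy as np

def transfer_skin_tone(face, skin_mask, rare_rgb, jitter=0.1, rng=None):
    """Migrate a rare-race skin color onto a face image (sketch).

    face:      (H, W, 3) uint8 image containing the extracted face part.
    skin_mask: (H, W) boolean mask of the skin color region position.
    rare_rgb:  mean RGB of a randomly selected rare-race face.
    jitter:    half-width of the random illumination perturbation.
    """
    rng = rng or np.random.default_rng()
    out = face.astype(float)
    # Mean RGB of the source skin region (the color space index here is
    # simply the RGB channel).
    src_mean = out[skin_mask].mean(axis=0)
    # Per-channel skin color adjustment proportion (assumed form).
    r = np.asarray(rare_rgb, dtype=float) / np.maximum(src_mean, 1.0)
    # Random factor simulating illumination, one draw per channel.
    eps = rng.uniform(-jitter, jitter, size=3)
    # Apply the perturbed ratio only inside the segmented skin region.
    out[skin_mask] *= r * (1.0 + eps)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Calling this once per randomly sampled rare-race RGB value yields many differently lit synthetic samples from a single face part, which is the data-balancing effect the workflow aims at.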
Fig. 5 is a schematic diagram of the operation of a face recognition system based on face part extraction and skin color editing according to an embodiment of the present application. As shown in fig. 5, a conventional face image is input at the input end of the system, and the face parts are output after face segmentation extraction; further, the skin color data of the minority races are migrated onto the face parts and superimposed with them to obtain a new face image.
In addition, in combination with the face recognition method based on face part extraction and skin color editing in the above embodiments, an embodiment of the present application may provide a storage medium for its implementation. A computer program is stored on the storage medium; when executed by a processor, the computer program implements the face recognition method based on face part extraction and skin color editing of any of the above embodiments.
In one embodiment, a computer device is provided, which may be a terminal. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a face recognition method based on face component extraction and skin color editing. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
In an embodiment, fig. 6 is a schematic internal structure diagram of an electronic device according to an embodiment of the present application. As shown in fig. 6, an electronic device is provided, which may be a server. The electronic device comprises a processor, a network interface, an internal memory, and a non-volatile memory connected by an internal bus, wherein the non-volatile memory stores an operating system, a computer program, and a database. The processor provides computing and control capabilities; the network interface communicates with external terminals through a network connection; the internal memory provides an environment for the operating system and the computer program to run; the computer program is executed by the processor to implement a face recognition method based on face part extraction and skin color editing; and the database stores data.
Those skilled in the art will appreciate that the configuration shown in fig. 6 is a block diagram of only part of the configuration relevant to the present application and does not limit the electronic device to which the present application is applied; a particular electronic device may include more or fewer components than shown in the drawings, combine certain components, or arrange the components differently.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
It should be understood by those skilled in the art that various features of the above embodiments can be combined arbitrarily, and for the sake of brevity, all possible combinations of the features in the above embodiments are not described, but should be considered as within the scope of the present disclosure as long as there is no contradiction between the combinations of the features.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A face recognition method based on face component extraction and skin color editing is characterized by comprising the following steps:
carrying out face part segmentation extraction on the own face data set through a face part segmentation model to obtain a face part and a corresponding skin color region position;
clustering the own face data set according to skin color, and then acquiring scarce race samples with small distribution in the own face data set;
migrating the face skin color of the scarce race onto the face parts according to the skin color region positions to generate a plurality of scarce race samples, which comprises: calculating a skin color adjustment proportion of the face skin color of the scarce race under a color space index, and adjusting the face skin color of the face parts according to the skin color adjustment proportion and the skin color region positions to generate a large number of scarce race samples, wherein the generated scarce race samples are continuously stored into the own face data set;
training a face recognition model based on the own face data set, and performing face recognition through the face recognition model.
2. The method according to claim 1, wherein before performing face part segmentation extraction on the own face data set through the face part segmentation model, the method further comprises:
constructing a lightweight segmentation network;
and training the lightweight segmentation network with a face part extraction data set to obtain the face part segmentation model.
3. The method according to claim 2, wherein constructing the lightweight segmentation network comprises:
constructing a feature extraction layer of the lightweight segmentation network in a re-parameterization manner;
constructing a dual-branch network structure, wherein the dual branches comprise a trunk branch and an auxiliary branch, the trunk branch is a semantic information branch, and the auxiliary branch is a visual information branch;
setting the trunk branch to iterate with a first loss function;
adding the first loss function to the auxiliary branch, and setting the auxiliary branch to iterate synchronously with the first loss function, wherein the first loss function is a deep supervision loss function;
and setting the lightweight segmentation network to iterate with a second loss function as the total loss function.
4. The method according to claim 3, characterized by using the following formula as the first loss function:

loss_i = -(1/N) · Σ_{j=1}^{N} log(P_j)

wherein loss is the first loss function, N is the product of the width and the height of the input face image, i is the index of the loss function, j is the index of a pixel in the image, and P_j is the predicted value of pixel j.
5. The method according to claim 3, characterized by using the following formula as the second loss function:

L(X, Y) = l_p(X, Y) + α · Σ_{i=1}^{K} l_i(X, Y)

wherein L is the second loss function, X is the input of the lightweight segmentation network, Y is the output of the lightweight segmentation network, l_p is the loss function of the trunk branch, l_i is the loss function of the auxiliary branch, K is the number of branches, and α is a balance parameter controlling the contribution of the loss functions of the trunk branch and the auxiliary branch.
6. The method according to claim 1, wherein migrating the face skin color of the scarce race onto the face parts according to the skin color region positions to generate a plurality of scarce race samples comprises:
simulating the illumination distribution of a human face in a real environment through color space dithering so as to enhance the authenticity and diversity of the skin color, wherein the color space dithering is realized by the following formula:

X_i = X_i · r · ε

wherein X is the face region, i is the index of the color space, r is the skin color adjustment proportion, and ε is a random factor simulating illumination.
7. The method according to claim 1, wherein clustering the own face data set according to skin color comprises:
setting N clustering centers according to the race categories in the own face data set, wherein each clustering center represents the skin color of one race;
configuring skin color labels for the face images of different races in the own face data set through an unsupervised clustering algorithm;
and allocating the face images to the corresponding clustering centers according to the skin color labels for clustering.
8. The method according to claim 7, wherein acquiring the scarce race samples with small distribution in the own face data set comprises:
acquiring the skin color labels in the own face data set after clustering;
and acquiring, according to the skin color labels, the scarce race samples with small distribution in the own face data set.
9. A face recognition system based on face component extraction and skin tone editing, the system comprising: the device comprises a segmentation extraction module, a clustering module, a migration module and a training module;
the segmentation extraction module is used for performing face part segmentation extraction through a face part segmentation model based on an own face data set to obtain a face part and a corresponding skin color region position;
the clustering module is used for clustering the self-owned face data set according to skin colors and then acquiring scarce race samples with small distribution in the self-owned face data set;
the migration module is used for migrating the skin color of the human face of the scarce race to the facial component according to the skin color region position so as to generate a large number of scarce race samples, and the migration module comprises: calculating a skin color adjusting proportion of the skin color of the human face of the scarce race under the color space index, and adjusting the skin color of the human face of the face part according to the skin color adjusting proportion and the position of the skin color area to generate a large number of scarce race samples, wherein the generated scarce race samples are continuously stored in the own human face data set;
the training module is used for training a face recognition model based on the own face data set and carrying out face recognition through the face recognition model.
10. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out a face recognition method based on face component extraction and skin color editing as claimed in any one of claims 1 to 8.

Publications (2)

Publication Number Publication Date
CN113313093A CN113313093A (en) 2021-08-27
CN113313093B (en) 2021-11-05


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106485222A (en) * 2016-10-10 2017-03-08 上海电机学院 A kind of method for detecting human face being layered based on the colour of skin
CN110503078A (en) * 2019-08-29 2019-11-26 的卢技术有限公司 A kind of remote face identification method and system based on deep learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101630363B (en) * 2009-07-13 2011-11-23 中国船舶重工集团公司第七〇九研究所 Rapid detection method of face in color image under complex background
CN103605964A (en) * 2013-11-25 2014-02-26 上海骏聿数码科技有限公司 Face detection method and system based on image on-line learning
CN109299701B (en) * 2018-10-15 2021-12-14 南京信息工程大学 Human face age estimation method based on GAN expansion multi-human species characteristic collaborative selection
CN112825122A (en) * 2019-11-20 2021-05-21 北京眼神智能科技有限公司 Ethnicity judgment method, ethnicity judgment device, ethnicity judgment medium and ethnicity judgment equipment based on two-dimensional face image




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant