CN112101121B - Face sensitive identification method and device, storage medium and computer equipment - Google Patents


Info

Publication number
CN112101121B
CN112101121B (application CN202010841596.3A)
Authority
CN
China
Prior art keywords
face
sensitive
module
feature extraction
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010841596.3A
Other languages
Chinese (zh)
Other versions
CN112101121A (en)
Inventor
陈仿雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shuliantianxia Intelligent Technology Co Ltd filed Critical Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority to CN202010841596.3A priority Critical patent/CN112101121B/en
Publication of CN112101121A publication Critical patent/CN112101121A/en
Application granted granted Critical
Publication of CN112101121B publication Critical patent/CN112101121B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/25: Fusion techniques
    • G06F18/253: Fusion techniques of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00: Commerce
    • G06Q30/06: Buying, selling or leasing transactions
    • G06Q30/0601: Electronic shopping [e-shopping]
    • G06Q30/0631: Item recommendations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Finance (AREA)
  • General Health & Medical Sciences (AREA)
  • Accounting & Taxation (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention disclose a face sensitive identification method and apparatus, a storage medium, and computer equipment. The method comprises: acquiring a face image to be recognized of a user and first face sensitive characteristic information of the user, the first face sensitive characteristic information including the collected sensitive type subjectively perceived by the user; inputting the face image to be recognized into a face sensitive recognition model and obtaining second face sensitive characteristic information of the user output by the model, the second face sensitive characteristic information including the detected objective sensitive type of the user; and determining the face sensitivity of the user according to the first and second face sensitive characteristic information. Because the face sensitivity is determined from both the first and second sensitive characteristic information, the user's subjectively perceived sensitive type and the objectively detected sensitive type are combined, which improves the accuracy of face sensitivity recognition of the user.

Description

Face sensitive identification method and device, storage medium and computer equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a face sensitivity recognition method and apparatus, a storage medium, and a computer device.
Background
With the rapid development of mobile communication technology and the improvement of living standards, intelligent terminals are widely used in daily work and life. Many applications can be installed on these terminals, and people increasingly rely on them to solve everyday problems. Applications now exist that recommend skin care products to users: they help users understand their skin condition and suggest products suited to their skin. Facial sensitivity is an important indicator of a user's skin condition, yet current approaches identify a user's facial sensitivity with low accuracy.
Disclosure of Invention
The invention mainly aims to provide a face sensitive identification method and apparatus, a storage medium, and computer equipment that can effectively improve the accuracy of face sensitivity recognition.
To achieve the above object, a first aspect of the present invention provides a face-sensitive recognition method, the method including:
Acquiring a face image to be recognized of a user, and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information includes the collected sensitive type subjectively perceived by the user;
inputting the face image to be recognized into a face sensitive recognition model to obtain second face sensitive characteristic information of the user output by the face sensitive recognition model, wherein the second face sensitive characteristic information comprises the detected objective sensitive type of the user;
and determining the face sensitivity of the user according to the first face sensitive characteristic information and the second face sensitive characteristic information.
To achieve the above object, a second aspect of the present invention provides a face-sensitive identification apparatus, the apparatus comprising:
The acquisition module is used for acquiring a face image to be recognized of a user and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information includes the collected sensitive type subjectively perceived by the user;
The model identification module is used for inputting the face image to be identified into a face sensitive identification model to obtain second face sensitive characteristic information of the user output by the face sensitive identification model, wherein the second face sensitive characteristic information comprises the detected objective sensitive type of the user;
and the determining module is used for determining the face sensitivity of the user according to the first face sensitive characteristic information and the second face sensitive characteristic information.
To achieve the above object, a third aspect of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the steps in the face-sensitive recognition method according to the first aspect.
To achieve the above object, a fourth aspect of the present invention provides a computer device, comprising a memory and a processor, the memory storing a computer program, which when executed by the processor, causes the processor to perform the steps of the face-sensitive identification method according to the first aspect.
The embodiment of the invention has the following beneficial effects:
The invention provides a face sensitivity recognition method comprising the following steps: acquiring a face image to be recognized of a user and first face sensitive characteristic information of the user, the first face sensitive characteristic information including the collected sensitive type subjectively perceived by the user; inputting the face image to be recognized into a face sensitive recognition model and obtaining second face sensitive characteristic information of the user output by the model, the second face sensitive characteristic information including the detected objective sensitive type of the user; and determining the face sensitivity of the user according to the first and second face sensitive characteristic information. Because the face sensitivity is determined from both the first and second sensitive characteristic information, the user's subjectively perceived sensitive type and the objectively detected sensitive type are combined, which improves the accuracy of face sensitivity recognition of the user.
Drawings
To more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are clearly only some embodiments of the invention; a person skilled in the art could obtain other drawings from them without inventive effort.
Wherein:
FIG. 1 is a flow chart of a face-sensitive recognition method according to an embodiment of the invention;
FIG. 2 is a schematic flow chart of a face-sensitive recognition method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a face-sensitive recognition model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a feature extraction module according to an embodiment of the invention;
FIG. 5 is a schematic diagram of a feature extraction sub-module according to an embodiment of the invention;
FIG. 6 is a schematic structural diagram of a face-sensitive recognition model according to an embodiment of the present invention;
FIG. 7 is a block diagram of a face-sensitive recognition device according to an embodiment of the present invention;
FIG. 8 is a block diagram showing the structure of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are clearly only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
Referring to fig. 1, a flow chart of a face sensitivity recognition method according to an embodiment of the invention is shown, the method includes:
Step 101, acquiring a face image to be recognized of a user, and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information includes the collected sensitive type subjectively perceived by the user;
In the embodiment of the invention, the face sensitive recognition method is implemented by a face sensitive recognition apparatus. The apparatus is a program module stored in a computer-readable storage medium of a computer device; a processor in the computer device reads and runs the apparatus from the storage medium to implement the face sensitive recognition method.
When face sensitive recognition needs to be performed for a user, a face image to be recognized of the user can be obtained. The face image to be recognized is an image of the user's face taken from the front, in which the proportion of the image area occupied by the face is larger than a preset threshold; the preset threshold may be, for example, 70%.
In addition, first face sensitive characteristic information of the user can be obtained; it includes the collected sensitive type subjectively perceived by the user. Sensitive types with different characteristics are preset, specifically an erythema sensitive type corresponding to facial erythema, a scale sensitive type corresponding to facial scaling, and an acne sensitive type corresponding to facial acne. A facial skin information table may also be preset. The table is mainly used to collect the user's subjective skin information, including erythema, scaling and acne, and may further include burning, stinging, itching, tightness and the like. Before face sensitive recognition is performed, the user fills in the table, and the first face sensitive characteristic information is determined from the table's contents. The first face sensitive characteristic information may be empty, or may include at least one of the erythema sensitive type, the scale sensitive type and the acne sensitive type.
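As a minimal sketch (not taken from the patent text), deriving the first face sensitive characteristic information from the questionnaire can be expressed as filtering the user's self-reported symptoms against the preset sensitive types; the type names and questionnaire fields below are illustrative assumptions.

```python
# Sketch: derive the "first face sensitive characteristic information" from a
# self-reported skin questionnaire. Type names and fields are assumptions.

PRESET_SENSITIVE_TYPES = {"erythema", "scale", "acne"}

def first_face_sensitive_info(questionnaire: dict) -> set:
    """Keep only the self-reported symptoms that map to a preset sensitive type.

    `questionnaire` maps a symptom name to a bool (ticked or not). Symptoms
    such as burning or stinging may also be collected, but only the preset
    types contribute to the first characteristic information here.
    """
    reported = {symptom for symptom, present in questionnaire.items() if present}
    return reported & PRESET_SENSITIVE_TYPES

info = first_face_sensitive_info(
    {"erythema": True, "scale": False, "acne": True, "stinging": True}
)
print(sorted(info))  # ['acne', 'erythema']
```

The result may be the empty set, matching the statement that the first face sensitive characteristic information can be empty.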
Step 102, inputting the face image to be recognized into a face sensitive recognition model to obtain second face sensitive characteristic information of the user output by the face sensitive recognition model, wherein the second face sensitive characteristic information includes the detected objective sensitive type of the user;
In the embodiment of the invention, a face sensitive recognition model is preset to recognize the user's face image to be recognized and determine the second face sensitive characteristic information of the user. The model is obtained by training on a face sensitive sample data set comprising a plurality of face sensitive sample images, each annotated with a sensitive type; to improve the accuracy of the trained model, the position region corresponding to the sensitive type is also annotated on each sample image. The face sensitive sample data set can be used to train a face sensitive recognition model able to recognize at least the erythema sensitive type, the scale sensitive type and the acne sensitive type. It can be understood that, because the model is trained on this data set, the second face sensitive characteristic information obtained by recognizing the face image to be recognized with it is objective information, specifically the detected objective sensitive type of the user.
The terms "first" and "second" merely distinguish different face sensitive characteristic information: the first face sensitive characteristic information represents the sensitive type subjectively perceived by the user, and the second face sensitive characteristic information represents the objectively detected sensitive type of the user's facial skin. No other limitation is implied.
Step 103, determining the face sensitivity of the user according to the first face sensitive characteristic information and the second face sensitive characteristic information.
In the embodiment of the invention, a face image to be recognized of the user is acquired, and first face sensitive characteristic information of the user is acquired, the first face sensitive characteristic information including the collected sensitive type subjectively perceived by the user. The face image to be recognized is input into a face sensitive recognition model to obtain second face sensitive characteristic information of the user output by the model, the second face sensitive characteristic information including the detected objective sensitive type of the user. The face sensitivity of the user is then determined according to the first and second face sensitive characteristic information. Because both pieces of information are used, the subjectively perceived sensitive type and the objectively detected sensitive type are combined, which improves the accuracy of face sensitivity recognition of the user.
To better understand the technical solution in the embodiment of the present invention, please refer to fig. 2, which is another flow chart of a face sensitive recognition method in the embodiment of the present invention. The method is based on the embodiment shown in fig. 1 and includes:
Step 201, acquiring a face image to be recognized of a user, and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information includes the collected sensitive type subjectively perceived by the user;
In the embodiment of the present invention, step 201 is similar to step 101 in the embodiment shown in fig. 1; refer to the details of step 101 there, which are not repeated here.
Step 202, inputting the face image to be recognized into a face sensitive recognition model to obtain second face sensitive characteristic information of the user output by the face sensitive recognition model, wherein the second face sensitive characteristic information comprises the detected objective sensitive type of the user;
In the embodiment of the invention, the preset face sensitive recognition model may be an improved YOLOv3 model, obtained by replacing the feature extraction network Darknet53 of the original YOLOv3 model with a structure built from inverted residual blocks (IRB).
Referring to fig. 3, which is a schematic diagram of a possible structure of the face sensitive recognition model in an embodiment of the present invention: the model includes a convolutional layer module and N feature extraction modules. The N feature extraction modules are cascaded in order from the 1st feature extraction module to the Nth feature extraction module, and the convolutional layer module is connected to the 1st feature extraction module. A feature extraction module may also be called an IRB module; in the embodiment of the invention, the N IRB modules (feature extraction modules) replace the feature extraction network Darknet53 in the YOLOv3 model, yielding the improved YOLOv3 model.
Each of the N feature extraction modules includes a plurality of sequentially cascaded feature extraction sub-modules. Specifically, each of the 1st to (N-1)th feature extraction modules includes a plurality of sequentially cascaded feature extraction sub-modules followed by a downsampling module connected to the last sub-module; the Nth feature extraction module includes a plurality of feature extraction sub-modules, and the output of its last sub-module forms part of the output of the face sensitive recognition model. In addition, the first feature extraction sub-module in the Mth feature extraction module is connected to the downsampling module in the (M-1)th feature extraction module, where M ranges over [2, N] and N is greater than 3.
Specifically, please refer to fig. 4, which is a schematic structural diagram of a feature extraction module in the embodiment of the present invention. Fig. 4 depicts the 1st to (N-1)th feature extraction modules; the Nth feature extraction module differs from them only in that it lacks the final downsampling module. As shown in fig. 4, the feature extraction module includes a plurality of feature extraction sub-modules, with the last sub-module connected to the downsampling module.
Further, for each feature extraction sub-module, the size of the feature image input to the sub-module matches the size of the feature image it outputs; for example, if the input feature image is 26×26, the output feature image is also 26×26. The feature extraction sub-module includes at least three sequentially cascaded convolution layers whose channel counts, along the direction of feature propagation, first increase and then decrease. Expanding the channel count enlarges the feature image so that more features are extracted (feature image expansion processing), and the subsequent feature image compression processing strengthens the fusion of the features. For a better understanding of the feature extraction sub-module, please refer to fig. 5, which is a block diagram of one possible implementation, described using convolution layers of 1×1×32, 3×3×64 and 1×1×8 as an example, where 1×1 and 3×3 are the convolution kernel sizes and 32, 64 and 8 are the channel counts. It can be understood that, in practical application, the number of convolution layers and the channel count of each layer in the feature extraction sub-module may be set as needed; it is only necessary that the input and output feature images of the sub-module have the same size and that the channel counts first increase and then decrease, which effectively strengthens feature fusion.
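The size-preserving property of the example 1×1×32, 3×3×64, 1×1×8 stack can be checked with the standard convolution output-size formula. Stride-1 operation and padding 1 for the 3×3 layer are assumptions here, since the figure only fixes kernel sizes and channel counts.

```python
# Spatial-size bookkeeping for the example sub-module of fig. 5.
# Stride and padding choices are assumptions; the patent fixes only the
# kernel sizes (1x1, 3x3, 1x1) and channel counts (32, 64, 8).

def conv_out(size: int, kernel: int, stride: int = 1, pad: int = 0) -> int:
    """Output spatial size of a convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

size = 26                       # example input feature-image size from the text
size = conv_out(size, 1)        # 1x1 conv: channel count expands to 32
size = conv_out(size, 3, 1, 1)  # 3x3 conv, padding 1: channels expand to 64
size = conv_out(size, 1)        # 1x1 conv: channels compress to 8
print(size)  # 26: output size matches the input size, as required
```

The same formula with stride 2 describes the downsampling modules that halve the feature image between modules.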
In the embodiment of the invention, the user's face image to be recognized is input into the face sensitive recognition model, specifically into its convolutional layer module, which performs convolution processing on the image to obtain an initial feature image.
Further, taking the qth feature extraction sub-module in the ith feature extraction module as an example: when the qth sub-module receives an input feature image, it sequentially performs feature image expansion processing and feature image compression processing on that image and outputs its output feature image. Here the ith feature extraction module is any one of the N feature extraction modules, the qth sub-module is any sub-module of the ith module, the input feature image of the 1st sub-module in the 1st module is the initial feature image, and q is a positive integer.
Specifically, after the initial feature image is obtained, it is input to the 1st feature extraction sub-module in the 1st feature extraction module, which sequentially performs the feature image expansion and compression processing and produces its output feature image. That output serves as the input feature image of the 2nd sub-module in the 1st module, and each subsequent sub-module in the 1st module proceeds in the same way until the output feature image of the last sub-module is obtained. The output of the last sub-module in the 1st module is input to the downsampling module of the 1st module, and the downsampling module's output feature image serves as the input of the 1st sub-module in the 2nd feature extraction module. The recognition process of the face sensitive recognition model continues through the remaining modules in the same manner.
In the embodiment of the invention, the second face sensitive characteristic information of the face image to be recognized is determined from three feature images: a first feature image output by the downsampling module in the (N-2)th feature extraction module, a second feature image output by the downsampling module in the (N-1)th feature extraction module, and a third feature image output by the last feature extraction sub-module in the Nth feature extraction module.
Further, the feature image obtained by fusing the first feature image output by the (N-2)th feature extraction module with a downsampled copy of that first feature image serves as the input feature image of the (N-1)th feature extraction module. In this way the model size can be further reduced while the fusion of features at different scales is increased, strengthening feature fusion.
Similarly, the feature image obtained by fusing the first feature image output by the (N-2)th feature extraction module after it is downsampled twice, the second feature image output by the (N-1)th feature extraction module, and a downsampled copy of that second feature image serves as the input feature image of the Nth feature extraction module, further reducing the model size while increasing the fusion of features at different scales.
To better understand the technical solution in the embodiment of the present invention, a specific face sensitive recognition model is described below. Referring to fig. 6, which is a schematic structural diagram of the face sensitive recognition model in the embodiment of the present invention: fig. 6 takes N = 5 as an example, i.e. the model includes five feature extraction modules, numbered 1 to 5, with the convolutional layer module connected to feature extraction module 1 and modules 1 to 5 cascaded in order.
The size of the face image to be recognized is set to 416×416. Because image pixel values range from 0 to 255 while the data handled by the face sensitive recognition model is of float type with values in [0, 1], a pixel value greater than 1 would be displayed as white and could not express valid image information. The pixel values of the face image to be recognized are therefore normalized from the range 0-255 to the range 0-1 so that the image information is expressed correctly.
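The normalization step above is a simple rescaling; the sketch below operates on a flat list of pixel values to stay dependency-free, whereas a real implementation would process the full 416×416 image array.

```python
# Normalization described above: map 8-bit pixel values in 0-255 to float
# values in [0, 1] before the image enters the face sensitive recognition
# model. A flat list stands in for the 416x416 image array.

def normalize_pixels(pixels):
    """Scale 0-255 integer pixel values to floats in [0, 1]."""
    return [p / 255.0 for p in pixels]

row = [0, 64, 128, 255]
print(normalize_pixels(row))  # values from 0.0 up to 1.0
```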
The face image input to the convolutional layer module is 416×416 and its output feature image is 208×208. The feature image input to feature extraction module 1 is 208×208 and its output is 104×104; each subsequent feature extraction module halves the feature image size in the same way, so the feature image output by feature extraction module 5 is 13×13. From this, a first feature image of size 26×26, a second feature image of size 13×13 and a third feature image of size 13×13 are obtained.
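The size progression through the fig. 6 network can be traced directly: the convolutional layer module and each of feature extraction modules 1 to 4 halve the spatial size, while module 5, which has no downsampling module, keeps it unchanged.

```python
# Feature-image sizes through the fig. 6 network (N = 5): the convolution
# layer module halves 416 to 208, modules 1-4 each end in a downsampling
# module that halves the size again, and module 5 keeps the size unchanged.

def trace_sizes(input_size: int = 416, n_modules: int = 5) -> list:
    sizes = [input_size]
    size = input_size // 2            # convolution layer module
    sizes.append(size)
    for _ in range(n_modules - 1):    # modules 1 to N-1 each downsample
        size //= 2
        sizes.append(size)
    sizes.append(size)                # module N: no downsampling
    return sizes

print(trace_sizes())  # [416, 208, 104, 52, 26, 13, 13]
```

The 26 in the trace is the first feature image (module 3's downsampling output) and the final two 13s correspond to the second and third feature images.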
After the first, second and third feature images are obtained, mapping processing is performed on them: the three feature images are mapped into the face image to be recognized, and the coordinate values of the mapped pixel points in the face image together with the sensitive types corresponding to those points are determined. In this way, the second face sensitive characteristic information of the face image to be recognized is determined.
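The patent only states that the feature images are mapped back into the face image; a common-practice sketch of such a back-projection, mapping a feature-map cell to the centre of its receptive cell in the 416×416 input, is shown below. The centre-of-cell convention is an assumption, not something the text specifies.

```python
# Sketch: map a feature-map cell (row, col) back to pixel coordinates in the
# 416x416 input image. The centre-of-cell back-projection is an assumption.

def feature_cell_to_pixel(row: int, col: int, feature_size: int,
                          input_size: int = 416) -> tuple:
    stride = input_size // feature_size   # 32 for 13x13, 16 for 26x26
    return (col * stride + stride // 2, row * stride + stride // 2)

print(feature_cell_to_pixel(0, 0, 13))  # (16, 16)
print(feature_cell_to_pixel(0, 0, 26))  # (8, 8)
```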
Step 203, determining a first number of sensitive types contained in the first face sensitive feature information, and determining a second number of sensitive types contained in the second face sensitive feature information;
Step 204, determining the face sensitivity of the user according to the first number and the second number.
In the embodiment of the invention, a first number of sensitive types contained in the first face sensitive characteristic information is determined, and a second number of sensitive types contained in the second face sensitive characteristic information is determined, the sensitive types including erythema, scale, acne and the like. For example, if the first face sensitive characteristic information contains the erythema sensitive type, the first number is 1; if the second face sensitive characteristic information contains the erythema sensitive type and the acne sensitive type, the second number is 2. It can be understood that the number of sensitive types contained in the first or second face sensitive characteristic information may also be 0.
In the embodiment of the invention, the face sensitivity of the user can be determined according to the first number and the second number.
Specifically, when the first number is smaller than a preset first threshold value and the second number is smaller than a preset second threshold value, determining that the face sensitivity of the user is slightly sensitive;
When the first number is greater than or equal to a preset first threshold value and smaller than a preset third threshold value, and the second number is smaller than a preset second threshold value; or when the first number is smaller than a preset first threshold value, the second number is larger than or equal to a preset second threshold value and smaller than a preset fourth threshold value, determining that the face sensitivity of the user is moderately sensitive, wherein the third threshold value is larger than the first threshold value, and the fourth threshold value is larger than the second threshold value;
And when the first number is greater than or equal to a preset third threshold value and the second number is greater than or equal to a preset second threshold value, or when the second number is greater than or equal to a preset fourth threshold value, determining that the face sensitivity is heavy sensitivity.
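The three rules above can be sketched as a single function. The threshold values t1..t4 below are placeholders, not values from the patent; the text only requires that the third threshold exceed the first and the fourth exceed the second:

```python
def face_sensitivity(first_number, second_number, t1=1, t2=1, t3=3, t4=3):
    """Grade face sensitivity from the two counts; thresholds are assumed."""
    if first_number < t1 and second_number < t2:
        return "slightly sensitive"
    if (t1 <= first_number < t3 and second_number < t2) or \
       (first_number < t1 and t2 <= second_number < t4):
        return "moderately sensitive"
    if (first_number >= t3 and second_number >= t2) or second_number >= t4:
        return "heavily sensitive"
    # combinations the three rules leave open; the text notes the rules
    # may be set according to specific needs
    return "moderately sensitive"

face_sensitivity(0, 0)   # slightly sensitive
face_sensitivity(1, 0)   # moderately sensitive
face_sensitivity(3, 1)   # heavily sensitive
```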
It should be understood that the foregoing is a feasible manner of determining the face sensitivity based on the first number and the second number, and in practical application, the rule for determining the face sensitivity may be set according to specific needs, which is not limited herein.
In the embodiment of the invention, using the improved yolov model can improve the accuracy with which the model determines the second face sensitive feature information, and determining the face sensitivity by combining the objective second face sensitive feature information with the subjective first face sensitive feature information can improve the accuracy of face sensitivity identification.
Referring to fig. 7, a schematic structural diagram of a face-sensitive recognition device according to an embodiment of the invention includes:
The acquiring module 701 is configured to acquire a face image to be identified of a user, and acquire first face sensitive feature information of the user, where the first face sensitive feature information includes a collected sensitive type of subjective cognition of the user;
The model recognition module 702 is configured to input the face image to be recognized to a face sensitive recognition model, and obtain second face sensitive feature information of the user output by the face sensitive recognition model, where the second face sensitive feature information includes a detected objective sensitivity type of the user;
A determining module 703, configured to determine a face sensitivity of the user according to the first face sensitive feature information and the second face sensitive feature information.
It can be understood that, in the embodiment of the present invention, the content related to the acquiring module 701, the model recognition module 702 and the determining module 703 is similar to that described in the foregoing embodiments of the face sensitive identification method; for details, reference may be made to those embodiments, and the description is not repeated here.
In the embodiment of the invention, a face image to be identified of a user is acquired, and first face sensitive feature information of the user is acquired, the first face sensitive feature information including the collected sensitive types of the user's subjective cognition. The face image to be identified is input into a face sensitive recognition model to obtain second face sensitive feature information of the user output by the model, the second face sensitive feature information including the detected objective sensitive types of the user. The face sensitivity of the user is then determined according to the first and second face sensitive feature information. Because the face sensitivity is determined using both pieces of information, the user's subjectively perceived sensitive types can be combined with the objectively detected sensitive types, improving the accuracy of face sensitivity recognition.
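Put together, the three modules of Fig. 7 can be sketched as a small class. This is a hypothetical illustration: the model is stood in for by a stub callable, and the names, return types, and thresholds are assumptions, not from the patent.

```python
class FaceSensitiveDevice:
    def __init__(self, model):
        self.model = model                  # face sensitive recognition model

    def acquire(self, image, reported_types):
        """Acquiring module 701: face image plus self-reported sensitive types."""
        return image, set(reported_types)

    def recognize(self, image):
        """Model recognition module 702: objective sensitive types from the model."""
        return set(self.model(image))

    def determine(self, first_info, second_info):
        """Determining module 703: grade from the two counts (thresholds assumed)."""
        first, second = len(first_info), len(second_info)
        if first < 1 and second < 1:
            return "slightly sensitive"
        if second >= 3 or (first >= 3 and second >= 1):
            return "heavily sensitive"
        return "moderately sensitive"

# Usage with a stub model that always detects erythema and acne:
device = FaceSensitiveDevice(lambda image: ["erythema", "acne"])
image, first_info = device.acquire("face.jpg", ["erythema"])
second_info = device.recognize(image)
grade = device.determine(first_info, second_info)
```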
FIG. 8 illustrates an internal block diagram of a computer device in one embodiment. The computer device may specifically be a terminal or a server. As shown in fig. 8, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system, and may also store a computer program that, when executed by the processor, causes the processor to implement the face sensitive identification method. The internal memory may also store a computer program that, when executed by the processor, causes the processor to perform the face sensitive identification method. It will be appreciated by those skilled in the art that the structure shown in FIG. 8 is merely a block diagram of some of the structures associated with the present inventive arrangements and does not limit the computer device to which the present inventive arrangements may be applied; a particular computer device may include more or fewer components than shown, combine some of the components, or have a different arrangement of components.
In one embodiment, a computer device is presented comprising a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of:
Acquiring a face image to be recognized of a user, and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information comprises acquired sensitive types of subjective cognition of the user;
inputting the face image to be recognized into a face sensitive recognition model to obtain second face sensitive characteristic information of the user output by the face sensitive recognition model, wherein the second face sensitive characteristic information comprises the detected objective sensitive type of the user;
and determining the face sensitivity of the user according to the first face sensitive characteristic information and the second face sensitive characteristic information.
In one embodiment, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of:
Acquiring a face image to be recognized of a user, and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information comprises acquired sensitive types of subjective cognition of the user;
inputting the face image to be recognized into a face sensitive recognition model to obtain second face sensitive characteristic information of the user output by the face sensitive recognition model, wherein the second face sensitive characteristic information comprises the detected objective sensitive type of the user;
and determining the face sensitivity of the user according to the first face sensitive characteristic information and the second face sensitive characteristic information.
Those skilled in the art will appreciate that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing relevant hardware, where the program may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between a combination of technical features, it should be considered to fall within the scope of this description.
The foregoing examples illustrate only a few embodiments of the application, and their description is relatively specific and detailed, but this should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the spirit of the application, all of which fall within the scope of protection of the application. Accordingly, the scope of protection of the present application shall be determined by the appended claims.

Claims (8)

1. A method for face-sensitive recognition, the method comprising:
Acquiring a face image to be recognized of a user, and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information comprises acquired sensitive types of subjective cognition of the user;
inputting the face image to be recognized into a face sensitive recognition model to obtain second face sensitive characteristic information of the user output by the face sensitive recognition model, wherein the second face sensitive characteristic information comprises the detected objective sensitive type of the user;
determining the face sensitivity of the user according to the first face sensitive characteristic information and the second face sensitive characteristic information;
the face sensitive recognition model comprises a convolution layer module and N feature extraction modules, the feature extraction modules being sequentially cascaded in order from the 1st to the Nth, the convolution layer module being connected with the 1st feature extraction module; the 1st to (N-1)th feature extraction modules each comprise a plurality of sequentially cascaded feature extraction sub-modules and a downsampling module, and the Nth feature extraction module comprises a plurality of sequentially cascaded feature extraction sub-modules; the Mth feature extraction module is connected with the downsampling module in the (M-1)th feature extraction module, M taking values in [2, N], and N being greater than 3;
The step of inputting the face image to be recognized into a face sensitive recognition model, and the step of obtaining the second face sensitive characteristic information of the user output by the face sensitive recognition model comprises the following steps:
inputting the face image to be recognized into the convolution layer module of the face sensitive recognition model, and carrying out convolution processing on the face image to be recognized through the convolution layer module to obtain an initial characteristic image;
When a qth feature extraction sub-module in the ith feature extraction module has an input feature image, sequentially performing feature image addition processing and feature image compression processing on the input feature image through the qth feature extraction sub-module, and outputting an output feature image of the qth feature extraction sub-module; wherein the ith feature extraction module is any one of the N feature extraction modules, the qth feature extraction sub-module is any one of the feature extraction sub-modules in the ith feature extraction module, the input feature image of the 1st feature extraction sub-module of the 1st feature extraction module is the initial feature image, and the size of the input feature image of the qth feature extraction sub-module is the same as the size of its output feature image;
And determining second face sensitive feature information of the face image to be recognized according to the first feature image output by the downsampling module in the N-2 th feature extraction module, the second feature image output by the downsampling module in the N-1 th feature extraction module and the third feature image output by the last feature extraction submodule in the N-th feature extraction module of the face sensitive recognition model.
2. The method of claim 1, wherein the determining the face sensitivity of the user based on the first face sensitive feature information and the second face sensitive feature information comprises:
determining a first quantity of sensitive types contained in the first face sensitive feature information, and determining a second quantity of sensitive types contained in the second face sensitive feature information;
and determining the face sensitivity of the user according to the first quantity and the second quantity.
3. The method of claim 2, wherein the determining the face sensitivity of the user from the first number and the second number comprises:
When the first number is smaller than a preset first threshold value and the second number is smaller than a preset second threshold value, determining that the face sensitivity of the user is slightly sensitive;
When the first number is larger than or equal to a preset first threshold value and smaller than a preset third threshold value, and the second number is smaller than a preset second threshold value; or when the first number is smaller than a preset first threshold, the second number is larger than or equal to a preset second threshold and smaller than a preset fourth threshold, determining that the face sensitivity of the user is moderately sensitive, wherein the third threshold is larger than the first threshold, and the fourth threshold is larger than the second threshold;
And when the first number is greater than or equal to the preset third threshold value and the second number is greater than or equal to the preset second threshold value, or when the second number is greater than or equal to the preset fourth threshold value, determining that the face sensitivity is heavy sensitivity.
4. The method according to claim 1, wherein a feature image obtained by fusing the first feature image output by the (N-2)th feature extraction module with a downsampled image of the first feature image is used as the input feature image of the (N-1)th feature extraction module.
5. The method according to claim 1, wherein a feature image obtained by fusing a twice-downsampled image of the first feature image output by the (N-2)th feature extraction module, the second feature image output by the (N-1)th feature extraction module, and a downsampled image of the second feature image is used as the input feature image of the Nth feature extraction module.
6. A face-sensitive recognition apparatus, the apparatus comprising:
The acquisition module is used for acquiring a face image to be recognized of a user and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information comprises acquired sensitive types of subjective cognition of the user;
The model identification module is used for inputting the face image to be identified into a face sensitive identification model to obtain second face sensitive characteristic information of the user output by the face sensitive identification model, wherein the second face sensitive characteristic information comprises the detected objective sensitive type of the user;
the determining module is used for determining the face sensitivity of the user according to the first face sensitive characteristic information and the second face sensitive characteristic information;
the face sensitive recognition model comprises a convolution layer module and N feature extraction modules, the feature extraction modules being sequentially cascaded in order from the 1st to the Nth, the convolution layer module being connected with the 1st feature extraction module; the 1st to (N-1)th feature extraction modules each comprise a plurality of sequentially cascaded feature extraction sub-modules and a downsampling module, and the Nth feature extraction module comprises a plurality of sequentially cascaded feature extraction sub-modules; the Mth feature extraction module is connected with the downsampling module in the (M-1)th feature extraction module, M taking values in [2, N], and N being greater than 3;
The model identification module is specifically configured to perform the following:
inputting the face image to be recognized into the convolution layer module of the face sensitive recognition model, and carrying out convolution processing on the face image to be recognized through the convolution layer module to obtain an initial characteristic image;
When a qth feature extraction sub-module in the ith feature extraction module has an input feature image, sequentially performing feature image addition processing and feature image compression processing on the input feature image through the qth feature extraction sub-module, and outputting an output feature image of the qth feature extraction sub-module; wherein the ith feature extraction module is any one of the N feature extraction modules, the qth feature extraction sub-module is any one of the feature extraction sub-modules in the ith feature extraction module, the input feature image of the 1st feature extraction sub-module of the 1st feature extraction module is the initial feature image, and the size of the input feature image of the qth feature extraction sub-module is the same as the size of its output feature image;
And determining second face sensitive feature information of the face image to be recognized according to the first feature image output by the downsampling module in the N-2 th feature extraction module, the second feature image output by the downsampling module in the N-1 th feature extraction module and the third feature image output by the last feature extraction submodule in the N-th feature extraction module of the face sensitive recognition model.
7. A computer readable storage medium storing a computer program, which when executed by a processor causes the processor to perform the steps of the method according to any one of claims 1 to 5.
8. A computer device comprising a memory and a processor, wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 5.
CN202010841596.3A 2020-08-19 2020-08-19 Face sensitive identification method and device, storage medium and computer equipment Active CN112101121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010841596.3A CN112101121B (en) 2020-08-19 2020-08-19 Face sensitive identification method and device, storage medium and computer equipment


Publications (2)

Publication Number Publication Date
CN112101121A CN112101121A (en) 2020-12-18
CN112101121B true CN112101121B (en) 2024-04-30

Family

ID=73753932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010841596.3A Active CN112101121B (en) 2020-08-19 2020-08-19 Face sensitive identification method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN112101121B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921128A (en) * 2018-07-19 2018-11-30 厦门美图之家科技有限公司 Cheek sensitivity flesh recognition methods and device
WO2019080580A1 (en) * 2017-10-26 2019-05-02 深圳奥比中光科技有限公司 3d face identity authentication method and apparatus
CN110059546A (en) * 2019-03-08 2019-07-26 深圳神目信息技术有限公司 Vivo identification method, device, terminal and readable medium based on spectrum analysis
CN110674748A (en) * 2019-09-24 2020-01-10 腾讯科技(深圳)有限公司 Image data processing method, image data processing device, computer equipment and readable storage medium
WO2020037898A1 (en) * 2018-08-23 2020-02-27 平安科技(深圳)有限公司 Face feature point detection method and apparatus, computer device, and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019080580A1 (en) * 2017-10-26 2019-05-02 深圳奥比中光科技有限公司 3d face identity authentication method and apparatus
CN108921128A (en) * 2018-07-19 2018-11-30 厦门美图之家科技有限公司 Cheek sensitivity flesh recognition methods and device
WO2020037898A1 (en) * 2018-08-23 2020-02-27 平安科技(深圳)有限公司 Face feature point detection method and apparatus, computer device, and storage medium
CN110059546A (en) * 2019-03-08 2019-07-26 深圳神目信息技术有限公司 Vivo identification method, device, terminal and readable medium based on spectrum analysis
CN110674748A (en) * 2019-09-24 2020-01-10 腾讯科技(深圳)有限公司 Image data processing method, image data processing device, computer equipment and readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on face recognition in infrared and visible-light images based on decision fusion; Zhao Yunfeng; Yin Yixin; Laser & Infrared (06); full text *
Dynamic face tracking based on intelligent vision; Hao Junshou; Ding Yanhui; Modern Electronics Technique (24); full text *
Sensitive image recognition technology based on feature vectors; Peng Qiang; Zhang Xiaofei; Journal of Southwest Jiaotong University (01); full text *

Also Published As

Publication number Publication date
CN112101121A (en) 2020-12-18

Similar Documents

Publication Publication Date Title
CN109543627B (en) Method and device for judging driving behavior category and computer equipment
CN109034078B (en) Training method of age identification model, age identification method and related equipment
CN109063742B (en) Butterfly identification network construction method and device, computer equipment and storage medium
CN110222791B (en) Sample labeling information auditing method and device
CN112330685B (en) Image segmentation model training method, image segmentation device and electronic equipment
US20220165053A1 (en) Image classification method, apparatus and training method, apparatus thereof, device and medium
CN110287836B (en) Image classification method and device, computer equipment and storage medium
CN111968134B (en) Target segmentation method, device, computer readable storage medium and computer equipment
CN111881737B (en) Training method and device of age prediction model, and age prediction method and device
WO2023065503A1 (en) Facial expression classification method and electronic device
CN111028006B (en) Service delivery auxiliary method, service delivery method and related device
CN113160087B (en) Image enhancement method, device, computer equipment and storage medium
EP4047509A1 (en) Facial parsing method and related devices
CN111144285B (en) Fat and thin degree identification method, device, equipment and medium
CN112434556A (en) Pet nose print recognition method and device, computer equipment and storage medium
CN111259256B (en) Content processing method, content processing device, computer readable storage medium and computer equipment
CN112001399A (en) Image scene classification method and device based on local feature saliency
CN111612732B (en) Image quality evaluation method, device, computer equipment and storage medium
CN111652245B (en) Vehicle contour detection method, device, computer equipment and storage medium
CN112101121B (en) Face sensitive identification method and device, storage medium and computer equipment
CN112183303A (en) Transformer equipment image classification method and device, computer equipment and medium
CN112699809B (en) Vaccinia category identification method, device, computer equipment and storage medium
CN111354096A (en) Intelligent attendance checking method and device and electronic equipment
CN114120053A (en) Image processing method, network model training method and device and electronic equipment
CN109740671B (en) Image identification method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant