CN113128309A - Facial expression recognition method, device, equipment and medium - Google Patents


Info

Publication number
CN113128309A
Authority
CN
China
Prior art keywords
face image
features
facial expression
image
expression recognition
Prior art date
Legal status
Pending
Application number
CN202010028209.4A
Other languages
Chinese (zh)
Inventor
赵京霞
王宁宁
曹寒梅
Current Assignee
China Mobile Communications Group Co Ltd
China Mobile Shanghai ICT Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Shanghai ICT Co Ltd
Priority date
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Shanghai ICT Co Ltd
Priority to CN202010028209.4A
Publication of CN113128309A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a facial expression recognition method, apparatus, device, and medium. The method comprises the following steps: acquiring a target face image; extracting global features and local features of the target face image; fusing the global features and the local features to obtain fused features corresponding to the target face image; and recognizing the facial expression in the target face image according to the fused features and a pre-trained facial expression recognition model, the model being trained using fused features corresponding to at least one face image. By recognizing facial expressions with a model trained on such fused features, the method, apparatus, device, and medium can improve the accuracy of facial expression recognition.

Description

Facial expression recognition method, device, equipment and medium
Technical Field
The invention relates to the technical field of image recognition, and in particular to a facial expression recognition method, apparatus, device, and medium.
Background
In real life, facial expression is one of the most powerful, natural, and common signals by which human beings express their emotional states and intentions; it carries rich emotional information and plays an important role in emotional communication between people. With the development of computer vision in recent years, automatic facial expression analysis is increasingly applied in human-computer interaction systems such as social robots, medical care, and driver fatigue monitoring.
At present, the main method for recognizing expressions is as follows: based on a facial action coding system, facial muscle movements are detected, and a mapping between facial muscle movements and emotions is constructed to achieve expression recognition.
However, this method yields low expression recognition accuracy.
Disclosure of Invention
The embodiment of the invention provides a facial expression recognition method, a device, equipment and a medium, which can improve the accuracy of facial expression recognition.
In a first aspect, an embodiment of the present invention provides a facial expression recognition method, including:
acquiring a target face image;
extracting global features and local features of the target face image;
fusing the global features and the local features to obtain fused features corresponding to the target face image;
recognizing the facial expression in the target face image according to the fused features and a pre-trained facial expression recognition model, wherein the facial expression recognition model is trained using fused features corresponding to at least one face image.
In a possible implementation manner of the embodiment of the present invention, before extracting the global feature and the local feature of the target face image, the method for recognizing facial expression provided by the embodiment of the present invention further includes:
and preprocessing the target face image.
In a possible implementation manner of the embodiment of the present invention, before the obtaining of the target face image, the method for recognizing facial expressions provided in the embodiment of the present invention further includes:
and training a facial expression recognition model.
In a possible implementation manner of the embodiment of the present invention, training a facial expression recognition model includes:
acquiring at least one face image;
extracting global features and local features of each face image in at least one face image;
fusing the global features and the local features of each face image to obtain fused features corresponding to each face image;
and training a facial expression recognition model according to the corresponding fusion characteristics of each facial image.
In one possible implementation manner of the embodiment of the present invention, at least one face image includes:
a face image collected by an image acquisition device and a face image from a public face image data set.
In a possible implementation manner of the embodiment of the present invention, the fusing the global features and the local features of each face image to obtain fused features corresponding to each face image includes:
determining the weight corresponding to the global feature and the weight corresponding to the local feature by using a classifier;
and calculating the fusion feature corresponding to each face image according to the global feature of each face image, the local feature of each face image, the weight corresponding to the global feature and the weight corresponding to the local feature.
In a possible implementation manner of the embodiment of the present invention, before extracting the global feature and the local feature of each facial image in at least one facial image, the method for recognizing facial expressions provided in the embodiment of the present invention further includes:
and preprocessing at least one face image.
In a second aspect, an embodiment of the present invention provides a facial expression recognition apparatus, including:
the acquisition module is used for acquiring a target face image;
the extraction module is used for extracting global features and local features of the target face image;
the fusion module is used for fusing the global features and the local features to obtain fusion features corresponding to the target face image;
the recognition module is used for recognizing the facial expression in the target face image according to the fused features and a pre-trained facial expression recognition model, wherein the facial expression recognition model is trained using fused features corresponding to at least one face image.
In a possible implementation manner of the embodiment of the present invention, the apparatus for recognizing a facial expression provided in the embodiment of the present invention further includes:
the first preprocessing module is used for preprocessing the target face image.
In a possible implementation manner of the embodiment of the present invention, the apparatus for recognizing a facial expression provided in the embodiment of the present invention further includes:
and the training module is used for training the facial expression recognition model.
In a possible implementation manner of the embodiment of the present invention, the training module includes:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring at least one face image;
the extraction unit is used for extracting the global features and the local features of each face image in at least one face image;
the fusion unit is used for fusing the global features and the local features of each face image to obtain fused features corresponding to each face image;
and the training unit is used for training the facial expression recognition model according to the fusion characteristics corresponding to each facial image.
In one possible implementation manner of the embodiment of the present invention, at least one face image includes:
a face image collected by an image acquisition device and a face image from a public face image data set.
In a possible implementation manner of the embodiment of the present invention, the fusion unit is specifically configured to:
determining the weight corresponding to the global feature and the weight corresponding to the local feature by using a classifier;
and calculating the fusion feature corresponding to each face image according to the global feature of each face image, the local feature of each face image, the weight corresponding to the global feature and the weight corresponding to the local feature.
In a possible implementation manner of the embodiment of the present invention, the training module further includes:
and the preprocessing unit is used for preprocessing at least one face image.
In a third aspect, an embodiment of the present invention provides a facial expression recognition apparatus, including: a memory, a processor, and a computer program stored on the memory and executable on the processor;
the processor, when executing the computer program, implements the method for facial expression recognition in the first aspect or any one of the possible implementations of the first aspect.
In another aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method for recognizing a facial expression in the first aspect or any one of the possible implementation manners of the first aspect is implemented.
According to the facial expression recognition method, apparatus, device, and medium of the embodiments of the invention, facial expressions are recognized by a facial expression recognition model trained on fused features corresponding to at least one face image, which can improve the accuracy of facial expression recognition.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below; those skilled in the art can derive other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a facial expression recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a process for training a facial expression recognition model according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a facial expression recognition apparatus according to an embodiment of the present invention;
fig. 4 is a block diagram of a hardware architecture of a computing device according to an embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
In order to solve the problem of the prior art, embodiments of the present invention provide a method, an apparatus, a device, and a medium for recognizing a facial expression. First, a method for recognizing a facial expression according to an embodiment of the present invention will be described in detail.
Fig. 1 is a schematic flow chart of a facial expression recognition method according to an embodiment of the present invention. The facial expression recognition method may include:
s101: and acquiring a target face image.
S102: and extracting global features and local features of the target face image.
S103: and fusing the global features and the local features to obtain fused features corresponding to the target face image.
S104: and recognizing the facial expression in the target facial image according to the fusion characteristics and the pre-trained facial expression recognition model.
The facial expression recognition model is trained using fused features corresponding to at least one face image.
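Steps S101 to S104 can be sketched as a pipeline of pluggable stages. The following is a minimal illustration only; the function names, signatures, and toy stand-ins are hypothetical and not taken from the patent:

```python
import numpy as np

def recognize_expression(img, extract_global, extract_local, fuse, classify):
    fg = extract_global(img)   # S102: global features (e.g. from a CNN)
    fl = extract_local(img)    # S102: local features (e.g. Gabor-based)
    x = fuse(fg, fl)           # S103: fused feature vector
    return classify(x)         # S104: expression label from the trained model

# Toy stand-ins so the pipeline runs end to end (purely illustrative):
img = np.ones((8, 8))
label = recognize_expression(
    img,
    extract_global=lambda im: im.mean(axis=0),
    extract_local=lambda im: im.mean(axis=1),
    fuse=lambda fg, fl: 0.5 * fg + 0.5 * fl,
    classify=lambda x: "neutral" if x.mean() > 0 else "unknown",
)
print(label)  # neutral
```

In a real system each stage would be replaced by the feature extractors and trained model described below.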
In a possible implementation manner of the embodiment of the present invention, a Convolutional Neural Network (CNN) may be used to extract global features of the target face image.
In a possible implementation manner of the embodiment of the present invention, a Gabor filter may be used to extract local features of the target face image.
When extracting local features of the target face image with Gabor filters, a real-valued Gabor transform may first be applied to the target face image to obtain a Gabor-transformed image. If features were extracted from it directly, the feature dimension would be too high for subsequent processing. Therefore, the Gabor-transformed image is generally divided into blocks, the energy of each block is computed to obtain an energy matrix, dimension reduction is performed on the energy matrix, and the reduced matrix is used as the local feature vector of the target face image.
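As a rough illustration of the block-energy idea above, the sketch below builds a single real-valued Gabor kernel, filters an image via FFT convolution, and sums the squared response per block. All parameter values (kernel size, σ, λ, γ, block size) are assumptions not fixed by the patent, and a real system would use a bank of Gabor filters at several scales and orientations followed by dimension reduction (e.g. PCA):

```python
import numpy as np

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5):
    # Real part of a Gabor filter (all parameter values are assumptions).
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lambd)

def gabor_block_energy(img, kernel, block=8):
    # Filter via (circular) FFT convolution, then sum squared response per block.
    h, w = img.shape
    pad = np.zeros((h, w))
    pad[:kernel.shape[0], :kernel.shape[1]] = kernel
    resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))
    hb, wb = h // block, w // block
    resp = resp[:hb * block, :wb * block]
    energy = (resp**2).reshape(hb, block, wb, block).sum(axis=(1, 3))
    return energy.ravel()  # flattened energy matrix; PCA would reduce it further

img = np.random.default_rng(0).random((64, 64))
feat = gabor_block_energy(img, gabor_kernel())
print(feat.shape)  # (64,) - an 8x8 grid of block energies
```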
And after the global features and the local features of the target face image are extracted, fusing the global features and the local features to obtain fused features corresponding to the target face image.
In a possible implementation manner of the embodiment of the present invention, the global features and the local features may be combined by weighted summation to obtain the fused features corresponding to the target face image.
Then, the fusion features corresponding to the target face image are input into a facial expression recognition model obtained by training the fusion features corresponding to at least one face image to classify the facial expressions, so as to recognize the facial expressions in the target face image.
According to the facial expression recognition method provided by the embodiment of the invention, facial expressions are recognized by a facial expression recognition model trained on fused features corresponding to at least one face image, so the accuracy of facial expression recognition can be improved.
In a possible implementation manner of the embodiment of the present invention, the facial expression recognition result may also be recorded, displayed, or sent to the terminal, and so on.
In a possible implementation manner of the embodiment of the present invention, before extracting the global features and the local features of the target face image, the target face image may be preprocessed.
The main purposes of image preprocessing are to eliminate irrelevant information in images, recover useful real information, enhance the detectability of relevant information, and simplify data to the maximum extent, thereby improving the reliability of feature extraction, image segmentation, matching and recognition.
Preprocessing in the embodiment of the invention includes but is not limited to: graying, geometric transformation, and image enhancement.
In one possible implementation manner of the embodiment of the present invention, the image may be grayed by using a component method, a maximum value method, an average value method, or a weighted average method.
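The four graying schemes named above might be sketched as follows. The luminance coefficients used for the weighted average are the common BT.601 values, an assumption since the patent does not fix them:

```python
import numpy as np

def to_gray(rgb, method="weighted"):
    # rgb: H x W x 3 array; implements the four graying schemes named above.
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    if method == "component":        # keep a single channel (here: green)
        return g
    if method == "max":              # maximum of the three channels
        return rgb.max(axis=-1)
    if method == "average":          # arithmetic mean of the channels
        return rgb.mean(axis=-1)
    # weighted average with BT.601 luminance coefficients (assumed values)
    return 0.299 * r + 0.587 * g + 0.114 * b

px = np.array([[[100.0, 200.0, 50.0]]])
print(to_gray(px, "weighted"))  # ~153.0 for this pixel
```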
Geometric transformation of an image, also called image space transformation, processes the image through operations such as translation, transposition, mirroring, rotation, and scaling. It is used to correct systematic errors of the image acquisition system and random errors of instrument position (imaging angle, perspective relationship, and even the lens itself).
Image enhancement strengthens the useful information in an image with the aim of improving its visual effect. For a given application, it purposefully emphasizes global or local characteristics of the image, turns an unclear image into a clear one or highlights certain features of interest, enlarges the differences between different object features, and suppresses features of no interest, thereby improving image quality, enriching information content, strengthening image interpretation and recognition, and meeting the requirements of special analyses. Image enhancement algorithms fall into two broad categories: spatial domain methods and frequency domain methods.
Spatial domain methods are direct image enhancement algorithms and are divided into point operations and neighborhood enhancement. Point operations include gray-level correction, gray-level transformation, and histogram modification. Neighborhood enhancement is divided into image smoothing and sharpening. Common smoothing algorithms include mean filtering, median filtering, and spatial-domain filtering; common sharpening algorithms include the gradient operator method, the second-derivative operator method, high-pass filtering, and mask matching.
The frequency domain method is an indirect image enhancement algorithm, and the common frequency domain methods include low-pass filtering and high-pass filtering.
In a possible implementation manner of the embodiment of the present invention, spatial-domain median filtering may be adopted for denoising. Specifically, regions of the image that may contain noise points are determined in advance, and only those regions are denoised. Applying median filtering only to likely noise regions avoids the blurring caused by naively median-filtering the whole image, and improves the accuracy of expression recognition.
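A minimal sketch of this selective (region-restricted) median filtering, assuming the candidate noise regions are supplied as a boolean mask; how those regions are detected is not specified in the patent:

```python
import numpy as np

def selective_median(img, noise_mask, k=3):
    # Apply a k x k median filter only at pixels flagged as likely noise,
    # leaving all other pixels untouched (avoids blurring the whole image).
    out = img.astype(float).copy()
    half = k // 2
    padded = np.pad(img.astype(float), half, mode="edge")
    for y, x in zip(*np.nonzero(noise_mask)):
        out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

noisy = np.zeros((5, 5))
noisy[2, 2] = 255.0                       # a single salt-noise pixel
clean = selective_median(noisy, noisy > 200)
print(clean[2, 2])  # 0.0 - the noise pixel is replaced by its local median
```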
In a possible implementation manner of the embodiment of the present invention, the facial expression recognition model may be trained in advance.
In one possible implementation manner of the embodiment of the invention, at least one face image can be acquired; extracting global features and local features of each face image in at least one face image; fusing the global features and the local features of each face image to obtain fused features corresponding to each face image; and training a facial expression recognition model according to the corresponding fusion characteristics of each facial image.
In a possible implementation manner of the embodiment of the present invention, the at least one face image may include: a face image collected by an image acquisition device and a face image from a public face image data set.
Image acquisition devices include, without limitation: a mobile phone, tablet, computer or other device with an image acquisition unit (such as a camera).
In a possible implementation manner of the embodiment of the present invention, the global features of the at least one face image may be extracted using a CNN, and the local features using Gabor filters.
And after the global features and the local features of each face image are extracted, fusing the global features and the local features of each face image to obtain fused features corresponding to each face image.
In a possible implementation manner of the embodiment of the present invention, the global features and the local features of each face image may be combined by weighted summation to obtain the fused features corresponding to each face image.
In a possible implementation manner of the embodiment of the present invention, a classifier may be used to determine a weight corresponding to the global feature and a weight corresponding to the local feature; and calculating the fusion feature corresponding to each face image according to the global feature of each face image, the local feature of each face image, the weight corresponding to the global feature and the weight corresponding to the local feature.
In one possible implementation manner of the embodiment of the present invention, the classifier may be an Extreme Learning Machine (ELM).
Using the classifier, the classification accuracy of the global features and that of the local features can be obtained; the classification accuracy of the global features is then used as the weight corresponding to the global features, and the classification accuracy of the local features as the weight corresponding to the local features.
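For illustration, a toy Extreme Learning Machine can be fit in closed form with a random fixed hidden layer and a pseudo-inverse. This is a generic ELM sketch, not the patent's specific configuration; the hidden-layer size, activation, and toy data are all assumptions:

```python
import numpy as np

def train_elm(X, Y, n_hidden=50, seed=0):
    # Extreme Learning Machine: random fixed hidden layer, output weights
    # solved in closed form via the Moore-Penrose pseudo-inverse.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.linalg.pinv(H) @ Y
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem (stand-in for global- or local-feature vectors):
rng = np.random.default_rng(1)
X = rng.standard_normal((40, 4))
Y = np.where(X[:, 0] > 0, 1.0, -1.0).reshape(-1, 1)
W, b, beta = train_elm(X, Y)
acc = (np.sign(elm_predict(X, W, b, beta)) == Y).mean()
print(acc)
```

Training accuracies obtained this way (one classifier on global features, one on local features) would then serve as the fusion weights described above.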
Suppose that the fused feature corresponding to face image i is denoted X_i. Then:
X_i = FG_i * W_FG + FL_i * W_FL
where FG_i is the global feature of face image i, FL_i is the local feature of face image i, W_FG is the weight corresponding to the global feature, and W_FL is the weight corresponding to the local feature.
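The fusion X_i = FG_i * W_FG + FL_i * W_FL reduces to an elementwise weighted sum. The sketch below assumes, for illustration, that the two feature vectors share a dimension; the patent does not state how a dimension mismatch would be handled, and the numeric values are toys:

```python
import numpy as np

def fuse_features(fg, fl, w_fg, w_fl):
    # X_i = FG_i * W_FG + FL_i * W_FL, with scalar weights per feature type.
    return w_fg * np.asarray(fg) + w_fl * np.asarray(fl)

fg = np.array([0.2, 0.8, 0.4])  # global features of face image i (toy values)
fl = np.array([0.6, 0.1, 0.5])  # local features of face image i (toy values)
# Weights taken as each feature's classification accuracy, per the text:
x_i = fuse_features(fg, fl, w_fg=0.9, w_fl=0.7)
print(x_i)
```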
In a possible implementation manner of the embodiment of the present invention, before extracting the global feature and the local feature of each of the at least one face image, the at least one face image may be preprocessed.
The preprocessing performed on the at least one face image includes, but is not limited to: graying, geometric transformation, and image enhancement.
In a possible implementation manner of the embodiment of the present invention, the preprocessing performed on at least one face image may further include: and expanding the number of the face images based on at least one face image in a data enhancement mode.
The data enhancement modes of the embodiment of the invention include but are not limited to: random cropping, rotation transformation, translation transformation, scale transformation, Principal Component Analysis (PCA), whitening, and the like.
In one possible implementation manner of the embodiment of the present invention, the number of face images is preferably expanded by adopting a rotation transformation manner. Specifically, each face image is rotated by 90 °, 180 °, and 270 ° around the origin, respectively.
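The rotation-based expansion described above maps directly onto `np.rot90`; this sketch keeps the original image plus its three rotations:

```python
import numpy as np

def augment_rotations(img):
    # The original image plus its 90°, 180°, and 270° rotations.
    return [np.rot90(img, k) for k in range(4)]

img = np.arange(6).reshape(2, 3)
aug = augment_rotations(img)
print([a.shape for a in aug])  # [(2, 3), (3, 2), (2, 3), (3, 2)]
```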
Based on the above, the process of training the facial expression recognition model according to the embodiment of the present invention is shown in fig. 2.
First, at least one face image acquired by at least one image acquisition device is received, and a public face image data set is obtained.
Each received face image and each face image in the data set is then preprocessed.
And extracting global features and local features of each face image.
And fusing the global features and the local features of each face image to obtain fused features corresponding to each face image.
And training a facial expression recognition model based on the corresponding fusion characteristics of each facial image.
Corresponding to the above method embodiment, the embodiment of the invention also provides a facial expression recognition device.
Fig. 3 is a schematic structural diagram of a facial expression recognition apparatus according to an embodiment of the present invention. The facial expression recognition apparatus may include:
an acquisition module 301 for acquiring a target face image;
an extraction module 302, configured to extract global features and local features of a target face image;
the fusion module 303 is configured to fuse the global features and the local features to obtain fusion features corresponding to the target face image;
and the recognition module 304 is configured to recognize a facial expression in the target facial image according to the fusion feature and the pre-trained facial expression recognition model.
The facial expression recognition model is trained using fused features corresponding to at least one face image.
In a possible implementation manner of the embodiment of the present invention, the apparatus for recognizing a facial expression provided in the embodiment of the present invention may further include:
the first preprocessing module is used for preprocessing the target face image.
In a possible implementation manner of the embodiment of the present invention, the apparatus for recognizing a facial expression provided in the embodiment of the present invention may further include:
and the training module is used for training the facial expression recognition model.
In a possible implementation manner of the embodiment of the present invention, the training module may include:
the acquisition unit is used for acquiring at least one face image;
the extraction unit is used for extracting the global features and the local features of each face image in at least one face image;
the fusion unit is used for fusing the global features and the local features of each face image to obtain fused features corresponding to each face image;
and the training unit is used for training the facial expression recognition model according to the fusion characteristics corresponding to each facial image.
In a possible implementation manner of the embodiment of the present invention, the at least one face image may include:
a face image collected by an image acquisition device and a face image from a public face image data set.
In a possible implementation manner of the embodiment of the present invention, the fusion unit may be specifically configured to:
determining the weight corresponding to the global feature and the weight corresponding to the local feature by using a classifier;
and calculating the fusion feature corresponding to each face image according to the global feature of each face image, the local feature of each face image, the weight corresponding to the global feature and the weight corresponding to the local feature.
In a possible implementation manner of the embodiment of the present invention, the training module may further include:
and the preprocessing unit is used for preprocessing at least one face image.
Fig. 4 is a block diagram of a hardware architecture of a computing device according to an embodiment of the present invention. As shown in fig. 4, computing device 400 includes an input device 401, an input interface 402, a central processor 403, a memory 404, an output interface 405, and an output device 406. The input interface 402, the central processing unit 403, the memory 404, and the output interface 405 are connected to each other through a bus 410, and the input device 401 and the output device 406 are connected to the bus 410 through the input interface 402 and the output interface 405, respectively, and further connected to other components of the computing device 400.
Specifically, the input device 401 receives input information from the outside and transmits the input information to the central processing unit 403 through the input interface 402; the central processing unit 403 processes the input information based on computer-executable instructions stored in the memory 404 to generate output information, stores the output information temporarily or permanently in the memory 404, and then transmits the output information to the output device 406 through the output interface 405; the output device 406 outputs the output information outside the computing device 400 for use by a user.
That is, the computing device shown in fig. 4 may also be implemented as a facial expression recognition device, which may include: a memory storing a computer program; and a processor that, when executing the computer program, implements the facial expression recognition method provided by the embodiments of the present invention.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the facial expression recognition method provided by the embodiments of the present invention.
It is to be understood that the invention is not limited to the specific arrangements and instrumentalities described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the present invention are not limited to the specific steps described and illustrated; those skilled in the art may make various changes, modifications, and additions, or change the order of the steps, within the spirit of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, electronic circuits, application-specific integrated circuits (ASICs), suitable firmware, plug-ins, function cards, or the like. When implemented in software, the elements of the invention are the programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, radio frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in this application describe some methods or systems based on a series of steps or devices. However, the present invention is not limited to the order of the steps described above; that is, the steps may be performed in the order mentioned in the embodiments, in an order different from that in the embodiments, or simultaneously.
As described above, only specific embodiments of the present invention are provided. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the system, modules, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present invention is not limited thereto; any person skilled in the art can readily conceive of various equivalent modifications or substitutions within the technical scope of the present invention, and such modifications or substitutions shall fall within the scope of the present invention.

Claims (13)

1. A facial expression recognition method, the method comprising:
acquiring a target face image;
extracting global features and local features of the target face image;
fusing the global features and the local features to obtain fused features corresponding to the target face image;
recognizing the facial expression in the target face image according to the fusion features and a pre-trained facial expression recognition model; the facial expression recognition model is obtained by training with fusion features corresponding to at least one face image.
2. The method of claim 1, wherein prior to said extracting global and local features of the target face image, the method further comprises:
and preprocessing the target face image.
3. The method of claim 1, wherein prior to said obtaining the target face image, the method further comprises:
and training the facial expression recognition model.
4. The method of claim 3, wherein the training the facial expression recognition model comprises:
acquiring at least one face image;
extracting global features and local features of each face image in the at least one face image;
fusing the global features and the local features of each face image to obtain fused features corresponding to each face image;
and training the facial expression recognition model according to the fusion features corresponding to each face image.
5. The method of claim 4, wherein the at least one face image comprises:
a face image collected by an image collection device and a face image from a public face image data set.
6. The method according to claim 4, wherein the fusing the global features and the local features of each face image to obtain fused features corresponding to each face image comprises:
determining the weight corresponding to the global feature and the weight corresponding to the local feature by using a classifier;
and calculating the fusion feature corresponding to each face image according to the global feature of each face image, the local feature of each face image, the weight corresponding to the global feature and the weight corresponding to the local feature.
7. The method of claim 4, wherein prior to said extracting global and local features of each face image in the at least one face image, the method further comprises:
and preprocessing the at least one face image.
8. An apparatus for recognizing a facial expression, the apparatus comprising:
the acquisition module is used for acquiring a target face image;
the extraction module is used for extracting the global features and the local features of the target face image;
the fusion module is used for fusing the global features and the local features to obtain fusion features corresponding to the target face image;
the recognition module is used for recognizing the facial expression in the target face image according to the fusion features and a pre-trained facial expression recognition model; the facial expression recognition model is obtained by training with fusion features corresponding to at least one face image.
9. The apparatus of claim 8, further comprising:
and the first preprocessing module is used for preprocessing the target face image.
10. The apparatus of claim 8, further comprising:
and the training module is used for training the facial expression recognition model.
11. The apparatus of claim 10, wherein the training module comprises:
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring at least one face image;
the extraction unit is used for extracting the global features and the local features of each face image in the at least one face image;
the fusion unit is used for fusing the global features and the local features of each face image to obtain fused features corresponding to each face image;
and the training unit is used for training the facial expression recognition model according to the fusion features corresponding to each face image.
12. A facial expression recognition apparatus, the apparatus comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor;
the processor, when executing the computer program, implements the method of facial expression recognition according to any one of claims 1 to 7.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when executed by a processor, implements the facial expression recognition method according to any one of claims 1 to 7.
CN202010028209.4A 2020-01-10 2020-01-10 Facial expression recognition method, device, equipment and medium Pending CN113128309A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010028209.4A CN113128309A (en) 2020-01-10 2020-01-10 Facial expression recognition method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN113128309A true CN113128309A (en) 2021-07-16

Family

ID=76771737

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010028209.4A Pending CN113128309A (en) 2020-01-10 2020-01-10 Facial expression recognition method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN113128309A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295566A (en) * 2016-08-10 2017-01-04 北京小米移动软件有限公司 Facial expression recognizing method and device
CN107045618A (en) * 2016-02-05 2017-08-15 北京陌上花科技有限公司 A kind of facial expression recognizing method and device
CN109446980A (en) * 2018-10-25 2019-03-08 华中师范大学 Expression recognition method and device
CN109934197A (en) * 2019-03-21 2019-06-25 深圳力维智联技术有限公司 Training method, device and the computer readable storage medium of human face recognition model
CN110263673A (en) * 2019-05-31 2019-09-20 合肥工业大学 Human facial expression recognition method, apparatus, computer equipment and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113887487A (en) * 2021-10-20 2022-01-04 河海大学 Facial expression recognition method and device based on CNN-Transformer
CN113887487B (en) * 2021-10-20 2024-03-15 河海大学 Facial expression recognition method and device based on CNN-transducer

Similar Documents

Publication Publication Date Title
CN108009520B (en) Finger vein identification method and system based on convolution variational self-encoder network
CN111445905A (en) Hybrid speech recognition network training method, hybrid speech recognition device and storage medium
CN110070029B (en) Gait recognition method and device
CN111505632B (en) Ultra-wideband radar action attitude identification method based on power spectrum and Doppler characteristics
CN110443148B (en) Action recognition method, system and storage medium
CN109685037B (en) Real-time action recognition method and device and electronic equipment
CN110555380A (en) Finger vein identification method based on Center Loss function
Wagh et al. Eyelids, eyelashes detection algorithm and hough transform method for noise removal in iris recognition
Harini et al. Sign language translation
CN114170184A (en) Product image anomaly detection method and device based on embedded feature vector
CN111461202A (en) Real-time thyroid nodule ultrasonic image identification method and device
CN111814682A (en) Face living body detection method and device
CN111488853A (en) Big data face recognition method and system for financial institution security system and robot
CN108182399B (en) Finger vein feature comparison method and device, storage medium and processor
CN114821682A (en) Multi-sample mixed palm vein identification method based on deep learning algorithm
Qin et al. Finger-vein quality assessment based on deep features from grayscale and binary images
CN111966219B (en) Eye movement tracking method, device, equipment and storage medium
KR20210082624A (en) Fingerprint Enhancement method
kumar Shukla et al. A novel method for identification and performance improvement of Blurred and Noisy Images using modified facial deblur inference (FADEIN) algorithms
CN113128309A (en) Facial expression recognition method, device, equipment and medium
CN107886093B (en) Character detection method, system, equipment and computer storage medium
CN112084915A (en) Model training method, living body detection method, device and electronic equipment
CN117496389A (en) Hand hygiene real-time detection method suitable for Android equipment
WO2021054217A1 (en) Image processing device, image processing method and program
CN114463789A (en) Non-contact fingerprint image enhancement method, apparatus, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination