CN112016410B - Micro expression recognition method, storage medium and system - Google Patents

Micro expression recognition method, storage medium and system

Info

Publication number
CN112016410B
CN112016410B CN202010809744.3A
Authority
CN
China
Prior art keywords
micro
hoof
expression
sequence
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010809744.3A
Other languages
Chinese (zh)
Other versions
CN112016410A (en)
Inventor
于蒙
孙肖杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Technology WUT
Original Assignee
Wuhan University of Technology WUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Technology WUT filed Critical Wuhan University of Technology WUT
Priority to CN202010809744.3A priority Critical patent/CN112016410B/en
Publication of CN112016410A publication Critical patent/CN112016410A/en
Application granted granted Critical
Publication of CN112016410B publication Critical patent/CN112016410B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a micro-expression recognition method comprising the steps of: collecting video images containing a face, preprocessing the video images to extract face-region images, and aligning the faces; extracting images of set regions from the face-region image, and calculating the HOOF feature sequence of the set regions; calculating a final chi-square distance sequence of the HOOF feature sequence, and judging whether a micro-expression exists in the set regions; and transmitting the HOOF feature sequence containing a micro-expression into a micro-expression analysis model to obtain a micro-expression classification result. The invention also provides a storage medium and a micro-expression recognition system. The method, storage medium, and system enhance the robustness of micro-expression recognition by reducing extracted redundant information.

Description

Micro expression recognition method, storage medium and system
Technical Field
The present invention relates to the field of micro-expression recognition, and in particular to a micro-expression recognition method, a storage medium, and a system.
Background
The expressions familiar from daily life are called macro-expressions and usually last from 3/4 of a second to 2 seconds. Spatially, a macro-expression may appear on multiple regions of the face or on a single region, depending on the expression. For example, a surprised expression typically causes movement around the eyes, forehead, cheeks, and mouth, while a fearful expression typically produces movement only near the eyes. Micro-expressions, by contrast, are involuntary, uninhibited expressions that appear on the face with very small amplitude and very short duration (between 1/25 and 1/3 of a second); they are hardly noticeable to a person without professional training.
Because micro-expressions are very helpful for lie detection and training human recognition ability is difficult, micro-expression recognition was at first applied only in the national-security field. With the rapid development of artificial intelligence, however, researchers began to study automatic micro-expression recognition. In many scenarios, recognizing micro-expressions is more meaningful than recognizing ordinary expressions. For example, in financial credit auditing, a user's micro-expressions during question and answer can be recognized to judge the authenticity of their answers and reduce business risk. When a depression patient answers questions, their mental state can be analyzed by recognizing their micro-expressions. In customer service satisfaction surveys, more truthful information can be obtained through customers' micro-expressions. And the emotions conveyed by students' micro-expressions in class can reveal how the class is going.
Traditional micro-expression recognition methods manually extract global features; their feature-extraction capability is insufficient and the features carry a large amount of redundant information, so the micro-expression detection and recognition rates of such models are generally low.
Disclosure of Invention
In view of the above, the present invention provides a micro-expression recognition method, a storage medium, and a system to solve the problems of low micro-expression detection and recognition rates caused by insufficient feature-extraction capability and large amounts of redundant information in traditional micro-expression recognition methods.
To achieve the above object, the technical solution of the present invention is a micro-expression recognition method comprising the steps of: collecting video images containing a face, preprocessing the video images to extract face-region images, and aligning the faces; extracting images of set regions from the face-region image, and calculating the HOOF feature sequence of the set regions; calculating a final chi-square distance sequence of the HOOF feature sequence, and judging whether a micro-expression exists in the set regions; and transmitting the HOOF feature sequence containing a micro-expression into a micro-expression analysis model to obtain a micro-expression classification result.
The present invention also provides a storage medium in which a computer program is stored, the computer program being arranged to perform the micro-expression recognition method when run.
The invention further provides a micro-expression recognition system comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the micro-expression recognition method.
Compared with the prior art, the micro-expression recognition method, storage medium, and system provided by the invention have the following beneficial effects:
by selecting images of set regions, redundant information is reduced and the robustness of micro-expression recognition is enhanced; using the sum of the forward and backward chi-square distance sequences as the basis for judging whether a micro-expression exists reduces the influence of head deviation in the face video stream; and fusing a temporal attention mechanism into the micro-expression analysis model increases the weight of key information and decreases the weight of redundant information, effectively improving the accuracy of micro-expression recognition.
Drawings
Fig. 1 is a schematic flow chart of steps of a micro-expression recognition method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating the locations of the set regions in step S2 in FIG. 1;
FIG. 3 is a schematic diagram of the micro-expression analysis model of step S4 in FIG. 1;
FIG. 4 is a flow chart of the substeps of step S1 in FIG. 1;
FIG. 5 is a flow chart of substeps of step S2 in FIG. 1;
FIG. 6 is a flow chart of substeps of step S3 in FIG. 1;
fig. 7 is a flow chart of the substeps of step S4 in fig. 1.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Referring to fig. 1-3, the method for identifying micro-expressions provided by the present invention includes the steps of:
s1, acquiring images with face videos, preprocessing the video images to extract images of face areas, and aligning faces;
specifically, after a video image with a human face is acquired, an image of each frame of the video image is acquired, then the pixel value of each frame of image is normalized, each frame of image is grayed, the frame number of the grayed image is normalized by using a linear interpolation model, and finally the affine change is used for aligning the human face in the image.
It can be understood that the face video may be existing video data or may be captured in real time by a camera device.
In this embodiment, the images come from the CASME II dataset, whose videos have a frame rate of 200 fps and a face region of about 280×340 pixels. The pixel values of each frame are first normalized from 0-255 to 0-1, the images are then converted from the RGB color space to grayscale, and the grayscale image sequence is finally frame-number normalized to 81 frames.
The frame-number normalization of the grayed images uses a linear interpolation model, a commonly used frame-number normalization algorithm. For example, the procedure for normalizing an image sequence of size H×W and frame number N to frame number M is as follows:

First, the frame-number discretization vectors of the source frame number N and the target frame number M are obtained on a unit time axis:

$$T_N = \left\{ \tfrac{n}{N-1} \;\middle|\; n = 0, 1, \dots, N-1 \right\}, \qquad T_M = \left\{ \tfrac{m}{M-1} \;\middle|\; m = 0, 1, \dots, M-1 \right\}$$

For any pixel point I(i, j) on the image, where i and j are the height and width coordinates respectively, and for a coordinate m/(M−1) on the target time axis, the two nearest coordinates (n−1)/(N−1) and n/(N−1) in the source discretization vector $T_N$ are found. The target frame pixel at that time point, $I_{M,\,\frac{m}{M-1}}(i, j)$, is then the linear blend:

$$I_{M,\,\frac{m}{M-1}}(i, j) = (1-\lambda)\, I_{N,\,\frac{n-1}{N-1}}(i, j) + \lambda\, I_{N,\,\frac{n}{N-1}}(i, j), \qquad \lambda = (N-1)\left(\frac{m}{M-1} - \frac{n-1}{N-1}\right)$$
finally, a Haar-like characteristic face detection model in OpenCV is adopted, 68 key points of the face are detected by a Dlib library, and affine transformation is used for aligning the face.
S2, extracting images of the set regions from the face-region image, and calculating the HOOF feature sequence of the set regions;
specifically, since some features on the face have a certain interference in identifying the micro-expressions, for example, the eye area can generate a large amount of interference information due to blinking, and the nose area does not have too much movement, and hardly contains useful information, for example, the micro-expressions of all the areas on the face are uniformly identified, so that a large amount of noise interference can be generated. Therefore, it is necessary to extract a set region on the face image and analyze the image in the set region to reduce noise infection.
It will be appreciated that the HOOF feature (Histogram of Oriented Optical Flow), a hand-crafted feature, is an improvement on the optical-flow histogram and is unaffected by whether the subject is far from or close to the camera.
In the present embodiment, four set regions (regions of interest, ROIs) are selected: the left and right eyebrows and the left and right mouth corners. The region corresponding to the left eyebrow is ROI1, the right eyebrow ROI2, the left mouth corner ROI3, and the right mouth corner ROI4. ROI1 is determined by the 17th and 21st of the 68 dlib facial key points, ROI2 by the 22nd and 26th key points, ROI3 by the 48th key point, and ROI4 by the 54th key point. The x-direction range of ROI1 and ROI2 is the span of their corresponding two points in the x direction; the upper limit in the y direction is the larger of the two points' y coordinates, and the lower limit is the upper limit minus about 15% of the image height. ROI3 and ROI4 are square regions with sides of 12% of the image height centered on their corresponding key points.
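Read literally, those landmark indices and percentages give ROI boxes like the sketch below; the band direction (extending upward from the brow points) and the rounding are one reading of the description, so the helper is illustrative rather than definitive.

```python
import numpy as np

def roi_boxes(pts: np.ndarray, img_h: int) -> dict:
    """Four ROIs as (x0, x1, y0, y1) boxes from the (68, 2) dlib landmark array."""
    band = int(0.15 * img_h)       # eyebrow band: ~15% of image height
    half = int(0.12 * img_h) // 2  # half-side of the 12%-height mouth squares

    def brow(a: int, b: int):      # x spans points a..b; y band above the brow
        y1 = int(max(pts[a, 1], pts[b, 1]))
        return (int(pts[a, 0]), int(pts[b, 0]), y1 - band, y1)

    def corner(k: int):            # square centred on landmark k
        x, y = int(pts[k, 0]), int(pts[k, 1])
        return (x - half, x + half, y - half, y + half)

    return {"ROI1": brow(17, 21), "ROI2": brow(22, 26),
            "ROI3": corner(48), "ROI4": corner(54)}
```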
Optical flow vectors are then computed for the four ROIs: the optical flow between the first frame and each remaining frame is calculated, yielding 4×80 optical flow fields. The embodiment adopts the TV-L1 optical flow method, which uses the L1 norm in its objective function so that the objective's error grows slowly and large offsets are penalized relatively less. The optical flow vectors in the x and y directions are converted into magnitude and angle, the 0-2π angle range is quantized into n intervals, the magnitudes falling in each interval are summed, and the histogram is finally normalized. In this embodiment n is 8, so each video sequence yields a HOOF feature sequence of length 80 with 4×8 = 32 dimensions.
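A sketch of the per-ROI HOOF computation under those settings follows; DualTVL1OpticalFlow lives in the opencv-contrib package (cv2.optflow), and the hoof function name and the commented assembly of the 80-step sequence are assumptions for illustration.

```python
import cv2
import numpy as np

# TV-L1 optical flow is provided by opencv-contrib (cv2.optflow).
tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()

def hoof(first: np.ndarray, frame: np.ndarray, bins: int = 8) -> np.ndarray:
    """8-bin HOOF between the first frame and one later frame of an ROI crop."""
    flow = tvl1.calc(first, frame, None)                    # (H, W, 2) float32
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # ang in [0, 2*pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, 2 * np.pi), weights=mag)
    s = hist.sum()
    return (hist / s if s > 0 else hist).astype(np.float32)

# For one sample with rois = [roi_seq_1, ..., roi_seq_4], each an (81, h, w)
# grayscale crop sequence, concatenating the four histograms for frames 1..80
# gives the 80-step, 32-dimensional HOOF sequence used below:
# seq = np.stack([np.concatenate([hoof(r[0], r[t]) for r in rois])
#                 for t in range(1, 81)])
```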
S3, calculating a final chi-square distance sequence of the HOOF feature sequence, and judging whether a micro-expression exists in the set regions;
specifically, in the HOOF features in the same sample, there is a large difference between the HOOF features of the peak frame and the first frame and the HOOF features of the rest frame and the first frame, and the chi-square distance can measure the difference between the HOOF features, which is defined as follows:
where Pi and Qi represent the values of the ith dimension of the two HOOF features, respectively. The invention calculates the forward chi-square distance and the reverse chi-square distance respectively, the forward chi-square distance is calculated by the first HOOF characteristic and the rest and is arranged in time sequence, the reverse chi-square distance is calculated by the last HOOF characteristic and the rest and is arranged in time sequence, and then the two sequences are added to obtain the final chi-square distance sequence which is recorded asThe method for judging whether the micro expression exists or not by using the threshold value comprises the following steps:
wherein the method comprises the steps ofIs->Alpha is a threshold coefficient. When the inequality is satisfied, the micro expression is detected, and it is to be noted that the value of alpha is calculated according to the training sample, if the micro expression is detected, the expression is +.>The time instant of (a) is the peak frame time instant.
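Combining the chi-square definition with the threshold test, a minimal detector over one HOOF sequence could read as follows; the function names are illustrative and α is assumed to have been fitted on training samples beforehand.

```python
import numpy as np

def chi2(p: np.ndarray, q: np.ndarray) -> float:
    """Chi-square distance between two HOOF features (zero bins skipped)."""
    denom = p + q
    mask = denom > 0
    return float(np.sum((p[mask] - q[mask]) ** 2 / denom[mask]))

def detect_micro_expression(seq: np.ndarray, alpha: float):
    """seq: (T, D) HOOF sequence. Returns (detected, peak_frame_index)."""
    fwd = np.array([chi2(seq[0], f) for f in seq])   # distances to first frame
    bwd = np.array([chi2(seq[-1], f) for f in seq])  # distances to last frame
    d = fwd + bwd                                    # final chi-square sequence D
    if d.max() > alpha * d.mean():                   # threshold test
        return True, int(d.argmax())                 # argmax of D ~ peak frame
    return False, None
```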
S4, transmitting the HOOF feature sequence containing a micro-expression into a micro-expression analysis model to obtain a micro-expression classification result;
specifically, after obtaining the HOOF feature with the micro-expression in the step S3, placing the HOOF feature into a pre-established micro-expression analysis model, and obtaining the micro-expression classification of the HOOF feature through the micro-expression analysis model.
In this embodiment, the basic network architecture is a BiLSTM; the fully connected layer maps the output of the temporal attention layer to a vector whose dimension equals the number of micro-expression categories, and the class probabilities are finally obtained through softmax activation. In this embodiment, the input data has a time length of 80 and a dimension of 32, the BiLSTM network has 80 nodes and 2 layers, and the hidden states of the two directions are concatenated to obtain an output sequence of time length 80 and dimension 64. The temporal attention layer is implemented as follows:
$$M = w^{\top} \tanh(h_t)$$
$$\alpha_t = \mathrm{softmax}(M)$$
$$\gamma_t = \sum_t \alpha_t h_t$$
where $h_t$ is the hidden-layer state at time t, $w$ is a pre-initialized trainable weight matrix, $\alpha_t$ is the weight coefficient of $h_t$, and $\gamma_t$ is the output of the attention layer at time t.
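A PyTorch sketch of such a network is given below, using a hidden size of 32 per direction so that the concatenated output has the stated dimension 64; the class count of 5 and the module name are assumptions, since the patent does not fix them.

```python
import torch
import torch.nn as nn

class MicroExprNet(nn.Module):
    """BiLSTM with a temporal attention layer, per the equations above."""
    def __init__(self, in_dim=32, hidden=32, layers=2, n_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=layers,
                            batch_first=True, bidirectional=True)
        self.w = nn.Parameter(torch.randn(2 * hidden))  # attention weights w
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                 # x: (B, 80, 32) HOOF sequence
        h, _ = self.lstm(x)               # h_t for all t: (B, 80, 64)
        m = torch.tanh(h) @ self.w        # M = w^T tanh(h_t): (B, 80)
        a = torch.softmax(m, dim=1)       # alpha_t over the time axis
        g = (a.unsqueeze(-1) * h).sum(1)  # gamma = sum_t alpha_t h_t: (B, 64)
        return self.fc(g)                 # logits; softmax applied in the loss
```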
The micro-expression detection performance of the method on CASME II, compared with existing methods, is summarized in a table (not reproduced here), reporting two measures: $P_{include}$, the probability of correctly judging whether a video contains a micro-expression, and $P_{location}$, the probability of correctly locating the peak frame. Because the dataset's frame rate is high and the gap between adjacent frames is small, the predicted peak frame need not coincide exactly with the true peak frame: a prediction within 5 frames of the true peak frame counts as correct.
The micro-expression classification performance on CASME II is likewise compared with existing methods in a second table (not reproduced here).
referring to fig. 4, step S1 further includes the sub-steps of:
s11, acquiring a video image comprising a human face;
specifically, the video images including the human face may be acquired in real time by the image capturing device, or may be existing video data.
S12, carrying out pixel value normalization processing on each frame of image;
specifically, image normalization refers to a process of performing a series of standard processing transformations on an image to transform it into a fixed standard form, which standard image is called a normalized image. The original image can obtain various duplicate images after being subjected to some processing or attack, and the images can obtain standard images in the same form after being subjected to image normalization processing of the same parameters. Image normalization is the conversion of an original image to be processed into a corresponding unique standard form (the standard form image has invariant properties for affine transformations such as translation, rotation, scaling, etc.) by a series of transformations (i.e. using invariant moment of the image to find a set of parameters that enable it to eliminate the effect of other transformation functions on the image transformation).
The basic working principle of the moment-based image normalization technology is as follows: the parameters of the transformation function are first determined using moments in the image that have invariance to the affine transformation, and then the transformation function determined using this parameters transforms the original image into an image of standard form (the image is independent of the affine transformation). In general, the moment-based image normalization process includes 4 steps, namely coordinate centering, x-shaping normalization, scaling normalization, and rotation normalization.
Image normalization allows images to be resistant to attacks by geometric transformations, which can find those invariant in the images, knowing that they were originally identical or a series.
S13, graying each frame of image;
specifically, pixel points in each frame of picture in the video are converted into a gray scale picture through gray scale processing.
S14, normalizing the frame number by using a linear interpolation model;
specifically, an interpolation model is used to normalize the number of frames of the image sequence to 81 frames.
S15, aligning the face by using an affine transformation method;
specifically, a Haar-like characteristic face detection model in OpenCV is adopted, 68 key points of a face are detected by a Dlib library, and affine transformation is used for aligning the face.
Referring to fig. 5, step S2 further includes the sub-steps of:
s21, selecting a setting area for face feature recognition;
specifically, according to the noise level of each part of the face, the left and right eyebrows and the left and right corners of the mouth are selected as setting areas.
S22, calculating the optical flow fields between the first frame and the remaining frames of the face features extracted in the set regions to obtain optical flow vectors;
specifically, the optical flow (Optical flow or optic flow) method is an important method for moving image analysis, and refers to the mode motion speed in a time-varying image. Because when an object is in motion, its brightness pattern at the corresponding point on the image is also in motion. The apparent motion of this image brightness mode is the optical flow. Optical flow expresses the change of an image and can be used by an observer to determine the movement of an object, since it contains information about the movement of the object. The definition of optical flow can be extended to an optical flow field, which refers to a two-dimensional instantaneous velocity field formed by all pixels in an image, wherein the two-dimensional velocity vector is the projection of a three-dimensional velocity vector of a visible point in the scene onto the imaging surface.
S23, converting the optical flow vector into an optical flow amplitude value and an optical flow angle;
specifically, the optical flow vector obtained in step S22 is converted into an optical flow magnitude and an optical flow angle.
S24, quantifying the optical flow amplitude and the optical flow angle to obtain HOOF characteristics;
specifically, the HOOF feature is obtained by the optical flow amplitude and the optical flow angle.
S25, splicing HOOF characteristics in the set area, and then arranging the HOOF characteristics in time sequence to obtain a HOOF characteristic sequence;
specifically, the HOOF features in the set region are spliced, and then a plurality of HOOF features are spliced into a HOOF feature sequence according to the time sequence of each frame of the image.
Referring to fig. 6, step S3 further includes the sub-steps of:
s31, calculating the chi-square distance of the HOOF characteristic sequence to obtain a chi-square distance sequence;
specifically, chi-square Distance (Chi-square Distance) is a measure of the difference between two volumes by obtaining a Chi-square statistic using a method of column-tie analysis. And calculating a forward chi-square distance and a reverse chi-square distance through the difference of the HOOF characteristics of the peak frame and the first frame in the HOOF characteristics and the HOOF characteristics of the other frames and the first frame, then calculating the forward chi-square distance by the first HOOF characteristic and the other items and arranging the forward chi-square distance in time sequence, calculating the reverse chi-square distance by the last HOOF characteristic and the other items and arranging the reverse chi-square distance in time sequence, and adding the two sequences to obtain a final chi-square distance sequence.
S32, calculating the ratio of the maximum value to the average value of the chi-square distance sequence;
specifically, the maximum value and the average value of the kava distance sequence obtained in step S31 are calculated to obtain the ratio of the maximum value to the average value.
S33, performing threshold judgment on the ratio to obtain the result of whether a micro-expression exists;
Specifically, a threshold judgment on the ratio determines whether a micro-expression exists.
Referring to fig. 7, step S4 further includes the sub-steps of:
s41, establishing a microexpressive analysis model by using a cyclic neural network;
specifically, a neural network is utilized to self-learn to establish a micro-expression analysis model with the HOOF characteristic sequence as input and the micro-expression result classification as output, so that after the HOOF characteristic sequence with the micro-expression is obtained, the micro-expression analysis model is directly utilized to obtain the micro-expression classification corresponding to the HOOF characteristic sequence.
S42, feeding the HOOF feature sequence containing a micro-expression into the micro-expression analysis model to obtain the micro-expression classification.
The invention also provides a storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above-mentioned method steps when run. The storage medium may include, for example, a floppy disk, an optical disk, a DVD, a hard disk, a flash Memory, a U-disk, a CF card, an SD card, an MMC card, an SM card, a Memory Stick (Memory Stick), an XD card, and the like.
The computer software product is stored in a storage medium and includes instructions for causing one or more computer devices (which may be personal computers, servers, or other network devices) to perform all or part of the steps of the method of the invention.
The invention also provides a micro-expression recognition system, which comprises a processor and a memory, wherein the memory stores a computer program, and when the computer program is executed by the processor, the micro-expression recognition method is realized.
The above-described embodiments of the present invention do not limit the scope of the present invention. Any other corresponding changes and modifications made in accordance with the technical idea of the present invention shall be included in the scope of the claims of the present invention.

Claims (8)

1. A micro-expression recognition method, characterized by comprising the steps of:
collecting video images containing a face, preprocessing the video images to extract face-region images, and aligning the faces;
extracting images of set regions from the face-region image, and calculating the HOOF feature sequence of the set regions;
calculating a final chi-square distance sequence of the HOOF feature sequence, and judging whether a micro-expression exists in the set regions; and
transmitting the HOOF feature sequence containing a micro-expression into a micro-expression analysis model to obtain a micro-expression classification result;
wherein the step of extracting images of set regions from the face-region image and calculating the HOOF feature sequence of the set regions comprises the steps of:
selecting a setting area for face feature recognition;
calculating a first frame of the extracted face features in the set area and the rest optical flow fields to obtain an optical flow vector;
converting the optical flow vector into an optical flow magnitude and an optical flow angle;
quantifying the optical flow amplitude and the optical flow angle to obtain HOOF characteristics; a kind of electronic device with high-pressure air-conditioning system
Splicing HOOF characteristics in a set region, and then arranging the HOOF characteristics in time sequence to obtain a HOOF characteristic sequence;
the step of calculating the final chi-square distance sequence of the HOOF feature sequence and judging whether the micro expression exists in the set area comprises the following steps:
calculating the chi-square distance of the HOOF characteristic sequence to obtain a chi-square distance sequence;
calculating the ratio of the maximum value to the average value of the chi-square distance sequence; a kind of electronic device with high-pressure air-conditioning system
Threshold value judgment is carried out on the ratio value, and a result of whether the micro expression exists or not is obtained;
the chi-square distance is used to measure the difference between HOOF features and is defined as follows:
where Pi and Qi represent the values of the ith dimension of the two HOOF features, respectively.
2. The micro-expression recognition method of claim 1, wherein the step of collecting video images containing a face and preprocessing the video images to extract face-region images and align the faces comprises the steps of:
acquiring a video image comprising a human face;
carrying out pixel value normalization processing on each frame of image;
graying each frame of image;
normalizing the frame number by using a linear interpolation model; a kind of electronic device with high-pressure air-conditioning system
The face is aligned using an affine transformation method.
3. The micro-expression recognition method of claim 1, wherein the step of transmitting the HOOF feature sequence containing a micro-expression into a micro-expression analysis model to obtain a micro-expression classification result comprises the steps of:
establishing a micro-expression analysis model using a recurrent neural network; and
feeding the HOOF feature sequence containing a micro-expression into the micro-expression analysis model to obtain the micro-expression classification.
4. The micro-expression recognition method of claim 1, wherein:
the set area comprises four areas including left and right eyebrows and corners of a mouth of a human face.
5. The micro-expression recognition method of claim 1, wherein:
the final chi-square distance sequence is the result of adding the forward chi-square sequence and the reverse chi-square sequence.
6. The micro-expression recognition method of claim 1, wherein:
the micro expression analysis model is a BiLSTM network integrating a time domain attention mechanism.
7. A storage medium, characterized by:
the storage medium having stored therein a computer program, wherein the computer program is arranged to perform the microexpressive recognition method according to any of the claims 1-6 when run.
8. A micro-expression recognition system, characterized in that:
the micro-expression recognition system comprises a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the micro-expression recognition method of any one of claims 1 to 6.
CN202010809744.3A 2020-08-13 2020-08-13 Micro expression recognition method, storage medium and system Active CN112016410B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010809744.3A CN112016410B (en) 2020-08-13 2020-08-13 Micro expression recognition method, storage medium and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010809744.3A CN112016410B (en) 2020-08-13 2020-08-13 Micro expression recognition method, storage medium and system

Publications (2)

Publication Number Publication Date
CN112016410A CN112016410A (en) 2020-12-01
CN112016410B true CN112016410B (en) 2023-12-26

Family

ID=73504252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010809744.3A Active CN112016410B (en) 2020-08-13 2020-08-13 Micro expression recognition method, storage medium and system

Country Status (1)

Country Link
CN (1) CN112016410B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107358206A (en) * 2017-07-13 2017-11-17 山东大学 Micro- expression detection method that a kind of Optical-flow Feature vector modulus value and angle based on area-of-interest combine
CN108830222A (en) * 2018-06-19 2018-11-16 山东大学 A kind of micro- expression recognition method based on informedness and representative Active Learning
CN108830223A (en) * 2018-06-19 2018-11-16 山东大学 A kind of micro- expression recognition method based on batch mode Active Learning
CN110991348A (en) * 2019-12-05 2020-04-10 河北工业大学 Face micro-expression detection method based on optical flow gradient amplitude characteristics

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Micro-expression recognition described by averaged optical flow direction histograms; Ma Haoyuan et al.; Signal Processing; Vol. 34, No. 3; full text *

Also Published As

Publication number Publication date
CN112016410A (en) 2020-12-01

Similar Documents

Publication Publication Date Title
Lin et al. Face liveness detection by rppg features and contextual patch-based cnn
Davison et al. Objective micro-facial movement detection using facs-based regions and baseline evaluation
Liu et al. Learning deep models for face anti-spoofing: Binary or auxiliary supervision
Shao et al. Joint discriminative learning of deep dynamic textures for 3D mask face anti-spoofing
Happy et al. Fuzzy histogram of optical flow orientations for micro-expression recognition
KR102462818B1 (en) Method of motion vector and feature vector based fake face detection and apparatus for the same
Lin Face detection in complicated backgrounds and different illumination conditions by using YCbCr color space and neural network
CN111444881A (en) Fake face video detection method and device
US20210264144A1 (en) Human pose analysis system and method
Fourati et al. Anti-spoofing in face recognition-based biometric authentication using image quality assessment
CN113869229B (en) Deep learning expression recognition method based on priori attention mechanism guidance
CN105160318A (en) Facial expression based lie detection method and system
CN110838119B (en) Human face image quality evaluation method, computer device and computer readable storage medium
CN109325472B (en) Face living body detection method based on depth information
KR101243294B1 (en) Method and apparatus for extracting and tracking moving objects
Hebbale et al. Real time COVID-19 facemask detection using deep learning
Shrivastava et al. Conceptual model for proficient automated attendance system based on face recognition and gender classification using Haar-Cascade, LBPH algorithm along with LDA model
Nikitin et al. Face anti-spoofing with joint spoofing medium detection and eye blinking analysis
Liu et al. A novel video forgery detection algorithm for blue screen compositing based on 3-stage foreground analysis and tracking
Sakthimohan et al. Detection and Recognition of Face Using Deep Learning
CN111488779A (en) Video image super-resolution reconstruction method, device, server and storage medium
Fathy et al. Benchmarking of pre-processing methods employed in facial image analysis
CN112016410B (en) Micro expression recognition method, storage medium and system
CN113486788A (en) Video similarity determination method and device, electronic equipment and storage medium
Ribeiro et al. Access control in the wild using face verification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant