CN105825192A - Facial expression identification method and system - Google Patents

Facial expression identification method and system

Info

Publication number
CN105825192A
CN105825192A (application CN201610173246.8A)
Authority
CN
China
Prior art keywords
local image
key point
face key point
feature value
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610173246.8A
Other languages
Chinese (zh)
Other versions
CN105825192B (en)
Inventor
于仕琪
李立
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen University
Original Assignee
Shenzhen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201610173246.8A priority Critical patent/CN105825192B/en
Publication of CN105825192A publication Critical patent/CN105825192A/en
Application granted granted Critical
Publication of CN105825192B publication Critical patent/CN105825192B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention belongs to the field of computer technology and provides a facial expression recognition method and system. The method comprises the steps of: extracting n face key points Pi from a facial expression image, and obtaining the local image in a set region around each face key point Pi, with the key point Pi as the center; successively filtering and binarizing each local image to obtain the feature values of all its pixels; aggregating the feature values of all pixels in each local image with a frequency histogram to obtain the feature vector Vi corresponding to the face key point Pi, and combining the feature vectors Vi of the n face key points Pi into a joint feature vector; and classifying the joint feature vector with a classifier algorithm to identify the facial expression. The present invention improves the recognition accuracy while reducing the influence of illumination changes and facial pose changes on the recognition result.

Description

Facial expression recognition method and system
Technical field
The invention belongs to the field of computer technology, and in particular relates to a facial expression recognition method and system.
Background technology
In everyday human communication, about 7% of the information is conveyed by words, 38% by voice, and 55% by facial expression. Facial expression is an important way for humans to convey emotion, inner state, and attitude, and people use expressions to express their own feelings. Facial expression therefore plays an important role in human communication. If a computer could capture and understand facial expressions, the relationship between people and computers would change considerably, leading to better human-computer interaction. In-depth research on facial expression recognition can greatly promote the development of the related disciplines: by recognizing a person's expression, one can further analyze that person's mental state and mental activity.
A facial expression recognition method mainly includes two parts, facial feature extraction and expression feature classification. Existing facial expression recognition methods generally suffer from high complexity, low recognition accuracy, and slow recognition speed. When the face is under complex illumination conditions or undergoes large pose changes, existing algorithms are easily affected and the recognition rate drops markedly.
Summary of the invention
The object of the present invention is to provide a facial expression recognition method and system, aiming to solve the problem that, because the prior art performs recognition only on the extracted face key points, the expression recognition accuracy drops when the facial pose changes.
In one aspect, the present invention provides a facial expression recognition method, the method comprising the following steps:
extracting n face key points Pi from a facial expression image, and obtaining the local image in a set region around each face key point Pi, with the key point Pi as the center, where n is a preset positive integer and i = 1, 2, ..., n;
successively filtering and binarizing the local image to obtain the feature values of all pixels in the local image;
aggregating the feature values of all pixels in the local image with a frequency histogram to obtain the feature vector Vi corresponding to the face key point Pi, and combining the feature vectors Vi of the n face key points Pi into a joint feature vector, i = 1, 2, ..., n;
classifying the joint feature vector with a classifier algorithm to identify the facial expression.
In another aspect, the present invention provides a facial expression recognition system, the system comprising:
a local image acquiring unit, configured to extract n face key points Pi from a facial expression image and to obtain the local image in a set region around each face key point Pi, with the key point Pi as the center, where n is a preset positive integer and i = 1, 2, ..., n;
a feature value calculation unit, configured to successively filter and binarize the local image to obtain the feature values of all pixels in the local image;
a joint feature vector combination unit, configured to aggregate the feature values of all pixels in the local image with a frequency histogram, obtain the feature vector Vi corresponding to each face key point Pi, and combine the feature vectors Vi of the n face key points Pi into a joint feature vector, i = 1, 2, ..., n;
an expression recognition unit, configured to classify the joint feature vector with a classifier algorithm and identify the facial expression.
In the embodiments of the present invention, after the face key points are extracted, the local image around each face key point is obtained and the recognition is performed by processing these local images, which improves the recognition accuracy while reducing the influence of illumination changes and facial pose changes on the recognition result.
Brief description of the drawings
Fig. 1 is a flowchart of the facial expression recognition method provided by embodiment one of the present invention;
Fig. 2 is a flowchart of the filtering and binarization of the local image provided by embodiment two of the present invention;
Fig. 3 is a structural diagram of the facial expression recognition system provided by embodiment three of the present invention; and
Fig. 4 is a structural diagram of the feature value calculation unit in the facial expression recognition system provided by embodiment four of the present invention.
Detailed description of the invention
In order to make the object, technical solution, and advantages of the present invention clearer, the present invention is further described below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present invention, not to limit it.
The implementation of the present invention is described in detail below in conjunction with specific embodiments:
Embodiment one:
Fig. 1 shows the implementation flow of the facial expression recognition method provided by embodiment one of the present invention. For convenience of description, only the parts relevant to the embodiment of the present invention are shown. The details are as follows:
In step S101, n face key points Pi are extracted from a facial expression image, and the local image in a set region around each face key point Pi is obtained with the key point Pi as the center, where n is a preset positive integer and i = 1, 2, ..., n.
In the embodiment of the present invention, the key points of the facial features of the facial expression image are located, yielding the positions P1, P2, ..., Pn of n face key points on the image. In this embodiment 15 face key points are extracted, i.e. n = 15; in practical applications 5 to 100 face key points can be extracted as required. The face key points Pi are located in the key regions of the face, which include the eyebrows, eyes, nose, and mouth. Because of different facial poses and expressions, as well as different illumination and skin-color effects, the resulting facial expression images are usually complex and inconsistent, so relying only on the face key points for processing and recognition means the recognition result is inevitably affected by these factors. Therefore, in order to improve the recognition accuracy and reduce the influence of facial pose changes, illumination changes, and similar factors on the recognition result, the present invention performs the recognition on the local image obtained in a set region around each face key point Pi, with the key point as the center.
Specifically, the local image is an N x N image region obtained with the face key point Pi as the center, where N can be set to the positive integer 5, 7, or 9 according to the computation load of the actual processing and the required precision of the recognition result.
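As an illustration of this step only (not part of the patent text), the following Python/NumPy sketch extracts the N x N local image around one key point; the (x, y) keypoint format and the clipping at the image border are assumptions, since the text does not specify them.

```python
import numpy as np

def extract_local_patch(image, keypoint, N=9):
    """Extract the N x N local image centered on a face key point Pi.

    image    : 2-D grayscale array
    keypoint : (x, y) pixel position of the key point (assumed format)
    N        : 5, 7 or 9, as suggested in the description
    The patch is clipped at the image border (an assumption; the text
    does not state how borders are handled).
    """
    x, y = keypoint
    half = N // 2
    top, bottom = max(0, y - half), min(image.shape[0], y + half + 1)
    left, right = max(0, x - half), min(image.shape[1], x + half + 1)
    return image[top:bottom, left:right]
```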
In step S102, the local image is successively filtered and binarized to obtain the feature values of all pixels in the local image.
In the embodiment of the present invention, after the local image around a face key point position is extracted, it is filtered with m filters, the filter results are binarized, and the resulting m-bit binary code is converted to decimal; the decimal value obtained is the feature value. After the above filtering and binarization, the feature values of all pixels in the local image are obtained.
In step S103, the feature values of all pixels in the local image are aggregated with a frequency histogram to obtain the feature vector Vi corresponding to the face key point Pi, and the feature vectors Vi of the n face key points Pi are combined into a joint feature vector, i = 1, 2, ..., n.
Further, aggregating the feature values of all pixels in the local image with a frequency histogram to obtain the feature vector Vi corresponding to the face key point Pi is specifically:
in the local image centered on the face key point Pi, counting the frequency with which the feature values of all pixels occur over the interval [0, 255];
obtaining the feature vector Vi = (v0, v1, ..., vj, ..., v255) corresponding to the face key point Pi, where vj is the frequency with which the value j occurs, j = 0, 1, ..., 255.
In the embodiment of the present invention, step S102 has already computed the feature values C(x, y) of all pixels in the N x N image region centered on the face key point Pi, and the frequency of occurrence of the feature values C(x, y) is counted with a frequency histogram. Since the feature value C(x, y) lies in the interval [0, 255], there are 256 possible feature values, so the frequency histogram can be represented by a 256-dimensional row vector giving the frequency with which the feature values of all pixels in the local image centered on the face key point Pi occur over [0, 255]. This row vector is the feature vector Vi of the face key point Pi, Vi = (v0, v1, ..., vj, ..., v255), where vj is the frequency with which the value j occurs, j = 0, 1, ..., 255.
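A minimal sketch of this aggregation step, assuming that "frequency" means the normalized count of each feature value (raw counts would serve equally well):

```python
import numpy as np

def patch_feature_vector(feature_values):
    """Build the 256-dimensional vector Vi from the feature values C(x, y)
    of one local image: the frequency of each value in [0, 255]."""
    counts = np.bincount(feature_values.ravel().astype(np.int64), minlength=256)
    return counts / feature_values.size   # normalized frequencies (assumption)
```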
Further, combining the feature vectors Vi of the n face key points Pi into a joint feature vector is specifically:
connecting the feature vectors Vi of the n face key points Pi together into a joint feature vector, the joint feature vector being the 256 x n dimensional feature vector [V1, V2, ..., Vn].
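For illustration, the concatenation can be sketched as a single NumPy call over the per-keypoint vectors produced above:

```python
import numpy as np

def joint_feature_vector(patch_vectors):
    """Concatenate the n per-keypoint vectors V1..Vn into one 256 * n vector."""
    return np.concatenate(patch_vectors)   # shape: (256 * n,)
```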
In step S104, the joint feature vector is classified with a classifier algorithm to identify the facial expression.
In the embodiment of the present invention, after the face key points are extracted, the local image around each face key point is obtained; the local image is filtered with m filters, the filter results are binarized, the resulting m-bit binary code is converted into a decimal feature value, and the feature values are aggregated with a frequency histogram to obtain a feature vector representing the facial expression features. By processing the local images in this way, the recognition accuracy is improved while the influence of illumination changes and facial pose changes on the recognition result is reduced.
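The text does not name a particular classifier algorithm. Purely as an illustration, the sketch below trains a linear SVM (scikit-learn) on placeholder data and classifies one joint feature vector; the training data, the seven expression classes, and the choice of a linear SVM are all assumptions, not the patent's prescription.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Placeholder training set: 100 joint feature vectors for n = 15 key points
# (256 * 15 = 3840 dimensions) with labels for 7 basic expressions. A real
# system would use vectors computed from a labelled expression database.
rng = np.random.default_rng(0)
X_train = rng.random((100, 3840))
y_train = rng.integers(0, 7, size=100)

clf = LinearSVC()
clf.fit(X_train, y_train)

# joint_vector stands for the 256 * n joint feature vector of a test image.
joint_vector = rng.random(3840)
print(clf.predict(joint_vector.reshape(1, -1)))   # predicted expression label
```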
Embodiment two:
Fig. 2 shows the implementation flow of the filtering and binarization of the local image provided by embodiment two of the present invention. The details are as follows:
In step S201, the local image is filtered with m filters Mi to obtain m values ai for each pixel in the local image, where m is a positive integer greater than or equal to 8 and i = 0, 1, ..., m-1.
In the embodiment of the present invention, since the feature value takes its value in the range 0 to 255, an 8-bit binary number is sufficient; therefore m = 8 is chosen and the local image is filtered with 8 filters Mi. Of course, in practical applications more than 8 filters can be chosen, in which case more than 8 binary bits are obtained. Taking filtering with 8 filters Mi as an example, the following 8 filters can be used (each Mi is a 3 x 3 mask written row by row, with rows separated by semicolons):
M0 = [-3 -3 5; -3 0 5; -3 -3 5],   M1 = [-3 5 5; -3 0 5; -3 -3 -3],   M2 = [5 5 5; -3 0 -3; -3 -3 -3],
M3 = [5 5 -3; 5 0 -3; -3 -3 -3],   M4 = [5 -3 -3; 5 0 -3; 5 -3 -3],   M5 = [-3 -3 -3; 5 0 -3; 5 5 -3],
M6 = [-3 -3 -3; -3 0 -3; 5 5 5],   M7 = [-3 -3 -3; -3 0 5; -3 5 5]
Correspondingly, 8 values are obtained for each pixel: a0, a1, a2, a3, a4, a5, a6, a7.
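For illustration, the eight masks above can be written out and applied with an ordinary 2-D correlation, as in the sketch below; the "nearest" border handling is an assumption, since the text does not state how the patch border is treated.

```python
import numpy as np
from scipy.ndimage import correlate

# The eight 3 x 3 masks M0..M7 listed above (Kirsch-style compass masks).
MASKS = [np.array(m) for m in [
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],   # M0
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],   # M1
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],   # M2
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],   # M3
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],   # M4
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],   # M5
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],   # M6
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],   # M7
]]

def filter_responses(patch):
    """Return the 8 filter responses a0..a7 for every pixel of a local image."""
    patch = patch.astype(np.float64)
    return np.stack([correlate(patch, m, mode="nearest") for m in MASKS])
```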
In step S202, the m values ai are binarized to obtain the feature value of each pixel.
Further, the step of binarizing the m values ai to obtain the feature value of each pixel includes:
binarizing the m values ai with a rectangular window function to obtain the m-bit binary data b0...bm-1;
converting the m-bit binary data b0...bm-1 into decimal data to obtain the feature value C(x, y) of each pixel, where 0 ≤ C(x, y) ≤ 255 and x, y are the pixel coordinates.
In the embodiment of the present invention, the 8 values a0, a1, a2, a3, a4, a5, a6, a7 are binarized with the rectangular window function to obtain the 8-bit binary data b0...b7, and the 8-bit binary data b0...b7 is converted into decimal data to obtain the feature value C(x, y) of each pixel, which takes one of 256 possible values from 0 to 255.
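Continuing the sketch, the binarization and decimal conversion can be written as follows. The zero threshold used for the rectangular window function and the choice of b0 as the least significant bit are assumptions; the text does not fix either explicitly.

```python
import numpy as np

def feature_values(responses):
    """Binarize the 8 responses a0..a7 of every pixel and pack the bits
    b0..b7 into a decimal feature value C(x, y) in [0, 255]."""
    bits = (responses >= 0).astype(np.uint8)                    # assumed threshold at 0
    weights = (1 << np.arange(bits.shape[0]))[:, None, None]    # b0 as LSB (assumption)
    return np.sum(bits * weights, axis=0).astype(np.uint8)      # C(x, y) per pixel
```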
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments can be completed by a program instructing the relevant hardware, and that the program can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk, or an optical disc.
Embodiment three:
Fig. 3 shows the structure of the facial expression recognition system provided by embodiment three of the present invention. For convenience of description, only the parts relevant to the embodiment of the present invention are shown.
The facial expression recognition system includes a local image acquiring unit 31, a feature value calculation unit 32, a joint feature vector combination unit 33, and an expression recognition unit 34, wherein:
the local image acquiring unit 31 is configured to extract n face key points Pi from a facial expression image and to obtain the local image in a set region around each face key point Pi, with the key point Pi as the center, where n is a preset positive integer and i = 1, 2, ..., n.
In the embodiment of the present invention, the key points of the facial features of the facial expression image are located, yielding the positions P1, P2, ..., Pn of n face key points on the image. In this embodiment 15 face key points are extracted, i.e. n = 15; in practical applications 5 to 100 face key points can be extracted as required. The face key points Pi are located in the key regions of the face, which include the eyebrows, eyes, nose, and mouth. Because of different facial poses and expressions, as well as different illumination and skin-color effects, the resulting facial expression images are usually complex and inconsistent, and relying only on the face key points for processing and recognition means the recognition result is inevitably affected by these factors. Therefore, in order to improve the recognition accuracy and reduce the influence of facial pose changes, illumination changes, and similar factors on the recognition result, the present invention performs the recognition on the local image obtained in a set region around each face key point Pi, with the key point as the center. Specifically, the local image is an N x N image region obtained with the face key point Pi as the center, where N can be set to the positive integer 5, 7, or 9 according to the computation load of the actual processing and the required precision of the recognition result.
The feature value calculation unit 32 is configured to successively filter and binarize the local image to obtain the feature values of all pixels in the local image.
In the embodiment of the present invention, after the local image around a face key point position is extracted, it is filtered with m filters, the filter results are binarized, and the resulting m-bit binary code is converted to decimal; the decimal value obtained is the feature value. After the above filtering and binarization, the feature values of all pixels in the local image are obtained.
The joint feature vector combination unit 33 is configured to aggregate the feature values of all pixels in the local image with a frequency histogram, obtain the feature vector Vi corresponding to each face key point Pi, and combine the feature vectors Vi of the n face key points Pi into a joint feature vector, i = 1, 2, ..., n.
Further, in the local image centered on the face key point Pi, the frequency with which the feature values of all pixels occur over the interval [0, 255] is counted, and the feature vector Vi = (v0, v1, ..., vj, ..., v255) corresponding to the face key point Pi is obtained, where vj is the frequency with which the value j occurs, j = 0, 1, ..., 255.
In the embodiment of the present invention, for the feature values C(x, y) of all pixels in the N x N image region centered on the face key point Pi, the frequency of occurrence of the feature values C(x, y) is counted with a frequency histogram. Since the feature value C(x, y) lies in the interval [0, 255], there are 256 possible feature values, so the frequency histogram can be represented by a 256-dimensional row vector giving the frequency with which the feature values of all pixels in the local image centered on the face key point Pi occur over [0, 255]. This row vector is the feature vector Vi of the face key point Pi, Vi = (v0, v1, ..., vj, ..., v255), where vj is the frequency with which the value j occurs, j = 0, 1, ..., 255.
Further, the feature vectors Vi of the n face key points Pi are connected together into a joint feature vector, the joint feature vector being the 256 x n dimensional feature vector [V1, V2, ..., Vn].
The expression recognition unit 34 is configured to classify the joint feature vector with a classifier algorithm and identify the facial expression.
In the embodiment of the present invention, after the face key points are extracted, the local image around each face key point is obtained; the local image is filtered with m filters, the filter results are binarized, the resulting m-bit binary code is converted into a decimal feature value, and the feature values are aggregated with a frequency histogram to obtain a feature vector representing the facial expression features. By processing the local images in this way, the recognition accuracy is improved while the influence of illumination changes and facial pose changes on the recognition result is reduced.
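Purely as an illustration of how the four units cooperate (reusing the helper functions extract_local_patch, filter_responses, feature_values, and patch_feature_vector from the earlier sketches), a minimal system sketch might look as follows; the landmark detector and the trained classifier are supplied from outside, since the text does not prescribe particular ones.

```python
import numpy as np

class FacialExpressionRecognitionSystem:
    """Sketch of the four units: local image acquisition, feature value
    calculation, joint feature vector combination and expression recognition."""

    def __init__(self, landmark_detector, classifier, N=9):
        self.landmark_detector = landmark_detector   # returns n key points Pi (assumption)
        self.classifier = classifier                 # trained classifier (assumption)
        self.N = N

    def recognise(self, image):
        keypoints = self.landmark_detector(image)                # face key points Pi
        vectors = []
        for kp in keypoints:
            patch = extract_local_patch(image, kp, self.N)       # local image acquiring unit
            values = feature_values(filter_responses(patch))     # feature value calculation unit
            vectors.append(patch_feature_vector(values))         # histogram vector Vi
        joint = np.concatenate(vectors)                          # joint feature vector combination unit
        return self.classifier.predict(joint.reshape(1, -1))     # expression recognition unit
```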
Embodiment four:
Fig. 4 shows the structure of the feature value calculation unit in the facial expression recognition system provided by embodiment four of the present invention. For convenience of description, only the parts relevant to the embodiment of the present invention are shown.
The feature value calculation unit 32 includes a filter unit 321 and a binarization unit 322, wherein:
the filter unit 321 is configured to filter the local image with m filters Mi to obtain m values ai for each pixel in the local image, where m is a positive integer greater than or equal to 8 and i = 0, 1, ..., m-1.
In the embodiment of the present invention, since the feature value takes its value in the range 0 to 255, an 8-bit binary number is sufficient; therefore m = 8 is chosen and the local image is filtered with 8 filters Mi. Of course, in practical applications more than 8 filters can be chosen, in which case more than 8 binary bits are obtained.
The binarization unit 322 is configured to binarize the m values ai and obtain the feature value of each pixel.
In the embodiment of the present invention, the 8 values a0, a1, a2, a3, a4, a5, a6, a7 are binarized with the rectangular window function to obtain the 8-bit binary data b0...b7, and the 8-bit binary data b0...b7 is converted into decimal data to obtain the feature value C(x, y) of each pixel, which takes one of 256 possible values from 0 to 255.
In the embodiments of the present invention, each unit can be realized by corresponding hardware or software; each unit can be an independent software or hardware unit, or the units can be integrated into a single software or hardware unit, which is not intended to limit the present invention.
The above description is only the preferred embodiments of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

1. A facial expression recognition method, characterized in that the method comprises the following steps:
extracting n face key points Pi from a facial expression image, and obtaining the local image in a set region around each face key point Pi, with the key point Pi as the center, where n is a preset positive integer and i = 1, 2, ..., n;
successively filtering and binarizing the local image to obtain the feature values of all pixels in the local image;
aggregating the feature values of all pixels in the local image with a frequency histogram to obtain the feature vector Vi corresponding to the face key point Pi, and combining the feature vectors Vi of the n face key points Pi into a joint feature vector, i = 1, 2, ..., n;
classifying the joint feature vector with a classifier algorithm to identify the facial expression.
2. The method as claimed in claim 1, characterized in that the step of successively filtering and binarizing the local image includes:
filtering the local image with m filters Mi to obtain m values ai for each pixel in the local image, where m is a positive integer greater than or equal to 8 and i = 0, 1, ..., m-1;
binarizing the m values ai to obtain the feature value of each pixel.
3. The method as claimed in claim 2, characterized in that the step of binarizing the m values ai to obtain the feature value of each pixel includes:
binarizing the m values ai with a rectangular window function to obtain the m-bit binary data b0...bm-1;
converting the m-bit binary data b0...bm-1 into decimal data to obtain the feature value C(x, y) of each pixel, where 0 ≤ C(x, y) ≤ 255 and x, y are the pixel coordinates.
4. The method as claimed in claim 1, characterized in that the step of aggregating the feature values of all pixels in the local image with a frequency histogram to obtain the feature vector Vi corresponding to the face key point Pi includes:
in the local image centered on the face key point Pi, counting the frequency with which the feature values of all pixels in the local image occur over the interval [0, 255];
obtaining the feature vector Vi = (v0, v1, ..., vj, ..., v255) corresponding to the face key point Pi, where vj is the frequency with which the value j occurs, j = 0, 1, ..., 255.
5. The method as claimed in claim 1, characterized in that the step of combining the feature vectors Vi of the n face key points Pi into a joint feature vector includes:
connecting the feature vectors Vi of the n face key points Pi together into a joint feature vector, the joint feature vector being the 256 x n dimensional feature vector [V1, V2, ..., Vn].
6. The method as claimed in claim 1, characterized in that the face key points Pi are located in the key regions of the face, the key regions of the face including the eyebrows, eyes, nose, and mouth.
7. The method as claimed in claim 1, characterized in that the step of obtaining the local image in a set region around the face key point Pi, with the key point Pi as the center, includes:
obtaining an N x N image region with the face key point Pi as the center, where N is the positive integer 5, 7, or 9.
8. A facial expression recognition system, characterized in that the system includes:
a local image acquiring unit, configured to extract n face key points Pi from a facial expression image and to obtain the local image in a set region around each face key point Pi, with the key point Pi as the center, where n is a preset positive integer and i = 1, 2, ..., n;
a feature value calculation unit, configured to successively filter and binarize the local image to obtain the feature values of all pixels in the local image;
a joint feature vector combination unit, configured to aggregate the feature values of all pixels in the local image with a frequency histogram, obtain the feature vector Vi corresponding to each face key point Pi, and combine the feature vectors Vi of the n face key points Pi into a joint feature vector, i = 1, 2, ..., n;
an expression recognition unit, configured to classify the joint feature vector with a classifier algorithm and identify the facial expression.
9. The system as claimed in claim 8, characterized in that the feature value calculation unit includes:
a filter unit, configured to filter the local image with m filters Mi to obtain m values ai for each pixel in the local image, where m is a positive integer greater than or equal to 8 and i = 0, 1, ..., m-1;
a binarization unit, configured to binarize the m values ai and obtain the feature value of each pixel.
10. The system as claimed in claim 8, characterized in that the face key points Pi are located in the key regions of the face, the key regions of the face including the eyebrows, eyes, nose, and mouth.
CN201610173246.8A 2016-03-24 2016-03-24 Facial expression recognition method and system Expired - Fee Related CN105825192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610173246.8A CN105825192B (en) 2016-03-24 2016-03-24 Facial expression recognition method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610173246.8A CN105825192B (en) 2016-03-24 2016-03-24 Facial expression recognition method and system

Publications (2)

Publication Number Publication Date
CN105825192A true CN105825192A (en) 2016-08-03
CN105825192B CN105825192B (en) 2019-06-25

Family

ID=56524544

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610173246.8A Expired - Fee Related CN105825192B (en) 2016-03-24 2016-03-24 Facial expression recognition method and system

Country Status (1)

Country Link
CN (1) CN105825192B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103729625A (en) * 2013-12-31 2014-04-16 青岛高校信息产业有限公司 Face identification method
CN104021372A (en) * 2014-05-20 2014-09-03 北京君正集成电路股份有限公司 Face recognition method and device thereof
CN104778472A (en) * 2015-04-24 2015-07-15 南京工程学院 Extraction method for facial expression feature
CN105117707A (en) * 2015-08-29 2015-12-02 电子科技大学 Regional image-based facial expression recognition method

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295566B (en) * 2016-08-10 2019-07-09 北京小米移动软件有限公司 Facial expression recognizing method and device
CN106295566A (en) * 2016-08-10 2017-01-04 北京小米移动软件有限公司 Facial expression recognizing method and device
CN106651302A (en) * 2016-11-30 2017-05-10 浙江水马环保科技有限公司 Intelligent PC attendance management and state monitoring method through water purifier
CN106845917A (en) * 2016-11-30 2017-06-13 浙江水马环保科技有限公司 A kind of intelligent APP Work attendance management systems based on water purifier
CN106845916A (en) * 2016-11-30 2017-06-13 浙江水马环保科技有限公司 A kind of intelligent APP attendance management and method for monitoring state based on water purifier
CN106845915A (en) * 2016-11-30 2017-06-13 浙江水马环保科技有限公司 A kind of water purifier intelligence PC Work attendance management systems
CN107481374A (en) * 2017-08-18 2017-12-15 深圳市益鑫智能科技有限公司 A kind of intelligent terminal unlocked by fingerprint door opener
CN107507175A (en) * 2017-08-18 2017-12-22 潘荣兰 A kind of device for being used to calculate Maize Leaf helminthosporium maydis scab occupied area ratio
CN107818785A (en) * 2017-09-26 2018-03-20 平安普惠企业管理有限公司 A kind of method and terminal device that information is extracted from multimedia file
CN107895146A (en) * 2017-11-01 2018-04-10 深圳市科迈爱康科技有限公司 Micro- expression recognition method, device, system and computer-readable recording medium
CN107944398A (en) * 2017-11-27 2018-04-20 深圳大学 Based on depth characteristic association list diagram image set face identification method, device and medium
CN111144348A (en) * 2019-12-30 2020-05-12 腾讯科技(深圳)有限公司 Image processing method, image processing device, electronic equipment and storage medium
WO2021135509A1 (en) * 2019-12-30 2021-07-08 腾讯科技(深圳)有限公司 Image processing method and apparatus, electronic device, and storage medium
CN112699797A (en) * 2020-12-30 2021-04-23 常州码库数据科技有限公司 Static facial expression recognition method and system based on joint feature pair relationship network
CN112699797B (en) * 2020-12-30 2024-03-26 常州码库数据科技有限公司 Static facial expression recognition method and system based on joint feature pair relational network
CN115840834A (en) * 2023-02-20 2023-03-24 深圳市视美泰技术股份有限公司 Method and system for rapidly searching face database

Also Published As

Publication number Publication date
CN105825192B (en) 2019-06-25

Similar Documents

Publication Publication Date Title
CN105825192A (en) Facial expression identification method and system
CN109409222B (en) Multi-view facial expression recognition method based on mobile terminal
CN109389074B (en) Facial feature point extraction-based expression recognition method
CN110363091B (en) Face recognition method, device and equipment under side face condition and storage medium
Chen et al. Total variation models for variable lighting face recognition
US20160371539A1 (en) Method and system for extracting characteristic of three-dimensional face image
US11216652B1 (en) Expression recognition method under natural scene
CN108830237B (en) Facial expression recognition method
Zhao et al. Applying contrast-limited adaptive histogram equalization and integral projection for facial feature enhancement and detection
Basha et al. Face gender image classification using various wavelet transform and support vector machine with various kernels
Kumar Arora et al. Optimal facial feature based emotional recognition using deep learning algorithm
HN et al. Human Facial Expression Recognition from static images using shape and appearance feature
Xue et al. Automatic 4D facial expression recognition using DCT features
Jachimski et al. A comparative study of English viseme recognition methods and algorithms
Ardiansyah et al. Systematic literature review: American sign language translator
CN111738050A (en) Method for processing image and electronic equipment
CN111209873A (en) High-precision face key point positioning method and system based on deep learning
Linda et al. Color-mapped contour gait image for cross-view gait recognition using deep convolutional neural network
Vani et al. Using the keras model for accurate and rapid gender identification through detection of facial features
Patil et al. Expression invariant face recognition using semidecimated DWT, Patch-LDSMT, feature and score level fusion
Eleyan Comparative study on facial expression recognition using gabor and dual-tree complex wavelet transforms
Ayatollahi et al. Expression-invariant face recognition using depth and intensity dual-tree complex wavelet transform features
Mali et al. Human gender classification using machine learning
CN111950403A (en) Iris classification method and system, electronic device and storage medium
Roy et al. A novel local wavelet energy mesh pattern (LWEMeP) for heterogeneous face recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190625
Termination date: 20210324