CN108446601A - Face recognition method based on dynamic and static feature fusion - Google Patents

Face recognition method based on dynamic and static feature fusion

Info

Publication number
CN108446601A
Authority
CN
China
Prior art keywords
face
dynamic features
feature
static features
feature fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810163721.2A
Other languages
Chinese (zh)
Other versions
CN108446601B (en)
Inventor
帅立国
秦博豪
陈慧玲
王旭
张志胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201810163721.2A priority Critical patent/CN108446601B/en
Publication of CN108446601A publication Critical patent/CN108446601A/en
Application granted granted Critical
Publication of CN108446601B publication Critical patent/CN108446601B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face recognition method based on dynamic and static feature fusion, in particular a face recognition method combining static features and dynamic features. Static features emphasize the global contour: the face image features are treated as high-dimensional features and mapped by linear and nonlinear transformations into a low-dimensional subspace, and the resulting low-dimensional features of the original sample are classified. Dynamic features emphasize local variation, mainly according to facial expressions such as smiling or dejection: by extracting the dynamic features of facial muscle movement, a set of characteristic functions of muscle variation over time is obtained, enabling accurate recognition and classification and improving face recognition accuracy.

Description

Face recognition method based on dynamic and static feature fusion
Technical field
The present invention relates to a face recognition method based on dynamic and static feature fusion, and belongs to the technical field of face recognition.
Background technology
At present, traditional personal identification methods carry the risks of being inconvenient to carry, easy to lose, fragile, and easy to crack or steal. Face recognition has therefore received wide attention: it offers strong stability, concealment, distinctiveness between individuals, and security guarantees, and its fields of application keep widening, including security, civilian, and military uses. As a typical application of biometric recognition, face recognition has broad prospects in fields such as national defense, finance, justice, and commerce, and has received attention and approval from society. At the same time, recognition accuracy has become an important factor restricting the development of face recognition.
Face recognition typically encounters the small-sample problem: the number of available training samples is much smaller than the dimensionality of the face to be recognized. Small-sample data sets make it difficult for traditional feature extraction and classification methods to achieve strong robustness and a good recognition rate in face recognition. The face recognition method of this patent, which combines static features and dynamic features, can greatly improve the accuracy of face recognition.
Among similar patents, patent CN201010522281.9, a sparse-representation face recognition method based on multi-class classification, is a static-feature recognition method, different from the combination of static and dynamic features emphasized in this patent. Patent CN201510102708.2, a dynamic face recognition method and system, proposes a dynamic face recognition method, but the "dynamic" referred to there is capturing and tracking a person in motion; it is essentially static-feature recognition in which "dynamic" means the person's own movement. The muscular-movement features of the human body are directly related to the accumulation of long-term movement and have distinct personal characteristics; muscle features are formed by long-term habits, are hard to imitate, and are pronounced. By proposing a method that combines static and dynamic features, the present invention can effectively improve the accuracy of face recognition without affecting speed.
Summary of the invention
The present invention provides a face recognition method based on dynamic and static feature fusion. By combining the global contour and local dynamic features through two parts, static features and dynamic features, it can greatly improve face recognition accuracy without affecting recognition speed, solving the problem of low accuracy in current face recognition.
The technical solution adopted by the present invention to solve the technical problem is as follows:
A face recognition method based on dynamic and static feature fusion, which improves face recognition accuracy by combining static features and dynamic features;
As a further optimization of the present invention, the aforementioned static features are the overall contour features extracted from the face, and the aforementioned dynamic features are the muscle features extracted when the facial expression changes;
As a further optimization of the present invention, the method includes the following steps:
Step A, static feature extraction, which specifically includes the following sub-steps:
Step A1: obtain a video stream from a camera or from a pre-stored video file;
Step A2: intercept key frames from the obtained video stream;
Step A3: obtain facial contour features from the image information of the key frames by combining principal component analysis, independent component analysis, and linear discriminant analysis;
Step A4: process the aforementioned facial contour features with a gradient-image algorithm to obtain high-dimensional feature data, and process them linearly or nonlinearly with binarization and histogram methods to transform them into low-dimensional feature data;
Step A5: perform similarity measurement, i.e. feature matching, on the high-dimensional and low-dimensional feature data to obtain one or more matching candidate results for the static features;
Step B, dynamic feature extraction, which specifically includes the following sub-steps:
Step B1: obtain a video stream from a camera or from a pre-stored video file;
Step B2: extract the dynamic features in the video stream using optical-flow and frame-difference methods to determine the target region;
Step B3: select the required face window from the target region and establish a local window;
Step B4: binarize the image of the local window, extract dynamic contour features, and transform the obtained contour feature information into an action sequence using a pyramid match kernel or a sliding-window algorithm, so as to build an expression action sequence;
Step B5: generate action vector information for matching from the aforementioned expression action sequence; dynamic feature extraction captures facial expression changes, and an action model is established by specifying expressions and extracting the corresponding dynamic muscle changes of the face; finally, the aforementioned action vectors are matched against the action model;
Step C: merge the result sets of the one or more candidate results from static-feature matching and the action vectors obtained from dynamic matching, verify the static result set against the dynamic result set, reject erroneous results, obtain the final recognition result, give the recognition confidence, and complete the recognition operation;
Step D: if the confidence does not meet the requirement, restart the entire recognition process;
As a further optimization of the present invention, in the aforementioned extraction of dynamic features from the video stream using optical-flow and frame-difference methods, optical flow refers to the optical-flow information method; dynamic-feature extraction methods further include the spatio-temporal interest point method and local descriptor operators;
As a further optimization of the present invention, static-feature extraction methods include, but are not limited to, principal component analysis, independent component analysis, and linear discriminant analysis.
Through the above technical solution, compared with the prior art, the present invention has the following beneficial effects:
By combining the global contour with local dynamic features, the accuracy of face recognition is improved and the confidence of face recognition is greatly increased; face recognition can therefore also be introduced into industrial fields to improve production efficiency.
Description of the drawings
The present invention will be further explained below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flow chart of the face recognition algorithm of the present invention;
Fig. 2 is a schematic diagram of dynamic feature capture in face recognition according to the present invention;
Fig. 3 is a schematic diagram of the expression series built by the present invention.
Detailed description of the embodiments
The present invention is further explained in detail below in conjunction with the accompanying drawings. These drawings are simplified schematic diagrams that illustrate the basic structure of the invention only by way of illustration, and therefore show only the components relevant to the invention.
The muscular-movement features of the human body are directly related to the accumulation of long-term movement and have distinct personal characteristics; muscle features are formed by long-term habits, are hard to imitate, and are pronounced.
The present invention provides a face recognition method based on dynamic and static feature fusion, which improves face recognition accuracy by combining static features and dynamic features;
As a further optimization of the present invention, the aforementioned static features are the overall contour features extracted from the face, and the aforementioned dynamic features are the muscle features extracted when the facial expression changes;
In application, the face recognition method based on dynamic and static feature fusion of the present invention is divided into a learning process and a recognition process;
In the learning process, the ELM (Extreme Learning Machine) algorithm may be used to improve the real-time performance and accuracy of the face recognition system. The learning process is likewise divided into a static-feature part and a dynamic-feature part. The static part includes intercepting key frames from the obtained video stream, then obtaining facial contour features from the acquired image information using static analysis methods such as principal component analysis, independent component analysis, and linear discriminant analysis, obtaining high-dimensional feature data through gradient-image and similar algorithms, and then obtaining and storing low-dimensional feature data through linear or nonlinear transformations such as binarization and histograms;
Dynamic feature learning includes extracting the dynamic features of the video from the obtained video stream using methods such as optical flow and frame differencing, selecting the required face window, binarizing the image, extracting contour features, building an action sequence from the resulting dynamic contour information with pyramid match kernel or sliding-window algorithms, and finally converting it into action vectors for storage. In the learning process of converting to action vectors, methods such as the RBM (Restricted Boltzmann Machine) and DBN (Deep Belief Network) may be used to accelerate convergence.
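The patent names ELM, RBM, and DBN only at this level of generality. As a purely illustrative sketch of the learning stage (Python with NumPy assumed; the class and all parameter choices are hypothetical, not taken from the patent), an Extreme Learning Machine classifier over the stored feature vectors could look like this: a random hidden layer followed by a closed-form least-squares solution for the output weights, which is what gives ELM its fast, real-time-friendly training.

```python
import numpy as np

class ELM:
    """Minimal Extreme Learning Machine: random hidden layer,
    output weights solved in closed form by least squares."""

    def __init__(self, n_hidden=256, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Fixed random projection followed by a sigmoid activation.
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

    def fit(self, X, y_onehot):
        # X: (n_samples, n_features) feature vectors from the learning
        # stage; y_onehot: (n_samples, n_identities) one-hot labels.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # beta = pinv(H) @ T: the only trained parameters in an ELM.
        self.beta = np.linalg.pinv(H) @ y_onehot
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)
```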
As shown in Figure 1, the recognition process includes the following steps:
Step A, static feature extraction, which specifically includes the following sub-steps (a sketch of this static pipeline is given below, after the step list):
Step A1: obtain a video stream from a camera or from a pre-stored video file;
Step A2: intercept key frames from the obtained video stream;
Step A3: obtain facial contour features from the image information of the key frames by combining principal component analysis, independent component analysis, and linear discriminant analysis;
Step A4: process the aforementioned facial contour features with a gradient-image algorithm to obtain high-dimensional feature data, and process them linearly or nonlinearly with binarization and histogram methods to transform them into low-dimensional feature data;
Step A5: perform similarity measurement, i.e. feature matching, on the high-dimensional and low-dimensional feature data to obtain one or more matching candidate results for the static features;
Step B, dynamic feature extraction, which specifically includes the following sub-steps (a sketch of the optical-flow step is likewise given after the step list):
Step B1: obtain a video stream from a camera or from a pre-stored video file;
Step B2: extract the dynamic features in the video stream using optical-flow and frame-difference methods to determine the target region;
Step B3: select the required face window from the target region and establish a local window;
Step B4: binarize the image of the local window, extract dynamic contour features, and transform the obtained contour feature information into an action sequence using a pyramid match kernel or a sliding-window algorithm, so as to build an expression action sequence;
Step B5: generate action vector information for matching from the aforementioned expression action sequence; as shown in Fig. 3, dynamic feature extraction captures facial expression changes, and an action model is established by specifying expressions and extracting the corresponding dynamic muscle changes of the face; finally, the aforementioned action vectors are matched against the action model;
Step C: merge the result sets of the one or more candidate results from static-feature matching and the action vectors obtained from dynamic matching, verify the static result set against the dynamic result set, reject erroneous results, obtain the final recognition result, give the recognition confidence, and complete the recognition operation (a sketch of this fusion step follows directly below);
Step D: if the confidence does not meet the requirement, restart the entire recognition process;
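The patent does not specify how the static candidate set and the dynamic match are combined in Steps C and D. The following minimal sketch assumes a simple weighted average of static and dynamic match scores with a confidence threshold; the function name, the 0.5/0.5 weighting, and the 0.8 threshold are all illustrative assumptions, not the patent's method.

```python
def fuse_results(static_candidates, static_scores, dynamic_scores,
                 threshold=0.8):
    """Verify static candidates against dynamic action-vector matching.

    static_candidates: identity indices from static feature matching;
    static_scores: their similarity scores;
    dynamic_scores: per-identity action-model match scores.
    Returns (identity, confidence), or None when the confidence fails
    the requirement, in which case recognition restarts (Step D).
    """
    confidence = {c: 0.5 * s + 0.5 * dynamic_scores[c]
                  for c, s in zip(static_candidates, static_scores)}
    best = max(confidence, key=confidence.get)
    if confidence[best] < threshold:
        return None  # confidence not met: restart the whole process
    return best, confidence[best]
```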
Static features mainly comprise the overall contour of the face, while dynamic features mainly comprise the muscle features when the facial expression changes. As shown in Fig. 2, static features mainly extract the overall contour of the face, and face matching is carried out by contour features. In the aforementioned extraction of dynamic features from the video stream using optical-flow and frame-difference methods, optical flow refers to the optical-flow information method; dynamic-feature extraction methods include, but are not limited to, the optical-flow information method, the spatio-temporal interest point method, and local descriptor operators;
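To make the optical-flow step concrete, here is a minimal sketch (assuming Python with OpenCV and NumPy; the window coordinates, frame count, and Farneback parameters are illustrative choices, not values from the patent) that turns the flow inside a local face window into a crude per-frame action vector:

```python
import cv2
import numpy as np

def action_vector(video_path, window=(100, 100, 200, 200), n_frames=30):
    """Summarize muscle motion in a face window as an 'action vector':
    mean optical-flow magnitude and direction per consecutive frame pair."""
    x, y, w, h = window  # hypothetical local face window (x, y, width, height)
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("cannot read video stream")
    prev = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    vec = []
    for _ in range(n_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        # Dense Farneback optical flow between consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        vec.append((mag.mean(), ang.mean()))
        prev = gray
    cap.release()
    return np.asarray(vec).ravel()  # flattened time series of motion
```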
As a further optimization of the present invention, static-feature extraction methods include, but are not limited to, principal component analysis, independent component analysis, and linear discriminant analysis.
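Similarly, the static branch can be sketched with PCA standing in for the PCA/ICA/LDA combination described above (scikit-learn and NumPy assumed; the single PCA, cosine similarity, and the gallery/probe names are simplifying assumptions, not the patent's exact procedure):

```python
import numpy as np
from sklearn.decomposition import PCA

def match_static(gallery, probe, n_components=50, top_k=3):
    """Map high-dimensional face vectors into a low-dimensional subspace
    and return the top-k gallery matches by cosine similarity.

    gallery: (n_faces, n_pixels) flattened gallery face images;
    probe: (n_pixels,) flattened face image to identify.
    """
    pca = PCA(n_components=n_components).fit(gallery)
    g = pca.transform(gallery)            # low-dimensional gallery features
    p = pca.transform(probe[None, :])[0]  # low-dimensional probe feature
    sims = (g @ p) / (np.linalg.norm(g, axis=1) * np.linalg.norm(p) + 1e-9)
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]             # candidate indices and scores
```

The returned candidate indices and scores would then feed the fusion step sketched after the recognition steps above.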
The face recognition method based on dynamic and static feature fusion proposed by the invention is a new face recognition method that combines space and time: on the spatial dimension, the acquired image is expressed by the facial contour; on the temporal dimension, the person's expression is abstracted into vector information. Matching and computing these two groups of information can greatly improve the accuracy of face recognition.
Those skilled in the art will appreciate that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meanings in the context of the prior art and, unless defined as here, are not to be interpreted in an idealized or overly formal sense.
The meaning of "and/or" described herein includes the case where either exists alone and the case where both exist simultaneously.
The meaning of "connection" described herein includes both a direct connection between components and an indirect connection between components through other components.
Taking the above ideal embodiments of the present invention as enlightenment, and through the above description, those skilled in the art can make various changes and modifications without departing from the scope of the technical idea of the present invention. The technical scope of this invention is not limited to the contents of the specification and must be determined according to the scope of the claims.

Claims (5)

1. A face recognition method based on dynamic and static feature fusion, characterized in that: face recognition accuracy is improved by a method combining static features and dynamic features.
2. The face recognition method based on dynamic and static feature fusion according to claim 1, characterized in that: the aforementioned static features are the overall contour features extracted from the face, and the aforementioned dynamic features are the muscle features extracted when the facial expression changes.
3. The face recognition method based on dynamic and static feature fusion according to claim 2, characterized by comprising the following steps:
Step A, static feature extraction, which specifically includes the following sub-steps:
Step A1: obtain a video stream from a camera or from a pre-stored video file;
Step A2: intercept key frames from the obtained video stream;
Step A3: obtain facial contour features from the image information of the key frames by combining principal component analysis, independent component analysis, and linear discriminant analysis;
Step A4: process the aforementioned facial contour features with a gradient-image algorithm to obtain high-dimensional feature data, and process them linearly or nonlinearly with binarization and histogram methods to transform them into low-dimensional feature data;
Step A5: perform similarity measurement, i.e. feature matching, on the high-dimensional and low-dimensional feature data to obtain one or more matching candidate results for the static features;
Step B, dynamic feature extraction, which specifically includes the following sub-steps:
Step B1: obtain a video stream from a camera or from a pre-stored video file;
Step B2: extract the dynamic features in the video stream using optical-flow and frame-difference methods to determine the target region;
Step B3: select the required face window from the target region and establish a local window;
Step B4: binarize the image of the local window, extract dynamic contour features, and transform the obtained contour feature information into an action sequence using a pyramid match kernel or a sliding-window algorithm, so as to build an expression action sequence;
Step B5: generate action vector information for matching from the aforementioned expression action sequence; dynamic feature extraction captures facial expression changes, and an action model is established by specifying expressions and extracting the corresponding dynamic muscle changes of the face; finally, the aforementioned action vectors are matched against the action model;
Step C: merge the result sets of the one or more candidate results from static-feature matching and the action vectors obtained from dynamic matching, verify the static result set against the dynamic result set, reject erroneous results, obtain the final recognition result, give the recognition confidence, and complete the recognition operation;
Step D: if the confidence does not meet the requirement, restart the entire recognition process.
4. The face recognition method based on dynamic and static feature fusion according to claim 3, characterized in that: in the aforementioned extraction of dynamic features from the video stream using optical-flow and frame-difference methods, optical flow refers to the optical-flow information method, and dynamic-feature extraction methods further include the spatio-temporal interest point method or local descriptor operators.
5. The face recognition method based on dynamic and static feature fusion according to claim 3, characterized in that: static-feature extraction methods include, but are not limited to, principal component analysis, independent component analysis, or linear discriminant analysis.
CN201810163721.2A 2018-02-27 2018-02-27 Face recognition method based on dynamic and static feature fusion Active CN108446601B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810163721.2A CN108446601B (en) 2018-02-27 2018-02-27 Face recognition method based on dynamic and static feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810163721.2A CN108446601B (en) 2018-02-27 2018-02-27 Face recognition method based on dynamic and static feature fusion

Publications (2)

Publication Number Publication Date
CN108446601A true CN108446601A (en) 2018-08-24
CN108446601B CN108446601B (en) 2021-07-13

Family

ID=63192521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810163721.2A Active CN108446601B (en) 2018-02-27 2018-02-27 Face recognition method based on dynamic and static feature fusion

Country Status (1)

Country Link
CN (1) CN108446601B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216889A (en) * 2008-01-14 2008-07-09 浙江大学 A face image super-resolution method with the amalgamation of global characteristics and local details information
CN101388075B (en) * 2008-10-11 2011-11-16 大连大学 Human face identification method based on independent characteristic fusion
CN102024141A (en) * 2010-06-29 2011-04-20 上海大学 Face recognition method based on Gabor wavelet transform and local binary pattern (LBP) optimization
CN103136730B (en) * 2013-01-25 2015-06-03 西安理工大学 Fusion method of content optical flow and contour feature dynamic structure in video images
CN103279745B (en) * 2013-05-28 2016-07-06 东南大学 Face recognition method based on half-face multi-feature fusion
CN104200804A (en) * 2014-09-19 2014-12-10 合肥工业大学 Various-information coupling emotion recognition method for human-computer interaction
CN104408440A (en) * 2014-12-10 2015-03-11 重庆邮电大学 Identification method for human facial expression based on two-step dimensionality reduction and parallel feature fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
WANG X et al.: "A coordinated real-time optimal dispatch method for unbundled electricity markets", IEEE Transactions on Power Systems *
SUN Rui et al.: "Sparse target tracking method using local linear embedding", Journal of Electronic Measurement and Instrumentation *
GONG Yujiao: "Research on expression recognition based on fusion of geometric and appearance features", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110162664A (en) * 2018-12-17 2019-08-23 腾讯科技(深圳)有限公司 Video recommendation method, device, computer equipment and storage medium
WO2020156245A1 (en) * 2019-01-29 2020-08-06 广州市百果园信息技术有限公司 Action recognition method, apparatus and device and storage medium
CN110245593B (en) * 2019-06-03 2021-08-03 浙江理工大学 Gesture image key frame extraction method based on image similarity
CN110245593A (en) * 2019-06-03 2019-09-17 浙江理工大学 Gesture image key frame extraction method based on image similarity
CN110427825A (en) * 2019-07-01 2019-11-08 上海宝钢工业技术服务有限公司 Video flame recognition method based on fusion of key frames and fast support vector machine
CN110427825B (en) * 2019-07-01 2023-05-12 上海宝钢工业技术服务有限公司 Video flame identification method based on fusion of key frame and fast support vector machine
WO2021068613A1 (en) * 2019-10-12 2021-04-15 深圳壹账通智能科技有限公司 Face recognition method and apparatus, device and computer-readable storage medium
CN111508105A (en) * 2019-12-25 2020-08-07 南通市海王电气有限公司 Access control system of intelligent power distribution cabinet
WO2021217856A1 (en) * 2020-04-30 2021-11-04 平安科技(深圳)有限公司 Face image generation method and apparatus, electronic device, and readable storage medium
CN111680639A (en) * 2020-06-11 2020-09-18 支付宝(杭州)信息技术有限公司 Face recognition verification method and device and electronic equipment
CN111860400A (en) * 2020-07-28 2020-10-30 平安科技(深圳)有限公司 Face enhancement recognition method, device, equipment and storage medium
WO2021139171A1 (en) * 2020-07-28 2021-07-15 平安科技(深圳)有限公司 Facial enhancement based recognition method, apparatus and device, and storage medium
CN111860400B (en) * 2020-07-28 2024-06-07 平安科技(深圳)有限公司 Face enhancement recognition method, device, equipment and storage medium
CN112749657A (en) * 2021-01-07 2021-05-04 北京码牛科技有限公司 House renting management method and system
CN113642446A (en) * 2021-08-06 2021-11-12 湖南检信智能科技有限公司 Detection method and device based on face dynamic emotion recognition
CN114299602A (en) * 2021-11-09 2022-04-08 北京九州安华信息安全技术有限公司 Micro-amplitude motion image processing method
CN115249393A (en) * 2022-05-09 2022-10-28 深圳市麦驰物联股份有限公司 Identity authentication access control system and method
WO2023226239A1 (en) * 2022-05-24 2023-11-30 网易(杭州)网络有限公司 Object emotion analysis method and apparatus and electronic device

Also Published As

Publication number Publication date
CN108446601B (en) 2021-07-13

Similar Documents

Publication Publication Date Title
CN108446601A (en) A kind of face identification method based on sound Fusion Features
CN107145842B (en) Face recognition method combining LBP characteristic graph and convolutional neural network
CN109325443A (en) A kind of face character recognition methods based on the study of more example multi-tag depth migrations
Sun et al. ROI-attention vectorized CNN model for static facial expression recognition
CN109344759A (en) A kind of relatives' recognition methods based on angle loss neural network
CN113822192A (en) Method, device and medium for identifying emotion of escort personnel based on Transformer multi-modal feature fusion
Bu Human motion gesture recognition algorithm in video based on convolutional neural features of training images
CN111401105B (en) Video expression recognition method, device and equipment
Xu et al. Intelligent emotion detection method based on deep learning in medical and health data
CN111028319A (en) Three-dimensional non-photorealistic expression generation method based on facial motion unit
CN108960171B (en) Method for converting gesture recognition into identity recognition based on feature transfer learning
Varsha et al. Indian sign language gesture recognition using deep convolutional neural network
Gajurel et al. A fine-grained visual attention approach for fingerspelling recognition in the wild
Chauhan et al. Analysis of Intelligent movie recommender system from facial expression
Su et al. Nesterov accelerated gradient descent-based convolution neural network with dropout for facial expression recognition
CN114022687A (en) Image description countermeasure generation method based on reinforcement learning
CN110826397B (en) Video description method based on high-order low-rank multi-modal attention mechanism
CN113627218A (en) Figure identification method and device based on video data
CN116662924A (en) Aspect-level multi-mode emotion analysis method based on dual-channel and attention mechanism
Yi et al. STAN: spatiotemporal attention network for video-based facial expression recognition
Khellas et al. Alabib-65: A realistic dataset for algerian sign language recognition
Wang et al. RETRACTED ARTICLE: Human behaviour recognition and monitoring based on deep convolutional neural networks
Chen An analysis of Mandarin emotional tendency recognition based on expression spatiotemporal feature recognition
CN113191135A (en) Multi-category emotion extraction method fusing facial characters
CN114613016A (en) Gesture image feature extraction method based on Xscene network improvement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant