CN106096641B - Multi-modal emotional feature fusion method based on a genetic algorithm - Google Patents

Multi-modal emotional feature fusion method based on a genetic algorithm

Info

Publication number
CN106096641B
CN106096641B (application CN201610397707.XA)
Authority
CN
China
Prior art keywords
matrix
feature
column
modal
affective characteristics
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610397707.XA
Other languages
Chinese (zh)
Other versions
CN106096641A (en)
Inventor
Cheng Xiao (程晓)
Lu Guanming (卢官明)
Yan Jingjie (闫静杰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Posts and Telecommunications
Original Assignee
Nanjing University of Posts and Telecommunications
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Posts and Telecommunications
Priority: CN201610397707.XA
Publication of CN106096641A
Application granted
Publication of CN106096641B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/12: Computing arrangements based on biological models using genetic models
    • G06N3/126: Evolutionary algorithms, e.g. genetic algorithms or genetic programming
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Genetics & Genomics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Physiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a multi-modal emotional feature fusion method based on a genetic algorithm, belonging to the fields of signal processing and pattern recognition. The method comprises the following: establishing a multi-modal emotion database; extracting, for each sample in the database, the emotional features of each modality, such as facial expression features, speech emotion features, and body posture features; constructing a multi-modal emotional feature matrix; applying a genetic algorithm to the fusion of the features of the multiple modalities, including genetic-algorithm-based feature selection, crossover, and recombination; and finally performing F iterations of feature selection and fusion on the multi-modal emotional features with the genetic algorithm. Aimed at multi-modal emotion classification and recognition, the invention applies a genetic algorithm to feature-level fusion and thereby provides a new and effective approach to multi-modal emotion classification and recognition based on feature-level fusion.

Description

Multi-modal emotional feature fusion method based on a genetic algorithm
Technical field
The invention belongs to the fields of signal processing and pattern recognition, and in particular relates to a multi-modal emotional feature fusion method based on a genetic algorithm.
Background technique
In 1997, Professor Picard and colleagues at the MIT Media Lab established the world's first research group dedicated to affective computing, focusing on the acquisition and recognition of emotion signals. Carnegie Mellon University developed a wearable computer based on affective computing, devoted to its practical applications. In 2004, the National Natural Science Foundation of China for the first time included research on the theory and methods of affective computing among its key funded projects. In 2009, at the first national symposium on cognitive science, affective computing was for the first time listed as one of the frontier topics currently receiving attention in the field. In 2010, to promote research and development on this theme, the IEEE Computer Society launched a new international academic journal named IEEE Transactions on Affective Computing.
At present, single-modality emotion recognition is already highly developed. In multi-modal emotion recognition research, the most crucial part is the fusion of the features of the multiple modalities, and the quality of the fusion directly affects the final recognition performance. Many fusion methods now exist, such as principal component analysis (PCA), canonical correlation analysis (CCA), kernel canonical correlation analysis (KCCA), and kernel matrix fusion (KMF). Experimental comparison shows that the recognition rates obtained with these methods are lower than that obtained with the genetic-algorithm-based multi-modal emotional feature fusion method of the invention.
Among existing patent documents on multi-modal emotion recognition is granted patent CN102968643B, entitled "A multi-modal emotion recognition method based on Lie group theory". That invention first obtains emotion recognition rates for three different modalities (body, face, and hands), then makes a weighted decision on the final emotional state from the probabilities given by the three modal features. It is a decision-level fusion method, and it does not consider the relations between the features of different modalities, yet for the same emotion category the different modalities are necessarily related.
The genetic algorithm was first proposed by Professor J. Holland in the United States in 1975. It is an adaptive optimization method based on natural heredity and population evolution. An initial population, also called the parent generation, produces the first offspring generation through one operation cycle, which consists of three operators: selection, crossover, and mutation. The selection operator computes the fitness of the population according to the needs of the actual problem, screens out part of the individuals with low fitness, and keeps those with high fitness; this corresponds to feature selection in multi-modal emotion recognition, i.e., selecting the most discriminative features. The crossover operator is the core of the genetic algorithm: it recombines two individuals with crossover rate p1 and returns a new individual. There are many crossover methods, such as single-point, two-point, and multi-point crossover.
In the prior art, no attempt has yet been found to apply the above genetic algorithm to the fusion of multi-modal emotional features.
Summary of the invention
The technical problem to be solved by the invention is the low recognition accuracy in the prior art when computers and people communicate emotionally, so as to find a new way for human-machine interaction.
To this end, the invention proposes a multi-modal emotional feature fusion method based on a genetic algorithm, which uses feature-level fusion to overcome the inability of the prior art to achieve sufficiently accurate recognition, toward the goal of building friendly human-machine interfaces. The specific technical solution is as follows:
A multi-modal emotional feature fusion method based on a genetic algorithm, comprising the following steps:
Step 1: establish a multi-modal emotion database containing samples of L emotion classes, with n samples per class and N = nL samples in total.
Step 2: for each sample in the database, extract the emotional features of T different modalities, such as speech features, expression features, and posture features, where the features of the t-th modality are represented by a d_t-dimensional feature vector, t = 1, 2, ..., T; the multi-modal emotional feature vector of each sample thus has dimension M = d_1 + d_2 + ... + d_T.
Step 3: for the N samples in the database, construct a multi-modal emotional feature matrix A of size N*M, where matrix element a_{i,j,k} is the k-th feature value of the j-th sample feature vector belonging to the i-th emotion class, i = 1, 2, ..., L, j = 1, 2, ..., n, k = 1, 2, ..., M.
Step 4: use the genetic algorithm to perform F iterations of feature selection and fusion on the multi-modal emotional features.
Further, step 4 comprises the following sub-steps:
(1) For the k-th feature dimension, combine the feature values of the n samples belonging to the i-th emotion class into an array (a_{i,1,k}, a_{i,2,k}, ..., a_{i,n,k}), and compute the mean and variance of this array. Define the within-class/between-class distance between the p-th and q-th emotion classes as R_{p,q,k}, and define the fitness function used in the genetic algorithm as R_k. The fitness function values of the M feature dimensions can then be expressed as R_1, R_2, ..., R_M.
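As a concrete illustration, the per-column fitness of sub-step (1) can be sketched in Python. The patent's exact formulas for R_{p,q,k} and R_k appear only as images missing from this copy, so the Fisher-style ratio below (squared between-class mean distance over summed within-class variances, accumulated over all class pairs) is an assumed reconstruction consistent with the surrounding description, and the function name `column_fitness` is illustrative.

```python
import numpy as np

def column_fitness(A, labels):
    """Per-column fitness R_k. The exact patent formula is missing from
    this copy; this Fisher-style ratio is an assumed reconstruction."""
    classes = np.unique(labels)
    mu = np.array([A[labels == c].mean(axis=0) for c in classes])   # per-class means
    var = np.array([A[labels == c].var(axis=0) for c in classes])   # per-class variances
    R = np.zeros(A.shape[1])
    for p in range(len(classes)):
        for q in range(p + 1, len(classes)):
            # between-class distance over within-class variance for classes p, q
            R += (mu[p] - mu[q]) ** 2 / (var[p] + var[q] + 1e-12)
    return R

# Demo: 2 classes, 3 feature columns; column 0 is made strongly discriminative.
rng = np.random.default_rng(0)
labels = np.array([0] * 10 + [1] * 10)
A = rng.normal(size=(20, 3))
A[labels == 1, 0] += 10.0
R = column_fitness(A, labels)
```

Under this assumed form, columns that separate the emotion classes well receive large fitness values, which is the behaviour the selection step below relies on.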
(2) Define the probability ρ_s that the s-th column of the multi-modal emotional feature matrix A is selected. Given a uniform random number α in [0,1], the s-th column of A is selected if ρ_s ≥ α and discarded otherwise. Denote the selected multi-modal emotional feature matrix by B, a matrix of size N*m with m < M.
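A minimal sketch of this selection step follows. The definition of ρ_s is missing from this copy, so normalising the fitness values by their maximum (which guarantees the fittest column always survives) is an assumption, as is the helper name `select_columns`.

```python
import numpy as np

def select_columns(A, R, rng):
    """Keep column s of A when rho_s >= alpha, for one uniform random
    alpha in [0, 1]. Normalising fitness by its maximum is an assumption."""
    rho = R / R.max()                 # assumed: rho_s in (0, 1]
    alpha = rng.uniform(0.0, 1.0)     # a single uniform random number, as in the text
    keep = rho >= alpha
    return A[:, keep], keep

# Demo: column 3 has the highest fitness and is therefore always kept.
rng = np.random.default_rng(1)
R = np.array([1.0, 5.0, 2.0, 10.0])
A = np.arange(20.0).reshape(5, 4)
B, keep = select_columns(A, R, rng)
```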
(3) Apply the single-point crossover operator of the genetic algorithm to the feature matrix B, as follows:
Separate the even- and odd-numbered columns of B into a matrix B_even and a matrix B_odd of sizes N*m1 and N*m2 respectively, where m1 is the number of even-numbered columns of B and m2 is the number of odd-numbered columns.
Generate m1 random numbers in the interval [0,1] and compare each in turn with the preset crossover rate p1, returning the logical value "1" if it is less than p1 and "0" otherwise; this yields a logic vector of length m1, denoted y = (y_1, y_2, ..., y_{m1}). Generate another m1 random numbers d = (d_1, d_2, ..., d_{m1}) in [0,1], and compute the crossover position S_r of the r-th columns of B_even and B_odd (r = 1, 2, ..., m1) by
S_r = ((N-1) * y_r * d_r + N-1) % N + 1
where % denotes the modulo operation, applied to a vector element by element. Exchange the data after the S_r-th element of the r-th column of B_even with the data after the S_r-th element of the r-th column of B_odd, obtaining B'_even and B'_odd. Insert the columns of B'_odd one by one between successive columns of B'_even to form the crossed feature matrix H (when m is odd, the last column of H is identical to the last column of B). Compute the fitness function values of the crossed features according to the formula in sub-step (1); the fitness function values of the m feature dimensions can be expressed as R'_1, R'_2, ..., R'_m.
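The crossover can be sketched directly from the S_r formula above. Two details the text leaves implicit are treated as assumptions here: S_r is truncated to an integer index, and the crossed columns are interleaved back into B's original column order. Note that y_r = 0 yields S_r = N, so an uncrossed pair is left untouched, exactly as the formula intends.

```python
import numpy as np

def single_point_crossover(B, p1, rng):
    """Single-point crossover between paired even/odd columns of B
    (1-based), with cross point S_r = ((N-1)*y_r*d_r + N-1) % N + 1.
    Integer truncation of S_r and the re-interleaving order are assumptions."""
    N, m = B.shape
    Bodd = B[:, 0::2]     # 1-based odd columns 1, 3, 5, ...
    Beven = B[:, 1::2]    # 1-based even columns 2, 4, 6, ...
    m1 = Beven.shape[1]   # number of even/odd column pairs
    y = (rng.uniform(size=m1) < p1).astype(int)   # logic vector: cross this pair?
    d = rng.uniform(size=m1)
    S = (((N - 1) * y * d + (N - 1)) % N).astype(int) + 1  # y_r = 0 gives S_r = N: no swap
    Be, Bo = Beven.copy(), Bodd.copy()
    for r in range(m1):
        s = S[r]          # swap the data after element s of the paired columns
        Be[s:, r] = Bodd[s:, r]
        Bo[s:, r] = Beven[s:, r]
    H = np.empty_like(B)  # interleave B'_odd and B'_even back together
    H[:, 0::2] = Bo
    H[:, 1::2] = Be
    return H

# Demo: p1 = 1 crosses every pair; p1 = 0 leaves B unchanged.
rng = np.random.default_rng(2)
B = np.arange(24.0).reshape(6, 4)
H = single_point_crossover(B, 1.0, rng)
H_same = single_point_crossover(B, 0.0, rng)
```

When m is odd the last odd column has no even partner, is never touched by the loop, and so ends up identical in H, matching the remark in the text.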
(4) Recombine the initial multi-modal emotional feature matrix A with the crossed feature matrix H, as follows:
Set a replacement rate p2; the number γ of columns of A whose data are replaced with column data from H is p2 multiplied by m and then rounded. Construct the arrays Q1 = (R_1, R_2, ..., R_M) and Q2 = (R'_1, R'_2, ..., R'_m) and sort the elements of each in descending order, recording the positions after sorting as vectors. For example, a = (3, 5, 9, 7) sorts in descending order to b = (9, 7, 5, 3) with position information c = (3, 4, 2, 1): the 3rd element of a goes to position 1, the 4th to position 2, the 2nd to position 3, and the 1st to position 4, giving the sorted b. Using these position vectors, replace the data of the corresponding γ columns of A with the data of γ columns of H one by one, obtaining the recombined feature matrix A' of size N*M.
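The recombination can be sketched as follows. The formulas specifying which γ columns of H replace which columns of A are missing from this copy, so replacing the γ least-fit columns of A with the γ fittest columns of H is an assumed reading, and the helper name `recombine` is illustrative.

```python
import numpy as np

def recombine(A, H, R_A, R_H, p2):
    """Replace gamma = round(p2 * m) columns of A with columns of H.
    The least-fit-for-fittest mapping is an assumed reading of the source."""
    gamma = int(round(p2 * H.shape[1]))
    worst_in_A = np.argsort(R_A)[:gamma]         # gamma lowest-fitness columns of A
    best_in_H = np.argsort(R_H)[::-1][:gamma]    # gamma highest-fitness columns of H
    A_new = A.copy()
    A_new[:, worst_in_A] = H[:, best_in_H]
    return A_new

# Demo: with p2 = 0.66 and m = 3, gamma = 2 columns are replaced.
A = np.zeros((4, 5))
H = np.ones((4, 3))
R_A = np.array([0.9, 0.1, 0.8, 0.2, 0.7])
R_H = np.array([0.5, 0.3, 0.4])
A_new = recombine(A, H, R_A, R_H, 0.66)
```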
(5) When the iteration count reaches the preset value F, stop the loop and output the final fused features; otherwise return to sub-step (1) for the next iteration.
Further, the emotional features described in step 2 include speech features, expression features, and posture features.
Beneficial effects: the invention studies multi-modal emotion classification and recognition based on expression and speech, applying a genetic algorithm with selection and crossover operators to feature-level fusion. Experiments show that the recognition rates of the genetic-algorithm-based feature-level fusion method on the eNTERFACE and RML databases are 87.2% and 92.4% respectively, which is 15% and 11% higher than the single-modality rates; the method also achieves the highest recognition rate in comparison with kernel matrix fusion (KMF) and kernel canonical correlation analysis (KCCA), showing that applying a genetic algorithm to feature-level fusion is feasible and effective.
Detailed description of the invention
Fig. 1 is the flow chart of the genetic-algorithm-based multi-modal emotional feature fusion method of the invention.
Fig. 2 shows some of the facial expression images in the eNTERFACE database.
Specific embodiment
Specific embodiments of the invention are now described in further detail with reference to the drawings. As shown in Fig. 1, the implementation of the genetic-algorithm-based multi-modal emotional feature fusion method of the invention mainly comprises the following steps:
Step 1: obtain the eNTERFACE and RML bimodal emotion databases. As shown in Fig. 2, the eNTERFACE database contains 6 emotions: anger, disgust, fear, happiness, sadness, and surprise, recorded from 44 subjects. Because the sample counts of 2 of the subjects differ from those of the others, 42 subjects were chosen, each with 5 samples per emotion, for a total of 1260 samples. The RML database likewise contains 6 emotions, with 120 samples per class, 720 samples in total.
Step 2: process the bimodal databases, extracting expression and speech features for each sample:
(1) Extract key frames from each video and extract the Gabor features of the expression from the key frames, obtaining N d1-dimensional expression feature vectors, where N is the total number of samples and d1 is the expression feature dimension per key frame.
The Gabor function takes the usual complex-wavelet form, where (x, y) is the position of the pixel and the wavelet (wave) vector determines the centre frequency of the Gabor filter; the maximum frequency is set here to 0.25, and λ_n is the scale factor. A filter bank with 8 orientations m ∈ {1, 2, 3, 4, 5, 6, 7, 8} and 5 scales n ∈ {1, 2, 3, 4, 5} is chosen, containing 40 Gabor filters in all.
To extract the Gabor features of an image, the input colour image I(x, y, z) is first converted to a grey image J(x, y); J(x, y) is then convolved with the filter bank above, i.e., G_{m,n}(x, y) = J(x, y) * g_{m,n}(x, y), giving 40 complex values at each pixel. The 40 complex magnitudes are selected as the Gabor feature of that pixel.
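A minimal numpy sketch of the 40-filter bank follows. Several parameters are assumptions not fixed by the text as reproduced here: the 15x15 kernel window, λ = √2 for the scale factor, and a simplified Gaussian envelope that omits the DC-compensation term of the full Gabor wavelet; the function names are illustrative.

```python
import numpy as np

def gabor_kernel(m, n, size=15, f_max=0.25, lam=np.sqrt(2.0)):
    """Complex Gabor kernel for orientation m in {1..8}, scale n in {1..5}.
    Window size, lambda, and the simplified envelope are assumptions."""
    f = f_max / lam ** (n - 1)              # centre frequency of scale n
    theta = (m - 1) * np.pi / 8.0           # filter orientation
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the wave vector
    k = 2.0 * np.pi * f                     # wave number
    envelope = np.exp(-(k ** 2) * (x ** 2 + y ** 2) / 2.0)
    return envelope * np.exp(1j * k * xr)

def gabor_feature_at_centre(J, kernels):
    """40 complex magnitudes |J * g_{m,n}| at the centre pixel of grey image J."""
    feats = []
    for g in kernels:
        h = g.shape[0] // 2
        cy, cx = J.shape[0] // 2, J.shape[1] // 2
        patch = J[cy - h:cy + h + 1, cx - h:cx + h + 1]
        feats.append(abs(np.sum(patch * np.conj(g))))  # correlation at one pixel
    return np.array(feats)

# Demo: the 8-orientation x 5-scale bank of 40 filters on a random grey patch.
kernels = [gabor_kernel(m, n) for m in range(1, 9) for n in range(1, 6)]
J = np.random.default_rng(3).normal(size=(31, 31))
feat = gabor_feature_at_centre(J, kernels)
```

In practice the convolution would be applied over the whole key frame (e.g. via FFT) rather than at a single pixel; the single-pixel version above only illustrates how the 40 magnitudes per pixel arise.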
(2) Extract the audio file from each video and extract speech emotion features directly from the audio with the openSMILE toolkit using the emobase2010 configuration, obtaining N d2-dimensional speech feature vectors, where N is the total number of samples and d2 is the speech feature dimension per audio file.
The bimodal emotional feature vector of each sample then has dimension M = d1 + d2.
Step 3: for the N samples in the database, construct a multi-modal emotional feature matrix A of size N*M, where matrix element a_{i,j,k} is the k-th feature value of the j-th sample feature vector belonging to the i-th emotion class, i = 1, 2, ..., L, j = 1, 2, ..., n, k = 1, 2, ..., M.
Step 4: use the genetic algorithm to perform F iterations of feature selection and fusion on the multi-modal emotional features. This step comprises the following sub-steps:
(4.1) For the k-th feature dimension, combine the feature values of the n samples belonging to the i-th emotion class into an array (a_{i,1,k}, a_{i,2,k}, ..., a_{i,n,k}), and compute the mean and variance of this array. Define the within-class/between-class distance between the p-th and q-th emotion classes as R_{p,q,k}, and define the fitness function used in the genetic algorithm as R_k; the fitness function values of the M feature dimensions can then be expressed as R_1, R_2, ..., R_M.
(4.2) Define the probability ρ_s that the s-th column of the multi-modal emotional feature matrix A is selected. Given a uniform random number α in [0,1], the s-th column of A is selected if ρ_s ≥ α and discarded otherwise; denote the selected multi-modal emotional feature matrix by B, a matrix of size N*m with m < M.
(4.3) Apply the single-point crossover operator of the genetic algorithm to the feature matrix B, as follows: separate the even- and odd-numbered columns of B into matrices B_even and B_odd of sizes N*m1 and N*m2, where m1 is the number of even-numbered columns of B and m2 is the number of odd-numbered columns. Generate m1 random numbers in [0,1] and compare each in turn with the preset crossover rate p1, returning the logical value "1" if it is less than p1 and "0" otherwise, yielding a logic vector y = (y_1, y_2, ..., y_{m1}); generate another m1 random numbers d = (d_1, d_2, ..., d_{m1}) in [0,1] and compute the crossover position S_r of the r-th columns of B_even and B_odd (r = 1, 2, ..., m1) by
S_r = ((N-1) * y_r * d_r + N-1) % N + 1
where % denotes the modulo operation, applied to a vector element by element. Exchange the data after the S_r-th element of the r-th column of B_even with the data after the S_r-th element of the r-th column of B_odd, obtaining B'_even and B'_odd, and insert the columns of B'_odd one by one between successive columns of B'_even to form the crossed feature matrix H (when m is odd, the last column of H is identical to the last column of B). Compute the fitness function values of the crossed features according to the formula in step 4.1; the fitness function values of the m feature dimensions can be expressed as R'_1, R'_2, ..., R'_m.
(4.4) Recombine the initial multi-modal emotional feature matrix A with the crossed feature matrix H, as follows: set a replacement rate p2; the number γ of columns of A whose data are replaced with column data from H is p2 multiplied by m and then rounded. Construct the arrays Q1 = (R_1, R_2, ..., R_M) and Q2 = (R'_1, R'_2, ..., R'_m), sort the elements of each in descending order, and record the positions after sorting as vectors; using these position vectors, replace the data of the corresponding γ columns of A with the data of γ columns of H one by one, obtaining the recombined feature matrix A' of size N*M.
(4.5) When the iteration count reaches the preset value F, stop the loop and output the final fused features; otherwise return to step 4.1 for the next iteration.
Step 5: to remove data redundancy and reduce computational cost, reduce the dimensionality of the fused features with PCA. Before the reduction, normalise the fused data so that the values lie between 0 and 1. In the invention the PCA contribution rate is set to 0.99 on the eNTERFACE database and to 0.82 on the RML database.
Step 6: feed the dimensionality-reduced data into an SVM to obtain the classification and recognition results.
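Steps 5 and 6 can be sketched as a pipeline using scikit-learn, an assumed toolchain since the patent names only 0-1 normalisation, PCA with a contribution rate, and an SVM; the SVM kernel and parameters are left at library defaults because the source does not specify them, and the function name `build_classifier` is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def build_classifier(contribution_rate=0.99):
    """Normalise fused features to [0, 1], reduce with PCA keeping the
    given contribution (variance) rate, then classify with a default SVM."""
    return make_pipeline(
        MinMaxScaler(),                        # limit the data to the 0-1 range
        PCA(n_components=contribution_rate),   # a float in (0, 1) keeps that much variance
        SVC(),
    )

# Demo on toy data: two well-separated 10-dimensional classes.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1.0, (30, 10)), rng.normal(5.0, 1.0, (30, 10))])
y_lab = np.array([0] * 30 + [1] * 30)
clf = build_classifier().fit(X, y_lab)
```

Passing a float in (0, 1) as `n_components` makes scikit-learn's PCA keep just enough components to explain that fraction of the variance, which matches the patent's contribution rates of 0.99 and 0.82.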
Parts of the invention not described in detail belong to the common knowledge of those skilled in the art. The above is merely one specific embodiment of the invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the invention shall be included in its scope of protection.

Claims (3)

1. A multi-modal emotional feature fusion method based on a genetic algorithm, characterised by comprising the following steps:
(1.1) Establish a multi-modal emotion database containing samples of L emotion classes, with n samples per class and N = nL samples in total.
(1.2) For each sample in the database, extract the emotional features of T different modalities, where the features of the t-th modality are represented by a d_t-dimensional feature vector, t = 1, 2, ..., T; the multi-modal emotional feature vector of each sample thus has dimension M = d_1 + d_2 + ... + d_T.
(1.3) For the N samples in the database, construct a multi-modal emotional feature matrix A of size N*M, where matrix element a_{i,j,k} is the k-th feature value of the j-th sample feature vector belonging to the i-th emotion class, i = 1, 2, ..., L, j = 1, 2, ..., n, k = 1, 2, ..., M.
(1.4) Use the genetic algorithm on the combined multi-modal emotional feature matrix to perform F iterations of feature selection and fusion, specifically comprising the following sub-steps:
(1.4.1) For the k-th feature dimension, combine the feature values of the n samples belonging to the i-th emotion class into an array (a_{i,1,k}, a_{i,2,k}, ..., a_{i,n,k}), and compute the mean and variance of this array. Define the within-class/between-class distance between the p-th and q-th emotion classes as R_{p,q,k}, and define the fitness function used in the genetic algorithm as R_k; the fitness function values of the M feature dimensions can then be expressed as R_1, R_2, ..., R_M.
(1.4.2) Define the probability ρ_s that the s-th column of the multi-modal emotional feature matrix A is selected. Given a uniform random number α in [0,1], the s-th column of A is selected if ρ_s ≥ α and discarded otherwise; denote the selected multi-modal emotional feature matrix by B, a matrix of size N*m with m < M.
(1.4.3) Apply the single-point crossover operator of the genetic algorithm to the feature matrix B, as follows: separate the even- and odd-numbered columns of B into matrices B_even and B_odd of sizes N*m1 and N*m2, where m1 is the number of even-numbered columns of B and m2 is the number of odd-numbered columns. Generate m1 random numbers in [0,1] and compare each in turn with the preset crossover rate p1, returning the logical value "1" if it is less than p1 and "0" otherwise, yielding a logic vector y = (y_1, y_2, ..., y_{m1}); generate another m1 random numbers d = (d_1, d_2, ..., d_{m1}) in [0,1] and compute the crossover position S_r of the r-th columns of B_even and B_odd (r = 1, 2, ..., m1) by
S_r = ((N-1) * y_r * d_r + N-1) % N + 1
where % denotes the modulo operation, applied to a vector element by element. Exchange the data after the S_r-th element of the r-th column of B_even with the data after the S_r-th element of the r-th column of B_odd, obtaining B'_even and B'_odd, and insert the columns of B'_odd one by one between successive columns of B'_even to form the crossed feature matrix H. Compute the fitness function values of the crossed features according to the formula in step 1.4.1; the fitness function values of the m feature dimensions can be expressed as R'_1, R'_2, ..., R'_m.
(1.4.4) Recombine the initial multi-modal emotional feature matrix A with the crossed feature matrix H, as follows: set a replacement rate p2; the number γ of columns of A whose data are replaced with column data from H is p2 multiplied by m and then rounded. Construct the arrays Q1 = (R_1, R_2, ..., R_M) and Q2 = (R'_1, R'_2, ..., R'_m), sort the elements of each in descending order, and record the positions after sorting as vectors; using these position vectors, replace the data of the corresponding γ columns of A with the data of γ columns of H one by one, obtaining the recombined feature matrix A' of size N*M.
(1.4.5) When the iteration count reaches the preset value F, stop the loop and output the final fused features; otherwise return to step 1.4.1 for the next iteration.
2. The multi-modal emotional feature fusion method based on a genetic algorithm according to claim 1, characterised in that in step 1.4.3, when m is odd, the last column of the crossed feature matrix H is identical to the last column of the feature matrix B.
3. The multi-modal emotional feature fusion method based on a genetic algorithm according to claim 1, characterised in that the emotional features described in step 1.2 include speech features, expression features, and posture features.
CN201610397707.XA 2016-06-07 2016-06-07 Multi-modal emotional feature fusion method based on a genetic algorithm Active CN106096641B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610397707.XA CN106096641B (en) 2016-06-07 2016-06-07 Multi-modal emotional feature fusion method based on a genetic algorithm


Publications (2)

Publication Number Publication Date
CN106096641A CN106096641A (en) 2016-11-09
CN106096641B 2019-03-01

Family

ID=57227345


Country Status (1)

Country Link
CN (1) CN106096641B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107977630A * 2017-12-04 2018-05-01 Yang Shipeng (杨世鹏) Smile-type judging method based on human facial expression recognition
CN108216254B * 2018-01-10 2020-03-10 Shandong University Road-rage emotion recognition method based on the fusion of facial images and pulse information
CN109460737A * 2018-11-13 2019-03-12 Sichuan University Multi-modal speech emotion recognition method based on an enhanced residual neural network
CN109525892B * 2018-12-03 2021-09-10 易视腾科技股份有限公司 Video key scene extraction method and device
CN109492420B * 2018-12-28 2021-07-20 WeBank Co., Ltd. (深圳前海微众银行股份有限公司) Model parameter training method, terminal, system, and medium based on federated learning
CN109872728A * 2019-02-27 2019-06-11 Nanjing University of Posts and Telecommunications Speech and posture bimodal emotion recognition method based on kernel canonical correlation analysis
CN110119775B * 2019-05-08 2021-06-08 Tencent Technology (Shenzhen) Co., Ltd. Medical data processing method, apparatus, system, device, and storage medium
CN112220479A * 2020-09-04 2021-01-15 Chen Wanting (陈婉婷) Method, apparatus, and device for judging the emotion of an examined individual based on a genetic algorithm
CN112401886B * 2020-10-22 2023-01-31 Peking University Processing method, apparatus, and device for emotion recognition, and storage medium
CN112270972A * 2020-10-22 2021-01-26 Xinhuanet Co., Ltd. Emotion exchange information processing system
CN112668551B * 2021-01-18 2023-09-22 Shanghai University of International Business and Economics Expression classification method based on a genetic algorithm
CN112820071B * 2021-02-25 2023-05-05 Taikang Insurance Group Co., Ltd. Behavior recognition method and device
CN113887476A * 2021-10-19 2022-01-04 中用科技有限公司 Equipment health state signal acquisition and multi-domain feature fusion method
CN116578611B * 2023-05-16 2023-11-03 广州盛成妈妈网络科技股份有限公司 Knowledge management method and system for maternity knowledge

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103584872A (en) * 2013-10-29 2014-02-19 燕山大学 Psychological stress assessment method based on multi-physiological-parameter integration

Non-Patent Citations (4)

Title
Emotional Image and Musical Information Retrieval With Interactive Genetic Algorithm; Sung-Bae Cho; Proceedings of the IEEE; 2004-05-31; Vol. 92, No. 4; pp. 702-711
Facial expression recognition based on geometric features and subspace learning; Wang Jiang; China Master's Theses Full-text Database, Information Science and Technology; 2012-02-15; Vol. 2012, No. 02; I138-2336
Automatic road contour extraction from panoramic images based on multi-feature fusion and its applications; Xian Bingxi; China Master's Theses Full-text Database, Information Science and Technology; 2016-03-15; Vol. 2016, No. 03; I138-6800
Application of multi-modal expression recognition in the SEEE model; Qin Wei; China Master's Theses Full-text Database, Information Science and Technology; 2013-12-15; Vol. 2013, No. 12; I138-235

Also Published As

Publication number Publication date
CN106096641A (en) 2016-11-09

Similar Documents

Publication Publication Date Title
CN106096641B (en) A kind of multi-modal affective characteristics fusion method based on genetic algorithm
Wen et al. Ensemble of deep neural networks with probability-based fusion for facial expression recognition
CN103548041B (en) For determining the information processor of weight of each feature in subjective hierarchical clustering, methods and procedures
Rashid Convolutional neural networks based method for improving facial expression recognition
CN106250855B (en) Multi-core learning based multi-modal emotion recognition method
CN112784798A (en) Multi-modal emotion recognition method based on feature-time attention mechanism
CN109325443A (en) A kind of face character recognition methods based on the study of more example multi-tag depth migrations
CN109977232A (en) A kind of figure neural network visual analysis method for leading figure based on power
CN107609572A (en) Multi-modal emotion identification method, system based on neutral net and transfer learning
CN105354593B (en) A kind of threedimensional model sorting technique based on NMF
CN110046656A (en) Multi-modal scene recognition method based on deep learning
Nie et al. Adaptive local embedding learning for semi-supervised dimensionality reduction
CN110046550A (en) Pedestrian&#39;s Attribute Recognition system and method based on multilayer feature study
Li et al. MRMR-based ensemble pruning for facial expression recognition
CN112732921B (en) False user comment detection method and system
Pan et al. Multimodal emotion recognition based on facial expressions, speech, and EEG
Shi et al. Improving facial attractiveness prediction via co-attention learning
CN110084211A (en) A kind of action identification method
Zhang et al. Multiview unsupervised shapelet learning for multivariate time series clustering
Weiwei Classification of sport actions using principal component analysis and random forest based on three-dimensional data
Chen et al. Bibliometric analysis of the application of convolutional neural network in computer vision
Shen et al. A high-precision feature extraction network of fatigue speech from air traffic controller radiotelephony based on improved deep learning
Khosroshahi et al. Deep neural networks-based offline writer identification using heterogeneous handwriting data: an evaluation via a novel standard dataset
Wei et al. Learning facial expression and body gesture visual information for video emotion recognition
Hu et al. Hierarchical attention vision transformer for fine-grained visual classification

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20161109

Assignee: Nanjing Causal Artificial Intelligence Research Institute Co., Ltd.

Assignor: Nanjing Post & Telecommunication Univ.

Contract record no.: X2019320000168

Denomination of invention: Multi-modal affective characteristics fusion method based on genetic algorithm

Granted publication date: 20190301

License type: Common License

Record date: 20191028