CN112579815A - Real-time training method for expression database and feedback mechanism for expression database - Google Patents
- Publication number
- CN112579815A CN112579815A CN202011575777.2A CN202011575777A CN112579815A CN 112579815 A CN112579815 A CN 112579815A CN 202011575777 A CN202011575777 A CN 202011575777A CN 112579815 A CN112579815 A CN 112579815A
- Authority
- CN
- China
- Prior art keywords
- expression
- facial feature
- real
- feature points
- facial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/58—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
- G06F16/583—Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1118—Determining activity level
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1121—Determining geometric values, e.g. centre of rotation or angular range of movement
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/1126—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/50—Information retrieval; Database structures therefor; File system structures therefor of still image data
- G06F16/51—Indexing; Data structures therefor; Storage structures
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Public Health (AREA)
- General Health & Medical Sciences (AREA)
- Animal Behavior & Ethology (AREA)
- Medical Informatics (AREA)
- Heart & Thoracic Surgery (AREA)
- Veterinary Medicine (AREA)
- Biomedical Technology (AREA)
- Physiology (AREA)
- Pathology (AREA)
- Biophysics (AREA)
- Artificial Intelligence (AREA)
- Dentistry (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Library & Information Science (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Signal Processing (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Psychiatry (AREA)
- Fuzzy Systems (AREA)
- Geometry (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a real-time training method for an expression database, comprising the following steps. S1, pre-recording: before a performance, an actor pre-records expression coordinate sequences for all expressions in the expression database. S2, predetermined range: a predetermined range is obtained by big-data analysis of the values of the multidirectional tension sensors corresponding to the facial feature point positions of each expression. S3, real-time expression acquisition: while performing, the actor makes different expressions, generating sets of expression coordinate sequences consistent with the facial feature point positions in step S1. S4, comparison and screening: the values of the multidirectional tension sensors corresponding to the facial feature point positions of each expression coordinate sequence are compared against the predetermined ranges and screened. S5, multi-sample training: the predetermined range of each facial feature point is continuously refined by the large number of expression coordinate sequences. By capturing the motion state of the facial muscles in real time during performance, the predetermined range of each muscle for each of the actor's expressions is continuously updated and made precise, thereby training the database.
Description
Technical Field
The invention relates to the technical field of database updating, in particular to a real-time training method of an expression database and a feedback mechanism of the expression database.
Background
Facial expression recognition has become a prominent research topic in recent years. Expressions are the most effective channel in human emotional communication and contain a great deal of personal behavioral information. Facial expressions typically include happiness, sadness, anger, fear, surprise, and disgust, among others. Establishing a facial expression database can facilitate later-stage film and television special effects and character facial animation, and reduce the cost of post-production image processing.
Traditional facial expression databases are built on neural-network-based facial expression recognition algorithms, which require a large number of training pictures. Because the training pictures are usually collected manually, their number is greatly limited, resulting in insufficient training samples. Such databases concentrate on studying facial expression characteristics in static images, whose overall features cannot describe facial detail information; the detail characteristics of facial expressions are therefore easily lost, the trained features lose the detail information of the various expressions, and the expression recognition effect is poor.
Disclosure of Invention
The invention overcomes the defects of the prior art and provides a real-time training method of an expression database and a feedback mechanism of the expression database.
In order to achieve the above purpose, the first technical scheme adopted by the invention is as follows: a real-time training method for an expression database, characterized in that it is a real-time training method based on dynamic capture of the movement of specific facial muscles; facial feature points are set on the specific facial muscles, and multidirectional tension sensors are adhered to them; the real-time training method comprises the following steps:
s1, pre-recording: before performance, an actor records expression coordinate sequences of all expressions in an expression database in advance;
S2, predetermined range: a predetermined range is obtained by big-data analysis of the values of the multidirectional tension sensors corresponding to all the facial feature point positions of each expression;
S3, real-time expression acquisition: while performing, the actor makes different expressions, and a set of expression coordinate sequences consistent with the facial feature point positions in step S1 is generated;
S4, comparison and screening: comparing the values of the multidirectional tension sensors corresponding to the facial feature point positions of the expression coordinate sequence against the predetermined ranges, screening the sequences, and updating qualifying sequences into the expression database;
S5, multi-sample training: the predetermined range of each facial feature point of each expression is continuously refined as expression coordinate sequences accumulate.
In a preferred embodiment of the present invention, a plurality of facial feature points are selected symmetrically along the central axis of the human face, and S1 further includes the following steps:
S11, numbering the facial feature points: selecting facial feature points at muscle positions with large facial expression activity amplitude and numbering them C1, C2, …, Cn; adhering a multidirectional tension sensor to each facial feature point, wherein adjacent multidirectional tension sensors are connected through fibers;
S12, constructing an expression coordinate sequence: the face makes an expression, the multidirectional tension sensor at each position obtains a corresponding value N1, N2, …, Nn, and a complete expression coordinate sequence (C1, N1), (C2, N2), …, (Cn, Nn) is constructed;
S13, repeating the different expressions 1000 times to obtain a pre-recorded expression database.
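For illustration, the sequence construction in steps S11–S13 can be sketched as follows. The `read_sensor` function is a hypothetical stand-in for the multidirectional tension sensor hardware, and the specific numbers are illustrative only; neither is part of the disclosure.

```python
import random

def read_sensor(point_id):
    # Hypothetical stand-in: a real system would return the tension sensed
    # by the multidirectional tension sensor at feature point C_i.
    return round(random.uniform(0.0, 10.0), 2)

def record_sequence(num_points):
    # Build one expression coordinate sequence (C1, N1), (C2, N2), ..., (Cn, Nn),
    # represented here as (point number, sensed value) pairs.
    return [(i, read_sensor(i)) for i in range(1, num_points + 1)]

def prerecord_database(expressions, num_points, repetitions=1000):
    # S13: repeat each expression many times to obtain the pre-recorded database.
    return {expr: [record_sequence(num_points) for _ in range(repetitions)]
            for expr in expressions}

# Small illustrative run: 62 feature points, 5 repetitions per expression.
db = prerecord_database(["happiness", "sadness"], num_points=62, repetitions=5)
```

Each database entry is thus a list of coordinate sequences per named expression, ready for the range analysis of S2.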
In a preferred embodiment of the present invention, the multidirectional tension sensors are disposed only on one side of the central axis of the face, and S1 further includes the following steps:
S11, numbering the facial feature points: selecting facial feature points only on one side of the face, at muscle positions with large facial expression activity amplitude, and numbering them C1, C2, …, Cn; adhering a multidirectional tension sensor to each facial feature point, wherein adjacent multidirectional tension sensors are connected through fibers;
S12, numbering the symmetrical positions: numbering the corresponding points on the other half of the face C1s, C2s, …, Cns;
S13, making a facial expression: the multidirectional tension sensors on the half face obtain corresponding values N1, N2, …, Nn; the symmetrical facial feature points are also set to N1, N2, …, Nn, and a complete expression coordinate sequence (C1, N1), (C2, N2), …, (Cn, Nn), (C1s, N1), (C2s, N2), …, (Cns, Nn) is constructed;
S14, repeating the different expressions 2000 times to obtain a pre-recorded expression database.
In a preferred embodiment of the present invention, in S11, each multidirectional tension sensor is connected by at least two fibers to a fixed frame adhered to the face; the fixed frame consists of a circumferential fiber surrounding the face and a central-axis fiber coinciding with the central axis of the face.
In a preferred embodiment of the present invention, in S2, the predetermined range of the muscle motion capture values for each facial feature point is N1x~N1y, N2x~N2y, …, Nnx~Nny.
In a preferred embodiment of the present invention, in S4, if at most one captured value is not within its predetermined range, the expression coordinate sequence is updated into the expression database; if more than one captured value is not within its predetermined range, the expression coordinate sequence is rejected.
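The screening rule above can be expressed directly. The range table and sequences below are illustrative values, not data from the patent:

```python
def within(value, lo, hi):
    return lo <= value <= hi

def screen_sequence(sequence, ranges):
    # sequence: list of (point_id, value); ranges: {point_id: (Nix, Niy)}.
    # Accept if at most one captured value falls outside its predetermined range.
    out_of_range = sum(1 for point_id, value in sequence
                       if not within(value, *ranges[point_id]))
    return out_of_range <= 1

ranges = {1: (2.0, 4.0), 2: (1.0, 3.0), 3: (5.0, 7.0)}
ok  = screen_sequence([(1, 2.5), (2, 2.0), (3, 8.0)], ranges)  # one outlier: accepted
bad = screen_sequence([(1, 9.0), (2, 0.0), (3, 6.0)], ranges)  # two outliers: rejected
```

Accepted sequences are the ones updated into the expression database; rejected ones are discarded.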
In a preferred embodiment of the present invention, at least 62 facial feature points are selected, arranged symmetrically and located at least at the forehead, eyebrows, eye sockets, cheeks, corners of the mouth, and chin of the face.
In a preferred embodiment of the invention, the expression database records at least the expression coordinate sequences of the actor's happiness, sadness, anger, disgust, urgency, surprise, fear, and blank expression.
The other technical scheme adopted by the invention is as follows: a feedback mechanism for an expression database, using any one of the above real-time training methods of the expression database, characterized by comprising the following steps:
a. different expression coordinate sequences are output from the actor's real-time expressions and fed back to the director;
b. the director judges from the expression coordinate sequence whether the actor's expression degree is in place;
c. the actor makes the corresponding expression adjustment, and the adjusted expression coordinate sequence is input into the expression database for screening.
In a preferred embodiment of the present invention, the expression degrees are classified as blank, weak, small, medium, large, and strong.
The invention overcomes the defects in the background art and has the following beneficial effects:
(1) The invention provides a training method for a facial expression database: expression coordinate sequences for all expressions are recorded in advance, and the motion state of the facial muscles is captured in real time during the performance. This abandons the traditional method of training a facial expression feature database on static images, describes the detailed muscle features and rules of the human face, and continuously updates and makes precise the predetermined range of each muscle for each of the actor's expressions, thereby training the database.
(2) The invention also judges whether the expression degree meets requirements from the actor's real-time expression coordinate sequence, thereby giving feedback to the actor; after adjustment, the accuracy of the database is continuously improved.
(3) The facial feature points are selected, and the selected parts of the facial feature points are muscle positions with large facial expression activity amplitude, so that the dynamic changes of the muscles are accurately sensed, and the accuracy of an expression coordinate sequence is improved.
(4) In the real-time acquisition process of the expression database, because the degree of muscle activity varies even for the same expression, the tension value sensed by the multidirectional tension sensor at each position varies within a certain range; through large amounts of data collection and updating for the same expression, a more accurate and more complete expression database can be trained.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
fig. 1 is a flowchart of a real-time training method of an expression database of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
In the description of the present application, it is to be understood that the terms "center," "longitudinal," "lateral," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in the orientation or positional relationship indicated in the drawings for convenience in describing the present application and for simplicity in description, and are not intended to indicate or imply that the referenced devices or elements must have a particular orientation, be constructed in a particular orientation, and be operated in a particular manner, and are not to be considered limiting of the scope of the present application. Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the invention, the meaning of "a plurality" is two or more unless otherwise specified.
In the description of the present application, it is to be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "coupled" are to be construed broadly, e.g., as meaning a fixed connection, a removable connection, or an integral connection; a mechanical or an electrical connection; a direct connection, an indirect connection through an intervening medium, or an internal communication between two elements. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific situation.
Referring to fig. 1, a real-time training method for an expression database is a real-time training method based on dynamic capture of the movement of specific facial muscles. Facial feature points are set on the specific facial muscles, and multidirectional tension sensors are adhered to them. Expression coordinate sequences for all expressions are recorded in advance, and the motion state of the facial muscles is captured in real time during the performance; the traditional method of training a facial expression feature database on static images is thus abandoned, the detailed muscle features and rules of the human face are described, and the predetermined range of each muscle for each of the actor's expressions is continuously updated and made precise, thereby training the database.
The real-time training method of the expression database is mainly realized by the following steps:
S1, pre-recording: before the performance, an actor records expression coordinate sequences of all expressions in the expression database in advance; a plurality of facial feature points are selected symmetrically along the central axis of the face, located at least at the forehead, eyebrows, eye sockets, cheeks, corners of the mouth, and chin, and numbering at least 62;
S11, numbering the facial feature points: selecting facial feature points at muscle positions with large facial expression activity amplitude and numbering them C1, C2, …, Cn; adhering a multidirectional tension sensor to each facial feature point, wherein adjacent multidirectional tension sensors are connected through fibers;
S12, constructing an expression coordinate sequence: the face makes an expression, the multidirectional tension sensor at each position obtains a corresponding value N1, N2, …, Nn, and a complete expression coordinate sequence (C1, N1), (C2, N2), …, (Cn, Nn) is constructed;
S13, repeating the different expressions 1000 times to obtain a pre-recorded expression database;
S2, predetermined range: a predetermined range is obtained by big-data analysis of the values of the multidirectional tension sensors corresponding to all the facial feature point positions of each expression, wherein the predetermined range of the muscle motion capture values for each facial feature point is N1x~N1y, N2x~N2y, …, Nnx~Nny;
S3, real-time expression acquisition: while performing, the actor makes different expressions, and a set of expression coordinate sequences consistent with the facial feature point positions in step S1 is generated;
S4, comparison and screening: comparing the values of the multidirectional tension sensors corresponding to the facial feature point positions of the expression coordinate sequence against the predetermined ranges and screening accordingly; if at most one captured value is not within its predetermined range, the expression coordinate sequence is updated into the expression database, and if more than one captured value is not within its predetermined range, the expression coordinate sequence is rejected;
S5, multi-sample training: the predetermined range of each facial feature point of each expression is continuously refined as expression coordinate sequences accumulate.
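One plausible reading of S2 and S5 is that the predetermined range N_ix~N_iy of each feature point is tracked from the accumulated samples. The min/max rule below is an assumption made only for illustration; the patent says the ranges are obtained by "big data" analysis without specifying the rule, and the numeric values are invented.

```python
def derive_ranges(sequences):
    # sequences: list of expression coordinate sequences [(point_id, value), ...].
    # Assumed rule: the predetermined range for point i is the min/max value
    # observed across all accumulated sequences; each new sample (S5) can only
    # keep or widen the range, progressively stabilizing it.
    ranges = {}
    for seq in sequences:
        for point_id, value in seq:
            lo, hi = ranges.get(point_id, (value, value))
            ranges[point_id] = (min(lo, value), max(hi, value))
    return ranges

samples = [[(1, 2.0), (2, 5.0)],
           [(1, 2.4), (2, 4.6)],
           [(1, 1.8), (2, 5.3)]]
ranges = derive_ranges(samples)  # point 1 spans 1.8~2.4, point 2 spans 4.6~5.3
```

A percentile-based rule would be an equally valid implementation of the same idea; the point is only that the range per feature point is a statistic of the accumulated coordinate sequences.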
Depending on the positions of the facial feature points, the real-time training method of the expression database can also be realized by the following steps:
S1, pre-recording: before the performance, an actor records expression coordinate sequences of all expressions in the expression database in advance; the facial feature points are arranged only on one side of the central axis of the face, located at least at the forehead, eyebrows, eye sockets, cheeks, corners of the mouth, and chin, with 31 facial feature points on that side;
S11, numbering the facial feature points: selecting facial feature points only on one side of the face, at muscle positions with large facial expression activity amplitude, and numbering them C1, C2, …, Cn; adhering a multidirectional tension sensor to each facial feature point, wherein adjacent multidirectional tension sensors are connected through fibers;
S12, numbering the symmetrical positions: numbering the corresponding points on the other half of the face C1s, C2s, …, Cns;
S13, making a facial expression: the multidirectional tension sensors on the half face obtain corresponding values N1, N2, …, Nn; the symmetrical facial feature points are also set to N1, N2, …, Nn, and a complete expression coordinate sequence (C1, N1), (C2, N2), …, (Cn, Nn), (C1s, N1), (C2s, N2), …, (Cns, Nn) is constructed;
S14, repeating the different expressions 2000 times to obtain a pre-recorded expression database;
S2, predetermined range: a predetermined range is obtained by big-data analysis of the values of the multidirectional tension sensors corresponding to all the facial feature point positions of each expression, wherein the predetermined range of the muscle motion capture values for each facial feature point is N1x~N1y, N2x~N2y, …, Nnx~Nny;
S3, real-time expression acquisition: while performing, the actor makes different expressions, and a set of expression coordinate sequences consistent with the facial feature point positions in step S1 is generated;
S4, comparison and screening: comparing the values of the multidirectional tension sensors corresponding to the facial feature point positions of the expression coordinate sequence against the predetermined ranges and screening accordingly; if at most one captured value is not within its predetermined range, the expression coordinate sequence is updated into the expression database, and if more than one captured value is not within its predetermined range, the expression coordinate sequence is rejected;
S5, multi-sample training: the predetermined range of each facial feature point of each expression is continuously refined as expression coordinate sequences accumulate.
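The half-face variant copies each measured value to the symmetrically numbered point. A minimal sketch, using the point numbering of S11–S12 with an "s" suffix for the mirrored half (string IDs here are an illustrative representation):

```python
def mirror_sequence(half_sequence):
    # half_sequence: [(point_id, value), ...] measured on one side of the face.
    # Each value Ni is assigned both to Ci and to the symmetric point Cis,
    # yielding the complete expression coordinate sequence for the whole face.
    full = []
    for point_id, value in half_sequence:
        full.append((f"C{point_id}", value))
        full.append((f"C{point_id}s", value))
    return full

full = mirror_sequence([(1, 2.5), (2, 4.0)])
# full == [("C1", 2.5), ("C1s", 2.5), ("C2", 4.0), ("C2s", 4.0)]
```

This halves the number of sensors needed, at the cost of assuming the expression is symmetric about the central axis of the face.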
In the training method of the expression database, each multidirectional tension sensor is connected by at least two fibers to a fixed frame adhered to the face. The fixed frame consists of a circumferential fiber surrounding the face and a central-axis fiber coinciding with the central axis of the face; it anchors the fibers and ensures that each multidirectional tension sensor is connected to fibers in several directions, so that the details of multidirectional muscle motion are captured accurately.
The expression database provided by the invention records at least the expression coordinate sequences of the actor's happiness, sadness, anger, disgust, urgency, surprise, fear, and blank expression.
The other technical scheme adopted by the invention is as follows: a feedback mechanism for an expression database, using any one of the above real-time training methods of the expression database. It judges whether the expression degree meets requirements from the actor's real-time expression coordinate sequence, thereby giving feedback to the actor; after adjustment, the accuracy of the database is continuously improved. The expression degree is classified as blank, weak, small, medium, large, and strong.
The feedback mechanism of the expression database comprises the following steps:
a. different expression coordinate sequences are output from the actor's real-time expressions and fed back to the director;
b. the director judges from the expression coordinate sequence whether the actor's expression degree is in place;
c. the actor makes the corresponding expression adjustment, and the adjusted expression coordinate sequence is input into the expression database for screening.
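The patent does not specify how an expression coordinate sequence maps to a degree level; one hypothetical approach, shown only to make the feedback loop concrete, grades the mean deviation of the sensed values from a blank-expression baseline. The thresholds and values are invented.

```python
def classify_degree(sequence, baseline, thresholds=(0.2, 1.0, 2.0, 3.5, 5.0)):
    # Hypothetical grading: mean absolute deviation of the sensed values from
    # the blank-expression baseline, binned into six levels of expression degree.
    levels = ["blank", "weak", "small", "medium", "large", "strong"]
    dev = sum(abs(v - baseline[i]) for i, v in sequence) / len(sequence)
    for level, limit in zip(levels, thresholds):
        if dev < limit:
            return level
    return levels[-1]

baseline = {1: 1.0, 2: 1.0}
level = classify_degree([(1, 1.1), (2, 0.9)], baseline)  # deviation 0.1 -> "blank"
```

The director would compare the reported level against the level the scene requires and direct the actor to adjust accordingly.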
In summary, the facial feature points are selected at muscle positions with large facial expression activity amplitude, so that the dynamic changes of the muscles are accurately sensed and the accuracy of the expression coordinate sequence is improved. In the real-time acquisition process of the expression database, because the degree of muscle activity varies even for the same expression, the tension value sensed by the multidirectional tension sensor at each position varies within a certain range; through large amounts of data collection and updating for the same expression, a more accurate and more complete expression database can be trained.
In light of the foregoing description of the preferred embodiment of the present invention, it is to be understood that various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the invention. The technical scope of the present invention is not limited to the content of the specification, and must be determined according to the scope of the claims.
Claims (10)
1. A real-time training method for an expression database, characterized in that it is a real-time training method based on dynamic capture of the movement of specific facial muscles; facial feature points are set on the specific facial muscles, and multidirectional tension sensors are adhered to them; the real-time training method comprises the following steps:
s1, pre-recording: before performance, an actor records expression coordinate sequences of all expressions in an expression database in advance;
S2, predetermined range: a predetermined range is obtained by big-data analysis of the values of the multidirectional tension sensors corresponding to all the facial feature point positions of each expression;
S3, real-time expression acquisition: while performing, the actor makes different expressions, and a set of expression coordinate sequences consistent with the facial feature point positions in step S1 is generated;
S4, comparison and screening: comparing the values of the multidirectional tension sensors corresponding to the facial feature point positions of the expression coordinate sequence against the predetermined ranges, screening the sequences, and updating qualifying sequences into the expression database;
S5, multi-sample training: the predetermined range of each facial feature point of each expression is continuously refined as expression coordinate sequences accumulate.
2. The real-time training method of the expression database according to claim 1, wherein: a plurality of facial feature points are selected symmetrically along the central axis of the human face, and S1 further includes the following steps:
S11, numbering the facial feature points: selecting facial feature points at muscle positions with large facial expression activity amplitude and numbering them C1, C2, …, Cn; adhering a multidirectional tension sensor to each facial feature point, wherein adjacent multidirectional tension sensors are connected through fibers;
S12, constructing an expression coordinate sequence: the face makes an expression, the multidirectional tension sensor at each position obtains a corresponding value N1, N2, …, Nn, and a complete expression coordinate sequence (C1, N1), (C2, N2), …, (Cn, Nn) is constructed;
S13, repeating the different expressions 1000 times to obtain a pre-recorded expression database.
3. The real-time training method of the expression database according to claim 1, wherein: the multidirectional tension sensors are disposed only on one side of the central axis of the face, and S1 further includes the following steps:
S11, numbering the facial feature points: selecting facial feature points only on one side of the face, at muscle positions with large facial expression activity amplitude, and numbering them C1, C2, …, Cn; adhering a multidirectional tension sensor to each facial feature point, wherein adjacent multidirectional tension sensors are connected through fibers;
S12, numbering the symmetrical positions: numbering the corresponding points on the other half of the face C1s, C2s, …, Cns;
S13, making a facial expression: the multidirectional tension sensors on the half face obtain corresponding values N1, N2, …, Nn; the symmetrical facial feature points are also set to N1, N2, …, Nn, and a complete expression coordinate sequence (C1, N1), (C2, N2), …, (Cn, Nn), (C1s, N1), (C2s, N2), …, (Cns, Nn) is constructed;
S14, repeating the different expressions 2000 times to obtain a pre-recorded expression database.
4. The real-time training method of the expression database according to claim 2 or 3, wherein: in S11, each multidirectional tension sensor is connected by at least two fibers to a fixed frame adhered to the face; the fixed frame consists of a circumferential fiber surrounding the face and a central-axis fiber coinciding with the central axis of the face.
5. The real-time training method of the expression database according to claim 1, wherein: in S2, the predetermined range of the muscle motion capture value for each facial feature point is N1x~N1y, N2x~N2y, ..., Nnx~Nny.
6. The real-time training method of the expression database according to claim 1, wherein: in S4, if at most one captured value is not within its predetermined range, the expression coordinate sequence is updated into the expression database; if more than one captured value is not within its predetermined range, the expression coordinate sequence is discarded.
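The screening rule of claims 5 and 6 can be sketched as a simple range check. The `ranges` mapping is a hypothetical representation of the per-point intervals Nix~Niy; the patent does not specify a data structure.

```python
# Sketch of claims 5-6: each feature point Ci has a predetermined capture
# range [Nix, Niy]. A captured sequence is accepted (added to the database)
# only if at most one value falls outside its range; otherwise discarded.

def screen_sequence(sequence, ranges):
    # sequence: [("C1", N1), ...]; ranges: {"C1": (N1x, N1y), ...}
    out_of_range = sum(
        1 for cid, value in sequence
        if not (ranges[cid][0] <= value <= ranges[cid][1])
    )
    return out_of_range <= 1  # True -> update database; False -> discard
```

Tolerating exactly one outlier makes the screening robust to a single noisy or detached sensor while still rejecting genuinely off-target expressions.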
7. The real-time training method of the expression database according to claim 2, wherein: the facial feature points are selected at least at the forehead, eyebrows, eye sockets, cheeks, corners of the mouth, and chin of the face, and comprise at least 62 symmetrically distributed points.
8. The real-time training method of the expression database according to claim 1, wherein: the expression database records at least the actor's expression coordinate sequences for happiness, sadness, anger, disgust, urgency, surprise, fear, and no expression.
9. A feedback mechanism of an expression database, which uses the real-time training method of the expression database according to any one of claims 1 to 3, comprising the steps of:
a. outputting different expression coordinate sequences from the actor's real-time expressions and feeding them back to the director;
b. the director judging, from the expression coordinate sequence, whether the actor's expression degree is adequate;
c. the actor adjusting the expression accordingly, and inputting the adjusted expression coordinate sequence into the expression database for screening.
10. The feedback mechanism of the expression database according to claim 9, wherein: the expression degrees are classified as non-expression, weak, small, medium, large, and strong.
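The degree grading of claim 10, used in the director's feedback loop, can be sketched as a threshold mapping. The intensity score and its thresholds are hypothetical illustrations; the patent specifies only the six degree labels.

```python
# Sketch of claim 10: map an overall expression intensity to one of the
# six degrees fed back to the director. The normalized intensity in [0, 1]
# and the threshold values are assumptions, not part of the patent.

DEGREES = ["non-expression", "weak", "small", "medium", "large", "strong"]

def expression_degree(intensity):
    # Hypothetical cut points between adjacent degree labels.
    bounds = [0.05, 0.2, 0.4, 0.6, 0.8]
    for label, upper in zip(DEGREES, bounds):
        if intensity < upper:
            return label
    return DEGREES[-1]
```

A coarse ordinal scale like this lets the director request "larger" or "weaker" without reasoning about raw sensor values.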
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011575777.2A CN112579815A (en) | 2020-12-28 | 2020-12-28 | Real-time training method for expression database and feedback mechanism for expression database |
PCT/CN2021/085073 WO2022141895A1 (en) | 2020-12-28 | 2021-04-01 | Real-time training method for expression database and feedback mechanism for expression database |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011575777.2A CN112579815A (en) | 2020-12-28 | 2020-12-28 | Real-time training method for expression database and feedback mechanism for expression database |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112579815A true CN112579815A (en) | 2021-03-30 |
Family
ID=75140058
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011575777.2A Withdrawn CN112579815A (en) | 2020-12-28 | 2020-12-28 | Real-time training method for expression database and feedback mechanism for expression database |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112579815A (en) |
WO (1) | WO2022141895A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022141895A1 (en) * | 2020-12-28 | 2022-07-07 | 苏州源睿尼科技有限公司 | Real-time training method for expression database and feedback mechanism for expression database |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140242560A1 (en) * | 2013-02-15 | 2014-08-28 | Emotient | Facial expression training using feedback from automatic facial expression recognition |
KR20180057096A (en) * | 2016-11-21 | 2018-05-30 | 삼성전자주식회사 | Device and method to perform recognizing and training face expression |
CN106934375A (en) * | 2017-03-15 | 2017-07-07 | 中南林业科技大学 | The facial expression recognizing method of distinguished point based movement locus description |
CN109948454B (en) * | 2019-02-25 | 2022-11-22 | 深圳大学 | Expression database enhancing method, expression database training method, computing device and storage medium |
CN112579815A (en) * | 2020-12-28 | 2021-03-30 | 苏州源睿尼科技有限公司 | Real-time training method for expression database and feedback mechanism for expression database |
- 2020-12-28: CN application CN202011575777.2A filed (published as CN112579815A, not active, withdrawn)
- 2021-04-01: WO application PCT/CN2021/085073 filed (published as WO2022141895A1, active, application filing)
Also Published As
Publication number | Publication date |
---|---|
WO2022141895A1 (en) | 2022-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11113511B2 (en) | Makeup evaluation system and operating method thereof | |
CN111008971B (en) | Aesthetic quality evaluation method of group photo image and real-time shooting guidance system | |
CN113837153B (en) | Real-time emotion recognition method and system integrating pupil data and facial expressions | |
CN111507592B (en) | Evaluation method for active modification behaviors of prisoners | |
KR20200012355A (en) | Online lecture monitoring method using constrained local model and Gabor wavelets-based face verification process | |
CN112560786A (en) | Facial muscle feature-based expression database using method and computing processing equipment | |
CN111666845B (en) | Small sample deep learning multi-mode sign language recognition method based on key frame sampling | |
CN113920568A (en) | Face and human body posture emotion recognition method based on video image | |
CN112883867A (en) | Student online learning evaluation method and system based on image emotion analysis | |
CN107911601A (en) | A kind of intelligent recommendation when taking pictures is taken pictures the method and its system of expression and posture of taking pictures | |
CA3050456C (en) | Facial modelling and matching systems and methods | |
CN115349828A (en) | Neonate pain assessment system based on computer deep learning | |
CN112579815A (en) | Real-time training method for expression database and feedback mechanism for expression database | |
KR102285482B1 (en) | Method and apparatus for providing content based on machine learning analysis of biometric information | |
JP2016111612A (en) | Content display device | |
CN110543813B (en) | Face image and gaze counting method and system based on scene | |
KR102482841B1 (en) | Artificial intelligence mirroring play bag | |
CN113326729B (en) | Multi-mode classroom concentration detection method and device | |
KR20210019182A (en) | Device and method for generating job image having face to which age transformation is applied | |
CN113197542B (en) | Online self-service vision detection system, mobile terminal and storage medium | |
CN110443122A (en) | Information processing method and Related product | |
Brenner et al. | Developing an engagement-aware system for the detection of unfocused interaction | |
CN112995523B (en) | Online self-service environment detection method and system | |
KR102616172B1 (en) | System for character providing and information gathering method using same | |
CN115100560A (en) | Method, device and equipment for monitoring bad state of user and computer storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20210330 |