CN110363079A - Expression exchange method, device, computer installation and computer readable storage medium - Google Patents
- Publication number
- CN110363079A (application number CN201910487847.XA / CN201910487847A)
- Authority
- CN
- China
- Prior art keywords
- expression
- default
- identified
- human face
- library
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/175—Static expression
Abstract
The present invention provides an expression interaction method and device, a computer device, and a computer-readable storage medium. The expression interaction method includes: receiving an interaction request instruction and popping up a detection box according to the interaction request instruction to perform face detection; locating the key feature regions of the face image and extracting, from the key feature regions, expression features that characterize the facial expression to be identified; comparing the extracted expression features with the expression features of each expression in a preset expression library, and taking the expression in the preset expression library with the maximum likelihood probability as the facial expression to be identified; controlling a terminal device to output corresponding interaction content according to the expression recognition result; and obtaining feedback information after the interaction content is output, and continuing to control the interaction content output according to the feedback information. The present invention relates to the technical field of face recognition, makes interaction with a terminal device more vivid and interesting, and improves the user experience.
Description
Technical field
The present invention relates to the field of electronic communication technology, and in particular to an expression interaction method and device, a computer device, and a computer-readable storage medium.
Background technique
With the development of modern science and technology, electronic devices such as mobile phones and tablet computers have become an indispensable part of most people's lives, constantly changing every aspect of how we socialize and live. Expressions are a basic way for humans to convey mood and one of the effective means of nonverbal communication. Existing electronic devices are generally equipped with a virtual robot to realize human-computer interaction; however, the virtual robot generally supports only spoken human-computer interaction, cannot distinguish the user's expression, and therefore cannot realize human-computer interaction according to the user's expression.
Summary of the invention
In view of the above, the present invention provides an expression interaction method and device, a computer device, and a computer-readable storage medium, which enable a terminal device to be controlled through expressions and improve the user experience.
One embodiment of the application provides an expression interaction method, the method comprising:
receiving an interaction request instruction, and popping up a detection box according to the interaction request instruction to perform face detection;
judging whether a face image is detected;
if a face image is detected, locating the key feature regions of the face image, and extracting, from the key feature regions, expression features that characterize the facial expression to be identified;
comparing the extracted expression features with the expression features of each expression in a preset expression library to obtain the likelihood probability between the facial expression to be identified and each expression in the preset expression library, and taking the expression in the preset expression library with the maximum likelihood probability as the facial expression to be identified;
controlling a terminal device to output corresponding interaction content according to the recognition result of the facial expression to be identified; and
obtaining feedback information after the interaction content is output, and continuing to control the content output of the terminal device according to the feedback information.
Preferably, the step of performing face detection comprises:
training a convolutional neural network model for face detection on a plurality of preset face samples; and
performing face detection using the convolutional neural network model.
Preferably, when the expression features of each expression in the preset expression library are shape feature vectors, the step of comparing the extracted expression features with the expression features of each expression in the preset expression library to obtain the likelihood probability between the facial expression to be identified and each expression in the preset expression library comprises:
obtaining the shape feature vector of the facial expression to be identified;
calculating the distance values between the shape feature vector of the facial expression to be identified and the shape feature vectors of each expression in the preset expression library; and
determining, according to the calculated distance values, the likelihood probability between the facial expression to be identified and each expression in the preset expression library.
Preferably, when the expression features of each expression in the preset expression library are texture feature vectors, the step of comparing the extracted expression features with the expression features of each expression in the preset expression library to obtain the likelihood probability between the facial expression to be identified and each expression in the preset expression library comprises:
obtaining the texture feature vector of the facial expression to be identified;
calculating the distance values between the texture feature vector of the facial expression to be identified and the texture feature vectors of each expression in the preset expression library; and
determining, according to the calculated distance values, the likelihood probability between the facial expression to be identified and each expression in the preset expression library.
Preferably, the distance value is calculated by the following formula:

d_M(y, x_j) = (y - x_j)^T * M * (y - x_j)

where y is the shape feature vector / texture feature vector of the facial expression to be identified, x_j is the shape feature vector / texture feature vector of the j-th expression in the preset expression library, M is a preset target metric matrix, j is an integer greater than or equal to 1, d_M(y, x_j) is the distance value between the shape feature vector / texture feature vector of the facial expression to be identified and that of the j-th expression in the preset expression library, (y - x_j) is the difference between the two vectors, and (y - x_j)^T is the transpose of that difference. The likelihood probability is calculated by the following formula:

p = {1 + exp[D - b]}^(-1)

where p is the likelihood probability, D is the distance value, and b is a preset bias amount.
Preferably, the feedback information comprises voice information, or the expression information after the user watches the interaction content output by the terminal device.
Preferably, when the feedback information is the expression information after the user watches the interaction content output by the terminal device, the step of continuing to control the content output of the terminal device according to the feedback information comprises:
judging whether the expression change before and after watching the interaction content output by the terminal device meets a preset adjustment rule;
if the preset adjustment rule is met, adjusting the interaction content output by the terminal device; and
if the preset adjustment rule is not met, not adjusting the interaction content output by the terminal device.
One embodiment of the application provides an expression interaction device, the device comprising:
a detection module, configured to receive an interaction request instruction and pop up a detection box according to the interaction request instruction to perform face detection;
a judgment module, configured to judge whether a face image is detected;
an extraction module, configured to, when a face image is detected, locate the key feature regions of the face image and extract, from the key feature regions, expression features that characterize the facial expression to be identified;
a comparison module, configured to compare the extracted expression features with the expression features of each expression in a preset expression library, obtain the likelihood probability between the facial expression to be identified and each expression in the preset expression library, and take the expression in the preset expression library with the maximum likelihood probability as the facial expression to be identified;
an output module, configured to control a terminal device to output corresponding interaction content according to the recognition result of the facial expression to be identified; and
a control module, configured to obtain feedback information after the interaction content is output, and continue to control the content output of the terminal device according to the feedback information.
One embodiment of the application provides a computer device comprising a processor and a memory on which several computer programs are stored, the processor being configured to implement the steps of the expression interaction method described above when executing the computer programs stored in the memory.
One embodiment of the application provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the steps of the expression interaction method described above when executed by a processor.
The above expression interaction method and device, computer device, and computer-readable storage medium can recognize the user's expression and control the computer device to output corresponding interaction content according to the expression recognition result, realizing functions such as relieving the user's tension and anxiety and consoling the user's mood. The user's expression after the interaction content is played can also be further analyzed, and the interaction content output of the computer device can continue to be controlled according to the analysis result, making interaction with the computer device more vivid and interesting and improving the user experience.
Detailed description of the invention
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flowchart of the steps of the expression interaction method in one embodiment of the invention.
Fig. 2 is a functional block diagram of the expression interaction device in one embodiment of the invention.
Fig. 3 is a schematic diagram of the computer device in one embodiment of the invention.
Specific embodiment
To make the objects, features, and advantages of the present invention easier to understand, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments can be combined with each other.
In the following description, numerous specific details are set forth to facilitate a full understanding of the present invention. The described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative work fall within the scope of protection of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification of the present invention are intended only to describe specific embodiments and are not intended to limit the present invention.
Preferably, the expression interaction method of the invention is applied in one or more computer devices. A computer device is an apparatus capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance; its hardware includes but is not limited to a microprocessor, an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), a digital signal processor (Digital Signal Processor, DSP), an embedded device, and the like.
The computer device may be a computing device such as a desktop computer, a laptop, a tablet computer, a server, or a mobile phone. The computer device can perform human-computer interaction with the user through a keyboard, mouse, remote control, touchpad, voice-control device, or the like.
Embodiment one:
Fig. 1 is a flowchart of the steps of a preferred embodiment of the expression interaction method of the present invention. According to different requirements, the order of the steps in the flowchart may be changed, and certain steps may be omitted.
As shown in Fig. 1, the expression interaction method specifically includes the following steps.
Step S11: receive an interaction request instruction, and pop up a detection box according to the interaction request instruction to perform face detection.
In one embodiment, upon receiving an interaction request instruction issued by the user, the computer device pops up a detection box according to the interaction request instruction and performs face detection through the detection box. For example, the user may input the interaction request instruction by touching a touch screen, by pressing a key, or by voice.
In one embodiment, face detection can be realized by establishing and training a convolutional neural network model. Specifically, face detection can be accomplished as follows: first construct a face sample database and establish a convolutional neural network model for face detection, where the face sample database includes the face information of multiple people, each person's face information may cover multiple angles, and the face information for each angle may have several pictures; input the face images in the face sample database into the convolutional neural network model and train it using the model's default parameters; according to intermediate training results, continually adjust the initial weights of the default parameters, the training rate, the number of iterations, and so on, until the network parameters of the optimal convolutional neural network model are obtained; and finally take the convolutional neural network model with the optimal network parameters as the final recognition model. After training is completed, face detection is performed using the finally obtained convolutional neural network model.
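The patent does not give the network in code, so as a rough, hypothetical illustration of the inference side only, the following numpy sketch runs one candidate detection-box window through a tiny convolution, ReLU, pooling, and sigmoid scorer. The kernel sizes, window size, and randomly initialised weights are all assumptions standing in for the parameters the training procedure above would learn.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D convolution of a grayscale window with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def face_score(window, kernels, w_out, b_out):
    """Forward pass: conv -> ReLU -> global average pool -> sigmoid."""
    feats = np.array([np.maximum(conv2d(window, k), 0).mean() for k in kernels])
    z = feats @ w_out + b_out
    return 1.0 / (1.0 + np.exp(-z))  # probability the window contains a face

# Untrained random parameters, placeholders for trained network parameters.
kernels = rng.standard_normal((4, 3, 3))
w_out = rng.standard_normal(4)
b_out = 0.0

window = rng.random((24, 24))  # one candidate detection-box window
p = face_score(window, kernels, w_out, b_out)
print(0.0 < float(p) < 1.0)
```

In a real system the scalar score would be thresholded per window to decide whether the detection box contains a face.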
Step S12: judge whether a face image is detected.
In one embodiment, whether a face image is detected can be determined according to the output of the convolutional neural network model. If a face image is detected, go to step S13; if no face image is detected, return to step S11.
Step S13: if a face image is detected, locate the key feature regions of the face image, and extract, from the key feature regions, expression features that characterize the facial expression to be identified.
In one embodiment, if no face image is detected within a preset time, a prompt message can be output. When a face image is detected within the preset time, the key feature regions of the face image are located and the expression features characterizing the facial expression to be identified are extracted from them. Since feature extraction and computation are not performed on the whole region of the face image, the amount of computation can be reduced and the speed of facial expression recognition improved. The key feature regions of the face image may include the eyes, nose, mouth, eyebrows, and so on.
In one embodiment, the key feature regions such as the eyes, nose, mouth, and eyebrows of the face image can be located by integral projection. Since the eyes are a relatively prominent facial feature, the eyes can be located first, and the other facial organs, such as the eyebrows, mouth, and nose, can then be located more accurately from their potential distribution relationships. For example, key feature regions are located through the peaks or troughs generated under the different integral projection modes, where integral projection is divided into vertical projection and horizontal projection. Let f(x, y) denote the gray value at image position (x, y). Over the image region [y1, y2] and [x1, x2], the horizontal integral projection M_h(y) and the vertical integral projection M_v(x) are respectively expressed as:

M_h(y) = sum of f(x, y) for x = x1, ..., x2
M_v(x) = sum of f(x, y) for y = y1, ..., y2

That is, the horizontal integral projection accumulates the gray values of all pixels in a row and displays the sums, and the vertical integral projection accumulates the gray values of all pixels in a column and displays the sums. By locating the two trough points x1 and x2 and cropping out the image of the region [x1, x2] on the horizontal axis from the face image, the left and right boundaries of the face image can be located. After the left and right boundaries are located, horizontal integral projection and vertical integral projection are performed respectively on the binarized face image to be identified.
Further, using prior knowledge of face images, the eyebrows and eyes are relatively dark regions in the face image and correspond to the first two minimum points in the horizontal integral projection curve. The first minimum point corresponds to the position of the eyebrows on the vertical axis, denoted y_brow; the second corresponds to the position of the eyes, denoted y_eye; the third corresponds to the position of the nose, denoted y_nose; and the fourth corresponds to the position of the mouth, denoted y_mouth. Similarly, there are two minimum points on either side of the central symmetry axis of the face image, corresponding respectively to the positions of the left and right eyes on the horizontal axis, denoted x_left-eye and x_right-eye; the position of the eyebrows on the horizontal axis is the same as that of the eyes, and the position of the mouth and nose on the horizontal axis is (x_left-eye + x_right-eye)/2. The eye regions, lip region, eyebrow regions, and nose region can then be determined according to the coordinates of the key features and preset rules. For example, the eye regions comprise the region of 15 pixels to the left, 15 pixels to the right, 10 pixels up, and 10 pixels down centered on the left-eye coordinate, together with the corresponding region centered on the right-eye coordinate.
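The projection-and-trough procedure above can be sketched in a few lines of numpy. This is a minimal toy, not the patent's implementation: the synthetic image and the simple smallest-n trough finder are assumptions chosen so the four feature rows are recovered cleanly.

```python
import numpy as np

def horizontal_projection(img, x1, x2):
    """M_h(y): accumulate the gray values of each row over columns x1..x2."""
    return img[:, x1:x2 + 1].sum(axis=1)

def vertical_projection(img, y1, y2):
    """M_v(x): accumulate the gray values of each column over rows y1..y2."""
    return img[y1:y2 + 1, :].sum(axis=0)

def first_minima(curve, n):
    """Indices of the n smallest projection values, in image order.
    A stand-in for the trough detection described in the text."""
    return sorted(np.argsort(curve)[:n])

# Toy binarized "face": dark rows (low gray sums) stand for the
# eyebrows, eyes, nose, and mouth respectively.
img = np.full((12, 10), 255.0)
for row in (2, 4, 7, 9):
    img[row, 2:8] = 0.0

m_h = horizontal_projection(img, 0, 9)
y_brow, y_eye, y_nose, y_mouth = first_minima(m_h, 4)
print(y_brow, y_eye, y_nose, y_mouth)  # -> 2 4 7 9
```

On a real binarized face, vertical_projection would be used the same way to find x_left-eye and x_right-eye before cropping the fixed-size regions described above.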
In one embodiment, facial expressions can take the following forms. Facial actions when happy: the corners of the mouth turn up, the cheeks lift into wrinkles, the eyelids contract, and "crow's feet" form at the outer corners of the eyes. Facial features when sad: the eyes narrow, the eyebrows draw together, the corners of the mouth pull down, and the chin lifts or tightens. Facial features when afraid: the mouth and eyes open, the eyebrows rise, and the nostrils flare. Facial features when angry: the eyebrows droop, the forehead knits, and the eyelids and lips tense. Facial features when disgusted: the nose wrinkles, the upper lip lifts, the eyebrows droop, and the eyes narrow. Facial features when surprised: the jaw drops, the lips and mouth relax, the eyes widen, and the eyelids and eyebrows lift slightly. Facial features when contemptuous: one corner of the mouth lifts in a sneer or a proud smile, and so on. After the key feature regions are located, the expression features characterizing the facial expression can be extracted from them, for example using a differential energy image (DEI) method or a centralized binary pattern (CGBP) method to extract the expression features from the key feature regions.
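The patent names DEI and CGBP but does not define them, so purely as a hypothetical stand-in for a binary-pattern texture feature, the sketch below computes a centre-symmetric 4-bit code per interior pixel of a key-feature region and histograms the codes into a feature vector. The pair layout, threshold, and histogram normalisation are all assumptions, not the patent's method.

```python
import numpy as np

# Offsets of the 4 centre-symmetric neighbour pairs in a 3x3 neighbourhood.
PAIRS = [((-1, -1), (1, 1)), ((-1, 0), (1, 0)),
         ((-1, 1), (1, -1)), ((0, -1), (0, 1))]

def cs_codes(region, threshold=0.0):
    """4-bit centre-symmetric binary code for each interior pixel."""
    h, w = region.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for bit, ((dy1, dx1), (dy2, dx2)) in enumerate(PAIRS):
        a = region[1 + dy1:h - 1 + dy1, 1 + dx1:w - 1 + dx1]
        b = region[1 + dy2:h - 1 + dy2, 1 + dx2:w - 1 + dx2]
        codes += (a > b + threshold).astype(int) << bit
    return codes

def feature_vector(region):
    """Normalised histogram of the 16 possible codes: one texture feature."""
    hist = np.bincount(cs_codes(region).ravel(), minlength=16)
    return hist / hist.sum()

region = np.arange(25, dtype=float).reshape(5, 5)  # toy eye-region patch
v = feature_vector(region)
print(len(v), round(float(v.sum()), 6))  # -> 16 1.0
```

In practice one such histogram per key feature region would be concatenated to form the texture feature vector compared against the preset expression library in step S14.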
Step S14: compare the extracted expression features with the expression features of each expression in the preset expression library, obtain the likelihood probability between the facial expression to be identified and each expression in the preset expression library, and take the expression in the preset expression library with the maximum likelihood probability as the facial expression to be identified.
In one embodiment, the preset expression library may include a variety of expressions, for example happiness, surprise, sadness, anger, disgust, and fear, as well as a variety of compound expressions such as sad-and-afraid, sad-and-surprised, angry-and-afraid, and so on. The expression features of the expression to be identified may be shape feature vectors or texture feature vectors: when the expression features of each expression in the preset expression library are characterized by shape feature vectors, the shape feature vector of the expression to be identified is obtained; when they are characterized by texture feature vectors, the texture feature vector of the expression to be identified is obtained.
In one embodiment, the likelihood probability between the extracted expression features (shape feature vector or texture feature vector) and each expression in the preset expression library can be determined as follows: obtain the distance values between the feature vector (shape feature vector or texture feature vector) of the expression to be identified and the feature vectors of each expression in the preset expression library, then determine the likelihood probability between the facial expression to be identified and each expression in the preset expression library according to the distance values. For example, obtain the shape feature vector of the facial expression to be identified, calculate the distance values between it and the shape feature vectors of each expression in the preset expression library, and determine the likelihood probabilities from the calculated distance values. Similarly, obtain the texture feature vector of the facial expression to be identified, calculate the distance values between it and the texture feature vectors of each expression in the preset expression library, and determine the likelihood probabilities from the calculated distance values.
In one embodiment, the distance value can be a generalized Mahalanobis distance. The distance value between the feature vector of the expression to be identified and the feature vector of each expression in the preset expression library can be calculated by the following formula:

d_M(y, x_j) = (y - x_j)^T * M * (y - x_j)

where y is the shape feature vector (texture feature vector) of the facial expression to be identified, x_j is the shape feature vector (texture feature vector) of the j-th expression in the preset expression library, M is a preset target metric matrix, j is an integer greater than or equal to 1, d_M(y, x_j) is the distance value between the shape feature vector (texture feature vector) of the facial expression to be identified and that of the j-th expression in the preset expression library, (y - x_j) is the difference between the two vectors, and (y - x_j)^T is the transpose of that difference. The likelihood probability can be calculated by the following formula:

p = {1 + exp[D - b]}^(-1)

where p is the likelihood probability, D is the distance value, and b is a preset bias amount.
In one embodiment, after the likelihood probability between the facial expression to be identified and each expression in the preset expression library is calculated, the expression in the preset expression library with the maximum likelihood probability can be taken as the expression to be identified.
Step S15: control the computer device to output corresponding interaction content according to the recognition result of the facial expression to be identified.
In one embodiment, a mapping table between multiple expressions and the interaction content output by the computer device can be established in advance, and control of the computer device according to the expression recognition result is realized through this mapping table. The interaction content may be a corresponding action, voice, picture, text, video, and so on provided by the computer device according to the expression recognition result to interact with the user, so as to relieve the user's tension or anxiety and please the user's mood. For example, when the facial expression to be identified is determined to be a nervous expression, the computer device can be controlled to output soothing music to alleviate the user's tension, or to output suggestions on how to relieve tension (for example: try breathing slowly and deeply to ease a tense mood) for the user's reference; when the facial expression to be identified is determined to be a sad expression, the computer device can be controlled to output articles, music, or videos that alleviate sadness, or suggestions on how to alleviate sadness, for the user's reference.
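Such a mapping table can be modelled as a simple lookup. The entries below are illustrative assumptions drawn from the examples in the text; the patent only requires that some expression-to-content mapping exist.

```python
# Hypothetical mapping table from recognised expression to interaction content.
INTERACTION_MAP = {
    "nervous": {"music": "soothing playlist",
                "tip": "Try breathing slowly and deeply to ease a tense mood."},
    "sad": {"music": "uplifting playlist",
            "tip": "Here is an article that may help alleviate sadness."},
}
DEFAULT = {"tip": "How can I help you today?"}

def interaction_content(expression):
    """Look up the content the device should output for a recognised expression."""
    return INTERACTION_MAP.get(expression, DEFAULT)

print(interaction_content("nervous")["music"])  # -> soothing playlist
```

Keeping the table as data rather than branching code makes it easy to extend with the compound expressions (sad-and-afraid, etc.) mentioned in step S14.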
Step S16, the feedback information after obtaining the interaction content output, and control institute is connected according to the feedback information
State the content output of computer installation.
In one embodiment, the feedback information may include voice messaging or the viewing computer installation output
Expression information after the interaction content.For example, control computer installation alleviates the current nervous expression of user, sound of releiving is exported
Pleasure is to alleviate user's intense strain, when the music of releiving finishes and receives " please repeating playing or ask for user's input
After the voice messaging of broadcasting again ", controlling terminal can play the music of releiving of eve broadcasting again;For another example, control calculates
For machine device in order to alleviate the current nervous expression of user, output releives music to alleviate user's intense strain, when the music of releiving
Finish and when the user's expression detected remains as intense strain, can control another head of terminal plays releive music or
Music of releiving is not played, is changed to control computer installation exports how the suggestion content of keeping tensions down method is to user.
In one embodiment, when the feedback information is voice information input by the user, the interaction content output by the computer installation can be adjusted directly according to the requirement expressed in the voice information. When the feedback information is expression information captured after the user views the interaction content output by the computer installation, it can further be judged whether the expression change between the expression before viewing and the expression after viewing the interaction content meets a default adjustment rule; if the default adjustment rule is met, the interaction content output by the computer installation is adjusted, and if it is not met, the interaction content output by the computer installation is not adjusted. For example, suppose the default adjustment rule is an expression change from happy to sad; if the identified expression change between before viewing and after viewing the interaction content is from sad to happy, the default adjustment rule is not met and no adjustment is made.
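The adjustment decision described above can be sketched as a simple pair comparison. In the sketch below, the rule representation, names, and values are illustrative assumptions rather than anything specified by the patent:

```python
# Hypothetical representation (not from the patent): a default adjustment rule
# is a (before, after) expression pair; the interaction content is adjusted
# only when the observed expression change matches that pair.
DEFAULT_ADJUSTMENT_RULE = ("happy", "sad")

def should_adjust(expression_before, expression_after,
                  rule=DEFAULT_ADJUSTMENT_RULE):
    """True when the change between the expressions before and after viewing
    the interaction content meets the default adjustment rule."""
    return (expression_before, expression_after) == rule
```

With this encoding, a change from sad to happy does not match the happy-to-sad rule, so no adjustment is made, consistent with the example above.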
The above expression exchange method can identify the user's expression and control the computer installation to output corresponding interaction content according to the expression recognition result, thereby relieving the user's tension or anxiety and consoling the user's mood. It can also further analyze the user's expression after the interaction content is played, and continue to control the interaction content output of the computer installation based on the analysis result, making the interaction with the computer installation more vivid and interesting and improving the user experience.
Embodiment two:
Fig. 2 is a functional block diagram of a preferred embodiment of the expression interactive device of the present invention.
As shown in Fig. 2, the expression interactive device 10 may include a detection module 101, a judgment module 102, an extraction module 103, a comparison module 104, an output module 105, and a control module 106.
The detection module 101 is configured to receive an interaction request instruction and pop up a detection frame according to the interaction request instruction to perform face detection.
In one embodiment, when an interaction request instruction issued by the user is received, the detection module 101 pops up a detection frame according to the interaction request instruction and performs face detection through the detection frame. For example, the user may input the interaction request instruction by touching a touch screen, by pressing a key, or by voice.
In one embodiment, the detection module 101 may first establish and train a convolutional neural network model to realize face detection. Specifically, the detection module 101 may realize face detection in the following manner: first construct a face sample database and establish a convolutional neural network model for face detection, where the face sample database includes the face information of multiple people, each person's face information may include multiple angles, and the face information of each angle may have multiple pictures; input the face images in the face sample database into the convolutional neural network model and perform convolutional neural network training using the default parameters of the model; according to intermediate training results, continuously adjust the initial weights, training rate, number of iterations, and so on of the default parameters until the optimal network parameters of the convolutional neural network model are obtained, and finally take the convolutional neural network model with the optimal network parameters as the final identification model. After training is completed, the detection module 101 performs face detection using the finally obtained convolutional neural network model.
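The forward pass of such a convolutional neural network can be illustrated with a minimal numpy sketch. All layer shapes, parameter values, and function names below are illustrative assumptions; a real implementation would use a deep-learning framework and the trained optimal network parameters described above:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trailing rows/columns are dropped."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def face_score(img, kernel, weights, bias):
    """conv -> ReLU -> pool -> fully connected -> sigmoid face probability."""
    feat = max_pool(relu(conv2d(img, kernel))).ravel()
    return 1.0 / (1.0 + np.exp(-(feat @ weights + bias)))
```

A score near 1 would indicate a detected face; the kernel, weights, and bias stand in for the network parameters obtained by training.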
The judgment module 102 is configured to judge whether a facial image is detected.
In one embodiment, the judgment module 102 can judge whether a facial image is detected according to the output of the convolutional neural network model.
The extraction module 103 is configured to, when a facial image is detected, locate the key feature regions of the facial image and extract from the key feature regions the expressive features characterizing the human face expression to be identified.
In one embodiment, when the judgment module 102 judges that no facial image is detected within a preset time, a prompt message can be output. When the judgment module 102 judges that a facial image is detected within the preset time, the extraction module 103 locates the key feature regions of the facial image and extracts from the key feature regions the expressive features characterizing the human face expression to be identified. Since feature extraction and computation are not performed on the whole region of the facial image, the amount of computation can be reduced and the facial expression recognition speed improved. The key feature regions of the facial image may include the eyes, nose, mouth, eyebrows, and so on.
In one embodiment, the extraction module 103 can locate the key feature regions of the facial image, such as the eyes, nose, mouth, and eyebrows, by means of integral projection. Since the eyes are the most prominent feature of the face, the eyes can be located first, after which the other facial organs, such as the eyebrows, mouth, and nose, can be located relatively accurately through their potential distribution relations. For example, key feature region location is carried out through the wave crests or troughs generated under different integral projection modes, where integral projection is divided into vertical projection and horizontal projection. If f(x, y) denotes the gray value of the image at (x, y), the horizontal integral projection M_h(y) and the vertical integral projection M_v(x) of the image in the region [y1, y2] and [x1, x2] are respectively expressed as:

M_h(y) = Σ (x = x1 to x2) f(x, y),  M_v(x) = Σ (y = y1 to y2) f(x, y)

where the horizontal integral projection accumulates the gray values of all pixels in a row before displaying them, and the vertical integral projection accumulates the gray values of all pixels in a column before displaying them. By locating the two trough points x1 and x2 and intercepting the region [x1, x2] along the horizontal axis from the facial image, the left and right boundaries of the facial image can be located. After the left-right boundary location, horizontal integral projection and vertical integral projection are respectively performed on the binarized facial image to be identified.
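The two projections defined above reduce to row sums and column sums of the gray-value image; a short numpy sketch (the function names are illustrative) is:

```python
import numpy as np

def horizontal_integral_projection(f, y1, y2, x1, x2):
    """M_h(y): accumulate the gray values of all pixels in each row y1..y2."""
    return f[y1:y2 + 1, x1:x2 + 1].sum(axis=1)

def vertical_integral_projection(f, y1, y2, x1, x2):
    """M_v(x): accumulate the gray values of all pixels in each column x1..x2."""
    return f[y1:y2 + 1, x1:x2 + 1].sum(axis=0)
```

Troughs (minima) in these projection curves mark dark bands such as the eyes and eyebrows, which is how the boundaries and feature positions are located.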
Further, using prior knowledge of facial images, the eyebrows and eyes are relatively dark regions in a facial image and correspond to the first two minimum points in the horizontal integral projection curve. The first minimum point corresponds to the position of the eyebrows on the longitudinal axis, denoted y_brow; the second minimum point corresponds to the position of the eyes on the longitudinal axis, denoted y_eye; the third minimum point corresponds to the position of the nose on the longitudinal axis, denoted y_nose; and the fourth minimum point corresponds to the position of the mouth on the longitudinal axis, denoted y_mouth. Similarly, there are two minimum points on the two sides of the central symmetry axis of the facial image, which respectively correspond to the positions of the left and right eyes on the transverse axis, denoted x_left-eye and x_right-eye; the positions of the eyebrows on the transverse axis are the same as those of the eyes, and the position of the mouth and nose on the transverse axis is (x_left-eye + x_right-eye)/2. The eye region, lip region, brow region, and nose region can then be determined according to the coordinates of the key features and preset rules. For example, the eye region includes the region of 15 pixels to the left, 15 pixels to the right, 10 pixels up, and 10 pixels down centered on the left-eye coordinate, together with the region of 15 pixels to the left, 15 pixels to the right, 10 pixels up, and 10 pixels down centered on the right-eye coordinate.
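The preset-rule step above amounts to cutting fixed-size rectangles around the located coordinates. A minimal sketch, with the 15-pixel and 10-pixel half-widths taken from the example in the text and the function names assumed:

```python
def feature_region(cx, cy, half_w=15, half_h=10):
    """Rectangle (x_min, x_max, y_min, y_max) centered on key feature (cx, cy)."""
    return (cx - half_w, cx + half_w, cy - half_h, cy + half_h)

def eye_regions(x_left_eye, x_right_eye, y_eye):
    """Eye region: the two rectangles centered on the left- and right-eye coordinates."""
    return [feature_region(x_left_eye, y_eye), feature_region(x_right_eye, y_eye)]

def mouth_nose_x(x_left_eye, x_right_eye):
    """Transverse-axis position shared by the mouth and nose."""
    return (x_left_eye + x_right_eye) / 2
```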
In one embodiment, human face expressions can have the following forms of expression. Facial actions when happy: the corners of the mouth turn up, the cheeks wrinkle and lift, the eyelids contract, and "crow's feet" form at the outer corners of the eyes. Facial features when sad: the eyes narrow, the eyebrows draw together, the corners of the mouth pull down, and the chin lifts or tightens. Facial features when afraid: the mouth and eyes open, the eyebrows rise, and the nostrils flare. Facial features when angry: the eyebrows droop, the forehead knits, and the eyelids and lips are tense. Facial features when disgusted: the nose wrinkles, the upper lip lifts, the eyebrows droop, and the eyes narrow. Facial features when surprised: the jaw drops, the lips and mouth relax, the eyes widen, and the eyelids and eyebrows lift slightly. Facial features when contemptuous: one corner of the mouth lifts, in a sneering or proud smile, and so on. After the key feature regions have been located, the expressive features characterizing the human face expression can be extracted from the key feature regions, for example using a differential energy image (DEI) method or a centralized binary pattern (CGBP) method.
The comparison module 104 is configured to compare the extracted expressive features with the expressive features of each expression in a default expression library, obtain the likelihood probability between the human face expression to be identified and each expression in the default expression library, and take the expression with the maximum likelihood probability in the default expression library as the human face expression to be identified.
In one embodiment, the default expression library may include a variety of expressions, for example: happiness, surprise, sadness, anger, disgust, fear, and the like, as well as a variety of compound expressions, such as sad and fearful, sad and surprised, angry and fearful, and so on. The expressive features of the expression to be identified can be shape feature vectors or texture feature vectors: when the expressive features of each expression in the default expression library are characterized with shape feature vectors, the shape feature vectors of the expression to be identified are obtained for comparison; when the expressive features of each expression in the default expression library are characterized with texture feature vectors, the texture feature vectors of the expression to be identified are obtained for comparison.
In one embodiment, the comparison module 104 can determine the likelihood probability between the extracted expressive features (shape feature vectors or texture feature vectors) and each expression in the default expression library in the following manner: obtain the distance value between the feature vector (shape feature vector or texture feature vector) of the expression to be identified and the feature vector of each expression in the default expression library, and determine the likelihood probability between the human face expression to be identified and each expression in the default expression library according to the distance value. For example, the comparison module 104 obtains the shape feature vectors of the human face expression to be identified, calculates the distance values between the shape feature vectors of the human face expression to be identified and the shape feature vectors of each expression in the default expression library, and determines the likelihood probability between the human face expression to be identified and each expression in the default expression library according to the calculated distance values. For another example, the comparison module 104 obtains the texture feature vectors of the human face expression to be identified, calculates the distance values between the texture feature vectors of the human face expression to be identified and the texture feature vectors of each expression in the default expression library, and determines the likelihood probability between the human face expression to be identified and each expression in the default expression library according to the calculated distance values.
In one embodiment, the distance value can be a generalized Mahalanobis distance. The comparison module 104 can calculate the distance value between the feature vector of the expression to be identified and the feature vector of each expression in the default expression library by the following formula:

d_M(y, x_j) = (y - x_j)^T * M * (y - x_j)

where y is the shape feature vector (or texture feature vector) of the human face expression to be identified, x_j is the shape feature vector (or texture feature vector) of the j-th expression in the default expression library, M is a preset target metric matrix, j is an integer greater than or equal to 1, d_M(y, x_j) is the distance value between the shape feature vector (or texture feature vector) of the human face expression to be identified and the shape feature vector (or texture feature vector) of the j-th expression in the default expression library, (y - x_j) is the difference between the two feature vectors, and (y - x_j)^T is the transpose of that difference. The likelihood probability can be calculated by the following formula:

p = {1 + exp[D - b]}^(-1)

where p is the likelihood probability, D is the distance value, and b is a default bias amount.
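Both formulas are straightforward to implement. The following numpy sketch (the library contents, labels, and function names are illustrative assumptions) computes the generalized Mahalanobis distance and the resulting likelihood probability, then picks the library expression with the maximum probability:

```python
import numpy as np

def generalized_mahalanobis(y, x_j, M):
    """d_M(y, x_j) = (y - x_j)^T * M * (y - x_j)."""
    d = y - x_j
    return float(d @ M @ d)

def likelihood(D, b=0.0):
    """p = (1 + exp(D - b))^-1: smaller distance gives higher probability."""
    return 1.0 / (1.0 + np.exp(D - b))

def identify(y, library, M, b=0.0):
    """Label of the library expression with maximum likelihood probability."""
    probs = {label: likelihood(generalized_mahalanobis(y, x_j, M), b)
             for label, x_j in library.items()}
    return max(probs, key=probs.get)
```

With M set to the identity matrix, d_M reduces to the squared Euclidean distance; in practice the preset target metric matrix of the library takes its place.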
In one embodiment, after the likelihood probability between the human face expression to be identified and each expression in the default expression library has been calculated, the comparison module 104 can take the expression with the maximum likelihood probability in the default expression library as the expression to be identified.
The output module 105 is configured to control the computer installation to output corresponding interaction content according to the recognition result of the human face expression to be identified.
In one embodiment, a mapping relation table between multiple expressions and the interaction content output by the computer installation can be established in advance, and the computer installation is then controlled according to the expression recognition result through the mapping relation table. The interaction content can be a corresponding action, voice, picture, text, video, or the like provided by the computer installation according to the expression recognition result to interact with the user, so as to relieve the user's tension or anxiety and please the user's mood. For example, when the human face expression to be identified is determined to be a tense expression, the output module 105 can control the computer installation to output soothing music to alleviate the user's tension, or control the computer installation to output suggestion content on how to relieve tension (for example: try breathing slowly and deeply to alleviate tension) for the user's reference; when the human face expression to be identified is determined to be a sad expression, the output module 105 can control the computer installation to output an article, music, or video that alleviates sadness, or control the computer installation to output suggestion content on how to alleviate sadness for the user's reference.
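The mapping relation table can be sketched as a plain lookup structure. All keys, content identifiers, and the function name below are hypothetical examples, not values defined by the patent:

```python
# Hypothetical mapping table: recognized expression -> interaction content items.
INTERACTION_TABLE = {
    "tense": [("music", "soothing_track"),
              ("suggestion", "Try breathing slowly and deeply to ease tension.")],
    "sad": [("article", "article_on_alleviating_sadness"),
            ("video", "uplifting_clip")],
}

def interaction_for(expression):
    """Interaction content for a recognized expression; empty list if unmapped."""
    return INTERACTION_TABLE.get(expression, [])
```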
The control module 106 is configured to obtain feedback information after the interaction content is output, and continue to control the content output of the computer installation according to the feedback information.
In one embodiment, the feedback information may include voice information input by the user, or expression information captured after the user views the interaction content output by the computer installation. For example, suppose the computer installation outputs soothing music to alleviate the user's current tense expression; when the soothing music finishes playing and voice information such as "please play it again" is received from the user, the control module 106 can control the terminal to play the previously played soothing music again. For another example, suppose the computer installation outputs soothing music to alleviate the user's tension; when the soothing music finishes playing and the detected user expression is still tense, the control module 106 can control the terminal to play another piece of soothing music, or to stop playing soothing music and instead output suggestion content on how to relieve tension to the user.
In one embodiment, when the feedback information is voice information input by the user, the control module 106 can directly adjust the interaction content output by the computer installation according to the requirement expressed in the voice information. When the feedback information is expression information captured after the user views the interaction content output by the computer installation, the control module 106 can further judge whether the expression change between the expression before viewing and the expression after viewing the interaction content meets a default adjustment rule; if the default adjustment rule is met, the control module 106 adjusts the interaction content output by the computer installation, and if it is not met, the control module 106 does not adjust the interaction content output by the computer installation. For example, suppose the default adjustment rule is an expression change from happy to sad; if the identified expression change between before viewing and after viewing the interaction content is from sad to happy, the default adjustment rule is not met and no adjustment is made.
The above expression interactive device can identify the user's expression and control the computer installation to output corresponding interaction content according to the expression recognition result, thereby relieving the user's tension or anxiety and consoling the user's mood. It can also further analyze the user's expression after the interaction content is played, and continue to control the interaction content output of the computer installation based on the analysis result, making the interaction with the computer installation more vivid and interesting and improving the user experience.
Fig. 3 is a schematic diagram of a preferred embodiment of the computer installation of the present invention.
The computer installation 1 includes a memory 20, a processor 30, and a computer program 40, such as an expression interactive program, stored in the memory 20 and executable on the processor 30. When the processor 30 executes the computer program 40, the steps in the above expression exchange method embodiment, such as steps S11~S16 shown in Fig. 1, are realized. Alternatively, when the processor 30 executes the computer program 40, the functions of each module in the above expression interactive device embodiment, such as modules 101~106 in Fig. 2, are realized.
Illustratively, the computer program 40 can be divided into one or more modules/units, which are stored in the memory 20 and executed by the processor 30 to complete the present invention. The one or more modules/units can be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 40 in the computer installation 1. For example, the computer program 40 can be divided into the detection module 101, judgment module 102, extraction module 103, comparison module 104, output module 105, and control module 106 in Fig. 2; for the specific functions of each module, refer to Embodiment Two.
The computer installation 1 can be a computing device such as a desktop computer, a notebook, a palmtop computer, a mobile phone, a tablet computer, or a cloud server. Those skilled in the art will understand that the schematic diagram is only an example of the computer installation 1 and does not constitute a restriction on the computer installation 1; it may include more or fewer components than illustrated, combine certain components, or have different components. For example, the computer installation 1 can also include input-output equipment, network access equipment, a bus, and so on.
The processor 30 can be a central processing unit (Central Processing Unit, CPU), or another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. The general-purpose processor can be a microprocessor, or the processor 30 can also be any conventional processor; the processor 30 is the control center of the computer installation 1 and connects the various parts of the entire computer installation 1 through various interfaces and lines.
The memory 20 can be used to store the computer program 40 and/or modules/units. The processor 30 realizes the various functions of the computer installation 1 by running or executing the computer program and/or modules/units stored in the memory 20 and by calling the data stored in the memory 20. The memory 20 can mainly include a program storage area and a data storage area, where the program storage area can store an operating system and the application programs required for at least one function (for example a sound playing function, an image playing function, and so on), and the data storage area can store data created according to the use of the computer installation 1 (such as audio data, a phone directory) and so on. In addition, the memory 20 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), at least one disk storage device, a flash device, or other volatile solid-state storage component.
If the integrated modules/units of the computer installation 1 are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the present invention can also be completed by instructing the relevant hardware through a computer program. The computer program can be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of each of the above method embodiments can be realized. The computer program includes computer program code, which can be in source code form, object code form, an executable file, certain intermediate forms, and so on. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a mobile hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content included in the computer-readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
In the several embodiments provided by the present invention, it should be understood that the disclosed computer installation and method can be realized in other ways. For example, the computer installation embodiment described above is only schematic; for instance, the division of the units is only a logical function division, and there may be other division manners in actual implementation.
In addition, each functional unit in each embodiment of the present invention can be integrated in the same processing unit, or each unit can exist alone physically, or two or more units can be integrated in the same unit. The above integrated units can be realized in the form of hardware, or in the form of hardware plus software function modules.
It is obvious to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from the spirit or essential attributes of the invention. Therefore, from whatever point of view, the embodiments are to be considered illustrative and not restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; it is intended that all variations falling within the meaning and scope of the equivalent elements of the claims be included in the present invention. Any reference signs in the claims should not be construed as limiting the claims involved. In addition, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or computer installations stated in the computer installation claims can also be implemented by the same unit or computer installation through software or hardware. Words such as "first" and "second" are used to indicate names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are only used to illustrate the technical scheme of the present invention and not to limit it. Although the invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that the technical scheme of the present invention can be modified or equivalently replaced without departing from the spirit and scope of the technical scheme of the present invention.
Claims (10)
1. An expression exchange method, characterized in that the method includes:
receiving an interaction request instruction, and popping up a detection frame according to the interaction request instruction to perform face detection;
judging whether a facial image is detected;
if a facial image is detected, locating key feature regions of the facial image, and extracting from the key feature regions the expressive features characterizing a human face expression to be identified;
comparing the extracted expressive features with the expressive features of each expression in a default expression library, obtaining the likelihood probability between the human face expression to be identified and each expression in the default expression library, and taking the expression with the maximum likelihood probability in the default expression library as the human face expression to be identified;
controlling terminal equipment to output corresponding interaction content according to the recognition result of the human face expression to be identified; and
obtaining feedback information after the interaction content is output, and continuing to control the content output of the terminal equipment according to the feedback information.
2. The expression exchange method as described in claim 1, characterized in that the step of performing face detection includes:
training a convolutional neural network model for face detection according to a preset plurality of face samples; and
performing face detection using the convolutional neural network model.
3. The expression exchange method as described in claim 1, characterized in that when the expressive features of each expression in the default expression library are shape feature vectors, the step of comparing the extracted expressive features with the expressive features of each expression in the default expression library and obtaining the likelihood probability between the human face expression to be identified and each expression in the default expression library includes:
obtaining the shape feature vectors of the human face expression to be identified;
calculating the distance values between the shape feature vectors of the human face expression to be identified and the shape feature vectors of each expression in the default expression library; and
determining the likelihood probability between the human face expression to be identified and each expression in the default expression library according to the calculated distance values.
4. The expression exchange method as described in claim 1, characterized in that when the expressive features of each expression in the default expression library are texture feature vectors, the step of comparing the extracted expressive features with the expressive features of each expression in the default expression library and obtaining the likelihood probability between the human face expression to be identified and each expression in the default expression library includes:
obtaining the texture feature vectors of the human face expression to be identified;
calculating the distance values between the texture feature vectors of the human face expression to be identified and the texture feature vectors of each expression in the default expression library; and
determining the likelihood probability between the human face expression to be identified and each expression in the default expression library according to the calculated distance values.
5. The expression exchange method as described in claim 3 or 4, characterized in that the distance value is calculated by the following formula:

d_M(y, x_j) = (y - x_j)^T * M * (y - x_j)

where y is the shape feature vector/texture feature vector of the human face expression to be identified, x_j is the shape feature vector/texture feature vector of the j-th expression in the default expression library, M is a preset target metric matrix, j is an integer greater than or equal to 1, d_M(y, x_j) is the distance value between the shape feature vector/texture feature vector of the human face expression to be identified and the shape feature vector/texture feature vector of the j-th expression in the default expression library, (y - x_j) is the difference between the shape feature vector/texture feature vector of the human face expression to be identified and the shape feature vector/texture feature vector of the j-th expression in the default expression library, and (y - x_j)^T is the transpose of the difference; the likelihood probability is calculated by the following formula:

p = {1 + exp[D - b]}^(-1)

where p is the likelihood probability, D is the distance value, and b is a default bias amount.
6. The expression exchange method as described in any one of claims 1-4, characterized in that the feedback information includes voice information or expression information captured after the user views the interaction content output by the terminal equipment.
7. The expression interaction method according to any one of claims 1-4, wherein the feedback information is the expression information of the user after viewing the interaction content output by the terminal device, and the step of controlling the content output of the terminal device according to the feedback information comprises:
judging whether the expression change before and after viewing the interaction content output by the terminal device meets a preset adjustment rule;
if the preset adjustment rule is met, adjusting the interaction content output by the terminal device; and
if the preset adjustment rule is not met, not adjusting the interaction content output by the terminal device.
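The branch logic of claim 7 can be sketched as a lookup against a preset rule table; the rule table, expression labels, and content names below are illustrative assumptions, not part of the patent:

```python
def adjust_content(expr_before, expr_after, rules, current_content):
    """If the (before, after) expression change matches a preset adjustment
    rule, return the adjusted interaction content; otherwise return the
    current content unchanged (the "do not adjust" branch)."""
    if (expr_before, expr_after) in rules:
        return rules[(expr_before, expr_after)]
    return current_content

# Hypothetical rule: if the viewer turned sad, switch to cheerful content.
rules = {("neutral", "sad"): "cheerful_clip"}
adjusted = adjust_content("neutral", "sad", rules, "news_clip")     # rule fires
unchanged = adjust_content("neutral", "happy", rules, "news_clip")  # no rule, keep content
```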
8. An expression interaction apparatus, wherein the apparatus comprises:
a detection module, configured to receive an interaction request instruction and, according to the interaction request instruction, pop up a detection frame to perform face detection;
a judgment module, configured to judge whether a face image is detected;
an extraction module, configured to, when a face image is detected, locate key feature regions of the face image and extract, from the key feature regions, expression features characterizing the facial expression to be recognized;
a comparison module, configured to compare the extracted expression features with the expression features of each expression in a preset expression library, obtain the likelihood probability between the facial expression to be recognized and each expression in the preset expression library, and take the expression with the maximum likelihood probability in the preset expression library as the facial expression to be recognized;
an output module, configured to control a terminal device to output corresponding interaction content according to the recognition result of the facial expression to be recognized; and
a control module, configured to obtain feedback information after the interaction content is output, and control the content output of the terminal device according to the feedback information.
9. A computer apparatus, comprising a processor and a memory storing a plurality of computer programs, wherein the processor is configured to execute the computer programs stored in the memory to implement the steps of the expression interaction method according to any one of claims 1-7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the expression interaction method according to any one of claims 1-7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910487847.XA CN110363079A (en) | 2019-06-05 | 2019-06-05 | Expression exchange method, device, computer installation and computer readable storage medium |
PCT/CN2019/103370 WO2020244074A1 (en) | 2019-06-05 | 2019-08-29 | Expression interaction method and apparatus, computer device, and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910487847.XA CN110363079A (en) | 2019-06-05 | 2019-06-05 | Expression exchange method, device, computer installation and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110363079A true CN110363079A (en) | 2019-10-22 |
Family
ID=68215622
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910487847.XA Pending CN110363079A (en) | 2019-06-05 | 2019-06-05 | Expression exchange method, device, computer installation and computer readable storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN110363079A (en) |
WO (1) | WO2020244074A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113269145B (en) * | 2021-06-22 | 2023-07-25 | 中国平安人寿保险股份有限公司 | Training method, device, equipment and storage medium of expression recognition model |
CN113723299A (en) * | 2021-08-31 | 2021-11-30 | 上海明略人工智能(集团)有限公司 | Conference quality scoring method, system and computer readable storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106446753A (en) * | 2015-08-06 | 2017-02-22 | 南京普爱医疗设备股份有限公司 | Negative expression identifying and encouraging system |
KR20190008036A (en) * | 2017-07-14 | 2019-01-23 | 한국생산기술연구원 | System and method for generating facial expression of android robot |
CN109819100A (en) * | 2018-12-13 | 2019-05-28 | 平安科技(深圳)有限公司 | Mobile phone control method, device, computer installation and computer readable storage medium |
2019
- 2019-06-05 CN CN201910487847.XA patent/CN110363079A/en active Pending
- 2019-08-29 WO PCT/CN2019/103370 patent/WO2020244074A1/en active Application Filing
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110764618A (en) * | 2019-10-25 | 2020-02-07 | 郑子龙 | Bionic interaction system and method and corresponding generation system and method |
CN111507149A (en) * | 2020-01-03 | 2020-08-07 | 京东方科技集团股份有限公司 | Interaction method, device and equipment based on expression recognition |
CN111507149B (en) * | 2020-01-03 | 2023-10-27 | 京东方艺云(杭州)科技有限公司 | Interaction method, device and equipment based on expression recognition |
CN111638784A (en) * | 2020-05-26 | 2020-09-08 | 浙江商汤科技开发有限公司 | Facial expression interaction method, interaction device and computer storage medium |
CN112381019A (en) * | 2020-11-19 | 2021-02-19 | 平安科技(深圳)有限公司 | Compound expression recognition method and device, terminal equipment and storage medium |
CN112381019B (en) * | 2020-11-19 | 2021-11-09 | 平安科技(深圳)有限公司 | Compound expression recognition method and device, terminal equipment and storage medium |
CN112530543A (en) * | 2021-01-27 | 2021-03-19 | 张强 | Drug management system |
CN112530543B (en) * | 2021-01-27 | 2021-11-02 | 张强 | Drug management system |
Also Published As
Publication number | Publication date |
---|---|
WO2020244074A1 (en) | 2020-12-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110363079A (en) | Expression exchange method, device, computer installation and computer readable storage medium | |
CN105374055B (en) | Image processing method and device | |
US20180088677A1 (en) | Performing operations based on gestures | |
WO2020078119A1 (en) | Method, device and system for simulating user wearing clothing and accessories | |
Le et al. | Live speech driven head-and-eye motion generators | |
Varona et al. | Hands-free vision-based interface for computer accessibility | |
CN109461167A (en) | The training method of image processing model scratches drawing method, device, medium and terminal | |
US11074430B2 (en) | Directional assistance for centering a face in a camera field of view | |
CN107633203A (en) | Facial emotions recognition methods, device and storage medium | |
Szwoch et al. | Facial emotion recognition using depth data | |
CN107995428A (en) | Image processing method, device and storage medium and mobile terminal | |
CN110377201A (en) | Terminal equipment control method, device, computer installation and readable storage medium storing program for executing | |
CN107632706A (en) | The application data processing method and system of multi-modal visual human | |
CN101847268A (en) | Cartoon human face image generation method and device based on human face images | |
CN109003224A (en) | Strain image generation method and device based on face | |
KR20120005587A (en) | Method and apparatus for generating face animation in computer system | |
JP7278307B2 (en) | Computer program, server device, terminal device and display method | |
CN108846356B (en) | Palm tracking and positioning method based on real-time gesture recognition | |
CN109819100A (en) | Mobile phone control method, device, computer installation and computer readable storage medium | |
CN108052250A (en) | Virtual idol deductive data processing method and system based on multi-modal interaction | |
CN109427105A (en) | The generation method and device of virtual video | |
CN108415561A (en) | Gesture interaction method based on visual human and system | |
CN107817799A (en) | The method and system of intelligent interaction are carried out with reference to virtual maze | |
CN112149599B (en) | Expression tracking method and device, storage medium and electronic equipment | |
CN109829965A (en) | Action processing method, device, storage medium and the electronic equipment of faceform |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191022 |