Summary of the invention
To solve the problem in the prior art that inputting an expression is slow and the process is complicated, embodiments of the present invention provide an expression input method and apparatus. The technical solutions are as follows:
According to a first aspect, an expression input method is provided for use in an electronic device, the method comprising:
collecting an input signal through an input unit of the electronic device;
extracting an expression feature value from the input signal; and
selecting, according to the extracted expression feature value, an expression to be input from a feature database, the feature database storing correspondences between different expression feature values and different expressions.
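The three steps of the first aspect can be sketched in a few lines of Python. This is only a minimal illustration: the names (`FEATURE_DB`, `extract_feature_value`, `choose_expression`) and the substring-based extraction are invented assumptions, not the claimed implementation.

```python
# Minimal sketch of the first aspect (illustrative assumptions throughout):
# collect an input signal, extract an expression feature value from it, and
# select the expression to input via a feature database that stores the
# correspondence between feature values and expressions.

FEATURE_DB = {            # hypothetical feature database: feature value -> expression
    "haha": "laughing-face",
    "sob": "crying-face",
}

def extract_feature_value(signal):
    """Return the first stored feature value found in the input signal, if any."""
    for feature in FEATURE_DB:
        if feature in signal:
            return feature
    return None

def choose_expression(signal):
    """Select the expression to input according to the extracted feature value."""
    feature = extract_feature_value(signal)
    return FEATURE_DB.get(feature) if feature is not None else None

print(choose_expression("sure, haha, no problem"))  # laughing-face
```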
Optionally, extracting the expression feature value from the input signal comprises:
if the input signal comprises an input signal in voice form, extracting an expression feature value in voice form from the input signal in voice form;
if the input signal comprises an input signal in picture form, determining a face region in the input signal in picture form, and extracting an expression feature value in facial form from the face region; and
if the input signal comprises an input signal in video form, extracting an expression feature value in gesture-track form from the input signal in video form.
Optionally, when the extracted expression feature value is any one of the expression feature value in voice form, the expression feature value in facial form, and the expression feature value in gesture-track form, selecting the expression to be input from the feature database according to the extracted expression feature value comprises:
matching the extracted expression feature value against the expression feature values stored in the feature database;
taking, as candidate expressions, the n expressions corresponding to the m expression feature values whose matching degree exceeds a predetermined threshold, n ≥ m ≥ 1;
sorting the n candidate expressions by at least one sorting criterion selected according to a preset priority, the sorting criteria comprising any one of historical use count, most recent use time, and the matching degree; and
selecting one candidate expression as the expression to be input according to the sorting result.
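The single-modality selection sub-steps above can be sketched as follows; the threshold, candidate tuples, and criterion priority are invented for illustration.

```python
# Sketch of the single-modality sub-steps: keep expressions whose matching
# degree exceeds a predetermined threshold, sort by prioritized criteria,
# and take the top candidate. All data values are invented.

PREDETERMINED_THRESHOLD = 0.8

# (expression, matching_degree, historical_use_count, last_use_time)
candidates = [
    ("A", 0.98, 15, 1000),
    ("B", 0.98, 20, 1005),
    ("C", 0.98, 3, 990),
    ("D", 0.90, 40, 1010),
    ("E", 0.70, 99, 1020),  # below the threshold, so never a candidate
]

def select_expression(cands, threshold=PREDETERMINED_THRESHOLD):
    kept = [c for c in cands if c[1] > threshold]
    # Example priority: matching degree, then historical use count, then recency.
    kept.sort(key=lambda c: (c[1], c[2], c[3]), reverse=True)
    return kept[0][0]

print(select_expression(candidates))  # B: ties at 0.98 broken by use count
```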
Optionally, when the extracted expression feature value comprises the expression feature value in voice form and also comprises the expression feature value in facial form or the expression feature value in gesture-track form, selecting the expression to be input from the feature database according to the extracted expression feature value comprises:
matching the extracted expression feature value in voice form against first expression feature values stored in a first feature database;
obtaining the a first expression feature values whose matching degree exceeds a first threshold, a ≥ 1;
matching the extracted expression feature value in facial form or in gesture-track form against second expression feature values stored in a second feature database;
obtaining the b second expression feature values whose matching degree exceeds a second threshold, b ≥ 1;
taking, as candidate expressions, the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values, x ≥ a, y ≥ b;
sorting the candidate expressions by at least one sorting criterion selected according to a preset priority, the sorting criteria comprising any one of repetition count, historical use count, most recent use time, and the matching degree; and
selecting one candidate expression as the expression to be input according to the sorting result;
wherein the feature database comprises the first feature database and the second feature database, and the expression feature values comprise the first expression feature values and the second expression feature values.
Optionally, before selecting the expression to be input from the feature database according to the extracted expression feature value, the method further comprises:
collecting environment information around the electronic device, the environment information comprising at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
determining a current usage environment according to the environment information; and
selecting, from at least one candidate feature database, the candidate feature database corresponding to the current usage environment as the feature database.
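The optional environment-aware step can be sketched as below. The classification rule and the candidate feature databases are invented assumptions; a real device would combine more of the collected environment information.

```python
# Sketch of choosing a feature database by usage environment: classify the
# environment from collected readings, then pick the matching candidate
# feature database. The rule and libraries are illustrative only.

CANDIDATE_FEATURE_DBS = {
    "quiet-daytime": {"haha": "polite-smile"},
    "noisy-evening": {"haha": "laughing-face"},
}

def current_usage_environment(hour, ambient_volume_db):
    """Toy rule combining time information and ambient volume information."""
    if 9 <= hour < 18 and ambient_volume_db < 50:
        return "quiet-daytime"
    return "noisy-evening"

feature_db = CANDIDATE_FEATURE_DBS[current_usage_environment(10, 42)]
print(feature_db["haha"])  # polite-smile
```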
Optionally, collecting the input signal through the input unit of the electronic device comprises:
if the input signal comprises the input signal in voice form, collecting the input signal in voice form through a microphone; and
if the input signal comprises the input signal in picture form or the input signal in video form, collecting the input signal in picture form or the input signal in video form through a camera.
Optionally, before selecting the expression to be input from the feature database according to the extracted expression feature value, the method further comprises:
recording, for each expression, at least one training signal for training the expression;
extracting at least one training feature value from the at least one training signal;
taking the most frequently occurring training feature value as the expression feature value corresponding to the expression; and
storing the correspondence between the expression and the expression feature value in the feature database.
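The optional training steps above can be sketched as follows; the extraction function and the training signals are illustrative assumptions.

```python
# Sketch of the training branch: record several training signals for an
# expression, extract a training feature value from each, and store the most
# frequently occurring value as the expression's feature value.

from collections import Counter

def train_expression(expression, training_signals, extract):
    training_values = [extract(s) for s in training_signals]
    # Keep the training feature value that occurs most often.
    best_value, _ = Counter(training_values).most_common(1)[0]
    return {best_value: expression}

signals = ["haha that is great", "haha nice", "so funny lol"]
extract = lambda s: "haha" if "haha" in s else "lol"
feature_db = train_expression("laughing-face", signals, extract)
print(feature_db)  # {'haha': 'laughing-face'}
```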
Optionally, after selecting the expression to be input from the feature database according to the extracted expression feature value, the method further comprises:
directly displaying the expression to be input in an input box or a chat bar.
According to a second aspect, an expression input apparatus is provided for use in an electronic device, the apparatus comprising:
a signal collection module, configured to collect an input signal through an input unit of the electronic device;
a feature extraction module, configured to extract an expression feature value from the input signal; and
an expression selection module, configured to select, according to the extracted expression feature value, an expression to be input from a feature database, the feature database storing correspondences between different expression feature values and different expressions.
Optionally, the feature extraction module comprises a first extraction unit, and/or a second extraction unit, and/or a third extraction unit;
the first extraction unit is configured to, if the input signal comprises an input signal in voice form, extract an expression feature value in voice form from the input signal in voice form;
the second extraction unit is configured to, if the input signal comprises an input signal in picture form, determine a face region in the input signal in picture form and extract an expression feature value in facial form from the face region; and
the third extraction unit is configured to, if the input signal comprises an input signal in video form, extract an expression feature value in gesture-track form from the input signal in video form.
Optionally, when the extracted expression feature value is any one of the expression feature value in voice form, the expression feature value in facial form, and the expression feature value in gesture-track form, the expression selection module comprises a feature matching unit, a candidate selection unit, an expression sorting unit, and an expression determination unit;
the feature matching unit is configured to match the extracted expression feature value against the expression feature values stored in the feature database;
the candidate selection unit is configured to take, as candidate expressions, the n expressions corresponding to the m expression feature values whose matching degree exceeds a predetermined threshold, n ≥ m ≥ 1;
the expression sorting unit is configured to sort the n candidate expressions by at least one sorting criterion selected according to a preset priority, the sorting criteria comprising any one of historical use count, most recent use time, and the matching degree; and
the expression determination unit is configured to select one candidate expression as the expression to be input according to the sorting result.
Optionally, when the extracted expression feature value comprises the expression feature value in voice form and also comprises the expression feature value in facial form or the expression feature value in gesture-track form, the expression selection module comprises a first matching unit, a first obtaining unit, a second matching unit, a second obtaining unit, a candidate determination unit, a candidate sorting unit, and an expression selection unit;
the first matching unit is configured to match the extracted expression feature value in voice form against the first expression feature values stored in the first feature database;
the first obtaining unit is configured to obtain the a first expression feature values whose matching degree exceeds the first threshold, a ≥ 1;
the second matching unit is configured to match the extracted expression feature value in facial form or in gesture-track form against the second expression feature values stored in the second feature database;
the second obtaining unit is configured to obtain the b second expression feature values whose matching degree exceeds the second threshold, b ≥ 1;
the candidate determination unit is configured to take, as candidate expressions, the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values, x ≥ a, y ≥ b;
the candidate sorting unit is configured to sort the candidate expressions by at least one sorting criterion selected according to a preset priority, the sorting criteria comprising any one of repetition count, historical use count, most recent use time, and the matching degree; and
the expression selection unit is configured to select one candidate expression as the expression to be input according to the sorting result;
wherein the feature database comprises the first feature database and the second feature database, and the expression feature values comprise the first expression feature values and the second expression feature values.
Optionally, the apparatus further comprises:
an information collection module, configured to collect environment information around the electronic device, the environment information comprising at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
an environment determination module, configured to determine a current usage environment according to the environment information; and
a feature selection module, configured to select, from at least one candidate feature database, the candidate feature database corresponding to the current usage environment as the feature database.
Optionally, the signal collection module comprises a voice collection unit and/or an image collection unit;
the voice collection unit is configured to, if the input signal comprises the input signal in voice form, collect the input signal in voice form through a microphone; and
the image collection unit is configured to, if the input signal comprises the input signal in picture form or the input signal in video form, collect the input signal in picture form or the input signal in video form through a camera.
Optionally, the apparatus further comprises:
a signal recording module, configured to record, for each expression, at least one training signal for training the expression;
a feature recording module, configured to extract at least one training feature value from the at least one training signal;
a feature-value selection module, configured to take the most frequently occurring training feature value as the expression feature value corresponding to the expression; and
a feature storage module, configured to store the correspondence between the expression and the expression feature value in the feature database.
Optionally, the apparatus further comprises:
an expression display module, configured to directly display the expression to be input in an input box or a chat bar.
The technical solutions provided by the embodiments of the present invention bring the following beneficial effects:
An input signal is collected through an input unit of an electronic device, an expression feature value is extracted from the input signal, and an expression to be input is selected from a feature database according to the extracted expression feature value, the feature database storing correspondences between different expression feature values and different expressions. This solves the problems in the prior art that expression input is slow and the process is complicated, simplifies the expression input process, and improves the speed of expression input.
Embodiment
To make the objectives, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
In the embodiments of the present invention, the electronic device may be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, a desktop computer, a smart television, or the like.
Referring to FIG. 1, which shows a flowchart of an expression input method provided by an embodiment of the present invention, this embodiment is described with the expression input method applied to an electronic device. The method comprises the following steps:
Step 102: collect an input signal through an input unit of the electronic device.
Step 104: extract an expression feature value from the input signal.
Step 106: select, according to the extracted expression feature value, an expression to be input from a feature database, the feature database storing correspondences between different expression feature values and different expressions.
In summary, in the expression input method provided by this embodiment, an input signal is collected through an input unit of the electronic device, an expression feature value is extracted from the input signal, and an expression to be input is selected from a feature database according to the extracted expression feature value, the feature database storing correspondences between different expression feature values and different expressions. This solves the problems in the prior art that expression input is slow and the process is complicated, simplifies the expression input process, and improves the speed of expression input.
Referring to FIG. 2A, which shows a flowchart of an expression input method provided by another embodiment of the present invention, this embodiment is described with the expression input method applied to an electronic device. The method comprises the following steps:
Step 201: determine whether the electronic device is in an automatic collection state or a manual collection state.
The electronic device determines whether it is in the automatic collection state or the manual collection state. The automatic collection state means that the electronic device automatically turns on the input unit to collect the input signal; the manual collection state means that the user turns on the input unit to collect the input signal.
Step 202: if the electronic device is determined to be in the automatic collection state, turn on the input unit.
If the electronic device is determined to be in the automatic collection state, the electronic device turns on the input unit automatically. The input unit comprises a microphone and/or a camera, and may be built into the electronic device or externally connected to it.
After turning on the input unit, the electronic device performs step 204 below.
Step 203: if the electronic device is determined to be in the manual collection state, detect whether the input unit is turned on.
If the electronic device is determined to be in the manual collection state, the electronic device detects whether the input unit is turned on. Because the manual collection state means that the user turns on the input unit to collect the input signal, the electronic device here detects whether the user has turned on the input unit. The user may turn on the input unit through a control such as a button or a switch.
When the input unit is a microphone, referring also to FIG. 2B, which shows a chat interface of a typical instant messaging application, a microphone button 22 is arranged in an input box 24. Pressing and holding the microphone button 22 keeps the microphone turned on; when the user releases the microphone button 22, the microphone is turned off.
If the detection result is yes, that is, the input unit is turned on, step 204 below is performed; if the detection result is no, that is, the input unit is not turned on, the following steps are not performed.
Step 204: collect the input signal through the input unit of the electronic device.
Whether the electronic device is in the automatic collection state or the manual collection state, once the input unit is turned on, the electronic device collects the input signal through the input unit.
In a first possible implementation, if the input unit comprises a microphone, the input signal in voice form is collected through the microphone. The input signal in voice form may be the user's speech, or a sound made by the user or another object.
In a second possible implementation, if the input unit comprises a camera, the input signal in picture form or in video form is collected through the camera. The input signal in picture form may be the user's facial expression, and the input signal in video form may be the user's body movement, a gesture track of the user, or the like.
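The two collection implementations can be sketched as a simple dispatch on the input unit; the capture functions below are hypothetical stand-ins for real microphone and camera APIs.

```python
# Sketch of dispatching collection by input unit: the microphone yields a
# voice-form signal, the camera a picture- or video-form signal. The capture
# helpers are placeholders, not real device APIs.

def record_audio():
    return b"fake-audio-bytes"      # placeholder for a real recording

def capture_frames():
    return [b"fake-frame"]          # placeholder for real camera frames

def collect_input_signal(input_unit):
    if input_unit == "microphone":
        return {"form": "voice", "data": record_audio()}
    if input_unit == "camera":
        return {"form": "picture-or-video", "data": capture_frames()}
    raise ValueError("unknown input unit: " + input_unit)

print(collect_input_signal("microphone")["form"])  # voice
```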
Step 205: extract an expression feature value from the input signal.
After collecting the input signal, the electronic device extracts an expression feature value from it.
In a first possible implementation, if the input signal comprises an input signal in voice form, an expression feature value in voice form is extracted from the input signal in voice form.
The electronic device may extract the expression feature value in voice form from the input signal in voice form by a data dimensionality reduction method or a feature value selection method. The data dimensionality reduction method is a common method for simplifying and effectively analyzing high-dimensional signals such as voice or images: by reducing the dimensionality of a high-dimensional signal, data that do not reflect the essential characteristics of the signal can be removed. A feature value in the input signal, that is, data reflecting the essential characteristics of the input signal, can thus be obtained. In this embodiment, because the feature value is extracted in voice form from the input signal in voice form and serves the expression input method provided herein, it is called an expression feature value.
Alternatively, the expression feature value may be extracted from the input signal by the feature value selection method. The electronic device may preset at least one expression feature value and, after collecting the input signal, analyze it to search for any of the preset expression feature values.
In this embodiment, suppose the input signal in voice form collected through the microphone is "sure, haha, no problem"; after analyzing this input signal, the electronic device extracts the expression feature value "haha" in voice form.
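The feature value selection method described above can be sketched as a scan over preset feature values; the preset list is an invented example.

```python
# Sketch of the feature value selection method: preset a list of expression
# feature values and scan the recognized voice input for any of them. The
# preset list is illustrative only.

PRESET_FEATURE_VALUES = ["haha", "sob", "wow"]

def find_preset_feature(voice_text):
    for feature in PRESET_FEATURE_VALUES:
        if feature in voice_text:
            return feature
    return None

print(find_preset_feature("sure, haha, no problem"))  # haha
```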
In a second possible implementation, if the input signal comprises an input signal in picture form, a face region is determined in the input signal in picture form, and an expression feature value in facial form is extracted from the face region.
The electronic device may first determine the face region in the input signal in picture form by image recognition, and then extract the expression feature value in facial form from the face region by the data dimensionality reduction method or the feature value selection method.
For example, after a picture of the user's face is taken through the camera, the face region in the picture is determined; the face region is then analyzed, and an expression feature value in facial form such as "happy", "sad", "crying", or "going crazy" is extracted from it.
In a third possible implementation, if the input signal comprises an input signal in video form, an expression feature value in gesture-track form is extracted from the input signal in video form.
When the input signal is an input signal in video form collected by the electronic device, such as the user's body movements over a period of time or a gesture track, the electronic device can extract an expression feature value in gesture-track form from the input signal in video form.
Step 206: select, according to the extracted expression feature value, the expression to be input from the feature database.
Because the feature database stores correspondences between different expression feature values and different expressions, the electronic device selects the expression to be input according to the extracted expression feature value and the stored correspondences, and then inserts the selected expression into the input box 24 to await sending by the user, or displays it directly in the chat bar 26.
Specifically, when the extracted expression feature value is any one of the expression feature value in voice form, the expression feature value in facial form, and the expression feature value in gesture-track form, this step may comprise the following sub-steps:
(1) Match the extracted expression feature value against the expression feature values stored in the feature database.
The electronic device matches the extracted expression feature value against the expression feature values stored in the feature database. Because a stored expression feature value is a specific one (for example, an expression feature value in voice form recorded by a particular person), the extracted expression feature value differs from the stored ones to some degree, so the electronic device needs to match the two and obtain a matching degree.
(2) Take, as candidate expressions, the n expressions corresponding to the m expression feature values whose matching degree exceeds a predetermined threshold, n ≥ m ≥ 1.
The electronic device takes as candidate expressions the n expressions corresponding to the m expression feature values whose matching degree exceeds the predetermined threshold, n ≥ m ≥ 1. One expression feature value corresponds to at least one expression. The predetermined threshold may be preset according to actual conditions, for example, 80%.
In this embodiment, suppose the candidate expressions obtained by the electronic device are expressions A, B, and C, corresponding to an expression feature value with a matching degree of 98%, and expression D, corresponding to another expression feature value with a matching degree of 90%.
(3) Sort the n candidate expressions by at least one sorting criterion selected according to a preset priority.
The electronic device sorts the n candidate expressions by at least one sorting criterion selected according to the preset priority; the sorting criteria comprise any one of historical use count, most recent use time, and matching degree. The priority order among the criteria may be preset according to actual conditions, for example, from high to low: matching degree, historical use count, most recent use time. When the electronic device cannot single out the expression to be input by the first sorting criterion, it continues screening with the second criterion, and so on, until one candidate expression is finally selected as the expression to be input.
In this embodiment, after first sorting the four expressions A, B, C, and D by matching degree, the electronic device obtains A, B, C, D in order and finds that expressions A, B, and C all have a matching degree of 98%. It then sorts A, B, and C by historical use count and obtains B, A, C in order (assuming the sorting rule is from most used to least used, and that expression A has been used 15 times, expression B 20 times, and expression C 3 times). The electronic device finds that expression B has the highest historical use count and therefore selects expression B as the expression to be input.
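The cascaded screening in sub-step (3) can be sketched as follows; the candidate data mirror the worked example above, and the dictionary layout is an illustrative assumption.

```python
# Sketch of cascaded screening: apply each sorting criterion in priority
# order and stop as soon as one criterion leaves a unique leader.

def screen(candidates, criteria):
    pool = list(candidates)
    for key in criteria:
        best = max(key(c) for c in pool)
        pool = [c for c in pool if key(c) == best]
        if len(pool) == 1:   # a unique leader remains: stop screening
            break
    return pool[0]["name"]

candidates = [
    {"name": "A", "degree": 0.98, "uses": 15},
    {"name": "B", "degree": 0.98, "uses": 20},
    {"name": "C", "degree": 0.98, "uses": 3},
    {"name": "D", "degree": 0.90, "uses": 50},
]
# Priority: matching degree first, then historical use count.
print(screen(candidates, [lambda c: c["degree"], lambda c: c["uses"]]))  # B
```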
(4) Select one candidate expression as the expression to be input according to the sorting result.
The electronic device selects one candidate expression as the expression to be input according to the sorting result. In the expression input method provided by the embodiment of the present invention, the electronic device automatically selects one of multiple candidate expressions as the expression to be input, without requiring the user to choose or confirm, which simplifies the expression input flow and makes expression input more efficient and convenient.
When the extracted expression feature value comprises the expression feature value in voice form and also comprises the expression feature value in facial form or the expression feature value in gesture-track form, this step may comprise the following sub-steps:
(1) Match the extracted expression feature value in voice form against the first expression feature values stored in the first feature database.
Unlike the selection manner described above, here the electronic device comprehensively analyzes the expression feature values of two forms to determine the expression to be input, which can make the selected expression more accurate and better meet the user's needs.
The electronic device matches the extracted expression feature value in voice form against the first expression feature values stored in the first feature database, and likewise obtains the matching degree between the extracted expression feature value in voice form and each stored first expression feature value. In this embodiment, suppose the extracted expression feature value in voice form is "haha".
(2) Obtain the a first expression feature values whose matching degree exceeds the first threshold, a ≥ 1.
The electronic device obtains the a first expression feature values whose matching degree exceeds the first threshold, a ≥ 1. In this embodiment, suppose a = 1.
(3) Match the extracted expression feature value in facial form or in gesture-track form against the second expression feature values stored in the second feature database.
The electronic device matches the extracted expression feature value in facial form or in gesture-track form against the second expression feature values stored in the second feature database. In this embodiment, suppose the extracted expression feature value in facial form is a laughing facial expression.
(4) Obtain the b second expression feature values whose matching degree exceeds the second threshold, b ≥ 1.
The electronic device obtains the b second expression feature values whose matching degree exceeds the second threshold, b ≥ 1. In this embodiment, suppose b = 2.
(5) Take, as candidate expressions, the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values, x ≥ a, y ≥ b.
The electronic device takes as candidate expressions the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values, x ≥ a, y ≥ b. In this embodiment, suppose the candidate expressions are the three expressions "laugh", "smile", and "grin" corresponding to the first expression feature value whose matching degree exceeds the first threshold, the "smile" expression corresponding to the first of the second expression feature values whose matching degree exceeds the second threshold, and the "pout" expression corresponding to the second of the second expression feature values whose matching degree exceeds the second threshold.
(6) choosing at least one sort criteria according to pre-setting priority sorts to alternative expression.
Electronic equipment is chosen at least one sort criteria according to pre-setting priority alternative expression is sorted, and sort criteria comprises any one in multiplicity, historical access times, nearest service time and matching degree.Priority orders between each sort criteria can preset according to actual conditions, such as according to priority being from high to low multiplicity, historical access times, nearest service time, matching degree.In the time that electronic equipment cannot filter out according to first sort criteria the expression that needs input, choose second sort criteria and continue screening, by that analogy, finishing screen is selected an alternative expression as the expression that needs input.
In the present embodiment, suppose the expressions "laugh", "smile", "snagging" and "beep mouth" are first sorted by repetition count; the repetition count of "smile" is found to be the largest, so "smile" is directly chosen as the expression to be input.
(7) Screen out one candidate expression as the expression to be input according to the sorting result.
The electronic device screens out one candidate expression as the expression to be input according to the sorting result. In the expression input method provided by this embodiment of the present invention, the electronic device automatically screens out one candidate expression from the multiple candidate expressions as the expression to be input, without requiring the user to choose or confirm, which simplifies the expression input flow and makes expression input more efficient and convenient.
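The prioritized fallback screening can be sketched as below. The usage statistics and the criterion names are invented for the example; the disclosure only fixes the criteria themselves and their priority order.

```python
# Per-candidate statistics; the numbers are illustrative assumptions.
stats = {
    "laugh":      {"repetition": 1, "history_uses": 40, "last_used": 100, "match": 0.8},
    "smile":      {"repetition": 2, "history_uses": 55, "last_used": 160, "match": 0.9},
    "snagging":   {"repetition": 1, "history_uses": 12, "last_used": 90,  "match": 0.7},
    "beep mouth": {"repetition": 1, "history_uses": 3,  "last_used": 20,  "match": 0.6},
}

# Preset priority order of the sort criteria, highest first.
CRITERIA = ["repetition", "history_uses", "last_used", "match"]

def screen(candidates):
    """Keep only the top-scoring candidates per criterion until one remains."""
    for criterion in CRITERIA:
        best = max(stats[c][criterion] for c in candidates)
        candidates = [c for c in candidates if stats[c][criterion] == best]
        if len(candidates) == 1:
            break  # this criterion already screened out a single expression
    return candidates[0]

print(screen(["laugh", "smile", "snagging", "beep mouth"]))  # smile
```

Here the first criterion (repetition count) already singles out "smile", matching the embodiment; a tie would instead fall through to historical use count, and so on.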
In addition, after the electronic device matches the extracted expressive feature value against the expressive feature values stored in the feature database, if no expressive feature value whose matching degree is greater than the threshold is found, the user can be prompted that no matching result was found, for example by a pop-up window.
Step 207: directly display the expression to be input in the input box or the chat panel.
After the electronic device chooses the expression to be input from the feature database, the expression is directly displayed in the input box or the chat panel. With reference to FIG. 2B, the electronic device can insert the chosen expression into the input box 24, where it waits to be sent by the user, or display it directly in the chat panel 26.
It should be noted that the expression input method provided by this embodiment can also choose the expression in combination with the environment in which the electronic device is located. Specifically, before step 206 above, the following steps may also be included:
(1) Gather environmental information around the electronic device.
The electronic device gathers the environmental information around it. The environmental information includes at least one of time information, environmental volume information, environmental light intensity information and environmental image information. The environmental volume information can be collected by a microphone, the environmental light intensity information by a light intensity sensor, and the environmental image information by a camera.
(2) Determine the current usage environment according to the environmental information.
The electronic device determines the current usage environment according to the environmental information. After gathering the surrounding environmental information, the electronic device analyzes each piece of it comprehensively to determine the current usage environment. For example, when the time information is 22:00, the environmental volume is 2 decibels and the environmental light intensity is very weak, it can be determined that the current usage environment is one in which the user is sleeping. For another example, when the time information is 14:00, the environmental volume is 75 decibels, the environmental light intensity is strong and the environmental image shows a street, it can be determined that the current usage environment is one in which the user is out shopping.
(3) from least one alternative features storehouse, choose the alternative features storehouse corresponding with current environment for use as feature database.
Corresponding relation in electronic equipment between pre-stored different environments for use and different alternative features storehouse, when electronic equipment obtains after current environment for use, chooses corresponding alternative features storehouse as feature database.Afterwards, electronic equipment chooses according to the expressive features value of extracting the expression that needs input again from feature database.
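A minimal rule-based sketch of these environment steps follows. The classification rules, environment names and database names are assumptions for illustration; the disclosure leaves the concrete analysis open.

```python
def determine_environment(hour, volume_db, light):
    """Infer the current usage environment from sensor readings (toy rules)."""
    if hour >= 22 and volume_db < 10 and light == "weak":
        return "sleeping"       # e.g. 22:00, 2 dB, very weak light
    if 8 <= hour < 20 and volume_db > 60 and light == "strong":
        return "outdoors"       # e.g. 14:00, 75 dB, strong light, street image
    return "default"

# Pre-stored correspondence between usage environments and candidate
# feature databases (names are hypothetical).
ENV_TO_DATABASE = {
    "sleeping": "quiet_database",
    "outdoors": "street_database",
    "default":  "general_database",
}

def choose_feature_database(hour, volume_db, light):
    return ENV_TO_DATABASE[determine_environment(hour, volume_db, light)]

print(choose_feature_database(22, 2, "weak"))     # quiet_database
print(choose_feature_database(14, 75, "strong"))  # street_database
```

Expression matching then proceeds against the database chosen here, so the same speech or facial input can yield different expressions in different environments.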
Also it should be noted that, the corresponding relation between different expressive features values and the different expression of storing in feature database can be to be set by system or designer in advance, such as in the time that user installation expression is wrapped, in this expression bag, just carries feature database.Designer, after design completes expression, has also set the corresponding relation between different expressive features values and different expression, and has created feature database simultaneously, then expression and feature database is together packaged into expression bag.Corresponding relation between different expressive features values and the different expression of storing in feature database in addition, can also be set voluntarily by user.In the time being set voluntarily by user, the expression input method that the present embodiment provides also comprises following several step:
First, for each expression, record at least one training signal used to train that expression.
For each expression, the electronic device records at least one training signal used to train that expression. The user can train the expressions, thereby customizing the correspondence between different expressive feature values and different expressions. For example, the user selects four commonly used expressions from the expression selection interface, namely expression A, expression B, expression C and expression D. Taking the training of expression A as an example, the user selects expression A and repeats the word "snagging" three times, and the electronic device records these three training signals.
Of course, the electronic device still gathers and records the training signals through an input unit such as a microphone or a camera.
The second, from least one training signal, extract at least one training characteristics value.
Electronic equipment extracts at least one training characteristics value from least one training signal.Identical with above-mentioned steps 205, electronic equipment can extract training characteristics value by Method of Data with Adding Windows or eigenwert system of selection from training signal.Training signal can be the training signal of speech form, can be also the training signal of picture form, can also be the training signal of visual form.
Third, take the training feature value with the largest repetition count as the expressive feature value corresponding to the expression.
The electronic device takes the training feature value with the largest repetition count as the expressive feature value corresponding to the expression. When the training signals recorded by the electronic device are identical, the training feature values extracted from them are usually identical. For example, when the three training signals recorded by the electronic device are the user saying "snagging", the three extracted training feature values are usually all "snagging".
However, when the electronic device gathers the training signals through an input unit such as a microphone or a camera, interference from the surrounding environment may occur, such as noise or image interference, and the training feature values the electronic device then extracts from the training signals may differ from one another. Therefore, the electronic device takes the training feature value with the largest repetition count as the expressive feature value corresponding to the expression. For example, when the three training signals recorded by the electronic device are the user saying "snagging", and two of the three extracted training feature values are "snagging" while the other one is different, the electronic device chooses "snagging" as the expressive feature value corresponding to expression A.
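The "most repeated training feature value wins" rule is a simple majority vote; one way to sketch it, using the standard library's `collections.Counter`, is shown below (the function and value names are illustrative).

```python
from collections import Counter

def train_expression(expression, training_values):
    """Map the expression to the training feature value extracted most often."""
    value, _count = Counter(training_values).most_common(1)[0]
    return {expression: value}

# Two of the three noisy extractions agree, so "snagging" is stored.
feature_db = train_expression("expression A", ["snagging", "snagging", "smile"])
print(feature_db)  # {'expression A': 'snagging'}
```

The resulting mapping can then be merged into the original feature database or into a user-defined one, as described in the fourth step.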
Fourth, store the correspondence between the expression and the expressive feature value in the feature database.
The electronic device stores the correspondence between the expression and the expressive feature value in the feature database. In practical applications, the correspondence obtained through training can be stored in the original feature database; alternatively, the user can create a custom feature database and store the correspondence obtained through training in that custom feature database.
Through the above four steps, the correspondence between expressions and expressive feature values can be set by the user, further improving the user experience.
Also it should be noted that, in order to distinguish when user needs to use the input of expressing one's feelings of expression input method that the present embodiment provides, can also carry out the step that detects cursor and whether be arranged in input frame before step 201.Cursor is used to indicate the position of the contents such as user's input characters, expression or picture.Incorporated by reference to reference to figure 2B, cursor 28 is arranged in input frame 24.Whether electronic equipment is using input frame 24 to carry out the input of the contents such as word, expression or picture according to the position probing user of cursor 28.In the time that cursor 28 is arranged in input frame 24, default user is using input frame 24, now carries out above-mentioned steps 201.
In summary, in the expression input method provided by this embodiment, an input signal is gathered through the input unit on the electronic device, an expressive feature value is extracted from the input signal, and the expression to be input is chosen from the feature database according to the extracted expressive feature value, the feature database storing the correspondence between different expressive feature values and different expressions. This solves the problem in the prior art that expression input is slow and its process is complicated, simplifies the expression input process, and improves the speed of expression input.
In addition, the input signal in speech form is gathered by a microphone, or the input signal in picture form or visual form is gathered by a camera, and the expression is then input accordingly, which enriches the ways in which expressions can be input. Moreover, the user can set the correspondence between different expressive feature values and different expressions, fully meeting the user's needs.
Furthermore, the above embodiment provides two ways of choosing the expression to be input. The first way determines the expression to be input after analyzing an expressive feature value of one form, which is relatively simple and fast. The second way determines the expression to be input by comprehensively analyzing expressive feature values of two forms, which makes the chosen expression more accurate and fully meets the user's needs.
In a concrete example, Xiao Ming opens an application with an information transmit-receive function installed on a smart television and at the same time turns on the front-facing camera of the smart television to capture a picture of his facial region. The corners of Xiao Ming's mouth turn up slightly in a smiling expression. The smart television extracts an expressive feature value from the captured picture of the facial region, finds in the feature database the correspondence between the expressive feature value and an expression, and inserts the smiling expression into the input box of the chat interface. Afterwards, when Xiao Ming shows a sad expression, the smart television inserts the sad expression into the input box of the chat interface.
In another concrete example, Xiao Hong uses instant messaging (IM) software installed on a mobile phone and, by training expressions, has set several groups of correspondences between expressive feature values and expressions. Afterwards, during a chat, when the mobile phone receives an input signal in speech form saying "I am so happy today", it inserts the corresponding expression into the input box of the chat interface according to the correspondence between the expressive feature value "happy" and that expression; when the mobile phone receives an input signal in speech form saying "it is snowing outside", it inserts the corresponding expression into the input box of the chat interface according to the correspondence between the expressive feature value "snow" and that expression; and when the mobile phone receives an input signal in speech form saying "this snow is beautiful, I like it very much", it inserts the corresponding expression into the input box of the chat interface according to the correspondence between the expressive feature value "like" and that expression.
The following are device embodiments of the present invention, which can be used to carry out the method embodiments of the present invention. For details not disclosed in the device embodiments, please refer to the method embodiments of the present invention.
Please refer to FIG. 3, which shows a block diagram of an expression input device provided by an embodiment of the present invention. The expression input device is used in an electronic device and can be implemented, by software, hardware or a combination of both, as part or all of the electronic device. The expression input device comprises: a signal gathering module 310, a feature extraction module 320 and an expression choosing module 330.
The signal gathering module 310 is configured to gather an input signal through the input unit on the electronic device.
The feature extraction module 320 is configured to extract an expressive feature value from the input signal.
The expression choosing module 330 is configured to choose the expression to be input from a feature database according to the extracted expressive feature value, the feature database storing the correspondence between different expressive feature values and different expressions.
In summary, in the expression input device provided by this embodiment, an input signal is gathered through the input unit on the electronic device, an expressive feature value is extracted from the input signal, and the expression to be input is chosen from the feature database according to the extracted expressive feature value, the feature database storing the correspondence between different expressive feature values and different expressions. This solves the problem in the prior art that expression input is slow and its process is complicated, simplifies the expression input process, and improves the speed of expression input.
Please refer to FIG. 4, which shows a block diagram of an expression input device provided by another embodiment of the present invention. The expression input device is used in an electronic device and can be implemented, by software, hardware or a combination of both, as part or all of the electronic device. The expression input device comprises: a signal gathering module 310, a feature extraction module 320, an information gathering module 321, an environment determination module 322, a feature choosing module 323, an expression choosing module 330 and an expression display module 331.
The signal gathering module 310 is configured to gather an input signal through the input unit on the electronic device.
Specifically, the signal gathering module 310 comprises: a voice gathering unit 310a and/or an image gathering unit 310b.
The voice gathering unit 310a is configured to, if the input signal comprises an input signal in speech form, gather the input signal in speech form by a microphone.
The image gathering unit 310b is configured to, if the input signal comprises an input signal in picture form or an input signal in visual form, gather the input signal in picture form or the input signal in visual form by a camera.
The feature extraction module 320 is configured to extract an expressive feature value from the input signal.
Specifically, the feature extraction module 320 comprises: a first extraction unit 320a, and/or a second extraction unit 320b, and/or a third extraction unit 320c.
The first extraction unit 320a is configured to, if the input signal comprises an input signal in speech form, extract an expressive feature value in speech form from the input signal in speech form.
The second extraction unit 320b is configured to, if the input signal comprises an input signal in picture form, determine a facial region from the input signal in picture form and extract an expressive feature value in face form from the facial region.
The third extraction unit 320c is configured to, if the input signal comprises an input signal in visual form, extract an expressive feature value in attitude track form from the input signal in visual form.
Optionally, the expression input device further comprises: the information gathering module 321, the environment determination module 322 and the feature choosing module 323.
The information gathering module 321 is configured to gather environmental information around the electronic device, the environmental information comprising at least one of time information, environmental volume information, environmental light intensity information and environmental image information.
The environment determination module 322 is configured to determine the current usage environment according to the environmental information.
The feature choosing module 323 is configured to choose, from at least one candidate feature database, the candidate feature database corresponding to the current usage environment as the feature database.
The expression choosing module 330 is configured to choose the expression to be input from the feature database according to the extracted expressive feature value, the feature database storing the correspondence between different expressive feature values and different expressions.
When the extracted expressive feature value is any one of the expressive feature value in speech form, the expressive feature value in face form and the expressive feature value in attitude track form, the expression choosing module 330 comprises: a feature matching unit 330a, a candidate choosing unit 330b, an expression sorting unit 330c and an expression determining unit 330d.
The feature matching unit 330a is configured to match the extracted expressive feature value against the expressive feature values stored in the feature database.
The candidate choosing unit 330b is configured to take the n expressions corresponding to the m expressive feature values whose matching degree is greater than a predetermined threshold as candidate expressions, n >= m >= 1.
The expression sorting unit 330c is configured to sort the n candidate expressions according to at least one sort criterion chosen in order of preset priority, the sort criteria comprising any of historical use count, most recent use time and the matching degree.
The expression determining unit 330d is configured to screen out one candidate expression as the expression to be input according to the sorting result.
When the extracted expressive feature value comprises the expressive feature value in speech form and also comprises the expressive feature value in face form or the expressive feature value in attitude track form, the expression choosing module 330 comprises: a first matching unit 330e, a first obtaining unit 330f, a second matching unit 330g, a second obtaining unit 330h, a candidate determining unit 330i, a candidate sorting unit 330j and an expression choosing unit 330k.
The first matching unit 330e is configured to match the extracted expressive feature value in speech form against the first expressive feature values stored in a first feature database.
The first obtaining unit 330f is configured to obtain the a first expressive feature values whose matching degree is greater than a first threshold, a >= 1.
The second matching unit 330g is configured to match the extracted expressive feature value in face form or in attitude track form against the second expressive feature values stored in a second feature database.
The second obtaining unit 330h is configured to obtain the b second expressive feature values whose matching degree is greater than a second threshold, b >= 1.
The candidate determining unit 330i is configured to take the x expressions corresponding to the a first expressive feature values and the y expressions corresponding to the b second expressive feature values as candidate expressions, x >= a, y >= b.
The candidate sorting unit 330j is configured to sort the candidate expressions according to at least one sort criterion chosen in order of preset priority, the sort criteria comprising any of repetition count, historical use count, most recent use time and the matching degree.
The expression choosing unit 330k is configured to screen out one candidate expression as the expression to be input according to the sorting result.
The feature database comprises the first feature database and the second feature database, and the expressive feature value comprises the first expressive feature value and the second expressive feature value.
The expression display module 331 is configured to directly display the expression to be input in the input box or the chat panel.
Optionally, the expression input device further comprises: a signal recording module, a feature recording module, a feature selecting module and a feature storage module.
The signal recording module is configured to, for each expression, record at least one training signal used to train the expression.
The feature recording module is configured to extract at least one training feature value from the at least one training signal.
The feature selecting module is configured to take the training feature value with the largest repetition count as the expressive feature value corresponding to the expression.
The feature storage module is configured to store the correspondence between the expression and the expressive feature value in the feature database.
In summary, in the expression input device provided by this embodiment, an input signal is gathered through the input unit on the electronic device, an expressive feature value is extracted from the input signal, and the expression to be input is chosen from the feature database according to the extracted expressive feature value, the feature database storing the correspondence between different expressive feature values and different expressions. This solves the problem in the prior art that expression input is slow and its process is complicated, simplifies the expression input process, and improves the speed of expression input. In addition, the input signal in speech form is gathered by a microphone, or the input signal in picture form or visual form is gathered by a camera, and the expression is then input accordingly, which enriches the ways in which expressions can be input; and the user can set the correspondence between different expressive feature values and different expressions, fully meeting the user's needs.
It should be noted that the expression input device provided by the above embodiment is illustrated, when inputting an expression, only by the division into the above functional modules. In practical applications, the above functions can be allocated to different functional modules as needed; that is, the internal structure of the device can be divided into different functional modules to complete all or part of the functions described above. In addition, the expression input device provided by the above embodiment and the method embodiments of the expression input method belong to the same conception; for its specific implementation process, refer to the method embodiments, which will not be repeated here.
It should be understood that, as used herein, unless the context clearly supports an exception, the singular forms "a", "an" and "the" are intended to include the plural forms as well. It should further be understood that "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
The sequence numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
One of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments can be completed by hardware, or by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium. The storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc or the like.
The foregoing are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.