CN103823561A - Expression input method and device - Google Patents

Expression input method and device

Info

Publication number
CN103823561A
CN103823561A (application CN201410069166.9A)
Authority
CN
China
Prior art keywords
expression
expression feature value
input signal
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410069166.9A
Other languages
Chinese (zh)
Other versions
CN103823561B (en)
Inventor
Chen Chao (陈超)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Cubesili Information Technology Co Ltd
Original Assignee
Guangzhou Huaduo Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Huaduo Network Technology Co Ltd
Priority to CN201410069166.9A
Publication of CN103823561A
Priority to PCT/CN2014/095872
Application granted
Publication of CN103823561B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 - Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/01 - Indexing scheme relating to G06F 3/01
    • G06F 2203/011 - Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an expression input method and device, belonging to the field of the Internet. The method includes: collecting an input signal through an input unit on an electronic device; extracting an expression feature value from the input signal; and selecting, from a feature library and according to the extracted expression feature value, the expression to be input, where the feature library stores correspondences between different expression feature values and different expressions. By selecting the expression to be input directly from the extracted feature value, the method solves the prior-art problems that expression input is slow and the input process is complicated; it simplifies the expression input process and increases the speed of expression input.

Description

Expression input method and device
Technical field
The present invention relates to the field of the Internet, and in particular to an expression input method and device.
Background art
With the popularization of IM (Instant Messenger, instant messaging) applications, blogs, and SMS (Short Message Service) applications, users increasingly rely on applications with message sending and receiving functions to communicate and keep in touch with each other.
When communicating with the above applications, users often need to input expressions (emoticons) to convey a particular meaning or to enrich the input content and make it more interesting. In a typical implementation, when one user needs to input an expression, the user opens an expression selection interface, picks the desired expression from it, and sends the chosen expression to the other user. The other user then receives and reads the expression.
In the course of realizing the present invention, the inventor found that the prior art has at least the following problem: to satisfy user demand as far as possible, an application often includes dozens or even hundreds of expressions for the user to choose from. When the expression selection interface contains many expressions, they must be displayed by category and/or by page. When inputting an expression, the user must first find the category and/or page containing the desired expression and then pick the expression from it. This makes expression input very slow and increases the complexity of the input process.
Summary of the invention
To solve the prior-art problems that expression input is slow and the process is complicated, embodiments of the present invention provide an expression input method and device. The technical solutions are as follows:
According to a first aspect, an expression input method is provided for use in an electronic device. The method includes:
collecting an input signal through an input unit on the electronic device;
extracting an expression feature value from the input signal;
selecting, from a feature library and according to the extracted expression feature value, the expression to be input, where the feature library stores correspondences between different expression feature values and different expressions.
Optionally, extracting the expression feature value from the input signal includes:
if the input signal includes an input signal in speech form, extracting an expression feature value in speech form from the speech-form input signal;
if the input signal includes an input signal in picture form, determining a face region in the picture-form input signal and extracting an expression feature value in face form from the face region;
if the input signal includes an input signal in video form, extracting an expression feature value in posture-trajectory form from the video-form input signal.
Optionally, when the extracted expression feature value is any one of the speech-form, face-form, and posture-trajectory-form expression feature values, selecting the expression to be input from the feature library according to the extracted expression feature value includes:
matching the extracted expression feature value against the expression feature values stored in the feature library;
taking the n expressions corresponding to the m expression feature values whose matching degree exceeds a predetermined threshold as candidate expressions, n >= m >= 1;
selecting at least one sorting criterion according to a preset priority order and sorting the n candidate expressions, where the sorting criteria include any of historical usage count, most recent usage time, and the matching degree;
filtering out one candidate expression as the expression to be input according to the sorting result.
Optionally, when the extracted expression feature values include an expression feature value in speech form together with an expression feature value in face form or in posture-trajectory form, selecting the expression to be input from the feature library according to the extracted expression feature values includes:
matching the extracted speech-form expression feature value against the first expression feature values stored in a first feature library;
obtaining the a first expression feature values whose matching degree exceeds a first threshold, a >= 1;
matching the extracted face-form or posture-trajectory-form expression feature value against the second expression feature values stored in a second feature library;
obtaining the b second expression feature values whose matching degree exceeds a second threshold, b >= 1;
taking the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x >= a, y >= b;
selecting at least one sorting criterion according to a preset priority order and sorting the candidate expressions, where the sorting criteria include any of repetition count, historical usage count, most recent usage time, and the matching degree;
filtering out one candidate expression as the expression to be input according to the sorting result;
where the feature library includes the first feature library and the second feature library, and the expression feature values include the first expression feature values and the second expression feature values.
Optionally, before selecting the expression to be input from the feature library according to the extracted expression feature value, the method further includes:
collecting environment information around the electronic device, where the environment information includes at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
determining the current usage environment according to the environment information;
selecting, from at least one candidate feature library, the candidate feature library corresponding to the current usage environment as the feature library.
Optionally, collecting the input signal through the input unit on the electronic device includes:
if the input signal includes the speech-form input signal, collecting the speech-form input signal through a microphone;
if the input signal includes the picture-form input signal or the video-form input signal, collecting the picture-form or video-form input signal through a camera.
Optionally, before selecting the expression to be input from the feature library according to the extracted expression feature value, the method further includes:
for each expression, recording at least one training signal for training the expression;
extracting at least one training feature value from the at least one training signal;
taking the most frequently occurring training feature value as the expression feature value corresponding to the expression;
storing the correspondence between the expression and the expression feature value in the feature library.
Optionally, after selecting the expression to be input from the feature library according to the extracted expression feature value, the method further includes:
directly displaying the expression to be input in an input box or a chat panel.
According to a second aspect, an expression input device is provided for use in an electronic device. The device includes:
a signal collection module, configured to collect an input signal through an input unit on the electronic device;
a feature extraction module, configured to extract an expression feature value from the input signal;
an expression selection module, configured to select, from a feature library and according to the extracted expression feature value, the expression to be input, where the feature library stores correspondences between different expression feature values and different expressions.
Optionally, the feature extraction module includes a first extraction unit and/or a second extraction unit and/or a third extraction unit;
the first extraction unit is configured to, if the input signal includes an input signal in speech form, extract an expression feature value in speech form from the speech-form input signal;
the second extraction unit is configured to, if the input signal includes an input signal in picture form, determine a face region in the picture-form input signal and extract an expression feature value in face form from the face region;
the third extraction unit is configured to, if the input signal includes an input signal in video form, extract an expression feature value in posture-trajectory form from the video-form input signal.
Optionally, when the extracted expression feature value is any one of the speech-form, face-form, and posture-trajectory-form expression feature values, the expression selection module includes a feature matching unit, a candidate selection unit, an expression sorting unit, and an expression determination unit;
the feature matching unit is configured to match the extracted expression feature value against the expression feature values stored in the feature library;
the candidate selection unit is configured to take the n expressions corresponding to the m expression feature values whose matching degree exceeds a predetermined threshold as candidate expressions, n >= m >= 1;
the expression sorting unit is configured to select at least one sorting criterion according to a preset priority order and sort the n candidate expressions, where the sorting criteria include any of historical usage count, most recent usage time, and the matching degree;
the expression determination unit is configured to filter out one candidate expression as the expression to be input according to the sorting result.
Optionally, when the extracted expression feature values include an expression feature value in speech form together with an expression feature value in face form or in posture-trajectory form, the expression selection module includes a first matching unit, a first obtaining unit, a second matching unit, a second obtaining unit, a candidate determination unit, a candidate sorting unit, and an expression selection unit;
the first matching unit is configured to match the extracted speech-form expression feature value against the first expression feature values stored in a first feature library;
the first obtaining unit is configured to obtain the a first expression feature values whose matching degree exceeds a first threshold, a >= 1;
the second matching unit is configured to match the extracted face-form or posture-trajectory-form expression feature value against the second expression feature values stored in a second feature library;
the second obtaining unit is configured to obtain the b second expression feature values whose matching degree exceeds a second threshold, b >= 1;
the candidate determination unit is configured to take the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x >= a, y >= b;
the candidate sorting unit is configured to select at least one sorting criterion according to a preset priority order and sort the candidate expressions, where the sorting criteria include any of repetition count, historical usage count, most recent usage time, and the matching degree;
the expression selection unit is configured to filter out one candidate expression as the expression to be input according to the sorting result;
where the feature library includes the first feature library and the second feature library, and the expression feature values include the first expression feature values and the second expression feature values.
Optionally, the device further includes:
an information collection module, configured to collect environment information around the electronic device, where the environment information includes at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
an environment determination module, configured to determine the current usage environment according to the environment information;
a feature library selection module, configured to select, from at least one candidate feature library, the candidate feature library corresponding to the current usage environment as the feature library.
Optionally, the signal collection module includes a voice collection unit and/or an image collection unit;
the voice collection unit is configured to, if the input signal includes the speech-form input signal, collect the speech-form input signal through a microphone;
the image collection unit is configured to, if the input signal includes the picture-form input signal or the video-form input signal, collect the picture-form or video-form input signal through a camera.
Optionally, the device further includes:
a signal recording module, configured to record, for each expression, at least one training signal for training the expression;
a feature recording module, configured to extract at least one training feature value from the at least one training signal;
a feature selection module, configured to take the most frequently occurring training feature value as the expression feature value corresponding to the expression;
a feature storage module, configured to store the correspondence between the expression and the expression feature value in the feature library.
Optionally, the device further includes:
an expression display module, configured to directly display the expression to be input in an input box or a chat panel.
The technical solutions provided by the embodiments of the present invention bring the following beneficial effects:
by collecting an input signal through an input unit on the electronic device, extracting an expression feature value from the input signal, and selecting the expression to be input from a feature library according to the extracted expression feature value, where the feature library stores correspondences between different expression feature values and different expressions, the solutions solve the prior-art problems that expression input is slow and the process is complicated, simplify the expression input process, and increase the speed of expression input.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an expression input method according to an embodiment of the present invention;
Fig. 2A is a flowchart of an expression input method according to another embodiment of the present invention;
Fig. 2B is a schematic diagram of a chat interface of a typical instant messaging application;
Fig. 3 is a block diagram of an expression input device according to an embodiment of the present invention;
Fig. 4 is a block diagram of an expression input device according to another embodiment of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
In the embodiments of the present invention, the electronic device may be a mobile phone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a laptop computer, a desktop computer, a smart television, or the like.
Please refer to Fig. 1, which shows a flowchart of an expression input method according to an embodiment of the present invention. This embodiment is described using an example in which the expression input method is applied to an electronic device. The expression input method includes the following steps:
Step 102: collect an input signal through an input unit on the electronic device.
Step 104: extract an expression feature value from the input signal.
Step 106: select, from a feature library and according to the extracted expression feature value, the expression to be input, where the feature library stores correspondences between different expression feature values and different expressions.
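The end-to-end flow of steps 102 to 106 can be illustrated with a short sketch. The following Python code is a minimal, hypothetical illustration of the pipeline; the function names, the dictionary-based feature library, and the keyword lookup are assumptions made for illustration, since the patent leaves the extraction and matching algorithms open:

```python
# Minimal sketch of steps 102-106 (illustrative only; all names are hypothetical).

def collect_input_signal():
    """Step 102: collect an input signal through an input unit
    (e.g. a microphone or camera); stubbed here as a fixed utterance."""
    return {"form": "speech", "data": "of course, haha, no problem"}

def extract_feature_value(signal):
    """Step 104: extract an expression feature value from the signal.
    Here: naive keyword spotting over a preset vocabulary."""
    vocabulary = ["haha", "cry", "angry"]  # assumed preset feature values
    for word in vocabulary:
        if word in signal["data"]:
            return word
    return None

# Feature library: correspondences between feature values and expressions.
FEATURE_LIBRARY = {"haha": "[laugh emoticon]", "cry": "[cry emoticon]"}

def select_expression(feature_value):
    """Step 106: look up the expression to be input in the feature library."""
    return FEATURE_LIBRARY.get(feature_value)

signal = collect_input_signal()
print(select_expression(extract_feature_value(signal)))  # -> [laugh emoticon]
```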
In summary, in the expression input method provided by this embodiment, an input signal is collected through an input unit on the electronic device, an expression feature value is extracted from the input signal, and the expression to be input is selected from a feature library according to the extracted expression feature value, where the feature library stores correspondences between different expression feature values and different expressions. The method solves the prior-art problems that expression input is slow and the process is complicated, simplifies the expression input process, and increases the speed of expression input.
Please refer to Fig. 2 A, it shows the method flow diagram of the expression input method that another embodiment of the present invention provides, and the present embodiment is applied to electronic equipment with this expression input method and illustrates.This expression input method comprises following several step:
Step 201: determine whether the electronic device is in an automatic collection state or a manual collection state.
The electronic device determines whether it is in the automatic collection state or the manual collection state. The automatic collection state means that the electronic device opens the input unit automatically to collect the input signal; the manual collection state means that the user opens the input unit to collect the input signal.
Step 202: if the device is determined to be in the automatic collection state, open the input unit.
If the electronic device is determined to be in the automatic collection state, the electronic device opens the input unit automatically. The input unit includes a microphone and/or a camera, and may be built into the electronic device or externally connected to it.
After opening the input unit, the electronic device performs the following step 204.
Step 203: if the device is determined to be in the manual collection state, detect whether the input unit is on.
If the electronic device is determined to be in the manual collection state, the electronic device detects whether the input unit is on. Since the manual collection state means that the user opens the input unit to collect the input signal, the electronic device now detects whether the user has opened the input unit. The user can open the input unit through a control such as a button or a switch.
When the input unit is a microphone, refer also to Fig. 2B, which shows a chat interface of a typical instant messaging application. A microphone button 22 is arranged in an input box 24. Long-pressing the microphone button 22 keeps the microphone on; when the user releases the microphone button 22, the microphone turns off.
If the detection result is yes, that is, the input unit is on, the following step 204 is performed; if the detection result is no, that is, the input unit is not on, the following steps are not performed.
Step 204: collect the input signal through the input unit on the electronic device.
Whether the electronic device is in the automatic collection state or the manual collection state, once the input unit is open, the electronic device collects the input signal through it.
In a first possible implementation, if the input unit includes a microphone, the speech-form input signal is collected through the microphone. The speech-form input signal may be speech uttered by the user, or a sound made by the user or by another object.
In a second possible implementation, if the input unit includes a camera, the picture-form or video-form input signal is collected through the camera. The picture-form input signal may be the user's facial expression; the video-form input signal may be the user's body movement, a gesture trajectory, or the like.
Step 205: extract an expression feature value from the input signal.
After collecting the input signal, the electronic device extracts an expression feature value from it.
In a first possible implementation, if the input signal includes a speech-form input signal, an expression feature value in speech form is extracted from the speech-form input signal.
The electronic device may extract the speech-form expression feature value from the speech-form input signal by a data windowing method or a feature value selection method. The data windowing method is a common technique for simplifying and effectively analyzing high-dimensional signals such as speech or images: by reducing the dimensionality of the high-dimensional signal, data that do not reflect its essential characteristics can be removed. A feature value obtained in this way is data that do reflect the essential characteristics of the input signal; because in this embodiment the feature value is extracted from a speech-form input signal and serves the expression input method provided herein, it is called an expression feature value.
Alternatively, the expression feature value may be extracted from the input signal by the feature value selection method: the electronic device presets at least one expression feature value and, after collecting the input signal, analyzes it to search for any of the preset expression feature values.
In this embodiment, suppose the speech-form input signal collected through the microphone is "of course, haha, no problem"; after analyzing this speech-form input signal, the electronic device extracts the speech-form expression feature value "haha".
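As a concrete and deliberately simplified illustration of the data windowing idea, the sketch below frames a speech waveform into short overlapping windows and keeps only a per-window energy value, discarding data that do not reflect the signal's basic characteristics. The windowing parameters and the use of energy as the retained feature are assumptions made for illustration, not prescribed by the patent:

```python
import numpy as np

def windowed_energy(waveform, frame_len=400, hop=160):
    """Split a 1-D speech waveform into overlapping frames and reduce each
    frame to a single energy value (a crude dimensionality reduction)."""
    frames = [waveform[i:i + frame_len]
              for i in range(0, len(waveform) - frame_len + 1, hop)]
    # A Hamming window suppresses edge effects before computing frame energy.
    window = np.hamming(frame_len)
    return np.array([np.sum((f * window) ** 2) for f in frames])

# One second of fake 16 kHz audio; high-energy frames would be the candidate
# regions from which an expression feature value is extracted.
energies = windowed_energy(np.random.randn(16000))
print(energies.shape)  # -> (98,)
```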
In a second possible implementation, if the input signal includes a picture-form input signal, a face region is determined in the picture-form input signal, and an expression feature value in face form is extracted from the face region.
The electronic device may first determine the face region in the picture-form input signal by image recognition technology, and then extract the face-form expression feature value from the face region by the data windowing method or the feature value selection method.
For example, after a picture of the user's face is taken by the camera, the face region in the picture is determined; the face region is then analyzed, and a face-form expression feature value such as "happy", "sad", "crying", or "going mad" is extracted from it.
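The face-region step can be sketched with an off-the-shelf detector. The snippet below uses OpenCV's Haar-cascade face detector as one possible stand-in for the unspecified image recognition technology; the patent does not name a particular algorithm, and the classify_emotion stub is purely hypothetical:

```python
import cv2

def face_form_feature(image_path):
    """Determine the face region, then hand it to an (assumed) classifier
    that maps the cropped face to a label such as 'happy' or 'sad'."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]  # take the first detected face region
    return classify_emotion(gray[y:y + h, x:x + w])  # hypothetical classifier

def classify_emotion(face_region):
    """Placeholder for the face-form feature extraction; a real system would
    run a trained expression classifier here."""
    return "happy"
```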
In a third possible implementation, if the input signal includes a video-form input signal, an expression feature value in posture-trajectory form is extracted from the video-form input signal.
When the input signal is a video-form input signal, such as the user's body movements or gesture trajectory collected by the electronic device over a period of time, the electronic device can extract a posture-trajectory-form expression feature value from the video-form input signal.
Step 206: select, from a feature library and according to the extracted expression feature value, the expression to be input.
Since the feature library stores correspondences between different expression feature values and different expressions, the electronic device selects the expression to be input according to the extracted expression feature value and the correspondences stored in the feature library, and then inserts the chosen expression into the input box 24 for the user to send, or displays it directly in the chat panel 26.
Specifically, when the extracted expression feature value is any one of the speech-form, face-form, and posture-trajectory-form expression feature values, this step may include the following sub-steps:
(1) Match the extracted expression feature value against the expression feature values stored in the feature library.
The electronic device matches the extracted expression feature value against the expression feature values stored in the feature library. Because the stored expression feature values are specific ones, for example a speech-form expression feature value recorded by a particular person, the extracted expression feature value differs from the stored ones to some degree; the electronic device therefore matches the two and obtains a matching degree.
(2) Take the n expressions corresponding to the m expression feature values whose matching degree exceeds a predetermined threshold as candidate expressions, n >= m >= 1.
The electronic device takes the n expressions corresponding to the m expression feature values whose matching degree exceeds the predetermined threshold as candidate expressions, n >= m >= 1. One expression feature value corresponds to at least one expression. The predetermined threshold may be preset according to actual conditions, for example at 80%.
In this embodiment, suppose the candidate expressions obtained by the electronic device are: the three expressions A, B, and C corresponding to an expression feature value whose matching degree is 98%, and the expression D corresponding to another expression feature value whose matching degree is 90%.
(3) Select at least one sorting criterion according to a preset priority order and sort the n candidate expressions.
The electronic device selects at least one sorting criterion according to the preset priority order and sorts the n candidate expressions; the sorting criteria include any of historical usage count, most recent usage time, and the matching degree. The priority order among the criteria may be preset according to actual conditions, for example, from high to low: matching degree, historical usage count, most recent usage time. When the electronic device cannot single out the expression to be input by the first sorting criterion, it continues screening with the second criterion, and so on, until one candidate expression is finally selected as the expression to be input.
In this embodiment, the electronic device first sorts the four expressions A, B, C, and D by matching degree, obtaining A, B, C, D in order, and finds that A, B, and C share the matching degree of 98%. It then sorts A, B, and C by historical usage count, obtaining B, A, C in order (assuming the sorting rule is from most used to least used, with A used 15 times, B used 20 times, and C used 3 times). The electronic device finds that expression B has the highest historical usage count and therefore selects expression B as the expression to be input.
(4) Filter out one candidate expression as the expression to be input according to the sorting result.
The electronic device filters out one candidate expression as the expression to be input according to the sorting result. In the expression input method provided by this embodiment of the present invention, the electronic device automatically selects one candidate expression from the multiple candidates as the expression to be input, without requiring the user to choose or confirm; this simplifies the expression input flow and makes expression input more efficient and convenient.
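Sub-steps (1) to (4) amount to thresholding on matching degree followed by a tie-breaking sort over several criteria in priority order. The sketch below is one assumed realization; the string-similarity matching via difflib and the record fields are illustrative choices, not mandated by the patent:

```python
import difflib

# Feature library entries: stored feature value -> expression plus usage stats.
LIBRARY = [
    {"feature": "haha", "expr": "A", "history_uses": 15, "last_used": 100},
    {"feature": "haha", "expr": "B", "history_uses": 20, "last_used": 90},
    {"feature": "hahaha", "expr": "C", "history_uses": 3, "last_used": 80},
    {"feature": "hehe", "expr": "D", "history_uses": 7, "last_used": 95},
]

def select_expression(extracted, threshold=0.8):
    # (1)+(2): compute matching degrees; keep expressions above the threshold.
    candidates = []
    for entry in LIBRARY:
        degree = difflib.SequenceMatcher(None, extracted, entry["feature"]).ratio()
        if degree > threshold:
            candidates.append({**entry, "degree": degree})
    if not candidates:
        return None  # the method may instead prompt the user that nothing matched
    # (3): sort by the preset priority: matching degree, then historical
    # usage count, then most recent usage time (higher is better for all).
    candidates.sort(key=lambda c: (c["degree"], c["history_uses"], c["last_used"]),
                    reverse=True)
    # (4): the top-ranked candidate is the expression to be input.
    return candidates[0]["expr"]

print(select_expression("haha"))  # -> "B" (ties on degree broken by usage count)
```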
When the extracted expression feature values include a speech-form expression feature value together with a face-form or posture-trajectory-form expression feature value, this step may include the following sub-steps:
(1) Match the extracted speech-form expression feature value against the first expression feature values stored in a first feature library.
Unlike the selection mode above, here the electronic device determines the expression to be input by comprehensively analyzing expression feature values of two forms, which can make the chosen expression more accurate and fully meet the user's demand.
The electronic device matches the extracted speech-form expression feature value against the first expression feature values stored in the first feature library and, likewise, obtains the matching degree between the extracted speech-form expression feature value and the stored first expression feature values. In this embodiment, suppose the extracted speech-form expression feature value is "haha".
(2) Obtain the a first expression feature values whose matching degree exceeds a first threshold, a >= 1.
The electronic device obtains the a first expression feature values whose matching degree exceeds the first threshold, a >= 1. In this embodiment, suppose a = 1.
(3) Match the extracted face-form or posture-trajectory-form expression feature value against the second expression feature values stored in a second feature library.
The electronic device matches the extracted face-form or posture-trajectory-form expression feature value against the second expression feature values stored in the second feature library. In this embodiment, suppose the extracted face-form expression feature value is a laughing facial expression.
(4) Obtain the b second expression feature values whose matching degree exceeds a second threshold, b >= 1.
The electronic device obtains the b second expression feature values whose matching degree exceeds the second threshold, b >= 1. In this embodiment, suppose b = 2.
(5) Take the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x >= a, y >= b.
The electronic device takes the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, x >= a, y >= b. In this embodiment, suppose the candidate expressions are: the three expressions "laugh", "smile", and "grin" corresponding to the first expression feature value whose matching degree exceeds the first threshold; the "smile" expression corresponding to the first of the two second expression feature values whose matching degree exceeds the second threshold; and the "pout" expression corresponding to the second of those second expression feature values.
(6) Select at least one sorting criterion according to a preset priority order and sort the candidate expressions.
The electronic device selects at least one sorting criterion according to the preset priority order and sorts the candidate expressions; the sorting criteria include any of repetition count, historical usage count, most recent usage time, and the matching degree. The priority order among the criteria may be preset according to actual conditions, for example, from high to low: repetition count, historical usage count, most recent usage time, matching degree. When the electronic device cannot single out the expression to be input by the first sorting criterion, it continues screening with the second criterion, and so on, until one candidate expression is finally selected as the expression to be input.
In this embodiment, suppose the expressions "laugh", "smile", "grin", and "pout" are first sorted by repetition count; "smile" is found to have the highest repetition count (it was produced as a candidate by both feature libraries), so "smile" is directly selected as the expression to be input.
(7) Filter out one candidate expression as the expression to be input according to the sorting result.
The electronic device filters out one candidate expression as the expression to be input according to the sorting result. In the expression input method provided by this embodiment of the present invention, the electronic device automatically selects one candidate expression from the multiple candidates as the expression to be input, without requiring the user to choose or confirm; this simplifies the expression input flow and makes expression input more efficient and convenient.
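The two-library variant can be sketched as two independent threshold matches whose candidate sets are merged before sorting; counting how often an expression appears across the merged set then gives the repetition count used as the top sorting criterion. This merging interpretation, the similarity measure, and all names below are assumptions made for illustration:

```python
import difflib
from collections import Counter

def match(library, extracted, threshold):
    """Return expressions whose stored feature value matches the extracted
    value with a matching degree above the threshold."""
    results = []
    for feature, expressions in library.items():
        degree = difflib.SequenceMatcher(None, extracted, feature).ratio()
        if degree > threshold:
            results.extend(expressions)
    return results

FIRST_LIBRARY = {"haha": ["laugh", "smile", "grin"]}                      # speech form
SECOND_LIBRARY = {"laughing-face": ["smile"], "laughing-eyes": ["pout"]}  # face form

voice_candidates = match(FIRST_LIBRARY, "haha", threshold=0.8)            # a = 1
face_candidates = match(SECOND_LIBRARY, "laughing-face", threshold=0.6)   # b = 2

# Repetition count: how often each expression occurs across both candidate
# sets; the most repeated candidate is taken as the expression to be input.
counts = Counter(voice_candidates + face_candidates)
print(counts.most_common(1)[0][0])  # -> "smile" (appears in both sets)
```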
In addition, if, after matching the extracted expression feature value against the expression feature values stored in the feature library, the electronic device finds no expression feature value whose matching degree exceeds the threshold, it may prompt the user that no matching result was found, for example through a pop-up window.
Step 207: directly display the expression to be input in the input box or the chat panel.
After selecting the expression to be input from the feature library, the electronic device directly displays it in the input box or the chat panel. Referring to Fig. 2B, the electronic device may insert the chosen expression into the input box 24 for the user to send, or display it directly in the chat panel 26.
It should be noted that the expression input method provided by this embodiment may also select the expression in light of the environment in which the electronic device is located. Specifically, before the above step 206, the method may further include the following steps:
(1) Collect environment information around the electronic device.
The electronic device collects the environment information around it; the environment information includes at least one of time information, ambient volume information, ambient light intensity information, and ambient image information. The ambient volume information may be collected through the microphone, the ambient light intensity information through a light sensor, and the ambient image information through the camera.
(2) Determine the current usage environment according to the environment information.
The electronic device determines the current usage environment according to the environment information. After collecting the surrounding environment information, the electronic device analyzes each piece of it comprehensively to determine the current usage environment. For example, when the time information is 22:00, the ambient volume is 2 decibels, and the ambient light intensity is very weak, it can be determined that the current usage environment is one in which the user is sleeping. As another example, when the time information is 14:00, the ambient volume is 75 decibels, the ambient light intensity is strong, and the ambient image shows a street, it can be determined that the current usage environment is one in which the user is out shopping.
(3) Select, from at least one candidate feature library, the candidate feature library corresponding to the current usage environment as the feature library.
The electronic device pre-stores correspondences between different usage environments and different candidate feature libraries; after obtaining the current usage environment, it selects the corresponding candidate feature library as the feature library. The electronic device then selects the expression to be input from that feature library according to the extracted expression feature value.
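A minimal sketch of this environment-aware library selection follows, with made-up rules mirroring the two examples above; the rule set, the decibel and hour cut-offs, and the library contents are assumptions, since the patent only requires some mapping from environment information to a candidate feature library:

```python
def determine_environment(hour, volume_db, light_level):
    """Combine the collected environment information into a usage-environment label."""
    if hour >= 22 and volume_db < 10 and light_level == "weak":
        return "sleeping"
    if 12 <= hour <= 18 and volume_db > 60 and light_level == "strong":
        return "out-shopping"
    return "default"

# Pre-stored correspondence: usage environment -> candidate feature library.
CANDIDATE_LIBRARIES = {
    "sleeping": {"yawn": "[sleepy emoticon]"},
    "out-shopping": {"haha": "[excited emoticon]"},
    "default": {"haha": "[laugh emoticon]"},
}

env = determine_environment(hour=22, volume_db=2, light_level="weak")
feature_library = CANDIDATE_LIBRARIES[env]  # the library then used in step 206
print(env, feature_library)
```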
It should also be noted that the correspondences between different expression feature values and different expressions stored in the feature library may be preset by the system or by a designer; for example, when the user installs an expression pack, the feature library is carried in the pack. After finishing designing the expressions, the designer also sets the correspondences between different expression feature values and different expressions, creates the feature library, and packages the expressions and the feature library together into the expression pack. Alternatively, the correspondences stored in the feature library may be set by the user. When they are set by the user, the expression input method provided by this embodiment further includes the following steps:
First, for each expression, record at least one training signal for training the expression.
For each expression, the electronic device records at least one training signal for training that expression. The user can train the expressions, thereby customizing the correspondences between different expression feature values and different expressions. For example, the user picks four commonly used expressions in the expression selection interface: expression A, expression B, expression C, and expression D. Taking the training of expression A as an example, the user selects expression A and says "grin" three times, and the electronic device records these three training signals.
Of course, the electronic device still collects and records the training signals through an input unit such as the microphone or the camera.
Second, extract at least one training feature value from the at least one training signal.
The electronic device extracts at least one training feature value from the at least one training signal. As in step 205 above, the electronic device may extract the training feature values from the training signals by the data windowing method or the feature value selection method. A training signal may be in speech form, picture form, or video form.
Third, take the most frequently occurring training feature value as the expression feature value corresponding to the expression.
The electronic device takes the most frequently occurring training feature value as the expression feature value corresponding to the expression. When the recorded training signals are identical, the training feature values extracted from them are usually identical; for example, when the three recorded training signals are the user saying "grin", the three extracted training feature values are usually all "grin".
However, when the electronic device collects the training signals through an input unit such as the microphone or the camera, interference from the surroundings, such as noise or image interference, may occur, and the training feature values extracted from the training signals may then differ. The electronic device therefore takes the most frequently occurring training feature value as the expression feature value corresponding to the expression. For example, when the three recorded training signals are the user saying "grin" and two of the three extracted training feature values are "grin" while the other is a different value, the electronic device selects "grin" as the expression feature value corresponding to expression A.
Fourth, store the correspondence between the expression and the expression feature value in the feature library.
The electronic device stores the correspondence between the expression and the expression feature value in the feature library. In practical applications, the correspondence obtained through training may be stored in the original feature library, or the user may create a custom feature library and store the correspondence obtained through training in it.
Through the above four steps, the correspondences between expressions and expression feature values are set by the user, further improving the user experience.
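The third step is a simple majority vote over the extracted training feature values. Here is a short sketch, assuming the feature values have already been extracted from the recorded training signals (the extraction itself is stubbed out, and the corrupted value "green" is an invented example of noise):

```python
from collections import Counter

FEATURE_LIBRARY = {}

def train_expression(expression, training_feature_values):
    """Take the most frequently occurring training feature value as the
    expression feature value and store the correspondence."""
    feature_value = Counter(training_feature_values).most_common(1)[0][0]
    FEATURE_LIBRARY[feature_value] = expression
    return feature_value

# Two of the three recordings yielded "grin"; one was corrupted by noise.
train_expression("expression A", ["grin", "grin", "green"])
print(FEATURE_LIBRARY)  # -> {'grin': 'expression A'}
```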
It should also be noted that, to determine when the user needs to input an expression using the expression input method provided by this embodiment, a step of detecting whether the cursor is located in the input box may be performed before step 201. The cursor indicates the position where the user inputs content such as text, expressions, or pictures. Referring to Fig. 2B, the cursor 28 is located in the input box 24. The electronic device detects, according to the position of the cursor 28, whether the user is using the input box 24 to input content such as text, expressions, or pictures. When the cursor 28 is located in the input box 24, it is presumed that the user is using the input box 24, and the above step 201 is performed.
In summary, in the expression input method provided by this embodiment, an input signal is collected through an input unit on the electronic device, an expression feature value is extracted from the input signal, and the expression to be input is selected from a feature library according to the extracted expression feature value, where the feature library stores correspondences between different expression feature values and different expressions. The method solves the prior-art problems that expression input is slow and the process is complicated, simplifies the expression input process, and increases the speed of expression input.
In addition, the speech-form input signal is collected through the microphone, or the picture-form or video-form input signal is collected through the camera, and the expression is then input accordingly, which enriches the ways of inputting expressions; moreover, the user can set the correspondences between different expression feature values and different expressions, which fully meets the user's demand.
Furthermore, the above embodiment provides two ways of selecting the expression to be input. The first way determines the expression to be input by analyzing an expression feature value of a single form, which is relatively simple and fast; the second way determines the expression to be input by comprehensively analyzing expression feature values of two forms, which can make the chosen expression more accurate and fully meet the user's demand.
In a concrete example, Xiao Ming opens an application with a message sending and receiving function installed on a smart television and at the same time turns on the television's front-facing camera to collect pictures of his face region. The corners of Xiao Ming's mouth turn up slightly in a smiling expression. The smart television extracts an expression feature value from the collected picture of the face region, finds the correspondence between the expression feature value and an expression in the feature library, and inserts a smiling expression into the input box of the chat interface. Later, after Xiao Ming shows a sad expression, the smart television inserts a sad expression into the input box of the chat interface.
In another concrete example, Xiao Hong uses instant messaging (IM) software installed on her mobile phone and, by training the expressions, has set several groups of correspondences between expression feature values and expressions. Later, while Xiao Hong is chatting: when the phone receives the speech-form input signal "I am so happy today", it inserts the corresponding expression image into the input box of the chat interface according to the correspondence between the expression feature value "happy" and that expression; when the phone receives the speech-form input signal "it is snowing outside", it inserts the corresponding expression image according to the correspondence between the expression feature value "snow" and that expression; and when the phone receives the speech-form input signal "the snow is so beautiful, I like it very much", it inserts the corresponding expression image according to the correspondence between the expression feature value "like" and that expression. (The expression images appear in the original document as inline figures BDA0000470460880000151 through BDA0000470460880000156.)
The following are device embodiments of the present invention, which can be used to perform the method embodiments of the present invention. For details not disclosed in the device embodiments, please refer to the method embodiments of the present invention.
Please refer to Fig. 3, which shows a block diagram of an expression input device according to an embodiment of the present invention; the expression input device is used in an electronic device. The expression input device may be implemented as all or part of the electronic device through software, hardware, or a combination of both, and includes: a signal collection module 310, a feature extraction module 320, and an expression selection module 330.
The signal collection module 310 is configured to collect an input signal through an input unit on the electronic device.
The feature extraction module 320 is configured to extract an expression feature value from the input signal.
The expression selection module 330 is configured to select, from a feature library and according to the extracted expression feature value, the expression to be input, where the feature library stores correspondences between different expression feature values and different expressions.
In summary, the expression input device provided by this embodiment collects an input signal through an input unit on the electronic device, extracts an expression feature value from the input signal, and selects the expression to be input from a feature library according to the extracted expression feature value, where the feature library stores correspondences between different expression feature values and different expressions. The device solves the prior-art problems that expression input is slow and the process is complicated, simplifies the expression input process, and increases the speed of expression input.
Please refer to Fig. 4, which shows a block diagram of an expression input device according to another embodiment of the present invention; the expression input device is used in an electronic device. The expression input device may be implemented as all or part of the electronic device through software, hardware, or a combination of both, and includes: a signal collection module 310, a feature extraction module 320, an information collection module 321, an environment determination module 322, a feature library selection module 323, an expression selection module 330, and an expression display module 331.
The signal collection module 310 is configured to collect an input signal through an input unit on the electronic device.
Specifically, the signal collection module 310 includes a voice collection unit 310a and/or an image collection unit 310b.
The voice collection unit 310a is configured to, if the input signal includes the speech-form input signal, collect the speech-form input signal through a microphone.
The image collection unit 310b is configured to, if the input signal includes the picture-form input signal or the video-form input signal, collect the picture-form or video-form input signal through a camera.
The feature extraction module 320 is configured to extract an expression feature value from the input signal.
Specifically, the feature extraction module 320 includes a first extraction unit 320a and/or a second extraction unit 320b and/or a third extraction unit 320c.
The first extraction unit 320a is configured to, if the input signal includes an input signal in speech form, extract an expression feature value in speech form from the speech-form input signal.
The second extraction unit 320b is configured to, if the input signal includes an input signal in picture form, determine a face region in the picture-form input signal and extract an expression feature value in face form from the face region.
The third extraction unit 320c is configured to, if the input signal includes an input signal in video form, extract an expression feature value in posture-trajectory form from the video-form input signal.
Optionally, the expression input apparatus further comprises: the information acquisition module 321, the environment determination module 322, and the feature selection module 323.
The information acquisition module 321 is configured to collect environmental information around the electronic device, the environmental information comprising at least one of time information, ambient volume information, ambient light intensity information, and ambient image information.
The environment determination module 322 is configured to determine the current usage environment according to the environmental information.
The feature selection module 323 is configured to select, from at least one candidate feature library, the candidate feature library corresponding to the current usage environment as the feature library.
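The embodiment does not fix how environmental information maps to a usage environment. The sketch below assumes two illustrative environments keyed only on ambient volume, purely to show the selection step; the environment names and the 60 dB cutoff are invented for this example.

# Hypothetical candidate feature libraries, one per usage environment.
CANDIDATE_LIBRARIES = {
    "quiet": [],  # e.g. restrained expressions for an office setting
    "noisy": [],  # e.g. exuberant expressions for a party setting
}

def determine_environment(ambient_volume_db):
    """Determine the current usage environment from ambient volume (assumed rule)."""
    return "noisy" if ambient_volume_db > 60 else "quiet"

def select_feature_library(ambient_volume_db):
    """Select the candidate feature library matching the current environment."""
    return CANDIDATE_LIBRARIES[determine_environment(ambient_volume_db)]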
The expression selection module 330 is configured to select, from the feature library, the expression to be input according to the extracted expression feature value, the feature library storing correspondences between different expression feature values and different expressions.
When the extracted expression feature value is any one of the expression feature value in speech form, the expression feature value in facial form, and the expression feature value in gesture-trajectory form, the expression selection module 330 comprises: a feature matching unit 330a, a candidate selecting unit 330b, an expression sorting unit 330c, and an expression determining unit 330d.
The feature matching unit 330a is configured to match the extracted expression feature value against the expression feature values stored in the feature library.
The candidate selecting unit 330b is configured to take the n expressions corresponding to the m stored expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, where n ≥ m ≥ 1.
The expression sorting unit 330c is configured to sort the n candidate expressions according to at least one sorting criterion selected by preset priority, the sorting criterion comprising any one of historical use count, most recent use time, and the matching degree.
The expression determining unit 330d is configured to select, according to the sorting result, one candidate expression as the expression to be input.
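A compact sketch of this match-then-rank flow follows, with comments keyed to units 330a through 330d. Cosine similarity as the matching degree, the 0.8 threshold, and the library entry layout are assumptions of this example.

import numpy as np

def cosine_match(a, b):
    """Assumed matching degree: cosine similarity between feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_expression(feature, library, threshold=0.8, criterion="matching_degree"):
    """library: entries with keys 'feature', 'expressions', 'use_count',
    'last_used'. Returns the single expression to be input, or None."""
    candidates = []
    for entry in library:                           # 330a: match feature values
        degree = cosine_match(feature, entry["feature"])
        if degree > threshold:                      # 330b: m matched values ...
            for expr in entry["expressions"]:       # ... yield n candidates
                candidates.append((expr, degree, entry))
    if not candidates:
        return None
    sort_keys = {                                   # 330c: one criterion by priority
        "matching_degree": lambda c: c[1],
        "use_count":       lambda c: c[2]["use_count"],
        "last_used":       lambda c: c[2]["last_used"],
    }
    candidates.sort(key=sort_keys[criterion], reverse=True)
    return candidates[0][0]                         # 330d: top-ranked expression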
When the extracted expression feature value comprises the expression feature value in speech form and further comprises the expression feature value in facial form or the expression feature value in gesture-trajectory form, the expression selection module 330 comprises: a first matching unit 330e, a first acquisition unit 330f, a second matching unit 330g, a second acquisition unit 330h, a candidate determining unit 330i, a candidate sorting unit 330j, and an expression selecting unit 330k.
The first matching unit 330e is configured to match the extracted expression feature value in speech form against the first expression feature values stored in a first feature library.
The first acquisition unit 330f is configured to obtain the a first expression feature values whose matching degree is greater than a first threshold, where a ≥ 1.
The second matching unit 330g is configured to match the extracted expression feature value in facial form or in gesture-trajectory form against the second expression feature values stored in a second feature library.
The second acquisition unit 330h is configured to obtain the b second expression feature values whose matching degree is greater than a second threshold, where b ≥ 1.
The candidate determining unit 330i is configured to take the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, where x ≥ a and y ≥ b.
The candidate sorting unit 330j is configured to sort the candidate expressions according to at least one sorting criterion selected by preset priority, the sorting criterion comprising any one of repetition count, historical use count, most recent use time, and the matching degree.
The expression selecting unit 330k is configured to select, according to the sorting result, one candidate expression as the expression to be input.
Here, the feature library comprises the first feature library and the second feature library, and the expression feature value comprises the first expression feature value and the second expression feature value.
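The two-library variant merges candidate sets. The sketch below assumes that "repetition count" means the number of times an expression appears across the two candidate sets, which is one plausible reading of that criterion; an expression matched by both modalities then naturally ranks first.

from collections import Counter
import numpy as np

def _match(a, b):
    """Assumed matching degree, as in the earlier sketch: cosine similarity."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_expression_bimodal(speech_feature, second_feature,
                              first_library, second_library,
                              first_threshold=0.8, second_threshold=0.8):
    """Merge candidates from both feature libraries, rank by repetition count."""
    def candidates(feature, library, threshold):
        found = []
        for entry in library:
            if _match(feature, entry["feature"]) > threshold:
                found.extend(entry["expressions"])
        return found

    pool = (candidates(speech_feature, first_library, first_threshold) +    # x expressions
            candidates(second_feature, second_library, second_threshold))   # y expressions
    if not pool:
        return None
    return Counter(pool).most_common(1)[0][0]  # ties broken by insertion order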
The expression display module 331 is configured to display the expression to be input directly in an input box or a chat bar.
Optionally, the expression input apparatus further comprises: a signal recording module, a feature recording module, a feature selecting module, and a feature storage module.
The signal recording module is configured to record, for each expression, at least one training signal for training the expression.
The feature recording module is configured to extract at least one training feature value from the at least one training signal.
The feature selecting module is configured to take the training feature value that occurs the most times as the expression feature value corresponding to the expression.
The feature storage module is configured to store the correspondence between the expression and the expression feature value in the feature library.
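A user-defined correspondence can thus be trained from a handful of samples. In the sketch below, feature vectors are quantized before counting so that "the training feature value that occurs the most times" is well defined for real-valued features; the quantization step and the library layout are assumptions of this example.

from collections import Counter
import numpy as np

def train_expression(expression, training_signals, extract, feature_library):
    """Record training signals for one expression and store the mapping.

    extract: function mapping a raw training signal to a feature vector.
    """
    # Quantize to one decimal place (assumed) so identical values can be counted.
    values = [tuple(np.round(extract(sig), 1)) for sig in training_signals]
    most_common_value, _ = Counter(values).most_common(1)[0]
    feature_library.append({
        "feature": np.array(most_common_value),
        "expressions": [expression],
        "use_count": 0,
        "last_used": 0.0,
    })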
In summary, the expression input apparatus provided by this embodiment collects an input signal through an input unit on the electronic device, extracts an expression feature value from the input signal, and selects the expression to be input from a feature library according to the extracted expression feature value, the feature library storing correspondences between different expression feature values and different expressions. This solves the problem in the prior art that expression input is slow and the process is complicated, thereby simplifying the expression input process and increasing the speed of expression input. In addition, the input signal in speech form is collected through a microphone, or the input signal in picture or video form is collected through a camera, and the expression is then input accordingly, which enriches the ways in which expressions can be input. Moreover, the user can set the correspondences between different expression feature values and different expressions, fully meeting the user's needs.
It should be noted that, when the expression input apparatus provided by the above embodiments inputs an expression, the division into the above functional modules is merely illustrative. In practical applications, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the expression input apparatus provided by the above embodiments belongs to the same concept as the embodiments of the expression input method; for its specific implementation process, refer to the method embodiments, which are not repeated here.
It should be understood that, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that "and/or" as used herein includes any and all possible combinations of one or more of the associated listed items.
The sequence numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
A person of ordinary skill in the art will understand that all or part of the steps for implementing the above embodiments may be completed by hardware, or by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disc.
The foregoing is only preferred embodiments of the present invention and is not intended to limit the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (16)

1. An expression input method, characterized in that the method is used in an electronic device and comprises:
collecting an input signal through an input unit on the electronic device;
extracting an expression feature value from the input signal;
selecting, from a feature library, the expression to be input according to the extracted expression feature value, wherein the feature library stores correspondences between different expression feature values and different expressions.
2. The method according to claim 1, characterized in that the extracting an expression feature value from the input signal comprises:
if the input signal comprises an input signal in speech form, extracting an expression feature value in speech form from the input signal in speech form;
if the input signal comprises an input signal in picture form, determining a face region from the input signal in picture form, and extracting an expression feature value in facial form from the face region;
if the input signal comprises an input signal in video form, extracting an expression feature value in gesture-trajectory form from the input signal in video form.
3. The method according to claim 2, characterized in that, when the extracted expression feature value is any one of the expression feature value in speech form, the expression feature value in facial form, and the expression feature value in gesture-trajectory form, the selecting, from a feature library, the expression to be input according to the extracted expression feature value comprises:
matching the extracted expression feature value against the expression feature values stored in the feature library;
taking the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, where n ≥ m ≥ 1;
sorting the n candidate expressions according to at least one sorting criterion selected by preset priority, the sorting criterion comprising any one of historical use count, most recent use time, and the matching degree;
selecting, according to the sorting result, one candidate expression as the expression to be input.
4. The method according to claim 2, characterized in that, when the extracted expression feature value comprises the expression feature value in speech form and further comprises the expression feature value in facial form or the expression feature value in gesture-trajectory form, the selecting, from a feature library, the expression to be input according to the extracted expression feature value comprises:
matching the extracted expression feature value in speech form against the first expression feature values stored in a first feature library;
obtaining the a first expression feature values whose matching degree is greater than a first threshold, where a ≥ 1;
matching the extracted expression feature value in facial form or in gesture-trajectory form against the second expression feature values stored in a second feature library;
obtaining the b second expression feature values whose matching degree is greater than a second threshold, where b ≥ 1;
taking the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, where x ≥ a and y ≥ b;
sorting the candidate expressions according to at least one sorting criterion selected by preset priority, the sorting criterion comprising any one of repetition count, historical use count, most recent use time, and the matching degree;
selecting, according to the sorting result, one candidate expression as the expression to be input;
wherein the feature library comprises the first feature library and the second feature library, and the expression feature value comprises the first expression feature value and the second expression feature value.
5. The method according to claim 1, characterized in that, before the selecting, from a feature library, the expression to be input according to the extracted expression feature value, the method further comprises:
collecting environmental information around the electronic device, the environmental information comprising at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
determining the current usage environment according to the environmental information;
selecting, from at least one candidate feature library, the candidate feature library corresponding to the current usage environment as the feature library.
6. The method according to claim 2, characterized in that the collecting an input signal through an input unit on the electronic device comprises:
if the input signal comprises the input signal in speech form, collecting the input signal in speech form through a microphone;
if the input signal comprises the input signal in picture form or the input signal in video form, collecting the input signal in picture form or the input signal in video form through a camera.
7. The method according to claim 1, characterized in that, before the selecting, from a feature library, the expression to be input according to the extracted expression feature value, the method further comprises:
for each expression, recording at least one training signal for training the expression;
extracting at least one training feature value from the at least one training signal;
taking the training feature value that occurs the most times as the expression feature value corresponding to the expression;
storing the correspondence between the expression and the expression feature value in the feature library.
8. The method according to any one of claims 1 to 7, characterized in that, after the selecting, from a feature library, the expression to be input according to the extracted expression feature value, the method further comprises:
displaying the expression to be input directly in an input box or a chat bar.
9. An expression input apparatus, characterized in that the apparatus is used in an electronic device and comprises:
a signal acquisition module, configured to collect an input signal through an input unit on the electronic device;
a feature extraction module, configured to extract an expression feature value from the input signal;
an expression selection module, configured to select, from a feature library, the expression to be input according to the extracted expression feature value, wherein the feature library stores correspondences between different expression feature values and different expressions.
10. The apparatus according to claim 9, characterized in that the feature extraction module comprises: a first extraction unit, and/or a second extraction unit, and/or a third extraction unit;
the first extraction unit is configured to, if the input signal comprises an input signal in speech form, extract an expression feature value in speech form from the input signal in speech form;
the second extraction unit is configured to, if the input signal comprises an input signal in picture form, determine a face region from the input signal in picture form and extract an expression feature value in facial form from the face region;
the third extraction unit is configured to, if the input signal comprises an input signal in video form, extract an expression feature value in gesture-trajectory form from the input signal in video form.
11. The apparatus according to claim 10, characterized in that, when the extracted expression feature value is any one of the expression feature value in speech form, the expression feature value in facial form, and the expression feature value in gesture-trajectory form, the expression selection module comprises: a feature matching unit, a candidate selecting unit, an expression sorting unit, and an expression determining unit;
the feature matching unit is configured to match the extracted expression feature value against the expression feature values stored in the feature library;
the candidate selecting unit is configured to take the n expressions corresponding to the m expression feature values whose matching degree is greater than a predetermined threshold as candidate expressions, where n ≥ m ≥ 1;
the expression sorting unit is configured to sort the n candidate expressions according to at least one sorting criterion selected by preset priority, the sorting criterion comprising any one of historical use count, most recent use time, and the matching degree;
the expression determining unit is configured to select, according to the sorting result, one candidate expression as the expression to be input.
12. The apparatus according to claim 10, characterized in that, when the extracted expression feature value comprises the expression feature value in speech form and further comprises the expression feature value in facial form or the expression feature value in gesture-trajectory form, the expression selection module comprises: a first matching unit, a first acquisition unit, a second matching unit, a second acquisition unit, a candidate determining unit, a candidate sorting unit, and an expression selecting unit;
the first matching unit is configured to match the extracted expression feature value in speech form against the first expression feature values stored in a first feature library;
the first acquisition unit is configured to obtain the a first expression feature values whose matching degree is greater than a first threshold, where a ≥ 1;
the second matching unit is configured to match the extracted expression feature value in facial form or in gesture-trajectory form against the second expression feature values stored in a second feature library;
the second acquisition unit is configured to obtain the b second expression feature values whose matching degree is greater than a second threshold, where b ≥ 1;
the candidate determining unit is configured to take the x expressions corresponding to the a first expression feature values and the y expressions corresponding to the b second expression feature values as candidate expressions, where x ≥ a and y ≥ b;
the candidate sorting unit is configured to sort the candidate expressions according to at least one sorting criterion selected by preset priority, the sorting criterion comprising any one of repetition count, historical use count, most recent use time, and the matching degree;
the expression selecting unit is configured to select, according to the sorting result, one candidate expression as the expression to be input;
wherein the feature library comprises the first feature library and the second feature library, and the expression feature value comprises the first expression feature value and the second expression feature value.
13. The apparatus according to claim 9, characterized in that the apparatus further comprises:
an information acquisition module, configured to collect environmental information around the electronic device, the environmental information comprising at least one of time information, ambient volume information, ambient light intensity information, and ambient image information;
an environment determination module, configured to determine the current usage environment according to the environmental information;
a feature selection module, configured to select, from at least one candidate feature library, the candidate feature library corresponding to the current usage environment as the feature library.
14. The apparatus according to claim 10, characterized in that the signal acquisition module comprises: a voice acquisition unit, and/or an image acquisition unit;
the voice acquisition unit is configured to, if the input signal comprises the input signal in speech form, collect the input signal in speech form through a microphone;
the image acquisition unit is configured to, if the input signal comprises the input signal in picture form or the input signal in video form, collect the input signal in picture form or the input signal in video form through a camera.
15. The apparatus according to claim 9, characterized in that the apparatus further comprises:
a signal recording module, configured to record, for each expression, at least one training signal for training the expression;
a feature recording module, configured to extract at least one training feature value from the at least one training signal;
a feature selecting module, configured to take the training feature value that occurs the most times as the expression feature value corresponding to the expression;
a feature storage module, configured to store the correspondence between the expression and the expression feature value in the feature library.
16. The apparatus according to any one of claims 9 to 15, characterized in that the apparatus further comprises:
an expression display module, configured to display the expression to be input directly in an input box or a chat bar.
CN201410069166.9A 2014-02-27 2014-02-27 expression input method and device Active CN103823561B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201410069166.9A CN103823561B (en) 2014-02-27 2014-02-27 expression input method and device
PCT/CN2014/095872 WO2015127825A1 (en) 2014-02-27 2014-12-31 Expression input method and apparatus and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410069166.9A CN103823561B (en) 2014-02-27 2014-02-27 expression input method and device

Publications (2)

Publication Number Publication Date
CN103823561A true CN103823561A (en) 2014-05-28
CN103823561B CN103823561B (en) 2017-01-18

Family

ID=50758662

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410069166.9A Active CN103823561B (en) 2014-02-27 2014-02-27 expression input method and device

Country Status (2)

Country Link
CN (1) CN103823561B (en)
WO (1) WO2015127825A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109412935B (en) * 2018-10-12 2021-12-07 北京达佳互联信息技术有限公司 Instant messaging sending method, receiving method, sending device and receiving device
CN112306254A (en) * 2019-07-31 2021-02-02 北京搜狗科技发展有限公司 Expression processing method, device and medium
CN114173258B (en) * 2022-02-07 2022-05-10 深圳市朗琴音响技术有限公司 Intelligent sound box control method and intelligent sound box

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102255820B (en) * 2010-05-18 2016-08-03 腾讯科技(深圳)有限公司 Instant communication method and device
CN102890776B (en) * 2011-07-21 2017-08-04 爱国者电子科技有限公司 The method that expression figure explanation is transferred by facial expression
CN102662961B (en) * 2012-03-08 2015-04-08 北京百舜华年文化传播有限公司 Method, apparatus and terminal unit for matching semantics with image
CN103823561B (en) * 2014-02-27 2017-01-18 广州华多网络科技有限公司 expression input method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1735240A (en) * 2004-10-29 2006-02-15 康佳集团股份有限公司 Method for realizing expression notation and voice in handset short message
CN101183294A (en) * 2007-12-17 2008-05-21 腾讯科技(深圳)有限公司 Expression input method and apparatus
CN102104658A (en) * 2009-12-22 2011-06-22 康佳集团股份有限公司 Method, system and mobile terminal for sending expression by using short messaging service (SMS)
CN103353824A (en) * 2013-06-17 2013-10-16 百度在线网络技术(北京)有限公司 Method for inputting character strings through voice, device and terminal equipment
CN103530313A (en) * 2013-07-08 2014-01-22 北京百纳威尔科技有限公司 Searching method and device of application information
CN103529946A (en) * 2013-10-29 2014-01-22 广东欧珀移动通信有限公司 Input method and device

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015127825A1 (en) * 2014-02-27 2015-09-03 广州华多网络科技有限公司 Expression input method and apparatus and electronic device
WO2016000219A1 (en) * 2014-07-02 2016-01-07 华为技术有限公司 Information transmission method and transmission device
US10387717B2 (en) 2014-07-02 2019-08-20 Huawei Technologies Co., Ltd. Information transmission method and transmission apparatus
CN106789543A (en) * 2015-11-20 2017-05-31 腾讯科技(深圳)有限公司 The method and apparatus that facial expression image sends are realized in session
CN106886396A (en) * 2015-12-16 2017-06-23 北京奇虎科技有限公司 expression management method and device
CN106886396B (en) * 2015-12-16 2020-07-07 北京奇虎科技有限公司 Expression management method and device
CN105677059A (en) * 2015-12-31 2016-06-15 广东小天才科技有限公司 Method and system for inputting expression pictures
WO2017120924A1 (en) * 2016-01-15 2017-07-20 ***生 Information prompting method for use when inserting emoticon, and instant communication tool
CN105872838A (en) * 2016-04-28 2016-08-17 徐文波 Sending method and device of special media effects of real-time videos
CN106020504B (en) * 2016-05-17 2018-11-27 百度在线网络技术(北京)有限公司 Information output method and device
CN107623830A (en) * 2016-07-15 2018-01-23 掌赢信息科技(上海)有限公司 A kind of video call method and electronic equipment
CN107623830B (en) * 2016-07-15 2019-03-15 掌赢信息科技(上海)有限公司 A kind of video call method and electronic equipment
CN106175727A (en) * 2016-07-25 2016-12-07 广东小天才科技有限公司 A kind of expression method for pushing being applied to wearable device and wearable device
CN106293120A (en) * 2016-07-29 2017-01-04 维沃移动通信有限公司 Expression input method and mobile terminal
WO2018023576A1 (en) * 2016-08-04 2018-02-08 薄冰 Method for adjusting emoji sending technique according to market feedback, and emoji system
CN106339103A (en) * 2016-08-15 2017-01-18 珠海市魅族科技有限公司 Image checking method and device
CN106293131A (en) * 2016-08-16 2017-01-04 广东小天才科技有限公司 expression input method and device
CN106503630A (en) * 2016-10-08 2017-03-15 广东小天才科技有限公司 A kind of expression sending method, equipment and system
CN106503744A (en) * 2016-10-26 2017-03-15 长沙军鸽软件有限公司 Input expression in chat process carries out the method and device of automatic error-correcting
CN106682091A (en) * 2016-11-29 2017-05-17 深圳市元征科技股份有限公司 Method and device for controlling unmanned aerial vehicle
CN107315820A (en) * 2017-07-01 2017-11-03 北京奇虎科技有限公司 The expression searching method and device of User Interface based on mobile terminal
CN107153496A (en) * 2017-07-04 2017-09-12 北京百度网讯科技有限公司 Method and apparatus for inputting emotion icons
CN107153496B (en) * 2017-07-04 2020-04-28 北京百度网讯科技有限公司 Method and device for inputting emoticons
US10984226B2 (en) 2017-07-04 2021-04-20 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for inputting emoticon
CN109254669A (en) * 2017-07-12 2019-01-22 腾讯科技(深圳)有限公司 A kind of expression picture input method, device, electronic equipment and system
CN109254669B (en) * 2017-07-12 2022-05-10 腾讯科技(深圳)有限公司 Expression picture input method and device, electronic equipment and system
CN110019885A (en) * 2017-08-01 2019-07-16 北京搜狗科技发展有限公司 A kind of expression data recommended method and device
CN110019885B (en) * 2017-08-01 2021-10-15 北京搜狗科技发展有限公司 Expression data recommendation method and device
CN107479723A (en) * 2017-08-18 2017-12-15 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment
CN107450746A (en) * 2017-08-18 2017-12-08 联想(北京)有限公司 A kind of insertion method of emoticon, device and electronic equipment
WO2020042442A1 (en) * 2018-08-28 2020-03-05 珠海格力电器股份有限公司 Expression package generating method and device

Also Published As

Publication number Publication date
WO2015127825A1 (en) 2015-09-03
CN103823561B (en) 2017-01-18

Similar Documents

Publication Publication Date Title
CN103823561A (en) Expression input method and device
CN107169430B (en) Reading environment sound effect enhancement system and method based on image processing semantic analysis
CN103699547B (en) A kind of application program recommended method and terminal
CN112565899A (en) System and method for visual analysis of emotion consistency in video
US20110243529A1 (en) Electronic apparatus, content recommendation method, and program therefor
US20240205368A1 (en) Methods and Apparatus for Displaying, Compressing and/or Indexing Information Relating to a Meeting
US20100008641A1 (en) Electronic apparatus, video content editing method, and program
CN106250553A (en) A kind of service recommendation method and terminal
CN102868830A (en) Switching control method and device of mobile terminal themes
CN103826160A (en) Method and device for obtaining video information, and method and device for playing video
CN106789543A (en) The method and apparatus that facial expression image sends are realized in session
CN108227950A (en) A kind of input method and device
CN110019777B (en) Information classification method and equipment
CN109933782B (en) User emotion prediction method and device
US11270684B2 (en) Generation of speech with a prosodic characteristic
JP6046393B2 (en) Information processing apparatus, information processing system, information processing method, and recording medium
CN108038102A (en) Recommendation method, apparatus, terminal and the storage medium of facial expression image
CN106802913A (en) One kind plays content recommendation method and its device
CN109783656A (en) Recommended method, system and the server and storage medium of audio, video data
CN104461545B (en) Content in mobile terminal is provided to the method and device of user
CN109877834A (en) Multihead display robot, method and apparatus, display robot and display methods
CN107239447A (en) Junk information recognition methods and device, system
CN105528077A (en) Theme setting method and device
CN110378190A (en) Video content detection system and detection method based on topic identification
Hsiao et al. Recognizing continuous social engagement level in dyadic conversation by using turn-taking and speech emotion patterns

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 511446 Guangzhou City, Guangdong Province, Panyu District, South Village, Huambo Business District Wanda Plaza, block B1, floor 28

Applicant after: Guangzhou Huaduo Network Technology Co., Ltd.

Address before: 510655, Guangzhou, Whampoa Avenue, No. 2, creative industrial park, building 3-08,

Applicant before: Guangzhou Huaduo Network Technology Co., Ltd.

COR Change of bibliographic data
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20210111

Address after: 511442 3108, 79 Wanbo 2nd Road, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee after: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 511446 28th floor, block B1, Wanda Plaza, Wanbo business district, Nancun Town, Panyu District, Guangzhou City, Guangdong Province

Patentee before: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20140528

Assignee: GUANGZHOU HUADUO NETWORK TECHNOLOGY Co.,Ltd.

Assignor: GUANGZHOU CUBESILI INFORMATION TECHNOLOGY Co.,Ltd.

Contract record no.: X2021440000053

Denomination of invention: Expression input method and device

Granted publication date: 20170118

License type: Common License

Record date: 20210208