CN105374366A - Method and system for wearable device to identify meaning - Google Patents


Info

Publication number
CN105374366A
CN105374366A (application CN201510650043.9A)
Authority
CN
China
Prior art keywords
voice
user
physiological data
word
data parameter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510650043.9A
Other languages
Chinese (zh)
Inventor
郑战海
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201510650043.9A priority Critical patent/CN105374366A/en
Publication of CN105374366A publication Critical patent/CN105374366A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques for estimating an emotional state
    • G10L25/66 Speech or voice analysis techniques for extracting parameters related to health condition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Signal Processing (AREA)
  • General Health & Medical Sciences (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Child & Adolescent Psychology (AREA)
  • Epidemiology (AREA)
  • Public Health (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a method and a system for semantic recognition by a wearable device. The method comprises the steps of: obtaining the voice uttered by a user and the physiological data parameters measured while the user speaks; recognizing the text of the voice, and identifying the user's emotion from the physiological data parameters; and recognizing the semantics from the text and the user's emotion. In the course of voice semantic recognition, the voice uttered by the user is combined with the user's emotion at the time of speaking, so the semantics can be recognized accurately, substantially improving the user experience.

Description

Method and system for semantic recognition by a wearable device
The present invention relates to the field of semantic recognition, and in particular to a method and system for semantic recognition by a wearable device.
Background
Because of the limitations of their screens, voice control is becoming the main interaction mode for current wearable devices. Present speech semantic recognition, however, judges semantics entirely from acoustic features and surrounding context: speech recognition can only convert speech into text and then infer the semantics from the text. The sound alone does not always reflect changes in the speaker's emotion well, so the device cannot adapt its feedback to the user's current mood, and the wearable's inherent ability to monitor physiological state goes unused. The same words spoken with different emotions carry different tones; these may be hard to distinguish from the voice alone, but physiological measurement data can reveal the emotion well. On the basis of the same text, the emotion can therefore be combined to achieve more intelligent recognition.
Summary of the invention
The invention provides a method and system for semantic recognition by a wearable device, which recognize the semantics accurately from the text of the voice uttered by the user and the user's emotion.
To realize the above design, the invention adopts the following technical solutions.
In one aspect, a method for semantic recognition by a wearable device comprises:
obtaining the voice uttered by a user and the physiological data parameters measured while the voice is uttered;
recognizing the text of the voice, and identifying the user's emotion from the physiological data parameters;
recognizing the semantics from the text and the user's emotion.
Obtaining the voice uttered by the user and the physiological data parameters measured while the voice is uttered may comprise:
opening the voice semantic recognition control interaction of the wearable device, and obtaining the voice uttered by the user;
starting the physiological sensors of the wearable device, and obtaining the physiological data parameters measured while the user speaks.
Recognizing the text of the voice and identifying the user's emotion from the physiological data parameters may comprise:
recognizing, through the voice semantic recognition control interaction, the text of the voice;
identifying the user's emotion according to a preset relation between the physiological data parameters of the physiological sensors and emotions.
The physiological sensors may comprise a heart-rate/pulse sensor and a galvanic skin response sensor.
The physiological data parameters may comprise heart rate, skin temperature, skin conductance level, skin conductance response, galvanic skin reflex, skin potential, skin resistance, and skin conductance.
The user's emotion may comprise sadness, happiness, surprise, and anger.
Recognizing the semantics from the text and the user's emotion may comprise:
presetting the text recognized from the user's voice and the user emotion identified from the physiological data parameters at the time of speaking, to form a voice library;
matching the text and the user's emotion against the voice library to recognize the semantics.
After recognizing the semantics from the text and the user's emotion, the method may further comprise:
giving information feedback according to the semantics and the user's emotion.
In another aspect, a system for semantic recognition by a wearable device comprises:
an acquisition module, which obtains the voice uttered by a user and the physiological data parameters measured while the voice is uttered;
a first recognition module, which recognizes the text of the voice and identifies the user's emotion from the physiological data parameters;
a second recognition module, which recognizes the semantics from the text and the user's emotion.
The acquisition module may comprise:
a first acquisition module, which opens the voice semantic recognition control interaction of the wearable device and obtains the voice uttered by the user; and
a detection module, which starts the physiological sensors of the wearable device and obtains the physiological data parameters measured while the user speaks.
The first recognition module may comprise:
a text recognition module, which recognizes, through the voice semantic recognition control interaction, the text of the voice; and
an emotion recognition module, which identifies the user's emotion according to the preset relation between the physiological data parameters of the physiological sensors and emotions.
The second recognition module may comprise:
a presetting module, which presets the text recognized from the user's voice and the user emotion identified from the physiological data parameters, forming a voice library; and
a matching module, which matches the text and the user's emotion against the voice library to recognize the semantics.
The system may further comprise:
a feedback module, which gives information feedback according to the semantics and the user's emotion.
The beneficial effect of the invention is as follows: the invention obtains the voice uttered by the user and the physiological data parameters measured while the voice is uttered, recognizes the text of the voice, identifies the user's emotion from the physiological data parameters, and recognizes the semantics from the text and the emotion. In the course of voice semantic recognition, the voice uttered by the user is combined with the user's emotion at the time of speaking, so the semantics can be recognized accurately, greatly improving the user experience.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from the content of the embodiments and these drawings without creative effort.
Fig. 1 is a flowchart of a first embodiment of the method for semantic recognition by a wearable device provided in the detailed description of the invention.
Fig. 2 is a flowchart of a second embodiment of the method for semantic recognition by a wearable device provided in the detailed description of the invention.
Fig. 3 is a flowchart of a third embodiment of the method for semantic recognition by a wearable device provided in the detailed description of the invention.
Fig. 4 is a flowchart of a fourth embodiment of the method for semantic recognition by a wearable device provided in the detailed description of the invention.
Fig. 5 is a flowchart of a fifth embodiment of the method for semantic recognition by a wearable device provided in the detailed description of the invention.
Fig. 6 is a block diagram of a first embodiment of the system for semantic recognition by a wearable device provided in the detailed description of the invention.
Fig. 7 is a block diagram of a second embodiment of the system for semantic recognition by a wearable device provided in the detailed description of the invention.
Detailed description
To make clear the technical problem solved, the technical solutions adopted, and the technical effects achieved by the present invention, the technical solutions of the embodiments of the invention are described in further detail below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. All other embodiments obtained from them by those skilled in the art without creative effort fall within the protection scope of the invention.
Please refer to Fig. 1, a flowchart of a first embodiment of the method for semantic recognition by a wearable device provided in the detailed description of the invention. As shown in the figure, the method comprises:
Step S101: obtain the voice uttered by the user and the physiological data parameters measured while the voice is uttered.
Specifically, the wearable device is provided with a heart-rate/pulse sensor and a galvanic skin response sensor. When semantic recognition is performed and the user speaks, the wearable device obtains the user's voice and, through the heart-rate/pulse sensor and the galvanic skin response sensor, the physiological data parameters at the time of speaking. In this embodiment, the physiological data parameters measured while the user speaks include: heart rate, skin temperature, skin conductance level, skin conductance response, galvanic skin reflex, skin potential, skin resistance, skin conductance, and the like.
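The acquisition step bundles two synchronized inputs: the captured audio and the sensor readings taken while the user speaks. The sketch below illustrates one way this pairing could be represented in code. It is not part of the patent: the class and function names are hypothetical, the parameter subset and units are assumptions, and the hardware reads are stubbed.

```python
from dataclasses import dataclass

@dataclass
class PhysiologicalSample:
    # A subset of the parameters listed in this embodiment; units are assumed.
    heart_rate_bpm: float       # heart rate, beats per minute
    skin_temperature_c: float   # skin temperature, degrees Celsius
    skin_conductance_us: float  # skin conductance, microsiemens

@dataclass
class VoiceSample:
    audio: bytes                     # raw audio captured by the microphone
    physiology: PhysiologicalSample  # readings taken while the voice is uttered

def acquire_sample(read_microphone, read_sensors):
    """Step S101 sketch: capture voice and physiological data together."""
    return VoiceSample(audio=read_microphone(), physiology=read_sensors())

# Stubbed hardware reads, for illustration only.
sample = acquire_sample(
    read_microphone=lambda: b"\x00\x01...pcm...",
    read_sensors=lambda: PhysiologicalSample(72.0, 33.5, 4.2),
)
```

Keeping the two readings in one record preserves the synchronization the method relies on: the emotion is inferred from the physiology measured at the moment of speaking, not at some other time.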
Step S102: recognize the text of the voice, and identify the user's emotion from the physiological data parameters.
Specifically, the text corresponding to the voice uttered by the user is recognized through the voice semantic recognition control interaction on the wearable device, and the user's emotion is identified from the physiological data parameters recorded by the physiological sensors on the wearable device at the time of speaking. In this embodiment, the user's emotion comprises sadness, happiness, surprise, and anger.
Step S103: recognize the semantics from the text and the user's emotion.
Specifically, the semantics of the user are recognized from the text produced by the voice semantic recognition control interaction on the wearable device together with the user emotion reflected by the recorded physiological parameters.
In summary, this embodiment obtains the voice uttered by the user and the physiological data parameters measured while the voice is uttered, recognizes the text of the voice, identifies the user's emotion from the physiological data parameters, and recognizes the semantics from the text and the emotion. By combining the voice uttered by the user with the user's emotion at the time of speaking, the semantics can be recognized accurately, greatly improving the user experience.
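The three steps of this first embodiment can be read as a small pipeline. The sketch below is a hypothetical outline only: `recognize_text` and `classify_emotion` are stubs standing in for the device's speech-to-text function and its preset parameter-to-emotion relation, neither of which the patent specifies in code.

```python
def recognize_text(audio):
    # Stub for the speech-to-text function of step S102.
    return "turn up the volume"

def classify_emotion(physiology):
    # Stub for the preset parameter-to-emotion relation of step S102.
    return "happy"

def recognize_semantics(text, emotion):
    # Step S103: the recognized meaning depends on both the text and the emotion.
    return {"text": text, "emotion": emotion, "meaning": f"{text} [{emotion}]"}

def semantic_pipeline(audio, physiology):
    # Steps S102-S103 chained; acquisition (S101) is assumed done by the caller.
    return recognize_semantics(recognize_text(audio), classify_emotion(physiology))

result = semantic_pipeline(b"...", {"heart_rate": 72})
```

The point of the structure is that `recognize_semantics` receives the emotion as a first-class input rather than inferring everything from the audio alone.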
Please refer to Fig. 2, a flowchart of a second embodiment of the method for semantic recognition by a wearable device provided in the detailed description of the invention. As shown in the figure, the method comprises:
Step S201: open the voice semantic recognition control interaction of the wearable device, and obtain the voice uttered by the user.
Step S202: start the physiological sensors of the wearable device, and obtain the physiological data parameters measured while the user speaks.
Specifically, the physiological sensors comprise a heart-rate/pulse sensor and a galvanic skin response sensor; the physiological data parameters comprise heart rate, skin temperature, skin conductance level, skin conductance response, galvanic skin reflex, skin potential, skin resistance, and skin conductance.
Step S203: recognize the text of the voice, and identify the user's emotion from the physiological data parameters.
Step S204: recognize the semantics from the text and the user's emotion.
In summary, this embodiment opens the voice semantic recognition control interaction of the wearable device to obtain the user's voice, starts the physiological sensors to obtain the physiological data parameters at the time of speaking, recognizes the text of the voice and the emotion corresponding to the physiological data parameters, and determines the semantics of the user's voice interaction from the emotion and the text, making voice interaction more accurate and more natural and improving the user experience.
Please refer to Fig. 3, a flowchart of a third embodiment of the method for semantic recognition by a wearable device provided in the detailed description of the invention. As shown in the figure, the method comprises:
Step S301: open the voice semantic recognition control interaction of the wearable device, and obtain the voice uttered by the user.
Step S302: start the physiological sensors of the wearable device, and obtain the physiological data parameters measured while the user speaks.
Step S303: recognize, through the voice semantic recognition control interaction, the text of the voice.
Specifically, the speech-to-text function within the voice semantic recognition control interaction recognizes the text corresponding to the voice uttered by the user.
Step S304: identify the user's emotion according to the preset relation between the physiological data parameters of the physiological sensors and emotions.
Specifically, the physiological data parameters comprise heart rate, skin temperature, skin conductance level, skin conductance response, galvanic skin reflex, skin potential, skin resistance, and skin conductance. Heart rate is a good physiological indicator of emotion: when the user is content or happy, the heartbeat is normal, at 60-100 beats per minute; in states of tension, fear, or intense anger, the heartbeat accelerates. As for the galvanic skin response: when the emotional state changes, the dilation and contraction of blood vessels in the skin and changes in sweat-gland secretion cause changes in skin resistance. Three main factors affect the baseline electrodermal level: arousal level, temperature, and activity. The electrodermal level is lower in the morning, peaks toward noon, and falls again at night. The electrodermal response also reflects the body's temperature regulation: when the temperature is high and the body needs to dissipate heat, the skin perspires and the electrodermal level is high; when the temperature is lower, the electrodermal level is low. In addition, when the user is preparing for a task the electrodermal level rises gradually; during an activity it rises to a correspondingly higher level; at rest it falls.
In this embodiment, the preset relation between physiological data parameters and emotions is established from the ranges of physiological data parameters that correspond, in daily life, to sadness, happiness, surprise, and anger. When the detected physiological data parameters fall within the range for sadness, the emotion at the time of speaking is identified as sad; when they fall within the range for happiness, the emotion is identified as happy; surprise and anger are identified in the same way.
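The range matching described above can be sketched as a simple threshold classifier. The numeric ranges below are illustrative assumptions, not values from the patent: only the 60-100 bpm "normal" band appears in the description, and the skin-conductance bands are invented placeholders.

```python
# Hypothetical preset relation: emotion -> (heart-rate range in bpm,
# skin-conductance range in microsiemens). Only the 60-100 bpm band comes
# from the description; all other numbers are placeholder assumptions.
PRESET_RELATION = {
    "happy":     ((60, 100),  (2.0, 6.0)),
    "sad":       ((50, 70),   (0.5, 2.0)),
    "surprised": ((90, 130),  (6.0, 10.0)),
    "angry":     ((100, 160), (8.0, 14.0)),
}

def classify_emotion(heart_rate, skin_conductance):
    """Return the first preset emotion whose ranges contain both readings."""
    for emotion, ((hr_lo, hr_hi), (sc_lo, sc_hi)) in PRESET_RELATION.items():
        if hr_lo <= heart_rate <= hr_hi and sc_lo <= skin_conductance <= sc_hi:
            return emotion
    return "neutral"  # fall-back when no preset range matches
```

A real device would calibrate these ranges per user and account for the baseline factors mentioned above (arousal, temperature, activity); the sketch only shows the shape of the range lookup.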
Step S305: recognize the semantics from the text and the user's emotion.
In summary, this embodiment recognizes the text of the voice through the voice semantic recognition control interaction, identifies the user's emotion according to the preset relation between physiological data parameters and emotions, and combines the emotion with the text to recognize the semantics, greatly improving the accuracy of voice interaction.
Please refer to Fig. 4, a flowchart of a fourth embodiment of the method for semantic recognition by a wearable device provided in the detailed description of the invention. As shown in the figure, the method comprises:
Step S401: obtain the voice uttered by the user and the physiological data parameters measured while the voice is uttered.
Step S402: recognize the text of the voice, and identify the user's emotion from the physiological data parameters.
Step S403: preset the text recognized from the user's voice and the user emotion identified from the physiological data parameters at the time of speaking, forming a voice library.
Specifically, for speech recognition, the relation between voices and their corresponding text is established empirically and preset. The preset relation between physiological data parameters and emotions is established from the ranges of physiological data parameters corresponding, in daily life, to sadness, happiness, surprise, and anger: when the detected parameters fall within the range for sadness the emotion is identified as sad, when they fall within the range for happiness the emotion is identified as happy, and surprise and anger are identified in the same way. This establishes the relation between the physiological data parameters at the time of speaking and the user's emotion. The relation between voices and their text, together with the relation between physiological data parameters and emotions, form the voice library.
Step S404: match the text and the user's emotion against the voice library to recognize the semantics.
In summary, this embodiment presets the text recognized from the user's voice and the user emotion identified from the physiological data parameters at the time of speaking, forming a voice library, and matches the text and emotion against that library to recognize the semantics. The operation is simple and efficient, improving the user experience.
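The matching step can be sketched as a lookup keyed on the (text, emotion) pair. The library entries below are invented examples for illustration; the patent specifies only the structure of the library, not its contents, and the fallback behaviour is an assumption.

```python
# Hypothetical voice library: (text, emotion) -> semantics. Entries are
# invented examples; the patent does not enumerate any.
VOICE_LIBRARY = {
    ("turn it up", "happy"): "increase volume moderately",
    ("turn it up", "angry"): "increase volume; user is frustrated, confirm first",
    ("call mom",   "sad"):   "place call and use a gentle prompt tone",
}

def match_semantics(text, emotion, library=VOICE_LIBRARY):
    """Step S404 sketch: look up text plus emotion; fall back to text alone."""
    semantics = library.get((text, emotion))
    if semantics is not None:
        return semantics
    # Assumed fallback: first entry with the same text, ignoring emotion.
    for (lib_text, _), lib_semantics in library.items():
        if lib_text == text:
            return lib_semantics
    return None
```

Note how the same text yields different semantics under different emotions, which is exactly the distinction the embodiment is after.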
Please refer to Fig. 5, a flowchart of a fifth embodiment of the method for semantic recognition by a wearable device provided in the detailed description of the invention. As shown in the figure, the method comprises:
Step S501: obtain the voice uttered by the user and the physiological data parameters measured while the voice is uttered.
Step S502: recognize the text of the voice, and identify the user's emotion from the physiological data parameters.
Step S503: recognize the semantics from the text and the user's emotion.
Step S504: give information feedback according to the semantics and the user's emotion.
Specifically, the wearable device uses voice or text to give the user appropriate information feedback according to the recognized semantics and the user's emotion. For example, when the user gives similar voice commands repeatedly and the accompanying emotion is irritable, this indicates that the feedback previously given for the recognized commands was inaccurate and needs to be corrected in time; the wearable device then first apologizes, and then adopts a gentler guidance mode, inviting the user to rephrase the command in an equivalent way.
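The feedback rule in this example can be sketched as a small decision function. The repeat-count threshold and the message strings are illustrative assumptions; the patent describes the behaviour (apologize, then guide more gently) but not any concrete wording or thresholds.

```python
def feedback(semantics, emotion, repeat_count):
    """Step S504 sketch: repeated similar commands plus an angry/irritable
    emotion suggest the previous response was wrong, so apologize and invite
    the user to rephrase. Strings and threshold are placeholder assumptions."""
    if repeat_count > 1 and emotion == "angry":
        return ("Sorry, I did not get that right. "
                "Could you try saying it another way?")
    # Otherwise act on the recognized semantics directly.
    return f"OK: {semantics}"
```

The same recognized semantics thus produces different responses depending on the user's emotional state, which is the point of combining the two signals.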
In summary, this embodiment combines the user's emotion with voice semantic recognition to give appropriate feedback, so that the user feels they are conversing not with a cold machine but with a natural person who can perceive emotion, greatly improving the user experience.
The following are embodiments of the system for semantic recognition by a wearable device. The system embodiments are implemented on the basis of the method embodiments; for details not fully described in the system embodiments, please refer to the method embodiments.
Please refer to Fig. 6, a block diagram of a first embodiment of the system for semantic recognition by a wearable device provided in the detailed description of the invention. As shown in the figure, the system comprises:
an acquisition module 610, which obtains the voice uttered by the user and the physiological data parameters measured while the voice is uttered;
a first recognition module 620, which recognizes the text of the voice and identifies the user's emotion from the physiological data parameters. Specifically, the user's emotion comprises sadness, happiness, surprise, and anger;
a second recognition module 630, which recognizes the semantics from the text and the user's emotion.
In summary, the above units work together to obtain the voice uttered by the user and the physiological data parameters measured while the voice is uttered, recognize the text of the voice, identify the user's emotion from the physiological data parameters, and recognize the semantics from the text and the emotion. By combining the voice uttered by the user with the user's emotion at the time of speaking, the semantics can be recognized accurately, greatly improving the user experience.
Please refer to Fig. 7, a block diagram of a second embodiment of the system for semantic recognition by a wearable device provided in the detailed description of the invention. As shown in the figure, the system comprises:
an acquisition module 610, which obtains the voice uttered by the user and the physiological data parameters measured while the voice is uttered.
Specifically, the acquisition module 610 comprises a first acquisition module 611 and a detection module 612:
the first acquisition module 611 opens the voice semantic recognition control interaction of the wearable device and obtains the voice uttered by the user;
the detection module 612 starts the physiological sensors of the wearable device and obtains the physiological data parameters measured while the user speaks.
Specifically, the physiological sensors comprise a heart-rate/pulse sensor and a galvanic skin response sensor; the physiological data parameters comprise heart rate, skin temperature, skin conductance level, skin conductance response, galvanic skin reflex, skin potential, skin resistance, and skin conductance.
A first recognition module 620 recognizes the text of the voice and identifies the user's emotion from the physiological data parameters.
Specifically, the first recognition module 620 comprises a text recognition module 621 and an emotion recognition module 622:
the text recognition module 621 recognizes, through the voice semantic recognition control interaction, the text of the voice;
the emotion recognition module 622 identifies the user's emotion according to the preset relation between the physiological data parameters of the physiological sensors and emotions.
A second recognition module 630 recognizes the semantics from the text and the user's emotion.
Specifically, the second recognition module 630 comprises a presetting module 631 and a matching module 632:
the presetting module 631 presets the text recognized from the user's voice and the user emotion identified from the physiological data parameters at the time of speaking, forming a voice library;
the matching module 632 matches the text and the user's emotion against the voice library to recognize the semantics.
A feedback module 640 gives information feedback according to the semantics and the user's emotion.
In summary, the above units work together: within the acquisition module 610, the first acquisition module 611 obtains the voice uttered by the user and the detection module 612 obtains the physiological data parameters at the time of speaking; within the first recognition module 620, the text recognition module 621 recognizes the text of the voice and the emotion recognition module 622 identifies the user's emotion; within the second recognition module 630, the presetting module 631 forms the voice library and the matching module 632 recognizes the semantics; and the feedback module 640 gives information feedback. The embodiment fuses voice semantic recognition with the physiological monitoring that is a principal capability of wearable devices, so that the recognition process incorporates the user's emotional changes. The recognized semantics and the interaction results fed back to the user thus feel more like interacting with a natural person, grasp the user's emotion more accurately, and provide a better user experience.
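The module composition of Fig. 7 can be wired up as a hypothetical sketch. Every class name mirrors a module number from the description, but all internal logic (the stub recognizer, the 100 bpm threshold, the message strings) is an invented placeholder, not the patent's implementation.

```python
class AcquisitionModule:          # module 610: voice + physiological data
    def run(self):
        return "turn it up", {"heart_rate": 120}  # stubbed capture

class FirstRecognitionModule:     # module 620: text (621) + emotion (622)
    def run(self, voice, physiology):
        # Stubs: voice stands in for recognized text; threshold is assumed.
        emotion = "angry" if physiology["heart_rate"] > 100 else "happy"
        return voice, emotion

class SecondRecognitionModule:    # module 630: (text, emotion) -> semantics
    def run(self, text, emotion):
        return f"{text} [{emotion}]"

class FeedbackModule:             # module 640: respond per semantics + emotion
    def run(self, semantics, emotion):
        return f"feedback for '{semantics}' given {emotion} user"

class WearableSystem:
    """Hypothetical wiring of modules 610-640 in the order Fig. 7 implies."""
    def interact(self):
        voice, physio = AcquisitionModule().run()
        text, emotion = FirstRecognitionModule().run(voice, physio)
        semantics = SecondRecognitionModule().run(text, emotion)
        return FeedbackModule().run(semantics, emotion)
```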
The technical principles of the present invention have been described above with reference to specific embodiments. These descriptions are intended only to explain the principles of the invention and shall not be construed as limiting its protection scope in any way. Based on the explanations herein, those skilled in the art can conceive of other embodiments of the invention without creative effort, and all such embodiments fall within the protection scope of the invention.

Claims (10)

1. A method for semantic recognition by a wearable device, characterized by comprising:
obtaining the voice uttered by a user and the physiological data parameters measured while the voice is uttered;
recognizing the text of the voice, and identifying the user's emotion from the physiological data parameters;
recognizing the semantics from the text and the user's emotion.
2. The method according to claim 1, characterized in that obtaining the voice uttered by the user and the physiological data parameters measured while the voice is uttered comprises:
opening the voice semantic recognition control interaction of the wearable device, and obtaining the voice uttered by the user;
starting the physiological sensors of the wearable device, and obtaining the physiological data parameters measured while the user speaks.
3. The method according to claim 2, characterized in that recognizing the text of the voice and identifying the user's emotion from the physiological data parameters comprises:
recognizing, through the voice semantic recognition control interaction, the text of the voice;
identifying the user's emotion according to a preset relation between the physiological data parameters of the physiological sensors and emotions.
4. The method according to claim 2 or 3, characterized in that the physiological sensors comprise a heart-rate/pulse sensor and a galvanic skin response sensor.
5. The method according to claim 1, characterized in that the physiological data parameters comprise heart rate, skin temperature, skin conductance level, skin conductance response, galvanic skin reflex, skin potential, skin resistance, and skin conductance.
6. The method according to claim 1, characterized in that the user's emotion comprises sadness, happiness, surprise, and anger.
7. The method according to claim 1, characterized in that recognizing the semantics from the text and the user's emotion comprises:
presetting the text recognized from the user's voice and the user emotion identified from the physiological data parameters at the time of speaking, to form a voice library;
matching the text and the user's emotion against the voice library to recognize the semantics.
8. The method according to claim 1, characterized in that, after recognizing the semantics from the text and the user's emotion, the method further comprises:
giving information feedback according to the semantics and the user's emotion.
9. a system for wearable device identification semanteme, is characterized in that, comprising:
Acquisition module, the voice that acquisition user sends and physiological data parameter when sending voice;
First identification module, identifies the word of described voice, identifies user emotion according to described physiological data parameter;
Second identification module, identifies semanteme by described word and described user emotion.
10. The system according to claim 9, wherein:
the acquisition module comprises:
a first acquisition module that opens the voice semantic recognition control interaction of the wearable device and acquires the voice uttered by the user; and
a detection module that starts the biosensor of the wearable device and acquires the physiological data parameters while the user utters the voice;
the first identification module comprises:
a word recognition module that recognizes the words of the voice through the voice semantic recognition control interaction; and
an emotion identification module that identifies the user emotion according to a preset relation between the physiological data parameters from the biosensor and moods;
the second identification module comprises:
a presetting module that presets the words obtained by speech recognition and the user emotion identified from the physiological data parameters captured while the voice is uttered, to form a voice library; and
a matching module that matches the words and the user emotion against the voice library to identify the semantics; and
the system further comprises:
a feedback module that performs information feedback according to the semantics and the user emotion.
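The module decomposition of claims 9–10 can be wired together as a simple pipeline. In the sketch below, speech recognition and sensor I/O are stubbed out (the stub outputs are invented), so only the data flow between the acquisition, identification, matching, and feedback modules is shown.

```python
# Sketch of the module pipeline in claims 9-10, expressed as plain
# functions. Recognition and sensing are stubs; the wiring is the point.

from dataclasses import dataclass

@dataclass
class Sample:
    audio: bytes            # voice captured by the acquisition module
    heart_rate: float       # physiological data from the biosensor
    skin_conductance: float

def recognize_words(audio: bytes) -> str:
    return "fine"  # stub for the speech-recognition step

def recognize_emotion(s: Sample) -> str:
    # Stub preset relation between physiological data and mood.
    return "angry" if s.heart_rate > 100 else "happy"

def identify_semantics(word: str, emotion: str) -> str:
    library = {("fine", "angry"): "reluctant agreement"}  # stub voice library
    return library.get((word, emotion), word)

def feedback(semantics: str, emotion: str) -> str:
    return f"[{emotion}] {semantics}"

def pipeline(s: Sample) -> str:
    word = recognize_words(s.audio)                 # first identification
    emotion = recognize_emotion(s)                  # emotion identification
    semantics = identify_semantics(word, emotion)   # second identification
    return feedback(semantics, emotion)             # feedback module
```

Calling `pipeline` on a sample with an elevated heart rate routes the same recognized word through the "angry" branch of the voice library.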
CN201510650043.9A 2015-10-09 2015-10-09 Method and system for wearable device to identify meaning Pending CN105374366A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510650043.9A CN105374366A (en) 2015-10-09 2015-10-09 Method and system for wearable device to identify meaning


Publications (1)

Publication Number Publication Date
CN105374366A true CN105374366A (en) 2016-03-02

Family

ID=55376494

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510650043.9A Pending CN105374366A (en) 2015-10-09 2015-10-09 Method and system for wearable device to identify meaning

Country Status (1)

Country Link
CN (1) CN105374366A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102831891A (en) * 2011-06-13 2012-12-19 富士通株式会社 Processing method and system for voice data
CN103500578A (en) * 2013-10-22 2014-01-08 上海云视科技有限公司 Voice control method and device
CN103561652A (en) * 2011-06-01 2014-02-05 皇家飞利浦有限公司 Method and system for assisting patients
WO2014036263A1 (en) * 2012-08-29 2014-03-06 Brown University An accurate analysis tool and method for the quantitative acoustic assessment of infant cry
CN103811009A (en) * 2014-03-13 2014-05-21 华东理工大学 Smart phone customer service system based on speech analysis
CN104038804A (en) * 2013-03-05 2014-09-10 三星电子(中国)研发中心 Subtitle synchronization device and subtitle synchronization method based on speech recognition
US20140303982A1 (en) * 2013-04-09 2014-10-09 Yally Inc. Phonetic conversation method and device using wired and wiress communication

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Li: "Speech Signal Processing" (《语音信号处理》), 31 March 2003 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106601231A (en) * 2016-12-22 2017-04-26 深圳市元征科技股份有限公司 Vehicle control method and apparatus
CN106910514A (en) * 2017-04-30 2017-06-30 上海爱优威软件开发有限公司 Method of speech processing and system
CN107016996A (en) * 2017-06-06 2017-08-04 广东小天才科技有限公司 A kind of processing method and processing device of voice data
CN107393529A (en) * 2017-07-13 2017-11-24 珠海市魅族科技有限公司 Audio recognition method, device, terminal and computer-readable recording medium
CN107463684A (en) * 2017-08-09 2017-12-12 珠海市魅族科技有限公司 Voice replying method and device, computer installation and computer-readable recording medium
CN108877801A (en) * 2018-06-14 2018-11-23 南京云思创智信息科技有限公司 More wheel dialog semantics based on multi-modal Emotion identification system understand subsystem
CN108877801B (en) * 2018-06-14 2020-10-02 南京云思创智信息科技有限公司 Multi-turn dialogue semantic understanding subsystem based on multi-modal emotion recognition system
CN110808041A (en) * 2019-09-24 2020-02-18 深圳市火乐科技发展有限公司 Voice recognition method, intelligent projector and related product
CN110808041B (en) * 2019-09-24 2021-01-12 深圳市火乐科技发展有限公司 Voice recognition method, intelligent projector and related product
WO2021073646A1 (en) * 2019-10-18 2021-04-22 四川大学华西医院 Method for evaluating emotional characteristics based on language guidance and heart rate response

Similar Documents

Publication Publication Date Title
CN105374366A (en) Method and system for wearable device to identify meaning
CN109460752B (en) Emotion analysis method and device, electronic equipment and storage medium
EP2713881B1 (en) Method and system for assisting patients
WO2017020660A1 (en) Method and apparatus for starting preset function in wearable electronic terminal
KR102416782B1 (en) Method for operating speech recognition service and electronic device supporting the same
CN105144027A (en) Using nonverbal communication in determining actions
JP2006071936A (en) Dialogue agent
CN108629313A (en) Emotion adjustment method, device, system and computer storage media
KR102276415B1 (en) Apparatus and method for predicting/recognizing occurrence of personal concerned context
US20230046658A1 (en) Synthesized speech audio data generated on behalf of human participant in conversation
EP3890342A1 (en) Method and apparatus for awakening wearable device
CN110658742A (en) Multi-mode cooperative control wheelchair control system and method
CN110598611A (en) Nursing system, patient nursing method based on nursing system and readable storage medium
CN112217941A (en) Method, apparatus and medium for operating a digital assistant
CN113287175A (en) Interactive health status evaluation method and system thereof
Zhang et al. Speechin: A smart necklace for silent speech recognition
Katada et al. Is she truly enjoying the conversation? analysis of physiological signals toward adaptive dialogue systems
JP2020126195A (en) Voice interactive device, control device for voice interactive device and control program
US10983808B2 (en) Method and apparatus for providing emotion-adaptive user interface
WO2017179262A1 (en) Information processing device, information processing method, and program
CN109616109B (en) Voice awakening method, device and system
JP7021488B2 (en) Information processing equipment and programs
US20160321356A1 (en) A device and a method for establishing a personal digital profile of a user
WO2018116556A1 (en) Information processing device and information processing method
CN115019798A (en) Control method and device of voice recognition equipment, electronic equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160302