CN110032740A - Customized personal semantic learning and application method - Google Patents

Customized personal semantic learning and application method

Info

Publication number
CN110032740A
Authority
CN
China
Prior art keywords
information
recognition result
database
personal database
common database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910320548.7A
Other languages
Chinese (zh)
Inventor
卢劲松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201910320548.7A
Publication of CN110032740A
Current legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Machine Translation (AREA)

Abstract

A customized personal semantic learning and application method comprises the following steps: a) define a personal database; b) define a common database; c) input information and call the personal database for recognition; d) perform an operation according to the personal-database recognition result; e) input the information and call the common database for recognition; f) perform an operation according to the common-database recognition result; g) choose whether to store the recognition result in the personal database: if the result is stored, execute step a; if it is not stored, execute step h; h) output the corresponding semantic information. Compared with the prior art, the beneficial effects of the invention are that sounds, images, gestures, and the like can be converted into corresponding semantic information; through user-defined semantics, distinctive sounds, images, and gestures can be recognized as standard semantic information; semantic conversion between different types of information can be achieved; the recognition rate and recognition speed are effectively improved, the error rate is reduced, and personalized recognition is realized.

Description

Customized personal semantic learning and application method
Technical field
The present invention relates to the field of semantic recognition, and more particularly to personalized, customizable semantic recognition and learning.
Background art
With the continuous progress of science and technology, semantic recognition is being applied to more and more intelligent terminals. Constrained by factors such as processor performance, algorithm models, and network bandwidth, however, current semantic recognition mainly targets standard speech and a small number of dialects. For many dialects, for unclear speech, and for complex situations in which meaning cannot be expressed through speech at all, recognition rates are low, error rates are high, recognition is slow, and personalized recognition is not possible.
Summary of the invention
In view of the above problems, the present invention provides a customizable personal semantic learning and application method. The technical solution is as follows:
A customized personal semantic learning and application method comprises the following steps:
a) define a personal database;
b) define a common database;
c) input information and call the personal database for recognition;
d) perform an operation according to the personal-database recognition result:
d1) if recognition is correct, execute step h;
d2) if recognition is partial or fails, execute step e;
e) input the information and call the common database for recognition;
f) perform an operation according to the common-database recognition result:
if recognition is correct, execute step h;
if recognition is partial, screen out the correct information and execute step g;
if recognition fails, apply user-defined semantics to the input information and execute step g;
g) choose whether to store the recognition result in the personal database:
if the result is stored, execute step a;
if the result is not stored, execute step h;
h) output the corresponding semantic information.
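As an illustration only, the cascade of steps c through h can be sketched in a few lines of Python. The database and user-interaction interfaces used below (lookup, lookup_candidates, pick_correct, define_semantics, confirm_store, store) are assumptions made for this sketch and are not defined by the patent:

```python
def recognize(input_info, personal_db, common_db, user):
    """Minimal sketch of steps c-h: personal database first, common database as fallback."""
    # c/d: call the personal database for recognition.
    result = personal_db.lookup(input_info)
    if result is not None:                               # d1) correct recognition
        return result                                    # h) output semantic information

    # d2/e/f: partial or failed recognition -> call the common database.
    candidates = common_db.lookup_candidates(input_info)
    if len(candidates) == 1:                             # correct recognition
        return candidates[0]                             # h) output semantic information
    if candidates:                                       # partial recognition
        result = user.pick_correct(candidates)           # screen out the correct information
    else:                                                # recognition failed
        result = user.define_semantics(input_info)       # user-defined semantics

    # g) choose whether to store the recognition result in the personal database.
    if user.confirm_store(input_info, result):
        personal_db.store(input_info, result)            # learning step

    return result                                        # h) output semantic information
```

Once an input has been stored in this way, a repeated identical input is resolved directly by the first lookup, which is the learning effect the method aims for.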
Defining the personal database in step a means entering user-defined information as the content of the personal database.
The personal database contains correspondences between sounds, text, and images.
Defining the common database in step b means using an existing database as the common database.
The common database contains correspondences between sounds, text, and images.
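The internal structure of these databases is not specified further in the patent; the dataclasses below are a hypothetical sketch of how the sound/text/image correspondences of a personal or common database might be represented:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class SemanticEntry:
    """One correspondence between an input signal and its standard semantics."""
    modality: str            # "sound", "text", or "image"
    signature: bytes         # raw sample or feature vector of the input
    standard_text: str       # standard written form
    standard_speech: str     # identifier of the standard pronunciation

@dataclass
class SemanticDatabase:
    """Used both for the personal database (user-defined) and the common database (pre-existing)."""
    entries: Dict[str, SemanticEntry] = field(default_factory=dict)

    def store(self, key: str, entry: SemanticEntry) -> None:
        self.entries[key] = entry

    def lookup(self, key: str) -> Optional[SemanticEntry]:
        return self.entries.get(key)
```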
The method recognizes and learns the user's language (including dialects, unclear pronunciation, special sounds, and various gestures) to build the personal database. Whenever a new sound or gesture is recognized, the user can be prompted to decide whether to store it. As usage and stored data grow, the range of covered scenarios keeps expanding and the recognition rate rises. Taking Chinese as an example, once 2000-3000 commonly used characters have been accumulated, most usage scenarios are covered; for special scenarios, technical terms, popular expressions, and the like can be added. The system can then automatically convert the input into the corresponding standard text and speech and, when needed, translate it into a foreign language, meeting people's communication needs across languages.
Compared with the prior art, the beneficial effects of the invention are as follows: sounds, images, gestures, and the like can be converted into corresponding semantic information; through user-defined semantics, distinctive sounds, images, and gestures can be recognized as standard semantic information; semantic conversion between different types of information can be achieved; the recognition rate and recognition speed are effectively improved, the error rate is reduced, and personalized recognition is realized.
Brief description of the drawings
Fig. 1 is a flowchart of the invention.
Specific embodiments
Embodiment 1
The user first creates a personal database and a common database, both containing correspondences between sounds, text, and images. The personal database must be defined by the user; an existing database can be used as the common database. When the user needs to express semantic information and inputs the information for the first time, the personal database is called for recognition. If recognition is correct, the information is output directly. If recognition fails, the common database is called automatically; if that recognition is correct, the information is output directly. If recognition is partial, the user confirms whether it is accurate and chooses whether to store the result in the personal database, after which the semantic information is output. If recognition fails entirely, the user defines the semantic information manually and chooses whether to store the result in the personal database, after which the semantic information is output. Storing recognition results in the personal database is the learning process; storing results selectively avoids filling the personal database with unwanted, invalid entries. Once the learning process is complete and the same information is input again, the recognition result stored in the personal database is called preferentially, the corresponding semantic information is output, and it can then be converted into other languages and scripts as needed.
Embodiment 2
The user first creates a personal database and a common database containing speech, sign language, and the corresponding text. The personal database must be defined by the user; an existing database can be used as the common database. When the user needs to express semantics through sign language and inputs his or her own body movements through an image input device for the first time, the personal database is called preferentially for recognition. If recognition is correct, the information is output directly; if recognition fails, the common database is called automatically, and if that recognition is correct the information is output directly. If recognition is partial, the user confirms whether it is accurate and chooses whether to store the result in the personal database, after which the semantic information is output. If recognition fails entirely, the user defines the semantic information manually and chooses whether to store the result in the personal database, after which the semantic information is output. Storing recognition results in the personal database is the learning process; storing results selectively avoids filling the personal database with unwanted, invalid entries. After the learning process is complete, when the same sign-language information is input again, the recognition result stored in the personal database is called preferentially, the corresponding semantic information is output, and it can then be converted into other languages and scripts as needed.
Embodiment 3
When the user needs to express semantics through a special sound or gesture of his or her own, it can be input through a sound or image input device and then given a user-defined meaning; the user-defined semantic information is stored in the personal database, completing the learning process. When the special sound or gesture is input again, the recognition result stored in the personal database is called preferentially, the corresponding semantic information is output, and it can then be converted into other languages and scripts as needed.
Embodiment 4
When a wrong character appears during recognition, the user confirms and corrects it manually. Homophones, for example, can be resolved by intelligent ranking and by speaking the index number of the intended character; the first character listed for that pronunciation is the default, and characters are moved forward automatically according to usage frequency. Taking a conversation between a speaker of the Sichuan dialect of China and a speaker of a Texas (U.S.) dialect as an example, a wrong character can be corrected by speaking the index number shown on screen. If the user says "have a meal" and the display shows "1 shame, 2 meal, 3, 4 scold", the user only needs to say "1, change; 4, change", and the system enters the next correction page, displaying "1-1 eat, 1-2 pond, 1-3, 1-4 hold, 1-5 ruler, 1-6 speed mother, 4-1, 4-2, 4-3, 4-4 code, 4-5 scold". Confirming "1-1, 4-3" by voice then automatically displays the selected text, "have had a meal", which can simultaneously be translated into English text and speech for the other party, or into the Texas dialect according to personal needs. Conversely, the other party's Texas-dialect speech can be converted into text in any language or into sign language for the deaf and mute. Mixed input of speech, sign language, and body gestures can also be realized.
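A minimal sketch, under the assumption of a simple usage counter, of the numbered-candidate correction described above: homophone candidates are ranked by how often they have been chosen, the user selects one by speaking its index number, and the chosen character is promoted for future inputs. The class and method names are illustrative and not taken from the patent:

```python
from collections import Counter
from typing import Dict, List

class HomophoneCorrector:
    """Rank homophone candidates by usage frequency and select them by spoken index."""

    def __init__(self, candidates_by_pronunciation: Dict[str, List[str]]):
        self.candidates = candidates_by_pronunciation
        self.usage = Counter()                 # how often each character has been chosen

    def ranked(self, pronunciation: str) -> List[str]:
        # The most frequently used characters move to the front; the first one is the default.
        return sorted(self.candidates.get(pronunciation, []),
                      key=lambda ch: -self.usage[ch])

    def choose(self, pronunciation: str, spoken_index: int) -> str:
        # The user speaks a 1-based index number to pick the intended character.
        chosen = self.ranked(pronunciation)[spoken_index - 1]
        self.usage[chosen] += 1                # promotes this character next time
        return chosen
```

For instance, given a candidate list for one syllable containing the characters glossed above as "shame", "eat", "pond", and "ruler", speaking "2" would select "eat" and move it forward in later rankings.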
Embodiment 5
When a deaf-mute person communicates with a hearing person, the deaf-mute person's sign language is input through an image input device, the databases are called to recognize the semantics the user is expressing, and the result is converted into standard speech and text so that the hearing person can understand it. The hearing person's speech is then converted by the system into sign language or text that the deaf-mute person can recognize, allowing both sides to communicate without barriers.
Embodiment 6
When communication is needed in special circumstances, such as severely noisy environments or places where sound is prohibited, the user's gestures or movements are input through an image input device, the databases are called to recognize the semantics the user is expressing, and the result is converted into standard text, enabling written communication.
Embodiment 7
Various sounds can be input, stored, and recorded during use and recognized as standard text, for example song lyrics, the sound of running water, birdsong, motor vehicle noise, or trumpet calls, so that all kinds of sounds have corresponding semantics.
Embodiment 8
After a special pattern is recognized and defined in advance, it is stored in the personal database. When the special pattern is used, it is input through an image input device, the personal database is called for recognition, and the result is converted into standard speech and text, meeting the special communication needs of, for example, disabled users.
Embodiment 9
The personal database can be loaded into other applications in modular form, for example into a navigation application, to improve the navigation application's recognition of the user's personal voice and meet the user's needs.
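As a sketch of this modular reuse (the file format and function names are assumptions, not part of the patent), the personal database could be serialized to a file and loaded by a host application such as a navigation app:

```python
import json
from typing import Dict

def export_personal_db(entries: Dict[str, str], path: str) -> None:
    """Serialize the personal database so another application can load it as a module."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(entries, f, ensure_ascii=False, indent=2)

def load_personal_db(path: str) -> Dict[str, str]:
    """Load the personal database inside a host application (e.g. a navigation app)."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```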
Embodiment 10
When the user is in a special state (for example, while in motion), the voice produced changes and cannot be recognized correctly by calling the databases. The system then operates as it does for a first-time voice input: the user confirms whether the recognition is accurate, and the correct result is stored in the personal database, completing the learning process.
Embodiment 11
The present invention enables mixed input of information in different forms such as different languages, sign language, and graphics. In a multi-party remote video conference, when a sender issues sign-language information, each recipient can choose to receive it as text, speech, or graphics that he or she understands, achieving barrier-free communication.
Therefore, the present embodiments are to be considered in all respects as illustrative and not restrictive. The scope of the invention is indicated by the appended claims rather than by the foregoing description, and all changes that fall within the meaning and range of equivalency of the claims are intended to be embraced therein. No reference sign in the claims shall be construed as limiting the claim concerned.

Claims (5)

1. A customized personal semantic learning and application method, characterized by comprising the following steps:
a) define a personal database;
b) define a common database;
c) input information and call the personal database for recognition;
d) perform an operation according to the personal-database recognition result:
d1) if recognition is correct, execute step h;
d2) if recognition is partial or fails, execute step e;
e) input the information and call the common database for recognition;
f) perform an operation according to the common-database recognition result:
if recognition is correct, execute step h;
if recognition is partial, screen out the correct information and execute step g;
if recognition fails, apply user-defined semantics to the input information and execute step g;
g) choose whether to store the recognition result in the personal database:
if the result is stored, execute step a;
if the result is not stored, execute step h;
h) output the corresponding semantic information.
2. The customized personal semantic learning and application method according to claim 1, characterized in that defining the personal database in step a means entering user-defined information as the content of the personal database.
3. The customized personal semantic learning and application method according to claim 1 or 2, characterized in that the personal database contains correspondences between sounds, text, and images.
4. The customized personal semantic learning and application method according to claim 1, characterized in that defining the common database in step b means using an existing database as the common database.
5. The customized personal semantic learning and application method according to claim 1 or 4, characterized in that the common database contains correspondences between sounds, text, and images.
CN201910320548.7A 2019-04-20 2019-04-20 Customized personal semantic learning and application method Pending CN110032740A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910320548.7A CN110032740A (en) 2019-04-20 2019-04-20 Customized personal semantic learning and application method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910320548.7A CN110032740A (en) 2019-04-20 2019-04-20 Customized personal semantic learning and application method

Publications (1)

Publication Number Publication Date
CN110032740A (en) 2019-07-19

Family

ID=67239392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910320548.7A Pending CN110032740A (en) 2019-04-20 2019-04-20 Customized personal semantic learning and application method

Country Status (1)

Country Link
CN (1) CN110032740A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101527092A (en) * 2009-04-08 2009-09-09 西安理工大学 Computer assisted hand language communication method under special session context
CN102831195A (en) * 2012-08-03 2012-12-19 河南省佰腾电子科技有限公司 Individualized voice collection and semantics determination system and method
JP2015026057A (en) * 2013-07-29 2015-02-05 韓國電子通信研究院Electronics and Telecommunications Research Institute Interactive character based foreign language learning device and method
CN106446836A (en) * 2016-09-28 2017-02-22 戚明海 Sign language recognition and interpretation device
CN106649278A (en) * 2016-12-30 2017-05-10 三星电子(中国)研发中心 Method and system for extending spoken language dialogue system corpora
CN108268835A (en) * 2017-12-28 2018-07-10 努比亚技术有限公司 sign language interpretation method, mobile terminal and computer readable storage medium
CN108427910A (en) * 2018-01-30 2018-08-21 浙江凡聚科技有限公司 Deep-neural-network AR sign language interpreters learning method, client and server
CN109215638A (en) * 2018-10-19 2019-01-15 珠海格力电器股份有限公司 A kind of phonetic study method, apparatus, speech ciphering equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111477216B (en) Training method and system for voice and meaning understanding model of conversation robot
Merdivan et al. Dialogue systems for intelligent human computer interactions
CN111223498A (en) Intelligent emotion recognition method and device and computer readable storage medium
CN112352275A (en) Neural text-to-speech synthesis with multi-level textual information
CN111339771B (en) Text prosody prediction method based on multitasking multi-level model
CN112131359A (en) Intention identification method based on graphical arrangement intelligent strategy and electronic equipment
CN116863038A (en) Method for generating digital human voice and facial animation by text
CN112599113B (en) Dialect voice synthesis method, device, electronic equipment and readable storage medium
CN114911932A (en) Heterogeneous graph structure multi-conversation person emotion analysis method based on theme semantic enhancement
CN113628610A (en) Voice synthesis method and device and electronic equipment
CN116303966A (en) Dialogue behavior recognition system based on prompt learning
CN109933773A (en) A kind of multiple semantic sentence analysis system and method
GB2376554A (en) Artificial language generation and evaluation
CN112257432A (en) Self-adaptive intention identification method and device and electronic equipment
KR20220070826A (en) Utterance manipulation apparatus for retrieval-based response selection and method thereof
CN112489634A (en) Language acoustic model training method and device, electronic equipment and computer medium
CN117012177A (en) Speech synthesis method, electronic device, and storage medium
CN110032740A (en) It customizes individual character semanteme and learns application method
CN112242134A (en) Speech synthesis method and device
CN115374784A (en) Chinese named entity recognition method based on multi-mode information selective fusion
CN112150103B (en) Schedule setting method, schedule setting device and storage medium
CN115238048A (en) Quick interaction method for joint chart identification and slot filling
CN114974218A (en) Voice conversion model training method and device and voice conversion method and device
Fujimoto et al. Semi-supervised learning based on hierarchical generative models for end-to-end speech synthesis
O'Brien Knowledge-based systems in speech recognition: a survey

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190719