CN108415905A - Real time translating method, intelligent glasses and computer readable storage medium - Google Patents
Info
- Publication number
- CN108415905A (application CN201810184999.8A)
- Authority
- CN
- China
- Prior art keywords
- information
- translated
- real time
- type
- translation result
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/40—Processing or translation of natural language
-
- G—PHYSICS
- G02—OPTICS
- G02C—SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
- G02C11/00—Non-optical adjuncts; Attachment thereof
- G02C11/10—Electronic devices other than hearing aids
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Acoustics & Sound (AREA)
- Otolaryngology (AREA)
- Ophthalmology & Optometry (AREA)
- Optics & Photonics (AREA)
- Machine Translation (AREA)
Abstract
The invention discloses a real-time translation method, smart glasses, and a computer-readable storage medium. The real-time translation method includes: collecting scene information, and extracting the information to be translated contained in the scene information; determining the information type of the information to be translated, selecting the translation mode corresponding to the information type to translate the information to be translated, and obtaining the translation result corresponding to the information to be translated; and outputting the translation result. With the invention, the smart glasses can translate the information to be translated in the collected scene information in real time; the user merely needs to wear the smart glasses to obtain the translation result, which brings great convenience to activities such as study and social interaction.
Description
Technical field
The present invention relates to the field of wearable devices, and more particularly to a real-time translation method, smart glasses, and a computer-readable storage medium.
Background technology
In social interaction, business, and personal travel, people encounter the languages of many countries: menus, price lists, and contracts written in English, German, Japanese, Korean, and so on, as well as announcements broadcast in foreign languages at airports, railway stations, and subways. This causes inconvenience for people who are not proficient in multiple languages.
Invention content
The main purpose of the present invention is to provide a real-time translation method, smart glasses, and a computer-readable storage medium, aiming to solve the technical problem in the prior art that people who are not proficient in multiple languages cannot understand the meaning of a language of another country when they encounter it.
To achieve the above object, the present invention provides a real-time translation method applied to smart glasses. The real-time translation method includes:
collecting scene information, and extracting the information to be translated contained in the scene information;
determining the information type of the information to be translated, selecting the translation mode corresponding to the information type to translate the information to be translated, and obtaining the translation result corresponding to the information to be translated; and
outputting the translation result.
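The three claimed steps can be sketched as a control-flow skeleton. This is a minimal illustration, not the patent's implementation: every injected component (collector, classifier, translator table, output sink) is a hypothetical placeholder.

```python
# Minimal sketch of the claimed three-step flow (collect -> classify &
# translate -> output). Every injected component here is a hypothetical
# placeholder, not part of the patent's disclosure.

def real_time_translate(collect, classify, translators, output):
    """Run one pass of the claimed method with injected components."""
    scene = collect()                          # step 1: collect scene information
    for info, info_type in classify(scene):    # extract the info to be translated
        result = translators[info_type](info)  # step 2: type-matched translation
        output(result)                         # step 3: output the translation result
```

Wiring in a stub classifier that tags everything as text, plus a stub translator, reproduces the claimed control flow without any real OCR or speech components.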
Optionally, after the collecting of the scene information, the method includes:
uploading the scene information to a preset memory.
Optionally, the extracting of the information to be translated contained in the scene information includes:
detecting whether the scene information contains text information of a non-target language type; and
if the scene information contains text information of a non-target language type, extracting the text information.
Optionally, the determining of the information type of the information to be translated, selecting the translation mode corresponding to the information type to translate the information to be translated, and obtaining the translation result corresponding to the information to be translated includes:
when the information type of the information to be translated is determined to be the text type, selecting the translation mode corresponding to the text type to translate the text information, and obtaining the translation result corresponding to the text information.
Optionally, the extracting of the information to be translated contained in the scene information further includes:
detecting whether the scene information contains voice information of a non-target language type; and
if the scene information contains voice information of a non-target language type, extracting the voice information.
Optionally, the determining of the information type of the information to be translated, selecting the translation mode corresponding to the information type to translate the information to be translated, and obtaining the translation result corresponding to the information to be translated includes:
when the information type of the information to be translated is determined to be the voice type, selecting the translation mode corresponding to the voice type to translate the voice information, and obtaining the translation result corresponding to the voice information.
Optionally, the outputting of the translation result includes:
outputting the translation result in speech form through a bone conduction earphone.
Optionally, after the outputting of the translation result, the method includes:
when an explanation request command based on the translation result is received, obtaining the annotation of the translation result; and
outputting the annotation in speech form through the bone conduction earphone.
In addition, to achieve the above object, the present invention also provides smart glasses. The smart glasses include a memory, a processor, and a real-time translation program stored on the memory and runnable on the processor; when executed by the processor, the real-time translation program implements the steps of the real-time translation method described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium on which a real-time translation program is stored; when executed by a processor, the real-time translation program implements the steps of the real-time translation method described above.
In the present invention, scene information is collected, the information to be translated is extracted from the scene information, the information type of the information to be translated is determined, the translation mode corresponding to the information type is selected to translate the information to be translated, the translation result corresponding to the information to be translated is obtained, and the translation result is output. With the invention, the smart glasses can translate the information to be translated in the collected scene information in real time; the user merely needs to wear the smart glasses to obtain the translation result, which brings great convenience to activities such as study and social interaction.
Description of the drawings
Fig. 1 is a schematic structural diagram of the smart glasses in the hardware running environment according to an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the first embodiment of the real-time translation method of the present invention;
Fig. 3a and Fig. 3b are schematic structural diagrams of the smart glasses of the present invention.
The realization of the objects, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific implementation mode
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
Referring to Fig. 1, Fig. 1 is a schematic structural diagram of the smart glasses in the hardware running environment according to an embodiment of the present invention.
As shown in Fig. 1, the smart glasses may include a processor 1001 (such as a CPU), a network interface 1004, a user interface 1003, a memory 1005, a communication bus 1002, a microphone 1006, and a camera 1007. The communication bus 1002 realizes the connection and communication between these components. The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory, such as a magnetic disk storage. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
Optionally, the smart glasses may also include an RF (Radio Frequency) circuit, sensors, an audio circuit, a WiFi module, and so on. The sensors include, for example, an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor; as one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally three axes) and can detect the magnitude and direction of gravity when stationary. Of course, the smart glasses may also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which will not be described in detail here.
Those skilled in the art will understand that the smart glasses structure shown in Fig. 1 does not constitute a limitation on the smart glasses; more or fewer components than illustrated may be included, certain components may be combined, or the components may be arranged differently.
As shown in Fig. 1, the memory 1005, as a kind of computer storage medium, may include an operating system, a network communication module, a user interface module, and a real-time translation program.
In the smart glasses shown in Fig. 1, the network interface 1004 is mainly used for connecting to a background server and performing data communication with it; the user interface 1003 is mainly used for connecting to a client (user terminal) and performing data communication with it; and the processor 1001 may be used to call the real-time translation program stored in the memory 1005 and perform the following operations:
collecting scene information, and extracting the information to be translated contained in the scene information;
determining the information type of the information to be translated, selecting the translation mode corresponding to the information type to translate the information to be translated, and obtaining the translation result corresponding to the information to be translated; and
outputting the translation result.
Further, after the collecting of the scene information, the operations include:
uploading the scene information to a preset memory.
Further, the extracting of the information to be translated contained in the scene information includes:
detecting whether the scene information contains text information of a non-target language type; and
if the scene information contains text information of a non-target language type, extracting the text information.
Further, the determining of the information type of the information to be translated, selecting the translation mode corresponding to the information type to translate the information to be translated, and obtaining the translation result corresponding to the information to be translated includes:
when the information type of the information to be translated is determined to be the text type, selecting the translation mode corresponding to the text type to translate the text information, and obtaining the translation result corresponding to the text information.
Further, the extracting of the information to be translated contained in the scene information further includes:
detecting whether the scene information contains voice information of a non-target language type; and
if the scene information contains voice information of a non-target language type, extracting the voice information.
Further, the determining of the information type of the information to be translated, selecting the translation mode corresponding to the information type to translate the information to be translated, and obtaining the translation result corresponding to the information to be translated includes:
when the information type of the information to be translated is determined to be the voice type, selecting the translation mode corresponding to the voice type to translate the voice information, and obtaining the translation result corresponding to the voice information.
Further, the outputting of the translation result includes:
outputting the translation result in speech form through a bone conduction earphone.
Further, after the outputting of the translation result, the operations include:
when an explanation request command based on the translation result is received, obtaining the annotation of the translation result; and
outputting the annotation in speech form through the bone conduction earphone.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of the first embodiment of the real-time translation method of the present invention.
In one embodiment, the real-time translation method is applied to smart glasses and includes the following steps.
Step S10: collecting scene information, and extracting the information to be translated contained in the scene information.
In this embodiment, the real-time translation method is applied to smart glasses. Refer to Fig. 3a and Fig. 3b, which are schematic structural diagrams of the smart glasses of the present invention. As shown in Fig. 3a and Fig. 3b, the lens 02 is replaceable, and the spectacle frame 01 can be customized according to the user's preference (for example, by designing the frame based on 3D face recognition technology to improve fit and wearing comfort). The spectacle frame 01 is equipped with: a touch panel 11 for receiving the user's touch operations; a camera 12 for shooting images; a photosensitive sensor 13; a distance sensor 14; a microphone 15 for collecting sound; a loudspeaker 16; a switch 17 for turning the working state of the smart glasses on and off; a mainboard 18 on which a communication module (4G/WiFi/GPS, etc.), a speech recognition and conversion module, a text conversion module, a gyroscope, and so on are integrated; a SIM card holder 19 for installing a SIM card; a battery 20 that can support wired or wireless charging; and a flexible circuit board 21.
In this embodiment, after the user puts on the smart glasses and turns on the switch, the camera and the microphone start working: the camera collects image information and the microphone collects voice information. In this embodiment, the scene information can be image information, voice information, or image information plus voice information.
In one embodiment of the invention, the user can also set the scene mode of the smart glasses; for example, through a touch operation on the touch panel, the user sets the scene mode of the smart glasses to the communication mode. OCR (Optical Character Recognition) technology is used to recognize the pictures collected by the camera and extract the text information on each picture; OCR recognition essentially identifies the words or characters on a picture. For example, if the collected picture is a picture of a signpost, the route information on the picture can be recognized and obtained in text format (such as *.txt), and it is then detected whether this text information is of a non-target language type. For example, in this embodiment, Chinese is set in advance as the target language type. If the route information on the picture of the signpost is in another language (such as English), the text information is saved and the subsequent translation processing continues; if the route information on the picture of the signpost is in Chinese, i.e., the route information is of the target language type, the text information is not saved and no subsequent translation processing is performed. In this embodiment, in the communication mode, it is also detected whether the voice information collected by the microphone is of a non-target language type. For example, with Chinese set in advance as the target language type, if the collected voice information is English, the voice information is saved and the subsequent translation processing continues; if the collected voice information is Chinese, the voice information is not saved and no subsequent translation processing is performed. In another embodiment of the invention, the user turns on a bilingual intertranslation function through a touch operation on the touch panel, and the target language type of the bilingual intertranslation function is preset to Chinese. If the collected language information is not Chinese — for example, the collected voice information is English — it is determined that the bilingual intertranslation function is Chinese-English intertranslation: the collected English is translated into Chinese and output to the wearer through the earphone; subsequently, if the wearer needs to communicate with the English speaker, the collected Chinese (spoken by the wearer) is translated into English and output through the loudspeaker. If the collected voice information is Japanese, it is determined that the bilingual intertranslation function is Chinese-Japanese intertranslation: the collected Japanese is translated into Chinese and output to the wearer through the earphone; subsequently, if the wearer needs to communicate with the Japanese speaker, the collected Chinese (spoken by the wearer) is translated into Japanese and output through the loudspeaker. In this way, real-time communication between speakers of different languages is realized.
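The bilingual intertranslation routing above can be sketched as follows. This is a self-contained toy illustration: the language detection result is assumed to be given, and the translation call is a lookup-table stand-in, not the patent's speech modules.

```python
# A minimal, self-contained sketch of the bilingual intertranslation routing
# described above. The translation call is a toy lookup table, not a real
# translation service; the detected language is assumed to be supplied.

TOY_TRANSLATIONS = {
    ("en", "zh", "hello"): "你好",
    ("zh", "en", "你好"): "hello",
}

def toy_translate(text, src, dst):
    # Fall back to a tagged passthrough when the toy table has no entry.
    return TOY_TRANSLATIONS.get((src, dst, text), f"[{src}->{dst}] {text}")

class IntertranslationSession:
    """Routes utterances between the wearer and a foreign speaker."""

    def __init__(self, target_lang="zh"):
        self.target_lang = target_lang   # wearer's language, e.g. Chinese
        self.foreign_lang = None         # fixed by the first foreign utterance

    def route(self, text, detected_lang):
        """Return (translated_text, output_channel) for one utterance."""
        if detected_lang != self.target_lang:
            # Foreign speech: translate for the wearer, play via the earphone.
            self.foreign_lang = detected_lang
            return toy_translate(text, detected_lang, self.target_lang), "earphone"
        # Wearer's own speech: translate back into the session's foreign
        # language and play it aloud through the loudspeaker.
        dst = self.foreign_lang or self.target_lang
        return toy_translate(text, self.target_lang, dst), "loudspeaker"

session = IntertranslationSession(target_lang="zh")
print(session.route("hello", "en"))  # ('你好', 'earphone')
print(session.route("你好", "zh"))   # ('hello', 'loudspeaker')
```

The channel choice mirrors the text: foreign speech goes privately to the wearer, while the wearer's translated reply is played aloud for the other party.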
In another optional embodiment of the invention, the user sets the scene mode of the smart glasses to the learning mode through a touch operation on the touch panel. OCR (Optical Character Recognition) technology is used to recognize the pictures collected by the camera and extract the text information on each picture; OCR recognition essentially identifies the words or characters on a picture. In this embodiment, if text information is recognized on a picture, the text information is saved and the subsequent translation processing continues. Likewise, in the learning mode, the voice information collected by the microphone is saved and the subsequent translation processing continues. For example, in the learning mode, a picture collected by the camera contains the text "delicious"; this text information is extracted using OCR technology, and in the subsequent translation processing it is translated into the corresponding Chinese speech for "delicious" and output to the user in speech form. Or, in the learning mode, the microphone collects the voice information "delicious"; in the subsequent translation processing, this voice information is translated into the corresponding Chinese speech by way of speech-to-speech conversion and output to the user in speech form. In this embodiment, various sensors, such as a distance sensor and a three-axis gyroscope, can be built into the smart glasses. In the learning mode, the data collected by the distance sensor and/or the three-axis gyroscope are obtained. For example, from the data collected by the distance sensor, it is judged whether the user is too close to a book, i.e., whether the collected distance is less than a preset distance threshold (set, for example, to 20 cm); if the user is too close to the book, a prompt is issued reminding the user of this. From the data collected by the three-axis gyroscope, it is judged whether the user's sitting posture is correct; when the sitting posture is incorrect, the user is prompted to adjust it.
Step S20: determining the information type of the information to be translated, selecting the translation mode corresponding to the information type to translate the information to be translated, and obtaining the translation result corresponding to the information to be translated.
In this embodiment, if the information type of the information to be translated is the text type, the information to be translated is translated by way of text-to-speech conversion, and a translation result whose language type is the target language type (such as Chinese, configured according to actual needs) is obtained. If the information type of the information to be translated is the voice type, the information to be translated is translated by way of speech-to-speech conversion, and a translation result whose language type is the target language type (such as Chinese, configured according to actual needs) is obtained.
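Step S20's mode selection is a dispatch on the information type. The sketch below illustrates that dispatch only; the two translator functions are hypothetical placeholders standing in for the text-to-speech and speech-to-speech pipelines.

```python
# Sketch of step S20's dispatch: text-type input goes through a
# text-to-speech translation path, voice-type input through a
# speech-to-speech path. Both translator functions are placeholders.

def translate_text_to_speech(text, target_lang):
    return f"speech({target_lang}:{text})"   # placeholder pipeline

def translate_speech_to_speech(audio, target_lang):
    return f"speech({target_lang}:{audio})"  # placeholder pipeline

def step_s20(info, info_type, target_lang="zh"):
    """Select the translation mode that matches the information type."""
    if info_type == "text":
        return translate_text_to_speech(info, target_lang)
    if info_type == "voice":
        return translate_speech_to_speech(info, target_lang)
    raise ValueError(f"unsupported information type: {info_type}")
```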
Step S30: outputting the translation result.
In this embodiment, the translation result obtained in step S20 is speech, which is output in speech form through a bone conduction earphone so that the user can receive the translation result.
In this embodiment, when a user is traveling abroad or communicating with people from other countries, not being proficient in their language reduces the user's travel experience or creates communication barriers with others. In this embodiment, the user only needs to wear the smart glasses: unfamiliar text information the user sees, or incomprehensible voice information the user hears, is translated into voice information of the user's native language type and output to the user, improving the travel experience and reducing communication barriers. When a user is studying and encounters unfamiliar text or hears unfamiliar voice information, manual queries are needed to understand it, which makes learning inefficient. In this embodiment, the user wears the smart glasses, which can translate the text information in collected pictures into the corresponding speech (of the target language type) and output it to the user, or translate collected voice information into the corresponding speech (of the target language type) and output it to the user, improving the user's learning efficiency.
Further, in an embodiment of the real-time translation method of the present invention, after the collecting of the scene information, the method includes:
uploading the scene information to a preset memory.
During a user's travels, special circumstances may be encountered, such as forced shopping or robbery. If nothing is recorded at such a moment, no evidence is left and the user's own rights and interests are damaged; yet shooting with, for example, a mobile phone or a video camera may put the user in greater danger. In this embodiment, the smart glasses worn by the user are well concealed: scene information is collected by the smart glasses — that is, image information and voice information are collected by the camera and the microphone — and the collected scene information is uploaded to the preset memory, providing evidence for future rights protection and safeguarding the user's interests.
Another common situation during travel is that a marvelous sight appears only for a moment, and the user has no time to take a photo or shoot a video. In this embodiment, the smart glasses worn by the user collect continuously in real time and upload the collected scene information to the preset memory, thereby recording the beautiful scenery along the journey.
In this embodiment, the smart glasses can serve as a personal recording instrument that uploads the collected scene information to a preset memory. The memory can be built into the smart glasses or can be a cloud memory. For example, the smart glasses establish a network communication connection with a cloud server through the communication module (WiFi/4G) and upload the collected scene information to the memory of the cloud server.
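A recording buffer with the two storage options above (built-in memory or cloud) might be sketched as follows. The cloud uploader is an injected, hypothetical callable; the fallback-to-local behavior on network failure is an illustrative assumption, not stated in the text.

```python
# Sketch of uploading captured scene information to a preset memory:
# either a cloud store reached over the WiFi/4G module, or the glasses'
# built-in storage. The cloud uploader is a hypothetical callable, and
# falling back to local storage on failure is an assumption.

import time

class RecordBuffer:
    """Stores scene records in the cloud, with a local fallback."""

    def __init__(self, cloud_upload=None):
        self.cloud_upload = cloud_upload  # callable(record) or None
        self.local = []                   # built-in memory stand-in

    def store(self, frame, audio):
        record = {"t": time.time(), "frame": frame, "audio": audio}
        if self.cloud_upload is not None:
            try:
                self.cloud_upload(record)
                return "cloud"
            except OSError:
                pass  # network down: keep the evidence locally instead
        self.local.append(record)
        return "local"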
Further, in an embodiment of the real-time translation method of the present invention, the extracting of the information to be translated contained in the scene information includes:
detecting whether the scene information contains text information of a non-target language type; and
if the scene information contains text information of a non-target language type, extracting the text information.
In this embodiment, the target language type is the user's native language, such as Chinese, and the non-target language types are the written languages of other countries. For example, a user sees a signpost, but the route information on the signpost is not marked in Chinese, so the user cannot understand it. The user wears the smart glasses, the smart glasses collect a picture of the signpost, and the route information on the signpost is then recognized by OCR technology; since the route information is text information of a non-target language type, the text information is extracted. In this embodiment, if the text information recognized by OCR technology in a picture collected through the camera is of the target language type, there is no need to translate it for the user; the text information is then not extracted, and no subsequent translation processing is performed.
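The extraction filter above — keep OCR text only when it is not already in the target language — can be sketched with a crude heuristic. The CJK code-point check is a stand-in for real language detection, used only to make the example self-contained.

```python
# Sketch of the extraction step: text recognized by OCR is kept only when
# it is NOT in the target language, since target-language text needs no
# translation. The language check is a crude CJK code-point heuristic,
# a stand-in for a real language detector.

def looks_chinese(text):
    """Crude stand-in: True if any CJK ideograph appears in the text."""
    return any("\u4e00" <= ch <= "\u9fff" for ch in text)

def extract_to_translate(ocr_lines, target_lang="zh"):
    """Keep only lines that are not already in the target language."""
    is_target = looks_chinese if target_lang == "zh" else (lambda s: False)
    return [line for line in ocr_lines if not is_target(line)]

print(extract_to_translate(["Exit", "出口"]))  # ['Exit']
```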
In another optional embodiment of the invention, considering that children may carry out word learning by wearing the smart glasses, the scene mode of the smart glasses is set to the learning mode. In the learning mode, the smart glasses collect pictures through the camera and recognize the text information in each picture by OCR technology; regardless of whether the text information is of the target language type, the text information is translated into the corresponding speech (of the target language type, set by the user) and output.
Further, in an embodiment of the real-time translation method of the present invention, step S20 includes:
when the information type of the information to be translated is determined to be the text type, selecting the translation mode corresponding to the text type to translate the text information, and obtaining the translation result corresponding to the text information.
In this embodiment, when the information type of the information to be translated is the text type, the text-to-speech translation mode is selected to translate the text information and obtain the translation result, and the language type of the translation result is preset to the target language type (set by the user, such as Chinese). In this embodiment, the technology for translating text into speech is prior art and will not be described in detail here.
Further, in an embodiment of the real-time translation method of the present invention, the extracting of the information to be translated contained in the scene information further includes:
detecting whether the scene information contains voice information of a non-target language type; and
if the scene information contains voice information of a non-target language type, extracting the voice information.
In this embodiment, the target language type is the user's native language, such as Chinese, and the non-target language types are the languages of other countries. For example, the microphone collects a segment of voice information in English; the voice information is then extracted, and the subsequent translation processing continues. In this embodiment, if the smart glasses collect a segment of voice information through the microphone and the voice information is of the target language type, there is no need to translate it for the user; the voice information is then not extracted, and no subsequent translation processing is performed.
In another optional embodiment of the invention, considering that children may carry out word learning by wearing the smart glasses, the scene mode of the smart glasses is set to the learning mode. In the learning mode, the smart glasses collect voice information through the microphone; regardless of whether the voice information is of the target language type, the voice information is translated into the corresponding speech (of the target language type, set by the user) and output.
Further, in an embodiment of the real-time translation method of the present invention, step S20 further includes:
when the information type of the information to be translated is determined to be the voice type, selecting the translation mode corresponding to the voice type to translate the voice information, and obtaining the translation result corresponding to the voice information.
In this embodiment, when the information type of the information to be translated is the voice type, the speech-to-speech translation mode is selected to translate the voice information and obtain the translation result, and the language type of the translation result is preset to the target language type (set by the user, such as Chinese). In this embodiment, the technology for translating speech into speech is prior art and will not be described in detail here.
Further, in an embodiment of the real-time translation method of the present invention, step S30 includes:
outputting the translation result in speech form through a bone conduction earphone.
In this embodiment, the translation result is output in speech form through a bone conduction earphone so that the user can receive it immediately, improving the user experience.
Further, in an embodiment of the real-time translation method of the present invention, after step S30 the method includes:
when an explanation request command based on the translation result is received, obtaining the annotation of the translation result; and
outputting the annotation in speech form through the bone conduction earphone.
For an adult user, even after some foreign text has been translated into Chinese, the user may still not know its meaning; for a child user, the translation result output by the smart glasses may be entirely unfamiliar. In this embodiment, after a translation result XX is output, the explanation request command is triggered either when the microphone collects an explanation request such as "What does XX mean?" or "Output the annotation of XX" (specifically, the explanation request command is configured according to user needs and is not restricted here), or when a specific operation of the user is received, such as two successive clicks on the touch panel (specifically, the specific operation is configured according to actual conditions and is not restricted here). The annotation of XX (i.e., the translation result) is then obtained and output in speech form through the bone conduction earphone. In this embodiment, a comparison table of words and phrases with their corresponding annotations can be built into the memory of the smart glasses; upon receiving the explanation request command for a translation result, the table is searched for the translation result, and if it exists, its corresponding annotation is obtained and output in speech form through the bone conduction earphone. In another optional embodiment of the invention, the smart glasses can also establish a communication connection with a server through the network module (WiFi/4G); upon receiving the explanation request command for a translation result, the command is sent to the server, and after the server looks up the annotation of the translation result, the annotation is sent back to the smart glasses and output in speech form through the bone conduction earphone.
In this embodiment, when the user still does not understand the output result, a detailed annotation of the translation result can be obtained by triggering the explanation request command, which improves the user's understanding of the translation result.
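The two annotation sources described in this embodiment, an on-device lookup table with a server fallback over the network module, can be sketched as follows. The table contents and the server function are assumptions for illustration only; the patent does not specify any API.

```python
from typing import Optional

# Sketch of annotation lookup: consult the lookup table built into the
# glasses' memory first, then fall back to the server. LOCAL_ANNOTATIONS and
# fetch_annotation_from_server are hypothetical placeholders.

LOCAL_ANNOTATIONS = {
    "你好": "A common Chinese greeting, equivalent to 'hello'.",
}

def fetch_annotation_from_server(term: str) -> Optional[str]:
    """Placeholder for the WiFi/4G round-trip to the server; offline here."""
    return None  # a real device would send the explanation request command

def get_annotation(translation_result: str) -> Optional[str]:
    """Return the annotation for a translation result, preferring local data."""
    annotation = LOCAL_ANNOTATIONS.get(translation_result)
    if annotation is not None:
        return annotation
    return fetch_annotation_from_server(translation_result)
```

The returned annotation would then be synthesized and played through the bone conduction earphone; a `None` result could trigger a spoken "no annotation found" prompt.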
In addition, an embodiment of the present invention further proposes a computer-readable storage medium on which a real-time translation program is stored; when executed by a processor, the real-time translation program implements the steps of the real-time translation method described above.
The specific embodiments of the computer-readable storage medium of the present invention are substantially the same as the embodiments of the real-time translation method above and will not be repeated here.
It should be noted that, as used herein, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or system. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or system that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as ROM/RAM, magnetic disk, or optical disc), including several instructions that cause a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A real-time translation method, characterized in that it is applied to smart glasses, the real-time translation method comprising:
acquiring scene information, and extracting the information to be translated contained in the scene information;
determining the information type of the information to be translated, selecting the translation method corresponding to the information type to translate the information to be translated, and obtaining the translation result corresponding to the information to be translated;
outputting the translation result.
2. The real-time translation method according to claim 1, characterized in that, after the acquiring of scene information, the method comprises:
uploading the scene information to a preset memory.
3. The real-time translation method according to claim 1, characterized in that the extracting of the information to be translated contained in the scene information comprises:
detecting whether the scene information contains text information of a non-target language type;
if the scene information contains text information of a non-target language type, extracting the text information.
4. The real-time translation method according to claim 3, characterized in that the determining of the information type of the information to be translated, selecting the translation method corresponding to the information type to translate the information to be translated, and obtaining the translation result corresponding to the information to be translated comprises:
when the information type of the information to be translated is determined to be the text type, selecting the translation method corresponding to the text type to translate the text information, and obtaining the translation result corresponding to the text information.
5. The real-time translation method according to claim 1, characterized in that the extracting of the information to be translated contained in the scene information further comprises:
detecting whether the scene information contains voice information of a non-target language type;
if the scene information contains voice information of a non-target language type, extracting the voice information.
6. The real-time translation method according to claim 5, characterized in that the determining of the information type of the information to be translated, selecting the translation method corresponding to the information type to translate the information to be translated, and obtaining the translation result corresponding to the information to be translated comprises:
when the information type of the information to be translated is determined to be the voice type, selecting the translation method corresponding to the voice type to translate the voice information, and obtaining the translation result corresponding to the voice information.
7. The real-time translation method according to claim 1, characterized in that the outputting of the translation result comprises:
outputting the translation result in speech form through a bone conduction earphone.
8. The real-time translation method according to claim 1, characterized in that, after the outputting of the translation result, the method comprises:
when an explanation request command based on the translation result is received, obtaining the annotation of the translation result;
outputting the annotation in speech form through the bone conduction earphone.
9. Smart glasses, characterized in that the smart glasses comprise: a memory, a processor, and a real-time translation program that is stored in the memory and executable on the processor, wherein when the real-time translation program is executed by the processor, the steps of the real-time translation method according to any one of claims 1 to 8 are implemented.
10. A computer-readable storage medium, characterized in that a real-time translation program is stored on the computer-readable storage medium, wherein when the real-time translation program is executed by a processor, the steps of the real-time translation method according to any one of claims 1 to 8 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810184999.8A CN108415905A (en) | 2018-03-06 | 2018-03-06 | Real time translating method, intelligent glasses and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810184999.8A CN108415905A (en) | 2018-03-06 | 2018-03-06 | Real time translating method, intelligent glasses and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108415905A true CN108415905A (en) | 2018-08-17 |
Family
ID=63130101
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810184999.8A Pending CN108415905A (en) | 2018-03-06 | 2018-03-06 | Real time translating method, intelligent glasses and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108415905A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109344414A (en) * | 2018-11-02 | 2019-02-15 | Yutou Technology (Hangzhou) Co., Ltd. | Simultaneous interpretation method, system and equipment and controller and medium |
CN109657252A (en) * | 2018-12-25 | 2019-04-19 | Beijing Microlive Vision Technology Co., Ltd. | Information processing method, device, electronic equipment and computer readable storage medium |
CN109977866A (en) * | 2019-03-25 | 2019-07-05 | Lenovo (Beijing) Ltd. | Content translation method and device, computer system and computer readable storage medium |
CN110133872A (en) * | 2019-05-24 | 2019-08-16 | General Hospital of Eastern Theater Command, PLA | Smart glasses capable of multilingual mutual translation |
CN111160333A (en) * | 2019-12-29 | 2020-05-15 | GoerTek Technology Co., Ltd. | AR glasses, text translation method and device thereof, and computer-readable storage medium |
CN111814492A (en) * | 2020-06-15 | 2020-10-23 | Shanghai Transsion Information Technology Co., Ltd. | Translation method, terminal and computer storage medium |
EP3876025A1 * | 2020-03-02 | 2021-09-08 | Siemens Aktiengesellschaft | Obstacle detection and collision warning assembly |
CN115797815A (en) * | 2021-09-08 | 2023-03-14 | Honor Device Co., Ltd. | AR translation processing method and electronic device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104750678A (en) * | 2015-04-19 | 2015-07-01 | Wang Xueqing | Image text recognition and translation glasses and method |
CN204537126U (en) * | 2015-04-18 | 2015-08-05 | Wang Xueqing | Image text recognition and translation glasses |
CN204679734U (en) * | 2015-06-15 | 2015-09-30 | Xu Shitong | Intelligent translation glasses |
CN107391500A (en) * | 2017-08-21 | 2017-11-24 | Alibaba Group Holding Ltd. | Text translation method, device and equipment |
2018
- 2018-03-06 CN CN201810184999.8A patent/CN108415905A/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN204537126U (en) * | 2015-04-18 | 2015-08-05 | Wang Xueqing | Image text recognition and translation glasses |
CN104750678A (en) * | 2015-04-19 | 2015-07-01 | Wang Xueqing | Image text recognition and translation glasses and method |
CN204679734U (en) * | 2015-06-15 | 2015-09-30 | Xu Shitong | Intelligent translation glasses |
CN107391500A (en) * | 2017-08-21 | 2017-11-24 | Alibaba Group Holding Ltd. | Text translation method, device and equipment |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109344414A (en) * | 2018-11-02 | 2019-02-15 | Yutou Technology (Hangzhou) Co., Ltd. | Simultaneous interpretation method, system and equipment and controller and medium |
CN109657252A (en) * | 2018-12-25 | 2019-04-19 | Beijing Microlive Vision Technology Co., Ltd. | Information processing method, device, electronic equipment and computer readable storage medium |
CN109977866A (en) * | 2019-03-25 | 2019-07-05 | Lenovo (Beijing) Ltd. | Content translation method and device, computer system and computer readable storage medium |
CN109977866B (en) * | 2019-03-25 | 2021-04-13 | Lenovo (Beijing) Ltd. | Content translation method and device, computer system and computer readable storage medium |
CN110133872A (en) * | 2019-05-24 | 2019-08-16 | General Hospital of Eastern Theater Command, PLA | Smart glasses capable of multilingual mutual translation |
CN111160333A (en) * | 2019-12-29 | 2020-05-15 | GoerTek Technology Co., Ltd. | AR glasses, text translation method and device thereof, and computer-readable storage medium |
EP3876025A1 * | 2020-03-02 | 2021-09-08 | Siemens Aktiengesellschaft | Obstacle detection and collision warning assembly |
WO2021175641A1 * | 2020-03-02 | 2021-09-10 | Siemens Aktiengesellschaft | Arrangement for obstacle detection and collision warning |
CN111814492A (en) * | 2020-06-15 | 2020-10-23 | Shanghai Transsion Information Technology Co., Ltd. | Translation method, terminal and computer storage medium |
CN115797815A (en) * | 2021-09-08 | 2023-03-14 | Honor Device Co., Ltd. | AR translation processing method and electronic device |
CN115797815B (en) * | 2021-09-08 | 2023-12-15 | Honor Device Co., Ltd. | AR translation processing method and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108415905A (en) | Real time translating method, intelligent glasses and computer readable storage medium | |
US10592763B2 (en) | Apparatus and method for using background change to determine context | |
US10341544B2 (en) | Determining a matching score between users of wearable camera systems | |
CN104880835A (en) | Intelligent glasses | |
CN104983511A (en) | Voice-assisted smart glasses system for the totally blind visually impaired |
CN107924395A (en) | Personal translator | |
WO2015081694A1 (en) | Smart glasses and method of controlling same | |
US20210287308A1 (en) | Using a wearable apparatus in social events | |
US10943117B2 (en) | Translation to braille | |
CN111985252A (en) | Dialogue translation method and device, storage medium and electronic equipment | |
KR20160011302A (en) | System and method for digital image processing by wearable glass device | |
CN111046223A (en) | Voice assisting method, terminal, server and system for visually impaired | |
US11493959B2 (en) | Wearable apparatus and methods for providing transcription and/or summary | |
CN111524518B (en) | Augmented reality processing method and device, storage medium and electronic equipment | |
CN114462539A (en) | Training method of content classification model, and content classification method and device | |
KR102224985B1 (en) | System and method for recognition acquaintance by wearable glass device | |
WO2021116760A1 (en) | Wearable systems and methods for selectively reading text | |
EP3882894B1 (en) | Seeing aid for a visually impaired individual | |
CN118414822A (en) | Portable information terminal and information processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20180817 |