CN105657174A - Voice converting method and terminal - Google Patents

Voice converting method and terminal

Info

Publication number
CN105657174A
CN105657174A (application CN201610054063.4A)
Authority
CN
China
Prior art keywords
text message
voice messaging
current state
earphone
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610054063.4A
Other languages
Chinese (zh)
Inventor
张涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nubia Technology Co Ltd
Original Assignee
Nubia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nubia Technology Co Ltd filed Critical Nubia Technology Co Ltd
Priority to CN201610054063.4A
Publication of CN105657174A
Legal status: Pending


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72433 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for voice messaging, e.g. dictaphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M 1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M 1/72436 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for text messaging, e.g. short messaging services [SMS] or e-mails
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Environmental & Geological Engineering (AREA)
  • Telephone Function (AREA)

Abstract

An embodiment of the invention discloses a voice conversion method with which a user's important information can be handled in time, thereby greatly improving the user experience. The method comprises the following steps: receiving first text information; converting the first text information into first voice information; obtaining a current state; and, when the current state is an earphone listening state or a receiver listening state, playing the first voice information. An embodiment of the invention also discloses a terminal.

Description

Voice conversion method and terminal
Technical field
The present invention relates to the field of communication technology, and in particular to a voice conversion method and a terminal.
Background art
In recent years, with the rapid development of communication technology and terminals, the number of terminal users, particularly mobile phone users, has grown continuously. The mobile phone has become an indispensable "little assistant" in people's daily life and work: it not only strengthens interpersonal communication, but also brings much convenience to everyday necessities such as clothing, food, housing and transportation. At the same time, people's functional requirements for mobile phones are becoming ever higher.
Nowadays the mobile phone is something users never part with; whether driving or relaxing, the phone is always at hand. In the course of driving or entertainment, however, the following situations may occur. For example, while the user is driving, the phone receives an important message, but the user cannot stare at the screen for a long time and therefore cannot handle the message in time. Or, while the user is listening to music, the phone receives an important message, but the prior art provides no timely and effective way to inform the user, so the user does not even know that an important message has arrived and cannot view or handle it. In both cases the user's important information fails to be handled in time, which degrades the user experience.
Summary of the invention
To solve the above technical problem, embodiments of the present invention are expected to provide a voice conversion method and a terminal, so that the user's important information is handled in time, thereby greatly improving the user experience.
The technical solution of the present invention is achieved as follows:
In a first aspect, a voice conversion method is provided, the method comprising:
receiving first text information;
converting the first text information into first voice information;
obtaining a current state; and
when the current state is an earphone listening state or a receiver listening state, playing the first voice information.
Optionally, converting the first text information into the first voice information comprises:
when the first text information is text information of an application or a contact in a preset application list, converting the first text information into the first voice information.
Optionally, obtaining the current state comprises:
detecting whether an earphone is inserted into the earphone jack; and
when an earphone is inserted into the earphone jack, determining that the current state is the earphone listening state.
Optionally, obtaining the current state further comprises:
detecting the distance to an obstruction;
when the distance is less than a preset distance, detecting whether the obstruction is a human ear; and
when the obstruction is a human ear, determining that the current state is the receiver listening state.
Optionally, after playing the first voice information, the method further comprises:
deleting the first voice information.
In a second aspect, a terminal is provided, the terminal comprising:
a receiving module, configured to receive first text information;
a conversion module, configured to convert the first text information into first voice information;
an obtaining module, configured to obtain a current state; and
a playing module, configured to play the first voice information when the current state is an earphone listening state or a receiver listening state.
Optionally, the conversion module is specifically configured to:
convert the first text information into the first voice information when the first text information is text information of an application or a contact in a preset application list.
Optionally, the obtaining module is specifically configured to:
detect whether an earphone is inserted into the earphone jack; and
when an earphone is inserted into the earphone jack, determine that the current state is the earphone listening state.
Optionally, the obtaining module is specifically configured to:
detect the distance to an obstruction;
when the distance is less than a preset distance, detect whether the obstruction is a human ear; and
when the obstruction is a human ear, determine that the current state is the receiver listening state.
Optionally, the terminal further comprises:
a deleting module, configured to delete the first voice information.
The embodiments of the present invention thus provide a voice conversion method and a terminal: first text information is received; the first text information is converted into first voice information; a current state is obtained; and, if the current state is an earphone listening state or a receiver listening state, the first voice information is played. In this way, whether to play the voice information converted from the text information is decided according to the current state, which prevents the situation in which the user cannot stare at the phone screen for a long time and therefore cannot handle important incoming information in time; by changing the existing chat mode, text information is converted into voice and announced aloud. The user's important information is thus handled in time, which greatly improves the user experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal implementing embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a flowchart of a voice conversion method provided by an embodiment of the present invention;
Fig. 4 is a flowchart of another voice conversion method provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a terminal provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of another terminal provided by an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of yet another terminal provided by an embodiment of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention.
It should be understood that the specific embodiments described herein are only intended to explain the present invention, not to limit it.
A mobile terminal implementing embodiments of the present invention will now be described with reference to the drawings. In the following description, suffixes such as "module", "part" or "unit" used to denote elements are used only to facilitate the description of the present invention and have no specific meaning in themselves; therefore, "module" and "part" may be used interchangeably.
Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDA), tablet computers (PAD), portable media players (PMP) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal. However, those skilled in the art will understand that, except for elements specifically intended for mobile use, the configurations according to the embodiments of the present invention can also be applied to fixed terminals.
Fig. 1 is a schematic diagram of the hardware structure of an optional mobile terminal implementing embodiments of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, a sensing unit 140, an output unit 150, a memory 160, a controller 180, a power supply unit 190 and the like. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113 and a short-range communication module 114.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-related information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast signal may exist in various forms, for example in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H). The broadcast receiving module 111 may receive broadcasts using various types of broadcast systems; in particular, it may receive digital broadcasts using digital broadcasting systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the MediaFLO forward link media data broadcasting system and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be configured to suit the above digital broadcasting systems as well as other broadcast systems that provide broadcast signals. Broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access of the mobile terminal and may be internally or externally coupled to the terminal. The wireless Internet access technologies involved in this module may include wireless LAN (WLAN, Wi-Fi), wireless broadband (WiBro), worldwide interoperability for microwave access (WiMAX), high speed downlink packet access (HSDPA) and the like.
The short-range communication module 114 is a module for supporting short-range communication. Examples of short-range communication technologies include Bluetooth™, radio frequency identification (RFID), Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™ and the like.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration and direction of movement of the mobile terminal 100, and the like, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type phone, the sensing unit 140 may sense whether the slide-type phone is opened or closed. The sensing unit 140 may include a proximity sensor 141, which will be described below in connection with the touch screen.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 may display a user interface (UI) or a graphical user interface (GUI) related to the call or to other communication (such as text messaging or multimedia file downloading). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured and/or received images, a UI or GUI showing video or images and related functions, and the like.
Meanwhile, when the display unit 151 and a touch pad are superposed on each other in layers to form a touch screen, the display unit 151 may serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display and the like. Some of these displays may be configured to be transparent so that the user can see through them from the outside; these may be called transparent displays, and a typical transparent display is, for example, a transparent organic light-emitting diode (TOLED) display. According to the particular desired embodiment, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen may be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 may, when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a speech recognition mode, a broadcast receiving mode or the like, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. The audio output module 152 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal receiving sound, a message receiving sound, etc.). The audio output module 152 may include a speaker, a buzzer and the like.
The memory 160 may store software programs for the processing and control operations performed by the controller 180, or may temporarily store data (e.g., a phone book, messages, still images, video, etc.) that has been output or will be output. The memory 160 may also store data about the various vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk and the like. The mobile terminal 100 may also cooperate, over a network connection, with a network storage device that performs the storage function of the memory 160.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls and the like. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or separately from it. The controller 180 may perform pattern recognition processing to recognize handwriting input or picture drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power under the control of the controller 180 and provides the appropriate power required to operate each element and component.
The various embodiments described herein may be implemented in a computer-readable medium using, for example, computer software, hardware or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases such embodiments may be implemented in the controller 180. For a software implementation, embodiments such as processes or functions may be implemented with separate software modules that each perform at least one function or operation. The software code may be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal has been described in terms of its functions. In the following, for the sake of brevity, a slide-type mobile terminal among the various types of mobile terminals, such as folder-type, bar-type, swing-type and slide-type mobile terminals, is described as an example. Accordingly, the present invention can be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
The mobile terminal 100 shown in Fig. 1 may be configured to operate with wired and wireless communication systems and satellite-based communication systems that transmit data via frames or packets.
A communication system in which a mobile terminal according to the present invention can operate will now be described with reference to Fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), the universal mobile telecommunications system (UMTS) (in particular, long term evolution (LTE)), the global system for mobile communications (GSM), and the like. As a non-limiting example, the following description relates to a CDMA communication system, but such teachings apply equally to other types of systems.
Referring to Fig. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275 and a mobile switching center (MSC) 280. The MSC 280 is configured to interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul links. The backhaul links may be constructed according to any of several known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL or xDSL. It will be understood that a system as shown in Fig. 2 may include a plurality of BSCs 275.
Each BS 270 may serve one or more sectors (or regions), each sector being covered by an omnidirectional antenna or by an antenna pointing in a specific direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support a plurality of frequency assignments, each frequency assignment having a specific spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. The BS 270 may also be referred to as a base transceiver subsystem (BTS) or by another equivalent term. In this case, the term "base station" may be used to broadly refer to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site"; alternatively, the individual sectors of a particular BS 270 may be referred to as a plurality of cell sites.
As shown in Fig. 2, a broadcast transmitter (BT) 295 transmits broadcast signals to the mobile terminals 100 operating within the system. The broadcast receiving module 111 shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signals transmitted by the BT 295. Fig. 2 also shows several global positioning system (GPS) satellites 300. The satellites 300 help locate at least one of the plurality of mobile terminals 100.
In Fig. 2, a plurality of satellites 300 are depicted; it will be understood that useful positioning information may be obtained with any number of satellites.
As a typical operation of the wireless communication system, the BS 270 receives reverse link signals from various mobile terminals 100. The mobile terminals 100 typically engage in calls, messaging and other types of communication. Each reverse link signal received by a given base station 270 is processed within that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including the coordination of soft handoff procedures between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC interfaces with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward link signals to the mobile terminals 100.
Based on the above mobile terminal hardware structure and communication system, the various embodiments of the present invention are proposed.
Embodiment one
An embodiment of the present invention provides a voice conversion method applied to a terminal, where the terminal includes a mobile phone, a notebook computer, a tablet computer or even an in-vehicle computer. As shown in Fig. 3, the method includes:
Step 301: receive first text information.
Here, the first text information may include information displayed in the form of characters, letters or graphics, for example short messages, e-mails, multimedia messages and application information. The application information may be chat messages and system messages of chat software that is mainly based on text, such as WeChat, SMS, Fetion and QQ.
Step 302: convert the first text information into first voice information.
Specifically, it is first judged whether the first text information is text information of an application or a contact in a preset application list; when it is, the first text information is converted into the first voice information.
Here, to judge whether the first text information is text information of an application in the preset application list, the application to which the first text information belongs is determined first, and that application is then compared one by one with the applications in the preset application list. If the application is a preset application, voice conversion is performed; otherwise, no voice conversion is performed. It should be noted that the method of judging whether the first text information is text information of a contact in the preset list is the same, so this embodiment does not describe it again. A minimal sketch of this filtering step is given below.
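The following sketch shows the preset-list check in Java. The class and field names (`MessageFilter`, `presetApps`, `presetContacts`) are assumptions made for illustration; the patent does not prescribe an API.

```java
import java.util.Set;

// Illustrative sketch of the preset-list check in Step 302.
final class MessageFilter {
    private final Set<String> presetApps;      // e.g. package names of chat applications
    private final Set<String> presetContacts;  // e.g. phone numbers or contact IDs

    MessageFilter(Set<String> presetApps, Set<String> presetContacts) {
        this.presetApps = presetApps;
        this.presetContacts = presetContacts;
    }

    /** Returns true when the first text information should be converted to voice. */
    boolean shouldConvert(String sourceApp, String sender) {
        return presetApps.contains(sourceApp) || presetContacts.contains(sender);
    }
}
```

For example, `new MessageFilter(Set.of("com.tencent.mm"), Set.of("alice")).shouldConvert("com.tencent.mm", "bob")` would return true, because the source application is in the preset list even though the sender is not.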
At present, the commonly used and mature voice conversion technology is text-to-speech (TTS) technology. TTS is part of human-machine interaction: it lets the machine speak. Drawing on work in linguistics and psychology and supported by a built-in chip and a neural-network design, it intelligently converts text into a natural speech stream. TTS converts text in real time, with conversion times measured in seconds. Under the action of its intelligent prosody controller, the prosody of the output speech is smooth, so the listener perceives it as natural, without the coldness and stiffness of machine-generated speech. The TTS technology covers the national-standard (GB) level-1 and level-2 Chinese characters, provides an English interface, can automatically distinguish Chinese from English, and supports mixed Chinese-English reading. All voices use standard Mandarin recorded from real speakers, achieving fast speech synthesis of 120-150 Chinese characters per minute at a reading rate of 3-4 characters per second, so the user hears clear, pleasant speech with coherent and smooth intonation.
TTS is one application of speech synthesis: it converts files stored in the terminal, such as help files or web pages, into natural speech output. TTS not only helps visually impaired people read information on the terminal, but also improves the readability of text documents. Current TTS applications include voice-driven mail and voice-response systems, and TTS is often used together with speech recognition programs.
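The patent does not tie Step 302 to a particular TTS engine. As one concrete possibility, a minimal sketch using Android's `TextToSpeech` API (an assumption of this example, not something the patent mandates) could synthesize the first text information into a voice file like this:

```java
import android.content.Context;
import android.speech.tts.TextToSpeech;
import java.io.File;
import java.util.Locale;

// Sketch only: converts the first text information into a playable voice file.
// Error handling is deliberately minimal.
final class TextToVoiceConverter {
    private TextToSpeech tts;

    TextToVoiceConverter(Context context, Runnable onReady) {
        tts = new TextToSpeech(context, status -> {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.CHINA);   // assumption: the text is Chinese
                onReady.run();
            }
        });
    }

    /** Synthesizes the text into outFile; utteranceId identifies the request. */
    void convert(String firstTextInfo, File outFile, String utteranceId) {
        tts.synthesizeToFile(firstTextInfo, null, outFile, utteranceId);
    }

    void release() {
        tts.shutdown();
    }
}
```

A remote TTS service platform, as mentioned later in Step 304, could replace `synthesizeToFile` with a network request; the surrounding flow would stay the same.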
Step 303: obtain the current state.
Step 303 can be divided into three cases.
Case 1: detect whether an earphone is inserted into the earphone jack; when an earphone is inserted into the jack, determine that the current state is the earphone listening state. Specifically, earphone insertion can be detected from the resistance between the contact pieces of the left and right audio channels of the terminal's jack. When the detected resistance is greater than or equal to a first threshold and less than or equal to a second threshold, it is determined that an earphone plug has been inserted into the jack. Detecting the resistance between the left and right channels of the jack avoids the problem that an earphone-like object (for example, a dust plug) inserted into the jack, or into a combined socket, is misidentified as an earphone; in particular, for a socket shared between an earphone and an optical fibre, it can effectively distinguish an earphone plug from an optical-fibre plug. Whether the current state is the earphone listening state is judged in this way. Preferably, the first threshold is 30 ohms and the second threshold is 62 ohms. The earphone-detection method in this embodiment is merely illustrative. It should be noted that when the terminal uses a Bluetooth earphone, the current state can be switched to the earphone listening state automatically. A sketch of this decision logic follows.
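A minimal sketch of the threshold check described in Case 1. `readJackResistanceOhms()` is a hypothetical hardware-layer call introduced only for illustration; the patent describes the measurement, not an API for it.

```java
// Sketch of the earphone-state decision in Case 1.
final class EarphoneStateDetector {
    static final double FIRST_THRESHOLD_OHMS = 30.0;   // "preferably 30 ohms"
    static final double SECOND_THRESHOLD_OHMS = 62.0;  // "preferably 62 ohms"

    /** Hypothetical abstraction over the jack's left/right-channel contact measurement. */
    interface JackHardware {
        double readJackResistanceOhms();
    }

    private final JackHardware hardware;

    EarphoneStateDetector(JackHardware hardware) {
        this.hardware = hardware;
    }

    /** True when the measured resistance indicates a real earphone plug rather than, say, a dust plug. */
    boolean isEarphoneInserted() {
        double r = hardware.readJackResistanceOhms();
        return r >= FIRST_THRESHOLD_OHMS && r <= SECOND_THRESHOLD_OHMS;
    }
}
```

On a stock Android terminal a comparable signal is available without hardware access, for example via the `android.intent.action.HEADSET_PLUG` broadcast; a Bluetooth earphone would be treated as the earphone listening state directly, as noted above.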
Case 2: detect the distance to an obstruction; when the distance is less than a preset distance, detect whether the obstruction is a human ear; when the obstruction is a human ear, determine that the current state is the receiver listening state. Specifically, this detection can use a distance (proximity) sensor together with a camera. When the terminal approaches an obstruction, the distance sensor measures the distance between them; when the distance is less than the preset distance, the camera is turned on to take a picture, and image recognition is performed on the captured image. If the recognition result shows that the approaching obstruction is a human ear rather than something else, it is determined that the current state is the receiver listening state. The distance sensor, also called a displacement sensor, is usually placed on either side of the earpiece or in the earpiece recess, which makes its work easier. A distance sensor may measure distance using the time-of-flight principle: an extremely short light pulse is emitted, the time from emission until the pulse is reflected back from the obstructing object is measured, and the distance is calculated from that interval. In general, a distance sensor detects some physical quantity of the object that varies with distance and converts that variation into a distance, i.e., it measures the displacement from the sensor to the object; depending on the sensing element used, such sensors are divided into optical displacement sensors, linear proximity sensors, ultrasonic displacement sensors and so on. It should be noted that when the detected distance is less than the preset distance, the camera is turned on to photograph the nearby obstruction, and when the human ear appears more and more clearly in the recognition results, the obstruction is considered to be an approaching human ear. Whether the current state is the receiver listening state is judged in this way; a sketch follows.
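As one way this could be wired up on Android (an assumption of this sketch; the patent does not prescribe a platform), the distance reading can come from the proximity sensor via `SensorManager`, while the camera-based ear check is left as a hypothetical `looksLikeHumanEar()` callback, since the patent describes image recognition but no concrete recognizer.

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Sketch of Case 2: proximity sensor plus a (hypothetical) ear-recognition step.
final class ReceiverStateDetector implements SensorEventListener {
    interface EarRecognizer {
        /** Hypothetical camera-based check; not an Android API. */
        boolean looksLikeHumanEar();
    }

    private static final float PRESET_DISTANCE_CM = 3.0f;  // assumed preset distance
    private final SensorManager sensorManager;
    private final EarRecognizer earRecognizer;
    private volatile boolean receiverListening;

    ReceiverStateDetector(Context context, EarRecognizer earRecognizer) {
        this.sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        this.earRecognizer = earRecognizer;
    }

    void start() {
        Sensor proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
        sensorManager.registerListener(this, proximity, SensorManager.SENSOR_DELAY_NORMAL);
    }

    void stop() {
        sensorManager.unregisterListener(this);
    }

    boolean isReceiverListening() {
        return receiverListening;
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float distanceCm = event.values[0];  // proximity sensors report distance (often binned)
        receiverListening = distanceCm < PRESET_DISTANCE_CM && earRecognizer.looksLikeHumanEar();
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed for this sketch */ }
}
```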
Case 3: when neither of the above two conditions is met, the current state is considered to be some other state, neither the earphone listening state nor the receiver listening state, for example a video state, i.e., a state in which the terminal has been opened to watch video.
Step 304: when the current state is the earphone listening state or the receiver listening state, play the first voice information.
Specifically, after the current state is determined to be the earphone listening state or the receiver listening state, the application client in the preset application list calls the local TTS tool interface, or sends a text-to-speech request to a remote TTS service platform, so that the first text information is converted into the first voice information; the voice playing tool then completes playback through a voice playing plug-in.
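A minimal playback sketch using Android's `MediaPlayer` (again an assumption; the patent only says playback is completed through a voice playing plug-in). The optional deletion after playback, discussed below in this embodiment, appears here as a completion callback.

```java
import android.media.MediaPlayer;
import java.io.File;
import java.io.IOException;

// Sketch of Step 304: play the synthesized first voice information and,
// optionally, delete the file afterwards.
final class VoicePlayer {
    void play(File voiceFile, boolean deleteAfterPlayback) throws IOException {
        MediaPlayer player = new MediaPlayer();
        player.setDataSource(voiceFile.getAbsolutePath());
        player.setOnCompletionListener(mp -> {
            mp.release();
            if (deleteAfterPlayback) {
                voiceFile.delete();  // corresponds to the "delete the first voice information" step
            }
        });
        player.prepare();
        player.start();
    }
}
```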
Step 304 can be divided into two cases.
Case 1: the current state is the earphone listening state. Once an earphone is detected as inserted, a text-to-speech prompt is issued. If the terminal is currently in an Internet call application or in the audio/video part of a social chat application, the text-to-speech service is not started, or it is started but the voice is not yet played; otherwise, playback of the voice file is completed.
Optionally, the method also includes: based on a stop-text-to-speech instruction input by the user, the terminal stops detecting whether an earphone is inserted into the earphone jack, and then stops converting the first text into the first voice and stops playing the first voice.
Case 2: the current state is the receiver listening state. Based on the preset distance, combined with the recognition result of the captured image, the current state is determined to be the receiver listening state. If the terminal is currently in an Internet call application or in the audio/video part of a social chat application, the text-to-speech service is not started, or it is started but the voice is not yet played; otherwise, playback of the voice file is completed.
Optionally, the method also includes: based on a stop-text-to-speech instruction input by the user, the terminal stops detecting the distance to the obstruction and whether the obstruction is a human ear, and then stops converting the first text into the first voice and stops playing the first voice.
In this embodiment, playing the first voice information and deleting the played first voice information may be related steps or completely unrelated steps.
When they are related steps, the first voice information is deleted automatically after it has been played.
When they are completely unrelated steps, the first voice information can be deleted manually and selectively after it has been played.
Before step 301, the method also includes:
setting the applications and/or contacts in the application list.
Here, the text information of the applications or contacts in the preset application list is exactly the first text information. The applications in the application list in this embodiment are usually social chat applications.
Suppose the preset applications include WeChat. WeChat is a free chat and social application that provides instant messaging services on intelligent terminals; with advantages such as convenient communication, powerful functions and ease of use, it has quickly become the first choice for users' everyday communication. As a chat application in everyday use, WeChat may be used at any time, for example while driving. In that situation the user cannot stare at the phone screen for a long time and therefore cannot view text information, so the WeChat text information must be converted into voice information, allowing the user to learn the content of the received text by listening.
It should be noted that, according to the user's preference, any one or several of the social chat applications that are mainly text-based, such as WeChat, SMS, Fetion and QQ, can be set as applications in the application list.
If the text information sent by a certain particular contact needs to be converted into voice information, that particular contact should be set as a contact in the application list; the text information of that contact in the preset application list will then be converted into the first voice information. The contacts in the application list can be set, according to personal preference, to any contact in the terminal.
After step 303, so that the user can get ready before listening to the first voice information and then listen to it, instead of missing it through being unprepared, the method also includes:
when the current state is neither the earphone listening state nor the receiver listening state, storing the first voice information; in this way, when the current state switches to the earphone listening state or the receiver listening state, the stored first voice information is played.
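One way to realize this deferred playback is a small pending queue that is drained whenever the listening state is entered. Everything here (class and method names) is an illustrative assumption; it reuses the `VoicePlayer` sketch shown above.

```java
import java.io.File;
import java.io.IOException;
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the "store now, play when the state switches" behaviour.
final class PendingVoiceQueue {
    private final Deque<File> pending = new ArrayDeque<>();
    private final VoicePlayer player;  // see the playback sketch above

    PendingVoiceQueue(VoicePlayer player) {
        this.player = player;
    }

    synchronized void enqueue(File voiceFile) {
        pending.addLast(voiceFile);
    }

    /** Called when the state becomes earphone listening or receiver listening. */
    synchronized void onListeningStateEntered() {
        while (!pending.isEmpty()) {
            File next = pending.removeFirst();
            try {
                player.play(next, /* deleteAfterPlayback = */ true);
            } catch (IOException e) {
                // Skip files that can no longer be read.
            }
        }
    }
}
```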
Before step 304, the method also includes:
receiving a deletion instruction from the user, the deletion instruction instructing that the first voice information be deleted, and deleting the first voice information according to the deletion instruction. In this way, the user can delete, before playback and in a user-defined manner, any first voice information the user does not wish to listen to.
It should be noted that this embodiment may also, after receiving the first text information, first obtain the current state and then, when the current state is the earphone listening state or the receiver listening state, convert the first text information into the first voice information and play it.
In this way, whether to play the first voice information converted from the first text information is decided according to the current state, which prevents the situation in which the user cannot stare at the phone screen for a long time and therefore cannot handle the important information received by the phone in time; by changing the existing chat mode, the conversion of text information into voice and its announcement are achieved. The user's important information is thus handled in time, which greatly improves the user experience.
Embodiment two
An embodiment of the present invention provides a voice conversion method applied to a terminal. Assume that T is the terminal of user U. This embodiment takes judging whether the current state is the earphone listening state as an example, with the terminal T receiving first text information. The method includes:
Step 401: receive the first text information of T.
Here, the first text information may include information displayed in the form of characters, letters or graphics, for example short messages, e-mails, multimedia messages and application information. The application information may be chat messages and system messages of chat software that is mainly based on text, such as WeChat, SMS, Fetion and QQ.
Step 402: judge whether the first text information of T is text information of an application or a contact in T's application list. If so, perform step 403; if not, perform step 408.
Here, to judge whether the first text information of T is text information of an application in the preset application list, the application to which the first text information of T belongs is determined first, and that application is then compared one by one with the applications in the preset application list. It should be noted that the method of judging whether the first text information of T is text information of a contact in the preset list is the same, so this embodiment does not describe it again.
Step 403: convert the first text information of T into the first voice information of T.
When the first text information of T is text information of an application or a contact in the preset application list, the first text information of T is converted into the first voice information of T.
TTS technology is adopted here. TTS is one application of speech synthesis: it converts files stored in the terminal, such as help files or web pages, into natural speech output. TTS not only helps visually impaired people read information on the terminal, but also improves the readability of text documents. Current TTS applications include voice-driven mail and voice-response systems, and TTS is often used together with speech recognition programs.
Step 404: obtain the current state of T.
Here the current state of T may be one of three cases: the current state of T is the earphone listening state; the current state of T is the receiver listening state; or the current state of T is some other state that is neither the earphone listening state nor the receiver listening state.
Step 405: judge whether the current state of T is the earphone listening state or the receiver listening state. If so, perform step 406; if not, perform step 409.
Taking the judgment of whether the current state of T is the earphone listening state as an example: detect whether an earphone is inserted into the earphone jack; when an earphone is inserted into the jack, determine that the current state is the earphone listening state. Specifically, earphone insertion can be detected from the resistance between the contact pieces of the left and right audio channels of the terminal's jack. When the detected resistance is greater than or equal to a first threshold and less than or equal to a second threshold, it is determined that an earphone plug has been inserted into the jack. Detecting the resistance between the left and right channels of the jack avoids the problem that an earphone-like object (for example, a dust plug) inserted into the jack, or into a combined socket, is misidentified as an earphone; in particular, for a socket shared between an earphone and an optical fibre, it can effectively distinguish an earphone plug from an optical-fibre plug. Whether the current state is the earphone listening state is judged in this way. Preferably, the first threshold is 30 ohms and the second threshold is 62 ohms.
Step 406: judge whether T is in a voice call in the current state. If so, perform step 409; if not, perform step 407.
Here, a voice call means that user U is conducting an audio or video call with other users through T.
Step 407: play the first voice information of T.
Here, playback is completed through a voice playing plug-in.
Step 408: delete the first text information and/or the first voice information of T.
Here, the played first voice information of T and the first text information of T that does not meet the conversion conditions are deleted and the memory is cleaned up, so as to improve the running speed of T.
Step 409: store the first voice information of T.
So that the user can get ready before listening to this first voice information and then listen to it, instead of missing it through being unprepared, the first voice information is stored when the current state is neither the earphone listening state nor the receiver listening state; in this way, when the current state switches to the earphone listening state or the receiver listening state, the stored first voice information is played. A sketch that ties these steps together follows.
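Putting the steps of this embodiment together, a compact orchestration sketch might look as follows. It reuses the illustrative helper classes from Embodiment one (`MessageFilter`, `EarphoneStateDetector`, `VoicePlayer`, `PendingVoiceQueue`) and, like them, is an assumption rather than the patent's own code.

```java
import java.io.File;
import java.io.IOException;

// Illustrative end-to-end flow for steps 401-409. All names are assumptions.
final class VoiceConversionFlow {
    /** Hypothetical synchronous wrapper around the TTS step (403). */
    interface TtsService {
        File synthesizeToFile(String text) throws IOException;
    }

    private final MessageFilter filter;
    private final EarphoneStateDetector earphoneDetector;
    private final VoicePlayer player;
    private final PendingVoiceQueue pendingQueue;
    private final TtsService tts;

    VoiceConversionFlow(MessageFilter filter, EarphoneStateDetector earphoneDetector,
                        VoicePlayer player, PendingVoiceQueue pendingQueue, TtsService tts) {
        this.filter = filter;
        this.earphoneDetector = earphoneDetector;
        this.player = player;
        this.pendingQueue = pendingQueue;
        this.tts = tts;
    }

    void onFirstTextInformation(String sourceApp, String sender, String firstTextInformation,
                                boolean inVoiceCall) throws IOException {
        if (!filter.shouldConvert(sourceApp, sender)) {            // step 402 -> 408: discard
            return;
        }
        File voiceFile = tts.synthesizeToFile(firstTextInformation); // step 403
        boolean listening = earphoneDetector.isEarphoneInserted();   // steps 404/405
        if (listening && !inVoiceCall) {                             // step 406 -> 407
            player.play(voiceFile, /* deleteAfterPlayback = */ true);
        } else {                                                     // step 409
            pendingQueue.enqueue(voiceFile);
        }
    }
}
```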
Embodiment three
An embodiment of the present invention provides a terminal 50. As shown in Fig. 5, the terminal 50 includes:
a receiving module 501, configured to receive first text information;
a conversion module 502, configured to convert the first text information into first voice information;
an obtaining module 503, configured to obtain a current state; and
a playing module 504, configured to play the first voice information when the current state is an earphone listening state or a receiver listening state.
Specifically, the conversion module 502 is specifically configured to:
convert the first text information into the first voice information when the first text information is text information of an application or a contact in a preset application list.
Specifically, the obtaining module 503 is specifically configured to:
detect whether an earphone is inserted into the earphone jack; and
when an earphone is inserted into the earphone jack, determine that the current state is the earphone listening state.
Specifically, the obtaining module 503 is also specifically configured to:
detect the distance to an obstruction;
when the distance is less than a preset distance, detect whether the obstruction is a human ear; and
when the obstruction is a human ear, determine that the current state is the receiver listening state.
Further, as shown in Fig. 6, the terminal 50 also includes:
a deleting module 505, configured to delete the first voice information.
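One way to express this module decomposition in code is a set of small interfaces. This is illustrative only; the patent describes functional modules of the terminal, not a programming interface, and all names here are assumptions.

```java
import java.io.File;

// Illustrative interfaces mirroring the modules of terminal 50 (and terminal 60 below).
enum CurrentState { EARPHONE_LISTENING, RECEIVER_LISTENING, OTHER }

interface ReceivingModule  { String receiveFirstTextInformation(); }
interface ConversionModule { File convertToVoice(String firstTextInformation); }
interface ObtainingModule  { CurrentState obtainCurrentState(); }
interface PlayingModule    { void play(File firstVoiceInformation); }
interface DeletingModule   { void delete(File firstVoiceInformation); }
```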
Embodiment four
An embodiment of the present invention provides a terminal 60. As shown in Fig. 7, the terminal 60 includes:
a receiver 601, configured to receive first text information;
a processor 602, configured to convert the first text information into first voice information and to obtain a current state; and
a player 603, configured to play the first voice information when the current state is an earphone listening state or a receiver listening state.
In this way, whether to play the first voice information converted from the first text information is decided according to the current state, which prevents the situation in which the user cannot stare at the phone screen for a long time and therefore cannot handle the important information received by the phone in time; by changing the existing chat mode, the conversion of text information into voice and its announcement are achieved. The user's important information is thus handled in time, which greatly improves the user experience.
Specifically, the processor 602 is specifically configured to:
convert the first text information into the first voice information when the first text information is text information of an application or a contact in a preset application list.
Specifically, the processor 602 is specifically configured to:
detect whether an earphone is inserted into the earphone jack; and
when an earphone is inserted into the earphone jack, determine that the current state is the earphone listening state.
Specifically, the processor 602 is specifically configured to:
detect the distance to an obstruction;
when the distance is less than a preset distance, detect whether the obstruction is a human ear; and
when the obstruction is a human ear, determine that the current state is the receiver listening state.
Further, the processor 602 is also configured to delete the first voice information.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method, a system or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system) and computer program product according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that realizes the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above are only preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.

Claims (10)

1. A voice conversion method, characterized in that the method comprises:
receiving first text information;
converting the first text information into first voice information;
obtaining a current state; and
when the current state is an earphone listening state or a receiver listening state, playing the first voice information.
2. The method according to claim 1, characterized in that converting the first text information into the first voice information comprises:
when the first text information is text information of an application or a contact in a preset application list, converting the first text information into the first voice information.
3. The method according to claim 1, characterized in that obtaining the current state comprises:
detecting whether an earphone is inserted into the earphone jack; and
when an earphone is inserted into the earphone jack, determining that the current state is the earphone listening state.
4. The method according to claim 1, characterized in that obtaining the current state further comprises:
detecting the distance to an obstruction;
when the distance is less than a preset distance, detecting whether the obstruction is a human ear; and
when the obstruction is a human ear, determining that the current state is the receiver listening state.
5. The method according to claim 1 or claim 2, characterized in that, after playing the first voice information, the method further comprises:
deleting the first voice information.
6. A terminal, characterized in that the terminal comprises:
a receiving module, configured to receive first text information;
a conversion module, configured to convert the first text information into first voice information;
an obtaining module, configured to obtain a current state; and
a playing module, configured to play the first voice information when the current state is an earphone listening state or a receiver listening state.
7. The terminal according to claim 6, characterized in that the conversion module is specifically configured to:
convert the first text information into the first voice information when the first text information is text information of an application or a contact in a preset application list.
8. The terminal according to claim 6, characterized in that the obtaining module is specifically configured to:
detect whether an earphone is inserted into the earphone jack; and
when an earphone is inserted into the earphone jack, determine that the current state is the earphone listening state.
9. The terminal according to claim 6, characterized in that the obtaining module is specifically configured to:
detect the distance to an obstruction;
when the distance is less than a preset distance, detect whether the obstruction is a human ear; and
when the obstruction is a human ear, determine that the current state is the receiver listening state.
10. The terminal according to claim 6 or 7, characterized in that the terminal further comprises:
a deleting module, configured to delete the first voice information.
CN201610054063.4A 2016-01-26 2016-01-26 Voice converting method and terminal Pending CN105657174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610054063.4A CN105657174A (en) 2016-01-26 2016-01-26 Voice converting method and terminal


Publications (1)

Publication Number Publication Date
CN105657174A true CN105657174A (en) 2016-06-08

Family

ID=56487065

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610054063.4A Pending CN105657174A (en) 2016-01-26 2016-01-26 Voice converting method and terminal

Country Status (1)

Country Link
CN (1) CN105657174A (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103581853A (en) * 2013-10-23 2014-02-12 青岛海信传媒网络技术有限公司 Method and device for voice broadcast of text message on mobile phone
CN103873690A (en) * 2014-03-13 2014-06-18 北京百纳威尔科技有限公司 Information processing method and intelligent terminal during driving
CN103973544A (en) * 2014-04-02 2014-08-06 小米科技有限责任公司 Voice communication method, voice playing method and devices
CN104202474A (en) * 2014-08-27 2014-12-10 东莞市和乐电子有限公司 Wireless time telling and photographing method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106817698A (en) * 2016-12-22 2017-06-09 深圳市元征科技股份有限公司 A kind of intercommunication method and terminal
WO2018137306A1 (en) * 2017-01-26 2018-08-02 华为技术有限公司 Method and device for triggering speech function
CN108605074A (en) * 2017-01-26 2018-09-28 华为技术有限公司 A kind of method and apparatus of triggering phonetic function
CN107231477A (en) * 2017-06-01 2017-10-03 深圳市伊特利网络科技有限公司 The information-reading method and system of wechat
WO2018218807A1 (en) * 2017-06-01 2018-12-06 深圳市伊特利网络科技有限公司 Information reading method and system of wechat
CN107888776A (en) * 2017-11-18 2018-04-06 珠海市魅族科技有限公司 Voice broadcast method and device, computer installation and computer-readable recording medium
CN108270925A (en) * 2018-01-31 2018-07-10 广东欧珀移动通信有限公司 Processing method, device, terminal and the computer readable storage medium of voice messaging
CN108391013A (en) * 2018-03-19 2018-08-10 广东欧珀移动通信有限公司 Playback method, terminal and the computer readable storage medium of voice data

Similar Documents

Publication Publication Date Title
KR101977087B1 (en) Mobile terminal having auto answering function and auto answering method thereof
CN101604521B (en) Mobile terminal and method for recognizing voice thereof
CN105657174A (en) Voice converting method and terminal
CN105100892B (en) Video play device and method
US9639251B2 (en) Mobile terminal and method of controlling the mobile terminal for moving image playback
CN101557432B (en) Mobile terminal and menu control method thereof
KR101912409B1 (en) Mobile terminal and mothod for controling of the same
US20150126252A1 (en) Mobile terminal and menu control method thereof
US20100009719A1 (en) Mobile terminal and method for displaying menu thereof
CN106531149A (en) Information processing method and device
KR20090107365A (en) Mobile terminal and its menu control method
KR20100006089A (en) Mobile terminal and method for inputting a text thereof
US9565289B2 (en) Mobile terminal and method of controlling the same
CN106911850A (en) Mobile terminal and its screenshotss method
CN106157970A (en) A kind of audio identification methods and terminal
CN106302137A (en) Group chat message processing apparatus and method
CN106448665A (en) Voice processing device and method
KR20100011786A (en) Mobile terminal and method for recognition voice command thereof
CN104810033B (en) Audio frequency playing method and device
US9336242B2 (en) Mobile terminal and displaying method thereof
KR101521909B1 (en) Mobile terminal and its menu control method
Kardyś et al. A new Android application for blind and visually impaired people
CN110392158A (en) A kind of message treatment method, device and terminal device
CN106385514A (en) Music processing method, device and terminal
CN106648505A (en) Mobile terminal control method and mobile terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160608

RJ01 Rejection of invention patent application after publication