WO2015024409A1 - Audio calling method and device thereof - Google Patents

Audio calling method and device thereof

Info

Publication number
WO2015024409A1
Authority
WO
WIPO (PCT)
Prior art keywords
jump
voice
voice unit
time
played
Prior art date
Application number
PCT/CN2014/080232
Other languages
French (fr)
Inventor
Xiayu WU
Jianye Li
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited filed Critical Tencent Technology (Shenzhen) Company Limited
Publication of WO2015024409A1 publication Critical patent/WO2015024409A1/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/54 Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback

Definitions

  • the present disclosure relates to the field of Internet technologies, and particularly to an application voice playback switching method and apparatus.
  • policies may be selectively configured for different voice types.
  • Embodiments of the present disclosure provide an application voice playback switching method and apparatus, aimed at improving flexibility of voice playback switching in an application, and improving playback effects of the application.
  • the embodiments of the present disclosure propose an application voice playback switching method.
  • the method includes: acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping; determining a category of the jump state according to the jump state information; and selecting a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
  • the embodiments of the present disclosure further propose an application voice playback switching apparatus.
  • the apparatus includes a hardware processor and a non-transitory storage medium accessible to the hardware processor.
  • the non-transitory storage medium is configured to store modules including: an acquisition module, a judgment module, and a switching module.
  • the acquisition module is configured to acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping.
  • the judgment module is configured to determine a category of the jump state according to the jump state information.
  • the switching module is configured to select a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
  • FIG. 1 is a schematic view of a flow of a first embodiment of an application voice playback switching method according to the present disclosure
  • FIG. 2 is a schematic view of a flow of a second embodiment of the application voice playback switching method according to the present disclosure
  • FIG. 3 is a schematic view of a flow of a third embodiment of the application voice playback switching method according to the present disclosure
  • FIG. 4 is a schematic view of functional modules of a first embodiment of an application voice playback switching apparatus according to the present disclosure
  • FIG. 5 is a schematic view of functional modules of a second embodiment of the application voice playback switching apparatus according to the present disclosure.
  • FIG. 6 is a schematic view of an example embodiment of a terminal according to embodiments of the present disclosure.
  • module may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip.
  • the term module or unit may include memory (shared, dedicated, or group) that stores code executed by the processor.
  • the exemplary environment may include a server, a client, and a communication network.
  • the server and the client may be coupled through the communication network for information exchange, such as sending/receiving identification information, sending/receiving data files such as splash screen images, etc.
  • although only one client and one server are shown in the environment, any number of terminals or servers may be included, and other devices may also be included.
  • the communication network may include any appropriate type of communication network for providing network connections to the server and client or among multiple servers or clients.
  • communication network may include the Internet or other types of computer networks or telecommunication networks, either wired or wireless.
  • the disclosed methods and apparatus may be implemented, for example, in a wireless network that includes at least one client.
  • the client may refer to any appropriate user terminal with certain computing capabilities, such as a personal computer (PC), a work station computer, a server computer, a hand-held computing device (tablet), a smart phone or mobile phone, or any other user-side computing device.
  • the client may include a network access device.
  • the client may be stationary or mobile.
  • a server may refer to one or more server computers configured to provide certain server functionalities, such as database management and search engines.
  • a server may also include one or more processors to execute computer programs in parallel.
  • a first embodiment of the present disclosure proposes an application voice playback switching method, which includes the following steps.
  • Step S101. Acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping.
  • the operating environment of the method in this embodiment relates to online games, single-player games and other applications, and particularly to switching management policies for voice playback when application content bound to a voice performs a jump.
  • the jump of the application content bound to a voice may be graphics jump, duplicate content jump and the like.
  • Step S102. Determine a category of the jump state according to the jump state information.
  • Step S103. Select a corresponding voice switching policy according to the category of the jump state, and dynamically perform voice playback switch.
  • This embodiment classifies all the switch jump states in the application in advance; as one implementation manner, they may be divided into hard jump and soft jump.
  • soft jump refers to switching between a player's own skills
  • hard jump refers to another player interrupting the current local player's skills.
  • a local player, for example, operates a character, and the character has skills the player can play back; each skill includes a voice and graphics bound to that voice. Suppose two skills need to be played and each skill takes 1.5 seconds to complete playback. If the player first plays a first skill and then triggers a second skill before the first skill has finished playing, the graphics bound to the voices perform a soft jump operation.
  • the voice unit may include an audio file that represents an action or a move of a game character in a video game.
  • different voice units may correspond to different moves of a game character.
  • different game characters may have different voice units when performing the same action or movement.
  • a mandatory command of fading out to negative infinity within a certain time may be set for each voice unit; if the voice unit receives a suspend command, it fades out to negative infinity within the set time and is then recovered.
  • the voice units all come with a fade-out mandatory command, so that the hard jump exhibits fade-out switch, to achieve the aim of natural switch.
  • a voice switching policy may be set based on the following principle: a time axis is set for each voice unit, and the time axis automatically operates synchronously in milliseconds when the voice unit is played back.
  • a time axis position of a voice unit being played before the jump may be acquired, making playback of the voice unit before the jump begin fading out to negative infinity from that position and then be recovered, while the voice unit after the jump is played back.
  • the time in which playback of the voice unit before the jump fades out to negative infinity may be calculated based on the time cut-in position of the bound content after the jump and the total playback time of the voice unit after the jump, in combination with the preset mandatory fade-out time of the voice unit before the jump.
  • the hard jump and the soft jump in the above embodiment need to be preset. Taking a game as an example, treating action jumps of the player's leading role as soft jumps and interruptions by non-leading roles as hard jumps is only the default jump state classification in current game development. Therefore, different soft and hard jump rules may be set for different game types. That is to say, in other implementation manners, the jump state may be classified in other ways, with corresponding voice switching policies set for the different jump state categories, so as to improve the flexibility of voice switching and thus the playback effects of the application.
  • Through the above solution, that is, acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping, determining a category of the jump state according to the jump state information, and selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switch, this embodiment can automatically determine the policy to be selected currently according to the jump state of the current content, flexibly select a switching manner in real time, and perform flexible switching and suspension of any voice, so as to improve the flexibility of voice playback switching in an application.
  • a second embodiment of the present disclosure proposes an application voice playback switching method; compared with the first embodiment, this embodiment specifically defines the step S103 in the above embodiment, i.e., selecting a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch, but other steps are the same as those in the first embodiment.
  • the method in this embodiment includes the following steps implemented by a terminal device.
  • Step S101. The terminal device acquires jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping.
  • the step S101 is the same as that in the first embodiment, i.e., when monitoring that application content bound to a voice currently played by an application performs a jump, first acquire jump state information of the application content, so as to acquire the jump state of the bound application content from the jump state information.
  • Step S102. The terminal device determines a category of the jump state according to the jump state information. There are two categories of jump states: a hard jump and a soft jump. When the category of the jump state is hard jump, perform step S1031; and when the category of the jump state is soft jump, perform step S1032. Step S1031. The terminal device interrupts playback of the voice unit being played before the jump, fades it out to negative infinity to be recovered, and plays back the voice unit after the jump.
  • Step S1032. The terminal device acquires a time axis position of a voice unit being played before the jump, to serve as a fade-out time point of the voice unit being played before the jump.
  • Step S1033. The terminal device acquires a time cut-in position of the application content after the jump, to serve as a time point when the voice unit after the jump starts to play.
  • Step S1034. The terminal device subtracts the time point when the voice unit after the jump starts to play from the total playback time of the voice unit after the jump, to obtain the time in which the voice unit after the jump is not yet played.
  • Step S1035. The terminal device adds the time in which the voice unit after the jump is not yet played to the preset mandatory fade-out time of the voice unit being played before the jump, to obtain the time in which the voice unit being played before the jump fades out to negative infinity.
  • Step S1036. From the fade-out time point of the voice unit being played before the jump, the terminal device makes the voice unit being played before the jump fade out to negative infinity and be recovered within the acquired fade-out time, and plays back the voice unit after the jump from the time point when it starts to play.
  • This embodiment classifies all the switch jump states in the application in advance; they are specifically divided into two types, i.e., hard jump and soft jump, and corresponding voice switching policies are respectively set for the two different jump state categories, i.e., hard jump and soft jump.
  • in a hard jump, an action of the game character is interrupted by another game character or player.
  • in a soft jump, the action of the game character is interrupted by the same game character itself or by the game player that controls the same game character.
  • each voice unit has a time axis, and the time axis automatically operates synchronously in milliseconds when the voice unit is played.
  • the voice units all come with a fade-out mandatory command, so that the hard jump exhibits fade-out switch, to achieve the aim of natural switch.
  • the voice switching operation of the soft jump is essentially the same as that of the hard jump; even if the switching content after the soft jump cuts in at any time point of the content, the switching time may be automatically and flexibly determined in the case of a jump switch, so that the voice unit after the jump is accessed smoothly.
  • Through the above solution, that is, acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping, determining a category of the jump state according to the jump state information, and selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switch, this embodiment can automatically determine the policy to be selected currently according to the jump state of the current content, flexibly select a switching manner in real time, and perform flexible switching and suspension of any voice, so as to improve the flexibility of voice playback switching in an application.
  • a third embodiment of the present disclosure proposes an application voice playback switching method, and on the basis of the first embodiment, before the step S101: acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping, the method further includes:
  • Step S100: Setting a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application.
  • this embodiment further includes setting a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application. With such a command set for the voice units, when the category of the jump state is determined to be hard jump, playback of the voice unit being played before the jump can be interrupted and faded out to negative infinity within the set time of the mandatory command; in addition, when the category of the jump state is determined to be soft jump, the time in which the voice unit being played before the jump fades out to negative infinity can be calculated based on its preset mandatory fade-out time in combination with the time point when the voice unit after the jump starts to play and the total playback time of the voice unit after the jump, so as to attenuate the voice unit being played before the jump to negative infinity within that time.
  • Through the above solution, this embodiment sets a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application; acquires jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping; determines a category of the jump state according to the jump state information; and selects a corresponding voice switching policy according to the category of the jump state. When the category of the jump state is determined to be hard jump, playback of the voice unit being played before the jump is interrupted and faded out to negative infinity within the set time of the mandatory command; when the category is determined to be soft jump, the time in which the voice unit being played before the jump fades out to negative infinity is calculated based on its preset mandatory fade-out time in combination with the time point when the voice unit after the jump starts to play and the total playback time of the voice unit after the jump, so as to attenuate the voice unit being played before the jump to negative infinity within that time.
  • the first embodiment of the present disclosure proposes an application voice playback switching apparatus 200.
  • the apparatus includes a hardware processor 210 and a non-transitory storage medium 220 accessible to the hardware processor 210.
  • the non-transitory storage medium 220 is configured to store modules including: an acquisition module 201, a judgment module 202 and a switching module 203.
  • the apparatus may be a user terminal.
  • the acquisition module 201 is configured to acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping.
  • the judgment module 202 is configured to determine a category of the jump state according to the jump state information.
  • the switching module 203 is configured to select a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
  • This embodiment relates to online games, single-player games and other applications, and particularly to switching management policies for voice playback when application content bound to a voice performs a jump.
  • the jump of the application content bound to a voice may be graphics jump, duplicate content jump and the like.
  • the acquisition module 201 acquires jump state information of the application content, so that the judgment module 202 can acquire the jump state of the bound application content from the jump state information.
  • the switching module 203 selects a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
  • This embodiment classifies all the switch jump states in the application in advance; as one implementation manner, they may specifically be divided into hard jump and soft jump.
  • soft jump refers to switching between a player's own skills
  • hard jump refers to another player interrupting the current local player's skills.
  • a local player, for example, operates a character, and the character has skills the player can play back; each skill includes a voice and graphics bound to that voice. Suppose two skills need to be played and each skill takes 1.5 seconds to complete playback. If the player first plays a first skill and then triggers a second skill before the first skill has finished playing, the graphics bound to the voices perform a soft jump operation.
  • playback of a voice unit before jump may be directly interrupted, and the voice unit after the jump is played back.
  • the voice unit is represented in frames.
  • a mandatory command of fading out to negative infinity within a certain time may be set for each voice unit; if the voice unit receives a suspend command, it fades out to negative infinity within the set time and is then recovered.
  • a voice switching policy may be set based on the following principle: a time axis is set for each voice unit, and the time axis automatically operates synchronously in milliseconds when the voice unit is played.
  • a time axis position of a voice unit being played before the jump may be acquired, making playback of the voice unit before the jump begin fading out to negative infinity from that position and then be recovered, while the voice unit after the jump is played back.
  • the time in which playback of the voice unit before the jump fades out to negative infinity may be calculated based on the time cut-in position of the bound content after the jump and the total playback time of the voice unit after the jump, in combination with the preset mandatory fade-out time of the voice unit before the jump.
  • the hard jump and the soft jump in the above embodiment need to be preset. Taking a game as an example, treating action jumps of the player's leading role as soft jumps and interruptions by non-leading roles as hard jumps is only the default jump state classification in current game development. Therefore, different soft and hard jump rules may be set for different game types. That is to say, in other implementation manners, the jump state may be classified in other ways, with corresponding voice switching policies set for the different jump state categories, so as to improve the flexibility of voice switching and thus the playback effects of the application.
  • Through the above solution, that is, acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping, determining a category of the jump state according to the jump state information, and selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switch, this embodiment can automatically determine the policy to be selected currently according to the jump state of the current content, flexibly select a switching manner in real time, and perform flexible switching and suspension of any voice, so as to improve the flexibility of voice playback switching in an application.
  • each voice unit has a time axis, and the time axis automatically operates synchronously in milliseconds when the voice unit is played.
  • the voice units all come with a fade-out mandatory command, so that the hard jump exhibits fade-out switch, to achieve the aim of natural switch.
  • the specific calculation process may include the following acts implemented by a terminal device:
  • Through the above solution, that is, acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping, determining a category of the jump state according to the jump state information, and selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switch, this embodiment can automatically determine the policy to be selected currently according to the jump state of the current content, flexibly select a switching manner in real time, and perform flexible switching and suspension of any voice, so as to improve the flexibility of voice playback switching in an application.
  • the second embodiment of the present disclosure proposes an application voice playback switching apparatus, and on the basis of the first embodiment, the apparatus may further include:
  • this embodiment further includes setting a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application. With such a command set for the voice units, when the category of the jump state is determined to be hard jump, playback of the voice unit being played before the jump can be interrupted and faded out to negative infinity within the set time of the mandatory command; in addition, when the category of the jump state is determined to be soft jump, the time in which the voice unit being played before the jump fades out to negative infinity can be calculated based on its preset mandatory fade-out time in combination with the time point when the voice unit after the jump starts to play and the total playback time of the voice unit after the jump.
  • Through the above solution, this embodiment sets a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application; acquires jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping; determines a category of the jump state according to the jump state information; and selects a corresponding voice switching policy according to the category of the jump state. When the category of the jump state is determined to be hard jump, playback of the voice unit being played before the jump is interrupted and faded out to negative infinity within the set time of the mandatory command; when the category is determined to be soft jump, the time in which the voice unit being played before the jump fades out to negative infinity is calculated based on its preset mandatory fade-out time in combination with the time point when the voice unit after the jump starts to play and the total playback time of the voice unit after the jump, so as to attenuate the voice unit being played before the jump to negative infinity within that time.
  • FIG. 6 shows a block diagram of an example embodiment of the terminal.
  • the terminal includes a radio frequency (RF) circuit 20, a memory 21 including one or more computer-readable storage mediums, an input unit 22, a display unit 23, a sensor 24, an audio circuit 25, a wireless fidelity (WiFi) module 26, a processor 27 including one or more cores, a power supply 28, and the like.
  • the structure of the terminal shown in FIG. 6 is not limiting; the terminal can include fewer or more components, or other combinations or arrangements of components.
  • the RF circuit 20 can be used for receiving and sending signals during a call or during the process of receiving and sending messages. Specifically, the RF circuit 20 receives downlink information from the base station and sends it to the processor 27, or sends uplink data to the base station.
  • the RF circuit 20 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a diplexer, and the like.
  • the RF circuit 20 can communicate with network or other devices by wireless communication.
  • Such wireless communication can use any communication standard or protocol, which includes, but is not limited to, Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, or Short Messaging Service (SMS).
  • the memory 21 is configured to store software programs and modules to be run by the processor 27, so as to perform the various functional applications of the mobile phone and to process data.
  • the memory 21 mainly includes a program storage area and a data storage area.
  • the program storage area can store the operating system and at least one application program with a required function (such as a sound playing function or an image playing function).
  • the data storage area can store data created by the mobile phone according to actual usage (such as audio data and a phonebook).
  • the memory 21 can be a high-speed random access memory or a non-volatile memory, such as a disk storage device, a flash memory device, or another non-volatile solid-state memory device.
  • the memory 21 may include a storage controller to enable the processor 27 and the input unit 22 to access the memory 21.
  • the input unit 22 is configured to receive entered number or character information, and key signal input related to user settings and function control.
  • the input unit 22 includes a touch-sensitive surface 221 and/or other input devices 222.
  • the touch-sensitive surface 221, also called a touch screen or touch panel, can collect the user's touch operations on or near it (for example, operations performed with a finger or a stylus on or near the touch-sensitive surface 221) and drive the corresponding connection device according to a preset program.
  • the touch-sensitive surface 221 may include two portions: a touch detection device and a touch controller.
  • the touch detection device detects the user's touch position and the signals generated by the touch operation, and then sends the signals to the touch controller.
  • the touch controller receives the touch information from the touch detection device, converts it into contact coordinates to be sent to the processor 27, and then receives and executes commands sent by the processor 27.
  • the input unit 22 can also include, but is not limited to, other input devices 222, such as one or more of a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick.
  • the display unit 23 is configured to display information entered by the user or information supplied to the user, and menus of the mobile phone.
  • the display unit 23 includes a display panel 231, such as a Liquid Crystal Display (LCD), or an Organic Light-Emitting Diode (OLED).
  • the display panel 231 can be covered by the touch-sensitive surface 221; after touch operations are detected on or near the touch-sensitive surface 221, they are sent to the processor 27 to determine the type of the touch event, and the processor 27 then supplies the corresponding visual output to the display panel 231 according to the type of the touch event.
  • the touch-sensitive surface 221 and the display panel 231 are shown as two individual components implementing input and output, but they can be integrated together to implement input and output in some embodiments.
  • the terminal may include at least one sensor 24, such as light sensors, motion sensors, or other sensors.
  • the light sensors include an ambient light sensor for adjusting the brightness of the display panel 231 according to the ambient light, and a proximity sensor for turning off the display panel 231 and/or the backlight when the terminal is moved close to the ear.
  • an accelerometer, as one of the motion sensors, can detect the magnitude of acceleration in every direction (generally on three axes) and the magnitude and direction of gravity when stationary, which is applicable to applications that identify the attitude of the phone (such as switching between landscape and portrait screens, related games, and magnetometer attitude calibration) and to vibration-recognition functions (such as a pedometer and percussion recognition).
  • the terminal can also be configured with other sensors (such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor), whose detailed descriptions are omitted here.
  • the audio circuit 25, the speaker 251, and the microphone 252 supply an audio interface between the user and the terminal. Specifically, the audio circuit 25 converts received audio data into electrical signals and transmits them to the speaker 251, which converts them into sound signals for output. In the other direction, the sound signals collected by the microphone 252 are converted into electrical signals, which are received by the audio circuit 25 and converted into audio data; the audio data are then output to the processor 27 for processing and sent to another mobile phone via the RF circuit 20, or sent to the memory 21 for further processing.
  • the audio circuit 25 may further include an earphone jack to provide communication between an external earphone and the terminal.
  • WiFi is a short-range wireless transmission technology providing wireless broadband Internet access, through which the mobile phone can help the user receive and send email, browse the web, and access streaming media.
  • although the WiFi module 26 is illustrated in FIG. 6, it should be understood that the WiFi module 26 is not essential to the terminal and can be omitted according to actual demand without changing the essence of the present disclosure.
  • the processor 27 is a control center of the mobile phone, which connects with every part of the mobile phone through various interfaces and circuits, and performs various functions and processes data by running or executing the software programs and modules stored in the memory 21.
  • the processor 27 may include one or more processing units.
  • the processor 27 can integrate an application processor and a modem processor; the application processor mainly handles the operating system, user interface, and applications, while the modem processor handles wireless communication. It can be understood that integrating the modem processor into the processor 27 is optional.
  • the terminal may include a power supply 28 (such as a battery) supplying power to each component; preferably, the power supply is connected to the processor 27 through a power management system, so as to manage charging, discharging, and power consumption.
  • the power supply 28 may include one or more of an AC or DC power source, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.
  • the terminal may include a camera, and a Bluetooth module, etc., which are not illustrated.
  • the processor 27 of the terminal executes an executable file stored in the memory 21 according to one or more programs of the application, so as to perform the following steps.
  • the terminal is configured to acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping.
  • the terminal is configured to determine a category of the jump state according to the jump state information.
  • the terminal is configured to select a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
  • the methods in the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, and certainly may also be implemented by hardware; however, in most circumstances, the former is the better implementation manner.
  • the technical solution of the present disclosure or the part that makes contributions to the prior art can be substantially embodied in the form of a software product.
  • the computer software product may be stored in a storage medium (for example, a ROM/RAM, a magnetic disk, or an optical disk), and contain several instructions to instruct a terminal device (for example, a mobile phone, a computer, a server, or a device) to perform the methods as described in the embodiments of the present disclosure.
  • program instructions corresponding to the application voice playback switching apparatuses in FIG. 4 and FIG. 5 can be stored in a readable storage medium of a computer, a server or other terminals, and are executed by at least one processor therein, so as to implement the application voice playback switching methods in FIG. 1 to FIG. 3.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Studio Circuits (AREA)

Abstract

An application voice playback switching method and an apparatus thereof are provided. The method includes: acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping; determining a category of the jump state according to the jump state information; and selecting a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch. So, a policy to be selected currently can be automatically determined according to a jump state of current content, a switching manner is real-time flexibly selected, and flexible switching and suspension of any voice are performed, which improve flexibility of voice playback switching in an application.

Description

AUDIO CALLING METHOD AND DEVICE THEREOF
CROSS-REFERENCE TO RELATED APPLICATIONS
[001 ] This application claims priority to Chinese Patent Application No.
201310364343.1, filed on August 20, 2013, which is hereby incorporated by reference in its entirety.
FIELD
[002] The present disclosure relates to the field of Internet technologies, and particularly to an application voice playback switching method and apparatus.
BACKGROUND
[003] Currently, in online games, when content bound to a voice trigger performs a jump, such as a graphics jump or a duplicate content jump, switching management of voice playback adopts non-dynamic solutions, which generally include the following.
[004] 1. Directly interrupt voice playback before the jump, and execute a playback command after the jump.
[005] 2. Perform no management on the voice playback before the jump, to make it automatically recovered upon completion of playback thereof. If there is a playback request after the jump, shield it.
[006] 3. Perform no management on the voice playback before the jump, to make it automatically recovered upon completion of playback thereof. If there is a playback request after the jump, still execute it.
[007] 4. Fade out the voice playback before the jump, and fade in or directly play back a voice request after the jump.
[008] In addition, the above policies may be selectively configured for different voice types.
[009] However, such non-dynamic solutions of the prior art have the following disadvantages:
[010] For the policy of directly interrupting voice playback before the jump and executing a playback command after the jump, the voice may have an evidently discontinuous feeling, which affects user experience. For the policy of performing no management on the voice playback before the jump, to make it automatically recovered upon completion of playback thereof, and shielding any playback request after the jump, if the playback request after the jump is very important for user feedback, such shielding may affect the user experience. For the policy of performing no management on the voice playback before the jump, to make it automatically recovered upon completion of playback thereof, and still executing any playback request after the jump, the voices may be played back in an overlapping manner, and a voice requiring a longer playback time may affect the user experience. For the policy of fading out the voice playback before the jump and fading in or directly playing back a voice request after the jump, the fade-out and fade-in times can only be fixedly set, while content switching usually has random, flexible features; for example, art actions of roles may be switched at any frame, and it is difficult for a fixedly set fade-out/fade-in voice switching manner to deal with this flexibly.
[011] In addition, although the prior art selectively configures the above policies for different voice types, which may solve most problems, the flexibility of voice playback switching is still low, and each voice must be configured individually; thus, the manpower cost is increased, and when switching is required between different types of voices, it may also exceed the scope of the configuration solution.
SUMMARY
[012] Embodiments of the present disclosure provide an application voice playback switching method and apparatus, aimed at improving flexibility of voice playback switching in an application, and improving playback effects of the application.
[013] In a first aspect, the embodiments of the present disclosure propose an application voice playback switching method. The method includes: acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping; determining a category of the jump state according to the jump state information; and selecting a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
[014] In a second aspect, the embodiments of the present disclosure further propose an application voice playback switching apparatus. The apparatus includes a hardware processor and a non-transitory storage medium accessible to the hardware processor. The non-transitory storage medium is configured to store modules including: an acquisition module, a judgment module, and a switching module. The acquisition module is configured to acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping. The judgment module is configured to determine a category of the jump state according to the jump state information. The switching module is configured to select a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
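As a rough, non-authoritative illustration of the module layout described in this aspect, the following Python sketch groups the three modules behind a single apparatus object; the class, attribute, and method names are hypothetical and are not taken from the disclosure.

```python
class VoicePlaybackSwitchingApparatus:
    """Hypothetical sketch of the acquisition/judgment/switching module layout."""

    def __init__(self, acquisition_module, judgment_module, switching_module):
        self.acquisition = acquisition_module  # acquires jump state information
        self.judgment = judgment_module        # determines the jump state category
        self.switching = switching_module      # applies the matching voice switching policy

    def on_content_jump(self, application_content):
        # Acquire jump state information when the bound application content jumps.
        jump_state = self.acquisition.acquire(application_content)
        # Determine the category of the jump state from that information.
        category = self.judgment.classify(jump_state)
        # Select the corresponding voice switching policy and switch playback dynamically.
        self.switching.switch(category, jump_state)
```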
[015] According to an application voice playback switching method and an apparatus thereof proposed in the embodiments of the present disclosure, by acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping;
determining a category of the jump state according to the jump state information; and selecting a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch, a policy to be selected currently can be automatically determined according to a jump state of current content, a switching manner is real-time flexibly selected, and flexible switching and suspension of any voice are performed, which improve flexibility of voice playback switching in an application.
BRIEF DESCRIPTION OF THE DRAWINGS
[016] To explain the technical solutions of the embodiments of the present disclosure, the accompanying drawings used in the embodiments are described below.
Apparently, the following drawings merely illustrate some embodiments of the disclosure; persons skilled in the art can obtain other drawings from these drawings without creative work.
[017] FIG. 1 is a schematic view of a flow of a first embodiment of an application voice playback switching method according to the present disclosure;
[018] FIG. 2 is a schematic view of a flow of a second embodiment of the application voice playback switching method according to the present disclosure;
[019] FIG. 3 is a schematic view of a flow of a third embodiment of the application voice playback switching method according to the present disclosure;
[020] FIG. 4 is a schematic view of functional modules of a first embodiment of an application voice playback switching apparatus according to the present disclosure;
[021] FIG. 5 is a schematic view of functional modules of a second embodiment of the application voice playback switching apparatus according to the present disclosure; and
[022] FIG. 6 is a schematic view of an example embodiment of a terminal according to embodiments of the present disclosure.
DETAILED DESCRIPTION OF THE DRAWINGS
[023] Reference throughout this specification to "one embodiment," "an embodiment," "example embodiment," or the like in the singular or plural means that one or more particular features, structures, or characteristics described in connection with an embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment," "in an example embodiment," or the like in the singular or plural in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
[024] The terminology used in the description of the disclosure herein is for the purpose of describing particular examples only and is not intended to be limiting of the disclosure. As used in the description of the disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of "in" includes "in" and "on" unless the context clearly dictates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "may include," "including," "comprises," and/or "comprising," when used in this specification, specify the presence of stated features, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, operations, elements, components, and/or groups thereof.
[025] As used herein, the term "module" or "unit" may refer to, be part of, or include an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The term module or unit may include memory (shared, dedicated, or group) that stores code executed by the processor.
[026] The exemplary environment may include a server, a client, and a communication network. The server and the client may be coupled through the communication network for information exchange, such as sending/receiving identification information, sending/receiving data files such as splash screen images, etc. Although only one client and one server are shown in the environment, any number of terminals or servers may be included, and other devices may also be included.
[027] The communication network may include any appropriate type of communication network for providing network connections to the server and client or among multiple servers or clients. For example, communication network may include the Internet or other types of computer networks or telecommunication networks, either wired or wireless. In a certain embodiment, the disclosed methods and apparatus may be implemented, for example, in a wireless network that includes at least one client.
[028] In some cases, the client may refer to any appropriate user terminal with certain computing capabilities, such as a personal computer (PC), a work station computer, a server computer, a hand-held computing device (tablet), a smart phone or mobile phone, or any other user-side computing device. In various embodiments, the client may include a network access device. The client may be stationary or mobile.
[029] A server, as used herein, may refer to one or more server computers configured to provide certain server functionalities, such as database management and search engines. A server may also include one or more processors to execute computer programs in parallel.
[030] The solutions in the embodiments of the present disclosure are clearly and completely described in combination with the attached drawings in the embodiments of the present disclosure. Obviously, the described embodiments are only a part, but not all, of the embodiments of the present disclosure. On the basis of the embodiments of the present disclosure, all other embodiments acquired by those of ordinary skill in the art under the precondition that no creative efforts have been made shall be covered by the protective scope of the present disclosure.
[031 ] It should be understood that, the specific embodiments described herein are merely for explaining the present disclosure, but are not intended to limit the present disclosure.
[032] As shown in FIG. 1, a first embodiment of the present disclosure proposes an application voice playback switching method, which includes the following steps.
[033] Step S101. Acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping.
[034] The operating environment of the method in this embodiment relates to online games, single-player games and other applications, and particularly to switching management policies for voice playback when application content bound to a voice performs a jump.
[035] The jump of the application content bound to a voice, for example, may be graphics jump, duplicate content jump and the like.
[036] Because the prior art adopts non-dynamic manners for voice switching management, the flexibility of voice playback switching is very poor, which affects the playback effects of the application. In contrast, this embodiment can automatically determine a policy to be selected currently according to the jump state of the content currently bound to the voice being played, flexibly select a switching manner in real time, and perform flexible switching and suspension of any voice, so as to improve the flexibility of voice playback switching and improve the playback effects of the application.
[037] For example, when monitoring that application content bound to a voice currently played by an application performs a jump, first acquire jump state information of the application content, so as to acquire the jump state of the bound application content from the jump state information.
[038] Step S102. Determine a category of the jump state according to the jump state information.
[039] Step S103. Select a corresponding voice switching policy according to the category of the jump state and dynamically perform voice playback switch.
[040] This embodiment classifies all the switch jump states in the application in advance; as one implementation manner, they may be divided into hard jump and soft jump.
[041] By taking a game as an example, for character skills operated by players, soft jump refers to switching between a player's own skills, and hard jump refers to another player interrupting the current local player's skills. For example, a local player operates a character, and the character has skills the player can play back; each skill includes a voice and graphics bound to that voice. Suppose two skills need to be played and each skill takes 1.5 seconds to complete playback. If the player first plays a first skill and then triggers a second skill before the first skill has finished playing, the graphics bound to the voices perform a soft jump operation.
[042] Corresponding voice switching policies are set for different jump state categories respectively.
[043] For example, for a hard jump state, playback of the voice unit before the jump may be directly interrupted, and the voice unit after the jump is played back. The voice unit is represented in frames. The voice unit may include an audio file that represents an action or a move of a game character in a video game. In an action game, different voice units may correspond to different moves of a game character. Similarly, different game characters may have different voice units when performing the same action or movement.
[044] In addition, a mandatory command of fading out to negative infinity within a certain time may be set for each voice unit; if the voice unit receives a suspend command, it fades out to negative infinity within the set time and is then recovered.
[045] Therefore, for the hard jump state, the voice units all come with a fade-out mandatory command, so that the hard jump exhibits fade-out switch, to achieve the aim of natural switch.
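To picture a voice unit carrying such a mandatory fade-out command, here is a minimal sketch under assumed names; the VoiceUnit fields, the player interface, and the function below are illustrative only and not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class VoiceUnit:
    """Hypothetical voice unit: an audio clip bound to a game character's move."""
    audio_file: str
    total_ms: int          # total playback time of the clip, in milliseconds
    fade_out_ms: int       # preset time of the mandatory fade-out-to-negative-infinity command
    position_ms: int = 0   # time-axis position, advanced in milliseconds while playing

def handle_hard_jump(current: VoiceUnit, after: VoiceUnit, player) -> None:
    # Hard jump: interrupt the voice unit being played before the jump and let its
    # mandatory command fade it out to negative infinity within the preset time,
    # after which it is recovered; then play back the voice unit after the jump.
    player.fade_out(current, duration_ms=current.fade_out_ms)
    player.play(after)
```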
[046] For a soft jump state, a voice switching policy may be set based on the following principle: a time axis is set for each voice unit, and the time axis automatically operates synchronously in milliseconds when the voice unit is played back. When the jump is performed, for voice switching management, the time axis position of the voice unit being played before the jump may be acquired, making playback of the voice unit before the jump begin fading out to negative infinity from that position and then be recovered, while the voice unit after the jump is played back. The time in which playback of the voice unit before the jump fades out to negative infinity may be calculated based on the time cut-in position of the bound content after the jump and the total playback time of the voice unit after the jump, in combination with the preset mandatory fade-out time of the voice unit before the jump.
[047] It should be noted that, in actual use, in order to ensure flexibility of voice switching, the hard jump and the soft jump in the above embodiment need to be preset. Taking a game as an example, treating action jumps of the player's leading role as soft jumps and interruptions by non-leading roles as hard jumps is only the default jump state classification in current game development. Therefore, different soft and hard jump rules may be set for different game types. That is to say, in other implementation manners, the jump state may be classified in other ways, with corresponding voice switching policies set for the different jump state categories, so as to improve the flexibility of voice switching and thus the playback effects of the application.
[048] Through the above solution, that is, acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping, determining a category of the jump state according to the jump state information, and selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switch, this embodiment can automatically determine the policy to be selected currently according to the jump state of the current content, flexibly select a switching manner in real time, and perform flexible switching and suspension of any voice, so as to improve the flexibility of voice playback switching in an application.
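As a minimal sketch of how steps S101 to S103 might be wired together, assuming hypothetical names and the default hard/soft classification from the game example above (the enum, the dict of policies, and the field name checked in classify_jump are all assumptions):

```python
from enum import Enum, auto

class JumpCategory(Enum):
    HARD = auto()  # e.g. another player's character interrupts the local character's skill
    SOFT = auto()  # e.g. the local player switches between the character's own skills

def classify_jump(jump_state: dict) -> JumpCategory:
    # Default classification for the game example above; different game types
    # may configure different soft/hard jump rules here.
    return JumpCategory.HARD if jump_state.get("interrupted_by_other") else JumpCategory.SOFT

def switch_voice_playback(jump_state: dict, policies: dict) -> None:
    # Step S102: determine the category of the jump state from the acquired information.
    category = classify_jump(jump_state)
    # Step S103: select the corresponding voice switching policy and apply it.
    policies[category](jump_state)
```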
[049] As shown in FIG. 2, a second embodiment of the present disclosure proposes an application voice playback switching method; compared with the first embodiment, this embodiment further specifies step S103 of the above embodiment, i.e., selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switch, while the other steps are the same as those in the first embodiment.
[050] For example, the method in this embodiment includes the following steps implemented by a terminal device.
[051] Step S101. The terminal device acquires jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping.
[052] Step S101 is the same as that in the first embodiment, i.e., when it is monitored that application content bound to a voice currently played by an application performs a jump, jump state information of the application content is first acquired, so that the jump state of the bound application content can be obtained from the jump state information.
[053] Step S102. The terminal device determines a category of the jump state according to the jump state information. There are two categories of jump states: a hard jump and a soft jump. When the category of the jump state is hard jump, perform step S1031; and when the category of the jump state is soft jump, perform step S1032.

[054] Step S1031. The terminal device interrupts playback of a voice unit being played before the jump, fades it out to negative infinity and then recovers it; and plays back the voice unit after the jump.
[055] Step S1032. The terminal device acquires a time axis position of a voice unit being played before the jump, to serve as a fade-out time point of the voice unit being played before the jump.
[056] Step S1033. The terminal device acquires a time cut-in position of the application content after the jump, to serve as a time point when the voice unit after the jump starts to play.
[057] Step S1034. The terminal device subtracts the time point when the voice unit after the jump starts to play from the total playback time of the voice unit after the jump, to obtain a time in which the voice unit after the jump is not played.
[058] Step S1035. The terminal device adds the time in which the voice unit after the jump is not played to the preset mandatory-command fade-out time of the voice unit being played before the jump, to obtain a time in which the voice unit being played before the jump fades out to negative infinity.
[059] Step S1036. From the fade-out time point of the voice unit being played before the jump, the terminal device makes the voice unit being played before the jump fade out to negative infinity and be recovered within the acquired time in which it fades out to negative infinity, and plays back the voice unit after the jump from the time point when the voice unit after the jump starts to play.
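A minimal sketch of the timing calculation in steps S1032 to S1036 is shown below in Python; the variable names are illustrative assumptions rather than identifiers from the disclosure, and all quantities are taken to be in milliseconds.

```python
# Sketch of the soft-jump timing calculation (steps S1032-S1036), assumed names.

def soft_jump_timing(pre_axis_pos_ms,         # S1032: time-axis position of the pre-jump unit
                     cut_in_ms,               # S1033: cut-in position = start point of the post-jump unit
                     post_total_ms,           # total playback time of the post-jump unit
                     pre_mandatory_fade_ms):  # preset mandatory fade-out time of the pre-jump unit
    # S1034: time in which the post-jump unit is not yet played.
    not_played_ms = post_total_ms - cut_in_ms
    # S1035: time in which the pre-jump unit fades out to negative infinity.
    fade_to_neg_inf_ms = not_played_ms + pre_mandatory_fade_ms
    # S1036: fade the pre-jump unit from its current time-axis position over
    # fade_to_neg_inf_ms, and start the post-jump unit from its cut-in point.
    return {
        "fade_out_start_ms": pre_axis_pos_ms,
        "fade_out_duration_ms": fade_to_neg_inf_ms,
        "post_jump_start_ms": cut_in_ms,
    }


# Example call with arbitrary values.
print(soft_jump_timing(pre_axis_pos_ms=700, cut_in_ms=200,
                       post_total_ms=1000, pre_mandatory_fade_ms=200))
```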
[060] This embodiment classifies in advance all the switch jump states in the application, specifically dividing them into two types, i.e., hard jump and soft jump, and corresponding voice switching policies are respectively set for the two jump state categories. For example, in a hard jump, an action of a game character is interrupted by another game character or player; in a soft jump, the action of the game character is interrupted by the same game character itself or by the game player that controls the same game character.
[061] For example, in this embodiment, first, a mandatory command of fading out to negative infinity within a certain time is preset for all voice units in the application; when a voice unit receives a suspend command, it fades out to negative infinity within the set time and is then recovered. In addition, each voice unit is given a time axis, and the time axis runs synchronously, in milliseconds, while the voice unit is played.
[062] Then, when the application content bound to a voice currently played by the application executes a jump, the terminal device automatically determines, according to the acquired jump state type, whether the policy to be selected is the hard jump voice switching policy or the soft jump voice switching policy, flexibly selects the corresponding switching manner in real time, and performs flexible switching and suspension of any voice.
[063] When the category of the jump state is determined to be hard jump, playback of the voice unit being played before the jump is directly interrupted and faded out to negative infinity within the set time of the mandatory command, and the voice unit after the jump is played back. A voice unit is represented in frames.
[064] Because every voice unit carries the mandatory fade-out command, the hard jump is performed as a fade-out switch, achieving a natural transition.
[065] When the category of the jump state is determined to be soft jump, the time axis position of the voice unit being played before the jump is acquired to serve as the fade-out time point of that voice unit; that is, playback of the voice unit being played before the jump begins fading out at this time point.
[066] Then, acquire a time cut-in position of the application content after the jump, to serve as a time point when the voice unit after the jump starts to play.
[067] Afterwards, the time in which the voice unit being played before the jump fades out to negative infinity is calculated from the time point when the voice unit after the jump starts to play, the total playback time of the voice unit after the jump, and the preset mandatory-command fade-out time of the voice unit being played before the jump, so that the voice unit being played before the jump is attenuated to negative infinity within that time.
[068] The specific calculation process is as follows:

[069] subtracting the time point when the voice unit after the jump starts to play from the total playback time of the voice unit after the jump, to obtain a time in which the voice unit after the jump is not played; and
[070] adding the time in which the voice unit after the jump is not played to the preset mandatory-command fade-out time of the voice unit being played before the jump, to obtain the time in which the voice unit being played before the jump fades out to negative infinity.
[071] Finally, during the actual switch, from the fade-out time point of the voice unit being played before the jump, the voice unit being played before the jump is made to fade out to negative infinity and be recovered within the acquired fade-out time, and the voice unit after the jump is played back from the time point when the voice unit after the jump starts to play.
[072] It should be noted that, if the switching content after the soft jump is executed from the beginning, the voice switching operation of the soft jump is actually the same as that of the hard jump. If the switching content after the soft jump starts at an arbitrary time point within the content, the switching time may be determined automatically and flexibly during the jump switch, so that the voice unit after the jump is cut in smoothly.
[073] Taking a game as an example, for the soft jump, suppose a player has two skills, the graphics of Skill A have 10 frames, and the graphics of Skill B have 10 frames, and it is now necessary to jump from Skill A to Skill B. If the jump is made from Skill A to the first frame of Skill B, the connection manner of the soft jump and the calculated time in which the voice unit being played before the jump fades out to negative infinity are the same as those of the hard jump. However, if the jump is made from Skill A to another frame of Skill B, such as the second frame or the third frame instead of the first frame, the calculation method in this embodiment is adopted to calculate the time in which the voice unit being played before the jump fades out to negative infinity. Thus, during the jump switch, the switching time may be determined automatically and flexibly, so that the voice unit after the jump is cut in smoothly.

[074] Through the above solution, that is, acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping, determining a category of the jump state according to the jump state information, and selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switch, this embodiment can automatically determine the policy to be selected according to the jump state of the current content, flexibly select a switching manner in real time, and perform flexible switching and suspension of any voice, so as to improve flexibility of voice playback switching in an application.
[075] As shown in FIG. 3, a third embodiment of the present disclosure proposes an application voice playback switching method, and on the basis of the first embodiment, before the step S101: acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping, the method further includes:
[076] step S100. Setting a mandatory command of fading out to the negative infinity within a predetermined time for all voice units of the application.
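A minimal sketch of step S100 is given below in Python, under the assumption of a hypothetical list of the application's voice units; the names and the 200 ms value are illustrative and are not taken from the disclosure.

```python
# Sketch of step S100: attach the mandatory fade-out command to every voice
# unit of the application before jump monitoring begins (assumed names).

PREDETERMINED_FADE_MS = 200  # example predetermined fade-out time


def set_mandatory_fade_command(voice_units, fade_ms=PREDETERMINED_FADE_MS):
    for unit in voice_units:
        # When the unit later receives a suspend command, it fades out to
        # negative infinity within fade_ms and is then recovered.
        unit["mandatory_fade_ms"] = fade_ms
    return voice_units


units = set_mandatory_fade_command([{"name": "skill_a_voice"}, {"name": "skill_b_voice"}])
print(units)
```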
[077] The difference between this embodiment and the first embodiment lies in that this embodiment further includes setting a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application. By setting this mandatory command for the voice units, when the category of the jump state is determined to be hard jump, playback of the voice unit being played before the jump can be interrupted and faded out to negative infinity within the set time of the mandatory command; in addition, when the category of the jump state is determined to be soft jump, the time in which the voice unit being played before the jump fades out to negative infinity can be calculated from the preset mandatory-command fade-out time of that voice unit in combination with the time point when the voice unit after the jump starts to play and the total playback time of the voice unit after the jump, so that the voice unit being played before the jump is attenuated to negative infinity within that time.

[078] Through the above solution, that is, setting a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application; acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping; determining a category of the jump state according to the jump state information; selecting a corresponding voice switching policy according to the category of the jump state; when the category of the jump state is hard jump, interrupting playback of the voice unit being played before the jump and fading it out to negative infinity within the set time of the mandatory command; and, when the category of the jump state is soft jump, calculating the time in which the voice unit being played before the jump fades out to negative infinity from its preset mandatory-command fade-out time in combination with the time point when the voice unit after the jump starts to play and the total playback time of the voice unit after the jump, so as to attenuate the voice unit being played before the jump to negative infinity within that time, this embodiment can automatically determine the policy to be selected according to the jump state of the current content with respect to the voice being played, flexibly select a switching manner in real time, and perform flexible switching and suspension of any voice, so as to improve flexibility of voice playback switching in an application.
[079] As shown in FIG. 4, the first embodiment of the present disclosure proposes an application voice playback switching apparatus 200. The apparatus includes a hardware processor 210 and a non-transitory storage medium 220 accessible to the hardware processor 210. The non-transitory storage medium 220 is configured to store modules including: an acquisition module 201, a judgment module 202 and a switching module 203. For example, the apparatus may be a user terminal.
[080] The acquisition module 201 is configured to acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping.
[081] The judgment module 202 is configured to determine a category of the jump state according to the jump state information.

[082] The switching module 203 is configured to select a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
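As a rough structural sketch only (the class names, method names, and the example classification rule below are assumptions made for illustration and are not taken from the disclosure), the three stored modules could be organized as follows:

```python
# Illustrative sketch of the acquisition / judgment / switching module split.
# HARD/SOFT labels, method names and the example classification rule are assumptions.

HARD, SOFT = "hard", "soft"


class AcquisitionModule:
    def acquire_jump_state_info(self, app_content):
        # Collect what the application reports about the jump, e.g. which
        # character triggered it and where the new content cuts in.
        return app_content.get("jump_state_info", {})


class JudgmentModule:
    def categorize(self, jump_state_info):
        # Example default rule: a jump triggered by the same character is a
        # soft jump; an interruption by another character is a hard jump.
        return SOFT if jump_state_info.get("same_character") else HARD


class SwitchingModule:
    def __init__(self):
        self.policies = {HARD: self.hard_jump_policy, SOFT: self.soft_jump_policy}

    def switch(self, category, pre_unit, post_unit):
        self.policies[category](pre_unit, post_unit)

    def hard_jump_policy(self, pre_unit, post_unit):
        print(f"interrupt {pre_unit}, fade out via mandatory command, play {post_unit}")

    def soft_jump_policy(self, pre_unit, post_unit):
        print(f"fade {pre_unit} from its time-axis position, cut {post_unit} in")


# Example use of the three modules in sequence.
info = AcquisitionModule().acquire_jump_state_info({"jump_state_info": {"same_character": True}})
SwitchingModule().switch(JudgmentModule().categorize(info), "skill_a_voice", "skill_b_voice")
```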
[083] This embodiment relates to online games, single-player games and other applications, and particularly to switching management policies for voice playback when application content bound to a voice performs a jump.
[084] The jump of the application content bound to a voice, for example, may be graphics jump, duplicate content jump and the like.
[085] Because the prior art adopts non-dynamic manners of voice switching management, the flexibility of voice playback switching is very poor, which affects the playback effects of the application. This embodiment, by contrast, can automatically determine the policy to be selected according to the jump state of the current content with respect to the voice being played, flexibly select a switching manner in real time, and perform flexible switching and suspension of any voice, so as to improve flexibility of voice playback switching and improve the playback effects of the application.
[086] For example, when it is monitored that application content bound to a voice currently played by an application performs a jump, the acquisition module 201 first acquires jump state information of the application content, so that the judgment module 202 can obtain the jump state of the bound application content from the jump state information. Afterwards, the switching module 203 selects a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
[087] This embodiment classifies in advance all the switch jump states in the application; as one implementation manner, they may specifically be divided into hard jump and soft jump.
[088] Taking a game as an example, for character skills operated by players, a soft jump refers to a switch between a player's own skills, and a hard jump refers to another player interrupting the current local player's skill. For example, a local player operates a character, the character can control the skills that the player plays back, a skill includes a voice and graphics bound to the voice, and suppose two skills need to be played, each taking 1.5 seconds to complete playback. If the player first plays a first skill and then plays a second skill before the first skill has finished playing, the graphics bound to the voices perform a soft jump operation.
[089] Corresponding voice switching policies are set for different jump state categories respectively.
[090] For example, for a hard jump state, playback of the voice unit before the jump may be directly interrupted, and the voice unit after the jump is played back. A voice unit is represented in frames.
[091] In addition, a mandatory command of fading out to negative infinity within a certain time may be set for each voice unit; when a voice unit receives a suspend command, it fades out to negative infinity within the set time and is then recovered.
[092] Therefore, for the hard jump state, since every voice unit carries the mandatory fade-out command, the hard jump is performed as a fade-out switch, achieving a natural transition.
[093] For a soft jump state, a voice switching policy may be set based on the following principle: a time axis is set for each voice unit, and the time axis runs synchronously, in milliseconds, while the voice unit is played. When the jump is performed for voice switching management, the time axis position of the voice unit being played before the jump may be acquired, so that playback of the voice unit before the jump begins fading out to negative infinity at that position and is then recovered, while the voice unit after the jump is played back. The time in which playback of the voice unit before the jump fades out to negative infinity may be calculated from the time cut-in position of the bound content after the jump and the total playback time of the voice unit after the jump, in combination with the preset mandatory-command fade-out time of the voice unit before the jump.
[094] It should be noted that, in actual use, in order to ensure flexibility of voice switching, the hard jump and the soft jump in the above embodiment need to be preset. Taking a game as an example, treating an action jump of a player's leading role as a soft jump and an interruption by a non-leading role as a hard jump, as in the above embodiment, is only the default jump state classification in current game development. Different soft and hard jump rules may therefore be set for different game types. That is to say, in other implementation manners, other forms of classification may be applied to the jump state, and corresponding voice switching policies are set for the different jump state categories respectively, so as to improve flexibility of voice switching and thereby improve playback effects of the application.
[095] Through the above solution, that is, acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping, determining a category of the jump state according to the jump state information, and selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switch, this embodiment can automatically determine the policy to be selected according to the jump state of the current content, flexibly select a switching manner in real time, and perform flexible switching and suspension of any voice, so as to improve flexibility of voice playback switching in an application.
[096] The solution in this embodiment is elaborated below by specifically dividing the jump state into two categories, that is, hard jump and soft jump.
[097] For example, first, a mandatory command of fading out to negative infinity within a certain time is preset for all voice units in the application; when a voice unit receives a suspend command, it fades out to negative infinity within the set time and is then recovered. In addition, each voice unit is given a time axis, and the time axis runs synchronously, in milliseconds, while the voice unit is played.
[098] Then, when the application content bound to a voice currently played by the application executes a jump, it is automatically determined, according to the acquired jump state type, whether the policy to be selected is the hard jump voice switching policy or the soft jump voice switching policy, the corresponding switching manner is flexibly selected in real time, and flexible switching and suspension of any voice are performed.
[099] When the category of the jump state is determined to be hard jump, playback of the voice unit being played before the jump is directly interrupted and faded out to negative infinity within the set time of the mandatory command, and the voice unit after the jump is played back. A voice unit is represented in frames.
[0100] Because every voice unit carries the mandatory fade-out command, the hard jump is performed as a fade-out switch, achieving a natural transition.
[0101] When the category of the jump state is determined to be soft jump, the time axis position of the voice unit being played before the jump is acquired to serve as the fade-out time point of that voice unit; that is, playback of the voice unit being played before the jump begins fading out at this time point.
[0102] Then, acquire a time cut-in position of the application content after the jump, to serve as a time point when the voice unit after the jump starts to play.
[0103] Afterwards, the time in which the voice unit being played before the jump fades out to negative infinity is calculated from the time point when the voice unit after the jump starts to play, the total playback time of the voice unit after the jump, and the preset mandatory-command fade-out time of the voice unit being played before the jump, so that the voice unit being played before the jump is attenuated to negative infinity within that time.
[0104] The specific calculation process may include the following acts implemented by a terminal device:
[0105] subtracting the time point when the voice unit after the jump starts to play from a total playback time of the voice unit after the jump, to obtain a time in which the voice unit after the jump is not played; and
[0106] adding the time in which the voice unit after the jump is not played to the preset mandatory-command fade-out time of the voice unit being played before the jump, to obtain the time in which the voice unit being played before the jump fades out to negative infinity.
[0107] Finally, during the actual switch, from the fade-out time point of the voice unit being played before the jump, the voice unit being played before the jump is made to fade out to negative infinity and be recovered within the acquired fade-out time, and the voice unit after the jump is played back from the time point when the voice unit after the jump starts to play.

[0108] It should be noted that, if the switching content after the soft jump is executed from the beginning, the voice switching operation of the soft jump is actually the same as that of the hard jump. If the switching content after the soft jump starts at an arbitrary time point within the content, the switching time may be determined automatically and flexibly during the jump switch, so that the voice unit after the jump is cut in smoothly.
[0109] Taking a game as an example, for the soft jump, suppose a player has two skills, the graphics of Skill A have 10 frames, and the graphics of Skill B have 10 frames, and it is now necessary to jump from Skill A to Skill B. If the jump is made from Skill A to the first frame of Skill B, the connection manner of the soft jump and the calculated time in which the voice unit being played before the jump fades out to negative infinity are the same as those of the hard jump. However, if the jump is made from Skill A to another frame of Skill B, such as the second frame or the third frame instead of the first frame, the calculation method in this embodiment is adopted to calculate the time in which the voice unit being played before the jump fades out to negative infinity. Thus, during the jump switch, the switching time may be determined automatically and flexibly, so that the voice unit after the jump is cut in smoothly.
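As a worked illustration of this example (the 100 ms frame duration and 200 ms mandatory fade-out time below are assumed values for the sketch; the disclosure does not specify them), jumping into the third frame of Skill B gives:

```python
# Worked example with assumed numbers: Skill B has 10 frames at an assumed
# 100 ms per frame, and the soft jump lands on its third frame.
frame_ms = 100                        # assumed frame duration
post_total_ms = 10 * frame_ms         # total playback time of Skill B's voice unit: 1000 ms
cut_in_ms = 2 * frame_ms              # cut-in position: the third frame starts at 200 ms
pre_mandatory_fade_ms = 200           # assumed preset mandatory fade-out time of Skill A's voice

not_played_ms = post_total_ms - cut_in_ms                   # 800 ms of Skill B not yet played
fade_to_neg_inf_ms = not_played_ms + pre_mandatory_fade_ms  # Skill A's voice fades over 1000 ms
print(not_played_ms, fade_to_neg_inf_ms)                    # -> 800 1000
```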
[0110] Through the above solution, that is, acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping, determining a category of the jump state according to the jump state information, and selecting a corresponding voice switching policy according to the category of the jump state to dynamically perform voice playback switch, this embodiment can automatically determine the policy to be selected according to the jump state of the current content, flexibly select a switching manner in real time, and perform flexible switching and suspension of any voice, so as to improve flexibility of voice playback switching in an application.
[0111] As shown in FIG. 5, the second embodiment of the present disclosure proposes an application voice playback switching apparatus, and on the basis of the first embodiment, the apparatus may further include:
[0112] a setting module 205, configured to set a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application.

[0113] The difference between this embodiment and the first embodiment lies in that this embodiment further includes setting a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application. By setting this mandatory command for the voice units, when the category of the jump state is determined to be hard jump, playback of the voice unit being played before the jump can be interrupted and faded out to negative infinity within the set time of the mandatory command; in addition, when the category of the jump state is determined to be soft jump, the time in which the voice unit being played before the jump fades out to negative infinity can be calculated from the preset mandatory-command fade-out time of that voice unit in combination with the time point when the voice unit after the jump starts to play and the total playback time of the voice unit after the jump, so that the voice unit being played before the jump is attenuated to negative infinity within that time.
[0114] Through the above solution, that is, setting a mandatory command of fading out to negative infinity within a predetermined time for all voice units of the application; acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping; determining a category of the jump state according to the jump state information; selecting a corresponding voice switching policy according to the category of the jump state; when the category of the jump state is hard jump, interrupting playback of the voice unit being played before the jump and fading it out to negative infinity within the set time of the mandatory command; and, when the category of the jump state is soft jump, calculating the time in which the voice unit being played before the jump fades out to negative infinity from its preset mandatory-command fade-out time in combination with the time point when the voice unit after the jump starts to play and the total playback time of the voice unit after the jump, so as to attenuate the voice unit being played before the jump to negative infinity within that time, this embodiment can automatically determine the policy to be selected according to the jump state of the current content with respect to the voice being played, flexibly select a switching manner in real time, and perform flexible switching and suspension of any voice, so as to improve flexibility of voice playback switching in an application.
[0115] FIG. 6 shows a block diagram of an example embodiment of the terminal.
[0116] For example, the terminal includes a radio frequency (RF) circuit 20, a memory 21 including one or more computer-readable storage mediums, an input unit 22, a display unit 23, a sensor 24, an audio circuit 25, a wireless fidelity (WiFi) module 26, a processor 27 including one or more cores, and a power supply 28, etc. It should be understood that the structure of the terminal shown in FIG. 6 is not limiting; the terminal may include fewer or more components, or other combinations or arrangements.
[0117] Specifically, the RF circuit 20 can be used for receiving and sending signals during a call or during the process of receiving and sending messages. In particular, the RF circuit 20 receives downlink information from the base station and sends it to the processor 27, or sends uplink data to the base station. Generally, the RF circuit 20 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a diplexer, and the like. In addition, the RF circuit 20 can communicate with a network or other devices by wireless communication. Such wireless communication can use any communication standard or protocol, including, but not limited to, Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, or Short Messaging Service (SMS).
[0118] The memory 21 is configured to store software programs and modules to be run by the processor 27, so as to perform the various functional applications of the mobile phone and data processing. The memory 21 mainly includes a program storage area and a data storage area. For example, the program storage area can store the operating system and at least one application program with a required function (such as a sound playing function, an image playing function, etc.). The data storage area can store data created by the mobile phone according to actual use (such as audio data, a phonebook, etc.). Furthermore, the memory 21 can be a high-speed random access memory, or a non-volatile memory, such as disk storage, a flash memory device, or other non-volatile solid-state storage devices. Accordingly, the memory 21 may include a storage controller to help the processor 27 and the input unit 22 access the memory 21.
[0119] The input unit 22 is configured to receive entered number or character information, and entered key signals related to user settings and function control. For example, the input unit 22 includes a touch-sensitive surface 221 or other input devices 222. The touch-sensitive surface 221, also called a touch screen or touch panel, can collect the user's touch operations on or near it (for example, operations performed on or near the touch-sensitive surface 221 with a finger or a stylus pen), and drive the corresponding connection device according to a preset program. Optionally, the touch-sensitive surface 221 includes two portions: a touch detection device and a touch controller. Specifically, the touch detection device detects the touch position of the user and the signals generated by the touch operation, and sends the signals to the touch controller. The touch controller then receives the touch information from the touch detection device, converts it into contact coordinates to be sent to the processor 27, and receives and executes commands sent by the processor 27. In addition, besides the touch-sensitive surface 221, the input unit 22 can include, but is not limited to, other input devices 222, such as one or more of a physical keyboard, function keys (such as volume control keys, a switch key, etc.), a trackball, a mouse, and an operating lever.
[0120] The display unit 23 is configured to display information entered by the user or information supplied to the user, and the menus of the mobile phone. For example, the display unit 23 includes a display panel 231, such as a Liquid Crystal Display (LCD) or an Organic Light-Emitting Diode (OLED) display. Furthermore, the display panel 231 can be covered by the touch-sensitive surface 221; after touch operations are detected on or near the touch-sensitive surface 221, they are sent to the processor 27 to determine the type of the touch event, and the processor 27 then supplies the corresponding visual output to the display panel 231 according to the type of the touch event. As shown in FIG. 6, the touch-sensitive surface 221 and the display panel 231 are two individual components implementing input and output, but they can be integrated together to implement input and output in some embodiments.
[0121] Furthermore, the terminal may include at least one sensor 24, such as a light sensor, a motion sensor, or other sensors. Specifically, the light sensors include an ambient light sensor for adjusting the brightness of the display panel 231 according to the ambient light, and a proximity sensor for turning off the display panel 231 and/or maintaining the backlight when the terminal is moved to the ear side. As one of the motion sensors, an accelerometer can detect the magnitude of acceleration in every direction (generally triaxial) and detect the magnitude and direction of gravity in a stationary state, which is applicable to applications for identifying the attitude of the mobile phone (such as switching between horizontal and vertical screens, related games, magnetometer attitude calibration, etc.) and vibration-recognition-related functions (such as a pedometer, percussion, etc.). The terminal can also be configured with other sensors (such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc.), whose detailed descriptions are omitted here.
[0122] The audio circuit 25, the speaker 251 and the microphone 252 supply an audio interface between the user and the terminal. Specifically, audio data is received and converted to electrical signals by the audio circuit 25 and then transmitted to the speaker 251, which converts them into sound signals for output. On the other hand, the sound signal collected by the microphone 252 is converted to electrical signals, which are received by the audio circuit 25 and converted to audio data. The audio data is then output to the processor 27 for processing, and subsequently sent to another mobile phone via the RF circuit 20, or sent to the memory 21 for further processing. The audio circuit 25 may further include an earphone jack to provide communication between an external earphone and the terminal.
[0123] WiFi pertains to short-range wireless transmission technology providing wireless broadband Internet access, by which the mobile phone can help the user receive and send email, browse the web, and access streaming media, etc. Although the WiFi module 26 is illustrated in FIG. 6, it should be understood that the WiFi module 26 is not a necessity for the terminal and can be omitted according to actual demand without changing the essence of the present disclosure.
[0124] The processor 27 is the control center of the mobile phone, which connects with every part of the mobile phone by various interfaces or circuits, and performs various functions and processes data by running or executing software programs/modules stored in the memory 21 or calling data stored in the memory 21, so as to monitor the mobile phone as a whole. Optionally, the processor 27 may include one or more processing units. Preferably, the processor 27 can integrate an application processor and a modem processor; for example, the application processor handles the operating system, user interface and applications, while the modem processor handles wireless communication. It can be understood that it is optional to integrate the modem processor into the processor 27.
[0125] Furthermore, the terminal may include a power supply 28 (such as a battery) supplying power to each component. Preferably, the power supply can connect with the processor 27 through a power management system, so as to manage charging, discharging and power consumption. The power supply 28 may include one or more AC or DC power sources, recharging systems, power failure detection circuits, power converters or inverters, power status indicators, etc.
[0126] In addition, the terminal may include a camera, a Bluetooth module, etc., which are not illustrated. In this embodiment, the processor 27 of the terminal executes an executable file stored in the memory 21 according to one or more programs of the application, to perform the following steps.
[0127] The terminal is configured to acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping. The terminal is configured to determine a category of the jump state according to the jump state information. The terminal is configured to select a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
[0128] It should be noted that, herein, the terms "include" and "comprise" or any other variations intend to cover non-exclusive inclusion, so that processes, methods, articles or apparatuses including a series of elements not only include the elements, but also include other elements not explicitly listed, or also include inherent elements of the processes, methods, articles or apparatuses. In the absence of more restrictions, for the elements defined by the expression "including one...," it does not exclude that the processes, methods, articles or apparatuses including the elements also have other identical elements.
[0129] The sequence numbers of the above embodiments of the present disclosure are merely for the convenience of description, and do not imply the preference among the embodiments.
[0130] Through the above description of the embodiments, it is apparent to persons skilled in the art that the methods in the above embodiments may be accomplished by software on a necessary universal hardware platform, and definitely may also be accomplished by hardware; however, in most circumstances, the former is a better implementation manner. Based on this, the technical solution of the present disclosure or the part that makes contributions to the prior art can be substantially embodied in the form of a software product. The computer software product may be stored in a storage medium (for example, a ROM/RAM, a magnetic disk, or an optical disk), and contain several instructions to instruct a terminal device (for example, a mobile phone, a computer, a server, or a device) to perform the methods as described in the embodiments of the present disclosure. For example, program instructions corresponding to the application voice playback switching apparatuses in FIG. 4 and FIG. 5 can be stored in a readable storage medium of a computer, a server or other terminals, and are executed by at least one processor therein, so as to implement the application voice playback switching methods in FIG. 1 to FIG. 3.
[0131 ] The above are merely preferred embodiments of the present disclosure, and are not intended to limit the patent scope of the present disclosure. Any equivalent structures or flow variations made according to the specification and content of the drawings of the present disclosure, either directly or indirectly applied to other relevant technical fields, should similarly fall within the protection scope of the present disclosure.

Claims

What is claimed is:
1. An application voice playback switching method, comprising:
acquiring, by a terminal device, jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping;
determining, by the terminal device, a category of the jump state according to the jump state information; and
selecting, by the terminal device, a corresponding voice switching policy according to the category of the jump state and dynamically performing voice playback switch.
2. The method according to claim 1, wherein selecting a corresponding voice switching policy according to the category of the jump state and dynamically performing voice playback switching comprises:
when the category of the jump state is a hard jump, interrupting, by the terminal device, playback of a first voice unit being played before the jump, and fading out to negative infinity to be recovered; and playing back a second voice unit after the jump.
3. The method according to claim 1, wherein selecting a corresponding voice switching policy according to the category of the jump state and dynamically performing voice playback switching comprises:
when the category of the jump state is a soft jump, acquiring, by the terminal device, a time axis position of a first voice unit being played before the jump, to serve as a fade-out time point of the first voice unit being played before the jump;
acquiring, by the terminal device, a time cut-in position of the application content after the jump, to serve as a time point when the second voice unit after the jump starts to play;
acquiring, by the terminal device, a time when the first voice unit being played before the jump fades out to negative infinity; from the fade-out time point of the first voice unit being played before the jump, making, by the terminal device, the first voice unit being played before the jump fade out to the negative infinity and recovered in the acquired time when the first voice unit being played before the jump fades out to negative infinity; and
playing back, by the terminal device, the second voice unit after the jump from the time point when the second voice unit after the jump starts to play.
4. The method according to claim 3, wherein acquiring a time when the first voice unit being played before the jump fades out to negative infinity comprises:
subtracting, by the terminal device, the time point when the second voice unit after the jump starts to play from a total playback time of the second voice unit after the jump, to obtain a time in which the second voice unit after the jump is not played; and
adding, by the terminal device, the time in which the second voice unit after the jump is not played to a preset time when the first voice unit being played before the jump fades out of a mandatory command, to obtain a time when the first voice unit being played before the jump fades out to the negative infinity.
5. The method according to any one of claims 1 to 4, wherein before acquiring jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping, the method further comprises:
setting, by the terminal device, a mandatory command of fading out to the negative infinity within a predetermined time for all voice units of the application.
6. An application voice playback switching apparatus, comprising a hardware processor and a non-transitory storage medium accessible to the hardware processor, the non-transitory storage medium is configured to store modules comprising:
an acquisition module, configured to acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping; a judgment module, configured to determine a category of the jump state according to the jump state information; and
a switching module, configured to select a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
7. The apparatus according to claim 6, wherein: when the category of the jump state is a hard jump, the switching module is further configured to interrupt playback of a first voice unit being played before the jump, fade out to negative infinity to be recovered; and play back a second voice unit after the jump.
8. The apparatus according to claim 6, wherein: when the category of the jump state is a soft jump, the switching module is further configured to:
acquire a time axis position of a first voice unit being played before the jump, to serve as a fade-out time point of the first voice unit being played before the jump; acquire a time cut-in position of the application content after the jump, to serve as a time point when the second voice unit after the jump starts to play;
acquire a time when the first voice unit being played before the jump fades out to negative infinity;
from the fade-out time point of the first voice unit being played before the jump, make the first voice unit being played before the jump fade out to the negative infinity and recovered in the acquired time when the first voice unit being played before the jump fades out to negative infinity; and
play back the second voice unit after the jump from the time point when the second voice unit after the jump starts to play.
9. The apparatus according to claim 8, wherein the switching module is further configured to:
subtract the time point when the second voice unit after the jump starts to play from a total playback time of the second voice unit after the jump, to obtain a time in which the second voice unit after the jump is not played; and add the time in which the second voice unit after the jump is not played to a preset time when the first voice unit being played before the jump fades out of a mandatory command, to obtain a time when the first voice unit being played before the jump fades out to the negative infinity.
10. The apparatus according to any one of claims 6 to 8, further comprising:
a setting module configured to set a mandatory command of fading out to the negative infinity within a predetermined time for all voice units of the application.
11. An application voice playback switching apparatus, comprising a hardware processor and a non-transitory storage medium accessible to the hardware processor, the hardware processor is configured to:
acquire jump state information of application content when monitoring that the application content bound to a voice currently played by an application executes jumping;
determine a category of the jump state according to the jump state information; and
select a corresponding voice switching policy according to the category of the jump state, to dynamically perform voice playback switch.
12. The apparatus according to claim 11, wherein: when the category of the jump state is a hard jump, the apparatus is further configured to interrupt playback of a first voice unit being played before the jump, fade out to negative infinity to be recovered; and play back a second voice unit after the jump.
13. The apparatus according to claim 11, wherein: when the category of the jump state is a soft jump, the apparatus is further configured to:
acquire a time axis position of a first voice unit being played before the jump, to serve as a fade-out time point of the first voice unit being played before the jump; acquire a time cut-in position of the application content after the jump, to serve as a time point when the second voice unit after the jump starts to play; acquire a time when the first voice unit being played before the jump fades out to negative infinity;
from the fade-out time point of the first voice unit being played before the jump, make the first voice unit being played before the jump fade out to the negative infinity and recovered in the acquired time when the first voice unit being played before the jump fades out to negative infinity; and
play back the second voice unit after the jump from the time point when the second voice unit after the jump starts to play.
14. The apparatus according to claim 13, wherein the apparatus is further configured to:
subtract the time point when the second voice unit after the jump starts to play from a total playback time of the second voice unit after the jump, to obtain a time in which the second voice unit after the jump is not played; and
add the time in which the second voice unit after the jump is not played to a preset time when the first voice unit being played before the jump fades out of a mandatory command, to obtain a time when the first voice unit being played before the jump fades out to the negative infinity.
15. The apparatus according to any one of claims 11 to 13, further configured to set a mandatory command of fading out to the negative infinity within a predetermined time for all voice units of the application.
PCT/CN2014/080232 2013-08-20 2014-06-18 Audio calling method and device thereof WO2015024409A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310364343.1 2013-08-20
CN201310364343.1A CN104423924B (en) 2013-08-20 2013-08-20 Using sound play switching method and device

Publications (1)

Publication Number Publication Date
WO2015024409A1 true WO2015024409A1 (en) 2015-02-26

Family

ID=52483035

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/080232 WO2015024409A1 (en) 2013-08-20 2014-06-18 Audio calling method and device thereof

Country Status (2)

Country Link
CN (1) CN104423924B (en)
WO (1) WO2015024409A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107233734B (en) * 2017-06-07 2020-08-11 珠海金山网络游戏科技有限公司 Method and device for controlling game application and other application sound playing
CN110265017B (en) * 2019-06-27 2021-08-17 百度在线网络技术(北京)有限公司 Voice processing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007044329A2 (en) * 2005-10-04 2007-04-19 Run-Tech Llc System and method for selecting music to guide a user through an activity
US20070208770A1 (en) * 2006-01-23 2007-09-06 Sony Corporation Music content playback apparatus, music content playback method and storage medium
WO2008150340A1 (en) * 2007-05-31 2008-12-11 Sony Computer Entertainment America Inc. System and method for taking control of a system during a commercial break
WO2011069357A1 (en) * 2009-12-10 2011-06-16 腾讯科技(深圳)有限公司 Method and apparatus for dynamically adjusting volume


Also Published As

Publication number Publication date
CN104423924B (en) 2019-01-29
CN104423924A (en) 2015-03-18

Similar Documents

Publication Publication Date Title
US10834237B2 (en) Method, apparatus, and storage medium for controlling cooperation of multiple intelligent devices with social application platform
US20160291929A1 (en) Audio playback control method, and terminal device
US10951557B2 (en) Information interaction method and terminal
WO2015172704A1 (en) To-be-shared interface processing method, and terminal
CN106850983B (en) screen-off control method and device, terminal and storage medium
CN108958629B (en) Split screen quitting method and device, storage medium and electronic equipment
US9680921B2 (en) Method, apparatus, and system for controlling voice data transmission
CN103530520A (en) Method and terminal for obtaining data
CN109067981B (en) Split screen application switching method and device, storage medium and electronic equipment
US20150043312A1 (en) Sound playing method and device thereof
WO2017215661A1 (en) Scenario-based sound effect control method and electronic device
CN103294442A (en) Method, device and terminal unit for playing prompt tones
CN109692474A (en) Game control method, mobile terminal and readable storage medium storing program for executing based on mobile terminal
CN110930964B (en) Display screen brightness adjusting method and device, storage medium and terminal
US9479888B2 (en) Methods and apparatus for implementing sound events
CN107193551B (en) Method and device for generating image frame
WO2015135457A1 (en) Method, apparatus, and system for sending and playing multimedia information
WO2016019695A1 (en) Voice interaction method and terminal
CN108388400A (en) A kind of operation processing method and mobile terminal
WO2015024409A1 (en) Audio calling method and device thereof
WO2015184959A2 (en) Method and apparatus for playing behavior event
CN108920086B (en) Split screen quitting method and device, storage medium and electronic equipment
US10225388B2 (en) Method and apparatus for adjusting volume of an accepted session
US10127009B2 (en) Data processing method and terminal thereof
WO2015021805A1 (en) Audio calling method and device thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14837251

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 130716)

122 Ep: pct application non-entry in european phase

Ref document number: 14837251

Country of ref document: EP

Kind code of ref document: A1