WO2015124087A1 - Audio control method and apparatus - Google Patents

Audio control method and apparatus

Info

Publication number
WO2015124087A1
WO2015124087A1 (PCT/CN2015/072986)
Authority
WO
WIPO (PCT)
Prior art keywords
audio file
character
target object
identifier
character identifier
Prior art date
Application number
PCT/CN2015/072986
Other languages
French (fr)
Inventor
Yi Yan
Weilin Zhang
Gen DONG
Original Assignee
Tencent Technology (Shenzhen) Company Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Company Limited filed Critical Tencent Technology (Shenzhen) Company Limited
Publication of WO2015124087A1 publication Critical patent/WO2015124087A1/en

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/424 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/80 Special adaptations for executing a specific game genre or game mode
    • A63F13/837 Shooting of targets

Definitions

  • the present disclosure relates to the field of data processing, and in particular, to an audio control method and apparatus.
  • Audios can be used in many application scenarios (such as a game or a social network) to represent a sound made by a character or a sound effect of a character in a different state.
  • the present disclosure provides an audio control method and apparatus, by which characters of different types can be distinguished merely through the played audios in many application scenarios.
  • One aspect of the present disclosure provides an audio control method, including:
  • the separately binding the first audio file and the second audio file with the character identifier of the target object includes:
  • the character type identifier is a species identifier.
  • Another aspect of the present disclosure further provides an audio control apparatus, including:
  • a determining module configured to determine a character identifier of a target object
  • a first configuring module configured to obtain a character type identifier corresponding to the character identifier, and configure a first audio file for the target object according to the character type identifier
  • a second configuring module configured to obtain currently used resource data corresponding to the character identifier, and configure a second audio file for the target object according to the currently used resource data
  • a first binding module configured to separately bind the first audio file and the second audio file with the character identifier of the target object.
  • the apparatus further includes:
  • a first receiving module configured to receive a state change request of the target object, where the state change request carries the character identifier
  • a first obtaining module configured to obtain the first audio file and the second audio file that correspond to the character identifier, so as to synchronously play the first audio file and the second audio file.
  • the first binding module includes:
  • a first obtaining submodule configured to separately obtain address information of the first audio file and address information of the second audio file
  • a binding submodule configured to separately bind the address information of the first audio file and the address information of the second audio file with the character identifier of the target object.
  • the apparatus further includes:
  • a second receiving module configured to receive a state change request of the target object, where the state change request carries the character identifier
  • a second obtaining module configured to obtain the address information of the first audio file and the second audio file that correspond to the character identifier
  • a third obtaining module configured to obtain the first audio file and the second audio file according to the address information, so as to synchronously play the first audio file and the second audio file.
  • the present disclosure further provides an audio control system, including:
  • a client configured to send a state change request to a server
  • the server configured to receive the state change request from the client, the state change request carrying a character identifier; and obtain a first audio file and a second audio file that correspond to the character identifier, and transfer the first audio file and the second audio file to the client;
  • the client being configured to synchronously play the first audio file and the second audio file.
  • a first audio file and a second audio file are respectively configured for a target object according to a character type and currently used resource data, and characters of different types can be distinguished in many application scenarios by synchronously playing the first audio file and the second audio file that correspond to the target object.
  • FIG. 1 is a flowchart of an audio control method according to Embodiment 1 of the present invention.
  • FIG. 2 is a flowchart of another audio control method according to Embodiment 1 of the present invention.
  • FIG. 3 is a schematic architectural diagram of a network system according to Embodiment 1 of the present invention.
  • FIG. 4 is a schematic flowchart of an audio control method according to Embodiment 1 of the present invention.
  • FIG. 5 is a schematic structural diagram of an audio control apparatus according to Embodiment 2 of the present invention.
  • FIG. 6 is a schematic diagram of a server according to Embodiment 2 of the present invention.
  • FIG. 7 is a schematic structural diagram of an audio control system according to Embodiment 2 of the present invention.
  • FIG. 1 is a flowchart of an audio control method according to this embodiment, which specifically includes:
  • S101 Determine a character identifier of a target object.
  • each target object has a character identifier that uniquely identifies it, where the character identifier may be an identifying character string, a letter, or the like.
  • S102 Obtain a character type identifier corresponding to the character identifier, and configure a first audio file for the target object according to the character type identifier.
  • the character type identifier corresponding to the character identifier is obtained.
  • Different target objects may belong to different character types, and in this embodiment, a character type to which the target object belongs may be determined according to the character type identifier corresponding to the character identifier of the target object.
  • the character type identifier is used for uniquely identifying the character type.
  • the first audio file is configured according to the determined character type identifier; that is, the first audio file is configured according to the character type to which the target object belongs. For example, in an online game, different first audio files are configured for different character types: for "man", an audio file A may be configured as the first audio file; and for "tiger", an audio file B may be configured as the first audio file.
  • the character type identifier may be a species identifier, and the species identifier may indicate that the character type is man, woman, horse, dog, kylin, tiger, eagle, or roc.
  • S103 Obtain currently used resource data corresponding to the character identifier, and configure a second audio file for the target object according to the currently used resource data.
  • the currently used resource data corresponding to the character identifier of the target object may be further obtained, and the second audio file is configured for the target object according to that currently used resource data.
  • second audio files may be configured for different characters according to costumes that the different characters wear and the like.
  • there is no necessary sequence between step S102 and step S103; they may be performed in either order.
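As a concrete illustration of steps S102 and S103, the configuration can be sketched as two independent lookups. All table contents, file names, and function names below are hypothetical; the patent does not specify how the mappings are stored:

```python
# Hypothetical lookup tables: a character type maps to the first audio
# file (S102), and a costume (the currently used resource data) maps to
# the second audio file (S103).
TYPE_AUDIO = {"man": "audio_A.wav", "tiger": "audio_B.wav"}
COSTUME_AUDIO = {"steel_armor": "hit_metal.wav", "cloth": "hit_cloth.wav"}

def configure_audio(character_type, costume):
    # The two lookups are independent of each other, which is why
    # S102 and S103 have no necessary sequence.
    first_audio = TYPE_AUDIO[character_type]
    second_audio = COSTUME_AUDIO[costume]
    return first_audio, second_audio
```

Because the first file depends only on the character type and the second only on the resource data, the two configurations can also be performed by different modules, as in the apparatus embodiment below.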
  • S104 Separately bind the first audio file and the second audio file with the character identifier of the target object.
  • the first audio file and the second audio file are separately bound with the character identifier of the target object.
  • the first audio file, the second audio file, and the character identifier of the target object may be stored in a list, so that after the character identifier of the target object is determined, the first audio file and the second audio file that correspond to the character identifier can be obtained from the list.
  • first, address information of the first audio file and address information of the second audio file may be separately obtained; and second, the address information of the first audio file and the address information of the second audio file are separately bound with the character identifier of the target object.
  • storage positions of the first audio file and the second audio file may be determined according to the address information thereof, and the first audio file and the second audio file are obtained.
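The binding described above can be sketched as a simple in-memory table keyed by the character identifier. Storing address information rather than the files themselves corresponds to the addressing variant; all names and paths here are illustrative, not part of the patent:

```python
# In-memory binding table: character identifier -> (address information
# of the first audio file, address information of the second audio file).
# A real server would persist this mapping; this dict is only a sketch.
bindings = {}

def bind(character_id, first_address, second_address):
    # S104: separately bind both addresses with the character identifier.
    bindings[character_id] = (first_address, second_address)

def resolve(character_id):
    # Later, the storage positions of both files can be recovered from
    # the character identifier alone.
    return bindings[character_id]
```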
  • FIG. 2 is a flowchart of another audio control method according to this embodiment. Based on S101 to S104 and referring to FIG. 2, this embodiment may further include the following steps:
  • S201 Receive a state change request of the target object, where the state change request carries the character identifier.
  • a target object may change a state; for example, when a character in the online game is hit, the character enters a hit state.
  • a state change request from the target object is received, where the state change request carries a character identifier of the target object.
  • S202 Obtain the first audio file and the second audio file that correspond to the character identifier, so as to synchronously play the first audio file and the second audio file.
  • the first audio file and the second audio file that correspond to the character identifier of the target object are obtained from the foregoing determined binding relationships.
  • the first audio file reflects the character type of the target object
  • the second audio file reflects the currently used resource data of the target object.
  • the first audio file and the second audio file are synchronously played.
  • the first audio file and the second audio file may be played simultaneously by using the same interface.
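Synchronous playback through one interface can be sketched by starting both files together, for example on two threads. The `play` function below is a stand-in for a real audio API, which the patent does not name:

```python
import threading

def play(path, log):
    # Stand-in for an actual audio-playback call; here it only records
    # which file was played, so the sketch stays self-contained.
    log.append(path)

def play_synchronously(first_audio, second_audio):
    # Start both files at (approximately) the same time, then wait for
    # both to finish, so the type audio and the resource audio overlap.
    log = []
    threads = [threading.Thread(target=play, args=(p, log))
               for p in (first_audio, second_audio)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return log
```

In a real client, a mixing audio interface would be used instead of bare threads, so that the two streams are sample-aligned rather than merely started together.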
  • a first audio file and a second audio file are separately configured for a target object according to a character type and currently used resource data, and the different target objects can be distinguished by synchronously playing the first audio file and the second audio file that correspond to the target object.
  • a game network system of the architecture shown in FIG. 3 includes a game server and multiple game clients.
  • the game server may be, for example, a game server of DIABLO 3, World of Warcraft, Blade and Soul, or another online game.
  • FIG. 4 is a schematic flowchart of an audio control method according to an embodiment of the present invention.
  • the game client A1 is only one game client in FIG. 3; however, this embodiment of the present invention is not limited to such a client, and the audio control method provided by this embodiment of the present invention may include the following content:
  • a game server determines a character identifier of any game character.
  • An identifier of a game character may be a nickname of the game character, an account of the game character, or the like.
  • the game server obtains a character type identifier corresponding to the character identifier, and configures an audio file HitVoice for the game character according to the character type identifier.
  • Different game characters may belong to different character types; for example, a character type of a game character A is "man", and a character type of a game character B is "tiger".
  • Audio files HitVoice of game characters of a same type are the same; that is, audio files HitVoice of game characters that belong to the type "man" are the same, and whether a game character belongs to the type "man" can be identified by playing an audio file HitVoice.
  • the game server obtains currently used resource data corresponding to the character identifier, and configures an audio file HitSound for the game character according to the currently used resource data.
  • the currently used resource data may be a costume that the game character is wearing, and a specific costume that the game character is wearing may be a steel armor, a cloth costume, or a leather costume.
  • the audio file HitSound is configured according to a type of the costume that the game character is wearing. Specifically, game characters of different costume types can be identified by playing an audio file HitSound.
  • the game server separately binds the audio file HitVoice and the audio file HitSound with the character identifier of the game character.
  • the game client A1 sends a request for a hit sound effect of the game character to the game server, where the request for a hit sound effect carries the character identifier.
  • the game client A1 sends, to the game server, a request for a hit sound effect carrying a character identifier of the game character, so as to reflect, by using the sound effect, an effect when the game character is hit.
  • the game server receives the request for a hit sound effect from the game client A1, and then reads the character identifier in the request for a hit sound effect, and obtains the audio file HitVoice and the audio file HitSound that correspond to the character identifier.
  • Because the game server presets the audio file HitVoice and the audio file HitSound of the game character, when receiving the request for a hit sound effect from the game client A1, the game server directly obtains the audio file HitVoice and the audio file HitSound that correspond to the character identifier in the request for a hit sound effect.
  • Audio types of the audio file HitVoice and the audio file HitSound are not limited.
  • the address information of the audio file HitVoice and the address information of the audio file HitSound may be obtained first, and then the audio file HitVoice and the audio file HitSound are retrieved according to the address information.
  • the game server sends the audio file HitVoice and the audio file HitSound to the game client A1.
  • the game client A1 receives the audio file HitVoice and the audio file HitSound from the game server, and then synchronously plays the audio file HitVoice and the audio file HitSound.
  • the character type of the game character can be identified by playing the audio file HitVoice, and the currently used resource data of the game character can be identified by playing the audio file HitSound.
  • different game characters can be distinguished by synchronously playing an audio file HitVoice and an audio file HitSound.
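The server-side flow above (preset the bindings, then serve hit-sound requests) can be summarized in a short sketch. The class name, method names, and request shape are hypothetical; the patent describes the steps but not a concrete API:

```python
class GameServer:
    """Sketch of the game-server flow: preset HitVoice/HitSound
    bindings, then answer hit-sound-effect requests from clients."""

    def __init__(self):
        self._bindings = {}

    def bind_character(self, character_id, hit_voice, hit_sound):
        # Preset HitVoice (chosen by character type) and HitSound
        # (chosen by the costume the character is currently wearing).
        self._bindings[character_id] = (hit_voice, hit_sound)

    def handle_hit_request(self, request):
        # Read the character identifier carried by the request and
        # return both audio files, which the client then plays
        # synchronously.
        character_id = request["character_id"]
        return self._bindings[character_id]
```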
  • FIG. 5 is a schematic structural diagram of an audio control apparatus according to this embodiment, which specifically includes:
  • a determining module 501, a first configuring module 502, a second configuring module 503, and a first binding module 504.
  • the determining module 501 is configured to determine a character identifier of a target object.
  • the first configuring module 502 is configured to obtain a character type identifier corresponding to the character identifier, and configure a first audio file for the target object according to the character type identifier.
  • the second configuring module 503 is configured to obtain currently used resource data corresponding to the character identifier, and configure a second audio file for the target object according to the currently used resource data.
  • the first binding module 504 is configured to separately bind the first audio file and the second audio file with the character identifier of the target object.
  • the apparatus further includes:
  • a first receiving module configured to receive a state change request of the target object, where the state change request carries the character identifier
  • a first obtaining module configured to obtain the first audio file and the second audio file that correspond to the character identifier, so as to synchronously play the first audio file and the second audio file.
  • the first binding module may specifically include:
  • a first obtaining submodule configured to separately obtain address information of the first audio file and address information of the second audio file
  • a binding submodule configured to separately bind the address information of the first audio file and the address information of the second audio file with the character identifier of the target object.
  • the apparatus may further include:
  • a second receiving module configured to receive a state change request of the target object, where the state change request carries the character identifier
  • a second obtaining module configured to obtain the address information of the first audio file and the second audio file that correspond to the character identifier
  • a third obtaining module configured to obtain the first audio file and the second audio file according to the address information, so as to synchronously play the first audio file and the second audio file.
  • each functional module of the audio control apparatus in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and for a specific implementation process, reference may be made to related descriptions in the foregoing method embodiment, which is not described in detail herein again.
  • the audio control apparatus obtains a character type identifier corresponding to a character identifier of a determined target object, and configures a first audio file for the target object according to the character type identifier; obtains currently used resource data corresponding to the character identifier, and configures a second audio file for the target object according to the currently used resource data; and separately binds the first audio file and the second audio file with the character identifier of the target object.
  • the audio control apparatus may obtain the first audio file and the second audio file that correspond to the character identifier; and synchronously play the first audio file and the second audio file.
  • the audio control apparatus can distinguish different target objects by synchronously playing a preset first audio file and second audio file.
  • a server 600, which may include:
  • there may be one or more processors 610 in the server 600, and one processor is used as an example in FIG. 6.
  • the processor 610, the memory 620, the input apparatus 630, and the output apparatus 640 may be connected by using a bus or in another manner; connection by using a bus is used as an example in FIG. 6.
  • the processor 610 performs the following steps:
  • the processor 610 may further receive a state change request of the target object, where the state change request carries the character identifier;
  • the processor 610 may further separately obtain address information of the first audio file and address information of the second audio file;
  • the foregoing embodiment may further include: the processor 610 receives a state change request of the target object, where the state change request carries the character identifier;
  • the character type identifier may be a species identifier.
  • FIG. 7 is a schematic structural diagram of an audio control system according to this embodiment, where the system includes:
  • a client 701 configured to send a state change request to a server
  • the server 702 configured to receive the state change request from the client 701, the state change request carrying a character identifier; and obtain a first audio file and a second audio file that correspond to the character identifier, and transfer the first audio file and the second audio file to the client 701;
  • the client 701 being configured to synchronously play the first audio file and the second audio file.
  • the server 702 may be an online game server; the client 701 may be configured to send a request for a hit sound effect to the online game server; and after receiving the request for a hit sound effect from the client, the online game server extracts a character identifier of a game character in the request for a hit sound effect, obtains an audio file HitVoice and an audio file HitSound that correspond to the character identifier, and finally, transfers the audio file HitVoice and the audio file HitSound to the client.
  • the client 701 may be further configured to synchronously play the received audio file HitVoice and audio file HitSound.
  • the disclosed apparatus may be implemented in other manners.
  • the described apparatus embodiment is merely exemplary.
  • the unit division is merely logical function division and may be other division in actual implementation.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces.
  • the indirect couplings or communication connections between the apparatuses or units may be implemented in electronic or other forms.
  • the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units may be selected to achieve the objectives of the solutions of the embodiments according to actual needs.
  • functional units in the embodiments of the present invention may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit.
  • the integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software functional unit.
  • When the integrated unit is implemented in a form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the method described in the embodiments of the present invention.
  • the storage medium includes any medium that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
  • Because the apparatus embodiment basically corresponds to the method embodiment, reference may be made to the method embodiment for the associated part.
  • the described apparatus embodiment is merely exemplary.
  • the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules may be selected to achieve the objectives of the solutions of the embodiments according to actual needs. Persons of ordinary skill in the art may understand and implement the embodiments without creative efforts.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Human Computer Interaction (AREA)
  • Information Transfer Between Computers (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The present disclosure discloses an audio control method and apparatus. The method includes: determining a character identifier of a target object; obtaining a character type identifier corresponding to the character identifier, and configuring a first audio file for the target object according to the character type identifier; obtaining currently used resource data corresponding to the character identifier, and configuring a second audio file for the target object according to the currently used resource data; and separately binding the first audio file and the second audio file with the character identifier of the target object. According to the present disclosure, a first audio file and a second audio file are respectively configured for a target object according to a character type and currently used resource data, and characters of different types can be distinguished in many application scenarios by synchronously playing the first audio file and the second audio file that correspond to the target object.

Description

AUDIO CONTROL METHOD AND APPARATUS
FIELD OF THE TECHNOLOGY
The present disclosure relates to the field of data processing, and in particular, to an audio control method and apparatus.
BACKGROUND OF THE DISCLOSURE
With the development of information and network technologies, application of audios is increasingly extensive. Audios can be used in many application scenarios (such as a game or a social network) to represent a sound made by a character or a sound effect of a character in a different state.
However, currently, in these application scenarios, application of audios is inflexible. For example, when characters of different types in an online game are in a same state (for example, are hit), a same audio is used to represent a sound effect of the characters in this state; in this case, the characters of different types cannot be distinguished by using the played audio.
SUMMARY
The present disclosure provides an audio control method and apparatus, by which characters of different types can be distinguished merely through the played audios in many application scenarios.
One aspect of the present disclosure provides an audio control method, including:
determining a character identifier of a target object;
obtaining a character type identifier corresponding to the character identifier, and configuring a first audio file for the target object according to the character type identifier;
obtaining currently used resource data corresponding to the character identifier, and configuring a second audio file for the target object according to the currently used resource data; and
separately binding the first audio file and the second audio file with the character identifier of the target object.
Preferably, the method further includes:
receiving a state change request of the target object, where the state change request carries the character identifier; and
obtaining the first audio file and the second audio file that correspond to the character identifier, so as to synchronously play the first audio file and the second audio file.
Preferably, the separately binding the first audio file and the second audio file with the character identifier of the target object includes:
separately obtaining address information of the first audio file and address information of the second audio file; and
separately binding the address information of the first audio file and the address information of the second audio file with the character identifier of the target object.
Preferably, the method further includes:
receiving a state change request of the target object, where the state change request carries the character identifier;
obtaining the address information of the first audio file and the second audio file that correspond to the character identifier; and
obtaining the first audio file and the second audio file according to the address information, so as to synchronously play the first audio file and the second audio file.
Preferably, the character type identifier is a species identifier.
Another aspect of the present disclosure further provides an audio control apparatus, including:
a determining module, configured to determine a character identifier of a target object;
a first configuring module, configured to obtain a character type identifier corresponding to the character identifier, and configure a first audio file for the target object according to the character type identifier;
a second configuring module, configured to obtain currently used resource data corresponding to the character identifier, and configure a second audio file for the target object according to the currently used resource data; and
a first binding module, configured to separately bind the first audio file and the second audio file with the character identifier of the target object.
Preferably, the apparatus further includes:
a first receiving module, configured to receive a state change request of the target object, where the state change request carries the character identifier; and
a first obtaining module, configured to obtain the first audio file and the second audio file that correspond to the character identifier, so as to synchronously play the first audio file and the second audio file.
Preferably, the first binding module includes:
a first obtaining submodule, configured to separately obtain address information of the first audio file and address information of the second audio file; and
a binding submodule, configured to separately bind the address information of the first audio file and the address information of the second audio file with the character identifier of the target object.
Preferably, the apparatus further includes:
a second receiving module, configured to receive a state change request of the target object, where the state change request carries the character identifier;
a second obtaining module, configured to obtain the address information of the first audio file and the second audio file that correspond to the character identifier; and
a third obtaining module, configured to obtain the first audio file and the second audio file according to the address information, so as to synchronously play the first audio file and the second audio file.
The present disclosure further provides an audio control system, including:
a client, configured to send a state change request to a server; and
the server, configured to receive the state change request from the client, the state change request carrying a character identifier; and obtain a first audio file and a second audio file that correspond to the character identifier, and transfer the first audio file and the second audio file to the client;
the client being configured to synchronously play the first audio file and the second audio file.
According to the present disclosure, a first audio file and a second audio file are respectively configured for a target object according to a character type and currently used resource data, and characters of different types can be distinguished in many application scenarios by synchronously playing the first audio file and the second audio file that correspond to the target object.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of this application more clearly, the following briefly describes the accompanying drawings required for describing the embodiments. Apparently, the accompanying drawings in the following descriptions merely show some embodiments of this application, and persons of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.
FIG. 1 is a flowchart of an audio control method according to Embodiment 1 of the present invention;
FIG. 2 is a flowchart of another audio control method according to Embodiment 1 of the present invention;
FIG. 3 is a schematic architectural diagram of a network system according to Embodiment 1 of the present invention;
FIG. 4 is a schematic flowchart of an audio control method according to Embodiment 1 of the present invention;
FIG. 5 is a schematic structural diagram of an audio control apparatus according to Embodiment 2 of the present invention;
FIG. 6 is a schematic diagram of a server according to Embodiment 2 of the present invention; and
FIG. 7 is a schematic structural diagram of an audio control system according to Embodiment 2 of the present invention.
DESCRIPTION OF EMBODIMENTS
The following clearly and completely describes the technical solutions in the embodiments of this application with reference to the accompanying drawings in the embodiments of this application. Apparently, the described embodiments are merely some but not all of the embodiments of this application. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of this application without creative efforts shall fall within the protection scope of this application.
Embodiment 1
Referring to FIG. 1, FIG. 1 is a flowchart of an audio control method according to this embodiment, which specifically includes:
S101: Determine a character identifier of a target object.
In this embodiment, each target object has a character identifier uniquely corresponding to the target object. The character identifier is used for uniquely identifying the target object, and may be a character string, a letter, or the like.
S102: Obtain a character type identifier corresponding to the character identifier, and configure a first audio file for the target object according to the character type identifier.
In this embodiment, after the character identifier of the target object is determined, the character type identifier corresponding to the character identifier is obtained. Different target objects may belong to different character types, and in this embodiment, a character type to which the target object belongs may be determined according to the character type identifier corresponding to the character identifier of the target object. The character type identifier is used for uniquely identifying the character type.
After the character type of the target object is determined, in this embodiment, the first audio file is configured according to the determined character type identifier. That is, the first audio file is configured according to the character type to which the target object belongs. For example, in an online game, different first audio files are configured for different character types: for "man", an audio file A may be configured as the first audio file; and for "tiger", an audio file B may be configured as the first audio file.
It can be seen that, the character type identifier may be a species identifier, and the species identifier may indicate that the character type is man, woman, horse, dog, kylin, tiger, eagle, or roc.
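As a minimal sketch of step S102 (the type names and file names below are illustrative assumptions, not part of the disclosure), configuring the first audio file can be modeled as a lookup from the character type identifier to a shared per-type file:

```python
# Illustrative sketch of S102: the character type identifier selects the
# first audio file. Type names and file names are assumed for the example.
FIRST_AUDIO_BY_TYPE = {
    "man": "audio_A.wav",    # all characters of type "man" share this file
    "tiger": "audio_B.wav",  # all characters of type "tiger" share this file
}

def configure_first_audio(character_type_id):
    """Return the first audio file configured for the given character type."""
    return FIRST_AUDIO_BY_TYPE[character_type_id]
```

Because every character of a given type maps to the same entry, playing the first audio file alone distinguishes the character type but not individual characters, which is why the second audio file is configured separately in S103.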
S103: Obtain currently used resource data corresponding to the character identifier, and configure a second audio file for the target object according to the currently used resource data.
In this embodiment, the currently used resource data corresponding to the character identifier of the target object may be further obtained, and the second audio file is configured for the target object according to the currently used resource data of the target object. For example, in the online game, second audio files may be configured for different characters according to the costumes that the different characters wear, and the like.
It can be understood that step S102 and step S103 may be performed in any order.
S104: Separately bind the first audio file and the second audio file with the character identifier of the target object.
In this embodiment, after the first audio file and the second audio file are completely configured for the target object, the first audio file and the second audio file are separately bound with the character identifier of the target object. Specifically, the first audio file, the second audio file, and the character identifier of the target object may be stored in a list, so that after the character identifier of the target object is determined, the first audio file and the second audio file that correspond to the character identifier can be obtained from the list.
In an actual application, address information of the first audio file and address information of the second audio file may first be separately obtained; and then, the address information of the first audio file and the address information of the second audio file are separately bound with the character identifier of the target object. When the first audio file and the second audio file need to be obtained, storage positions of the first audio file and the second audio file may be determined according to the address information thereof, and the first audio file and the second audio file are then obtained.
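Under the same illustrative assumptions, the binding of S104 can be sketched as a table that maps a character identifier to the address information of the two files (the identifier and paths are hypothetical):

```python
# Sketch of S104: bind the address information of both audio files with the
# character identifier in a single lookup table. All names are hypothetical.
bindings = {}

def bind_audio(character_id, first_audio_address, second_audio_address):
    """Bind the two audio file addresses to the character identifier."""
    bindings[character_id] = (first_audio_address, second_audio_address)

def obtain_audio(character_id):
    """Return the bound (first, second) address pair for a character."""
    return bindings[character_id]

# Bind one example character to a type-based file and a costume-based file.
bind_audio("char_42", "/audio/type/man.wav", "/audio/costume/steel.wav")
```

Storing only address information, rather than the files themselves, keeps the binding table small; the files are fetched from their storage positions on demand.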
FIG. 2 is a flowchart of another audio control method according to this embodiment. Based on S101 to S104, referring to FIG. 2, this embodiment may further include the following method, which is specifically:
S201: Receive a state change request of the target object, where the state change request carries the character identifier.
In this embodiment, a target object may change a state, for example, when a character in the online game is hit, the character enters a hit state. In this case, a state change request from the target object is received, where the state change request carries a character identifier of the target object.
S202: Obtain the first audio file and the second audio file that correspond to the character identifier, so as to synchronously play the first audio file and the second audio file.
In this embodiment, after the state change request from the target object is received, the first audio file and the second audio file that correspond to the character identifier of the target object are obtained from the foregoing determined binding relationships. The first audio file reflects the character type of the target object, and the second audio file reflects the currently used resource data of the target object.
In this embodiment, after the first audio file and the second audio file are obtained, the first audio file and the second audio file are synchronously played. Specifically, the first audio file and the second audio file may be played simultaneously by using a same interface.
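The phrase "played simultaneously by using a same interface" can be sketched as follows; the playback call below is a placeholder that merely records what was played, since a real client would invoke an actual audio API (an assumption of this example):

```python
import threading

# Sketch of synchronous playback: both files are handed to the same playback
# routine and started together, rather than one after the other.
played = []

def play(audio_file):
    played.append(audio_file)  # stand-in for a real audio output call

def play_synchronously(first_audio, second_audio):
    """Start playback of both files together and wait for both to finish."""
    threads = [threading.Thread(target=play, args=(f,))
               for f in (first_audio, second_audio)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

play_synchronously("hitvoice.wav", "hitsound.wav")
```

The listener thus hears the type-specific sound and the resource-specific sound overlaid, which is what allows both attributes to be identified from a single playback event.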
Because character types of different target objects may be different, and currently used resource data of the different target objects may also be different, in this embodiment, a first audio file and a second audio file are separately configured for a target object according to a character type and currently used resource data, and the different target objects can be distinguished by synchronously playing the first audio file and the second audio file that correspond to the target object.
For ease of understanding and implementing the method in this embodiment of the present invention, an application scenario is used as an example for further description below. The following first uses a specific implementation solution in a network system of the architecture shown in FIG. 3 as an example for description. The game network system of the architecture shown in FIG. 3 includes a game server and multiple game clients. The game server may be, for example, a game server of DIABLO 3, World of Warcraft, Blade and Soul, or another online game.
Referring to FIG. 4, FIG. 4 is a schematic flowchart of an audio control method according to an embodiment of the present invention. As shown in FIG. 4, a game client A1 is merely one of the game clients in FIG. 3; however, this embodiment of the present invention is not limited to such a client. The audio control method provided by this embodiment of the present invention may include the following content:
S401: A game server determines a character identifier of any game character.
An identifier of a game character may be a nickname of the game character, an account of the game character, or the like.
S402: The game server obtains a character type identifier corresponding to the character identifier, and configures an audio file HitVoice for the game character according to the character type identifier.
Different game characters may belong to different character types, for example, a character type of a game character A is "man", and a character type of a game character B is "tiger". Audio files HitVoice of game characters of a same type are the same, that is, audio files HitVoice of game characters that belong to the type "man" are the same, and whether a game character belongs to the type "man" can be identified by playing an audio file HitVoice.
S403: The game server obtains currently used resource data corresponding to the character identifier, and configures an audio file HitSound for the game character according to the currently used resource data.
The currently used resource data may be a costume that the game character is wearing, and a specific costume that the game character is wearing may be a steel armor, a cloth costume, or a leather costume. The audio file HitSound is configured according to a type of the costume that the game character is wearing. Specifically, game characters of different costume types can be identified by playing an audio file HitSound.
It can be understood that step S402 and step S403 may be performed in any order.
S404: The game server separately binds the audio file HitVoice and the audio file HitSound with the character identifier of the game character.
It can be appreciated that, to establish a binding relationship between the audio file HitVoice and the character identifier of the game character and a binding relationship between the audio file HitSound and the character identifier of the game character, address information of the audio file HitVoice and address information of the audio file HitSound may be obtained, and the address information is bound with the character identifier of the game character, to finally obtain the binding relationships.
S405: The game client A1 sends a request for a hit sound effect of the game character to the game server, where the request for a hit sound effect carries the character identifier.
When any game character in the game is hit, the game client A1 sends, to the game server, a request for a hit sound effect carrying a character identifier of the game character, so as to reflect, by using the sound effect, an effect when the game character is hit.
S406: The game server receives the request for a hit sound effect from the game client A1, and then reads the character identifier in the request for a hit sound effect, and obtains the audio file HitVoice and the audio file HitSound that correspond to the character identifier.
Because the game server presets the audio file HitVoice and the audio file HitSound of the game character, when receiving the request for a hit sound effect from the game client A1, the game server directly obtains the audio file HitVoice and the audio file HitSound that correspond to the character identifier in the request for a hit sound effect. Audio types of the audio file HitVoice and the audio file HitSound are not limited.
The address information of the audio file HitVoice and the address information of the audio file HitSound may be obtained first, and then the audio file HitVoice and the audio file HitSound are searched for according to the address information.
S407: The game server sends the audio file HitVoice and the audio file HitSound to the game client A1.
S408: The game client A1 receives the audio file HitVoice and the audio file HitSound from the game server, and then synchronously plays the audio file HitVoice and the audio file HitSound.
The character type of the game character can be identified by playing the audio file HitVoice, and the currently used resource data of the game character can be identified by playing the audio file HitSound. Compared with the existing technology, in this embodiment, different game characters can be distinguished by synchronously playing an audio file HitVoice and an audio file HitSound.
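The S405–S408 exchange can be sketched as a server-side request handler that resolves the character identifier carried in the request to the two pre-bound files; the identifier, file names, and message layout below are hypothetical assumptions for illustration:

```python
# Sketch of S405-S408: the server reads the character identifier from the
# hit-sound-effect request, looks up the pre-bound HitVoice and HitSound
# files, and returns them for the client to play synchronously.
SERVER_BINDINGS = {
    "char_7": ("man_hitvoice.wav", "steel_hitsound.wav"),
}

def handle_hit_sound_request(request):
    character_id = request["character_id"]            # S406: read identifier
    hit_voice, hit_sound = SERVER_BINDINGS[character_id]
    return {"HitVoice": hit_voice, "HitSound": hit_sound}  # S407: reply

response = handle_hit_sound_request({"character_id": "char_7"})
```

Because the bindings are established in advance (S401–S404), the per-request work reduces to a single lookup, which keeps the sound-effect path fast even when many characters are hit at once.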
Embodiment 2
Referring to FIG. 5, FIG. 5 is a schematic structural diagram of an audio control apparatus according to this embodiment, which specifically includes:
a determining module 501, a first configuring module 502, a second configuring module 503, and a first binding module 504.
The determining module 501 is configured to determine a character identifier of a target object.
The first configuring module 502 is configured to obtain a character type identifier corresponding to the character identifier, and configure a first audio file for the target object according to the character type identifier.
The second configuring module 503 is configured to obtain currently used resource data corresponding to the character identifier, and configure a second audio file for the target object according to the currently used resource data.
The first binding module 504 is configured to separately bind the first audio file and the second audio file with the character identifier of the target object.
In some embodiments of the present invention, the apparatus further includes:
a first receiving module, configured to receive a state change request of the target object, where the state change request carries the character identifier; and
a first obtaining module, configured to obtain the first audio file and the second audio file that correspond to the character identifier, so as to synchronously play the first audio file and the second audio file.
In some embodiments of the present invention, the first binding module may specifically include:
a first obtaining submodule, configured to separately obtain address information of the first audio file and address information of the second audio file; and
a binding submodule, configured to separately bind the address information of the first audio file and the address information of the second audio file with the character identifier of the target object.
In some other embodiments of the present invention, the apparatus may further include:
a second receiving module, configured to receive a state change request of the target object, where the state change request carries the character identifier;
a second obtaining module, configured to obtain the address information of the first audio file and the second audio file that correspond to the character identifier; and
a third obtaining module, configured to obtain the first audio file and the second audio file according to the address information, so as to synchronously play the first audio file and the second audio file.
It can be understood that, a function of each functional module of the audio control apparatus in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and for a specific implementation process, reference may be made to related descriptions in the foregoing method embodiment, which is not described in detail herein again.
It can be seen that, in the solutions in this embodiment, the audio control apparatus obtains a character type identifier corresponding to a character identifier of a determined target object, and configures a first audio file for the target object according to the character type identifier; obtains currently used resource data corresponding to the character identifier, and configures a second audio file for the target object according to the currently used resource data; and separately binds the first audio file and the second audio file with the character identifier of the target object. In this way, when receiving a state change request of the target object, where the state change request carries the character identifier, the audio control apparatus may obtain the first audio file and the second audio file that correspond to the character identifier; and synchronously play the first audio file and the second audio file. Compared with the existing technology, in the solutions in this embodiment, the audio control apparatus can distinguish different target objects by synchronously playing a preset first audio file and second audio file.
Referring to FIG. 6, the present disclosure further provides a server 600, which may include:
a processor 610, a memory 620, an input apparatus 630, and an output apparatus 640. There may be one or more processors 610 in the server 600, and one processor is used as an example in FIG. 6. In some embodiments of the present invention, the processor 610, the memory 620, the input apparatus 630, and the output apparatus 640 may be connected by using a bus or in another manner; connection by using a bus is used as an example in FIG. 6.
The processor 610 performs the following steps:
determining a character identifier of a target object;
obtaining a character type identifier corresponding to the character identifier, and configuring a first audio file for the target object according to the character type identifier;
obtaining currently used resource data corresponding to the character identifier, and configuring a second audio file for the target object according to the currently used resource data; and
separately binding the first audio file and the second audio file with the character identifier of the target object.
In some embodiments of the present invention, the processor 610 may further receive a state change request of the target object, where the state change request carries the character identifier; and
obtain the first audio file and the second audio file that correspond to the character identifier, so as to synchronously play the first audio file and the second audio file.
In some embodiments of the present invention, the processor 610 may further separately obtain address information of the first audio file and address information of the second audio file; and
separately bind the address information of the first audio file and the address information of the second audio file with the character identifier of the target object.
The foregoing embodiment may further include: the processor 610 receives a state change request of the target object, where the state change request carries the character identifier;
obtains the address information of the first audio file and the second audio file that correspond to the character identifier; and
obtains the first audio file and the second audio file according to the address information, so as to synchronously play the first audio file and the second audio file.
In some embodiments of the present invention, the character type identifier may be a species identifier.
Referring to FIG. 7, FIG. 7 is a schematic structural diagram of an audio control system according to this embodiment, where the system includes:
a client 701, configured to send a state change request to a server; and
the server 702, configured to receive the state change request from the client 701, the state change request carrying a character identifier; and obtain a first audio file and a second audio file that correspond to the character identifier, and transfer the first audio file and the second audio file to the client 701;
the client 701 being configured to synchronously play the first audio file and the second audio file.
In some embodiments of the present invention, in an online game application scenario, the server 702 may be an online game server; the client 701 may be configured to send a request for a hit sound effect to the online game server; and after receiving the request for a hit sound effect from the client, the online game server extracts a character identifier of a game character in the request for a hit sound effect, obtains an audio file HitVoice and an audio file HitSound that correspond to the character identifier, and finally, transfers the audio file HitVoice and the audio file HitSound to the client. The client 701 may be further configured to synchronously play the received audio file HitVoice and audio file HitSound.
In the several embodiments provided by this application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the described apparatus embodiment is merely exemplary. For example, the unit division is merely logical function division and may be other division in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic or other forms.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units may be selected to achieve the objectives of the solutions of the embodiments according to actual needs.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software functional unit.
When the integrated unit is implemented in a form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present disclosure essentially, or the part contributing to the existing technology, or all or some of the technical solutions may be implemented in a form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the method described in the embodiments of the present invention. The storage medium includes any medium that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.
The foregoing embodiments are merely intended for describing the technical solutions of the present disclosure rather than limiting the present disclosure. Although the present disclosure is described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still make modifications to the technical solutions described in the foregoing embodiments or make equivalent replacements to some of the technical features thereof, as long as these modifications or replacements do not cause the essence of corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Because the apparatus embodiment basically corresponds to the method embodiment, reference may be made to the method embodiment for the associated part. The described apparatus embodiment is merely exemplary. The units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the modules may be selected to achieve the objectives of the solutions of the embodiments according to actual needs. Persons of ordinary skill in the art may understand and implement the embodiments without creative efforts.
It should be noted that the relational terms herein such as first and second are used only to differentiate an entity or operation from another entity or operation, and do not require or imply any actual relationship or sequence between these entities or operations. Moreover, the terms "include", "comprise", and any variants thereof are intended to cover a non-exclusive inclusion. Therefore, in the context of a process, method, object, or device that includes a series of elements, the process, method, object, or device not only includes such elements, but also includes other elements not specified expressly, or may include inherent elements of the process, method, object, or device. Unless otherwise specified, an element limited by "include a/an..." does not exclude other same elements existing in the process, method, object, or device that includes the element.
The audio control method and apparatus provided by the embodiments of the present invention are described in detail above. The principle and implementation manners of the present disclosure are described herein by using specific examples. The description about the embodiments is merely provided for ease of understanding of the method and core ideas of the present disclosure. Persons of ordinary skill in the art may make variations and modifications to the present disclosure in terms of the specific implementation manners and application scopes according to the ideas of the present disclosure. Therefore, the specification shall not be construed as a limit to the present disclosure.

Claims (10)

  1. An audio control method, comprising:
    determining a character identifier of a target object;
    obtaining a character type identifier corresponding to the character identifier, and configuring a first audio file for the target object according to the character type identifier;
    obtaining currently used resource data corresponding to the character identifier, and configuring a second audio file for the target object according to the currently used resource data; and
    separately binding the first audio file and the second audio file with the character identifier of the target object.
  2. The method according to claim 1, wherein the method further comprises:
    receiving a state change request of the target object, wherein the state change request carries the character identifier; and
    obtaining the first audio file and the second audio file that correspond to the character identifier, so as to synchronously play the first audio file and the second audio file.
  3. The method according to claim 1, wherein the separately binding the first audio file and the second audio file with the character identifier of the target object comprises:
    separately obtaining address information of the first audio file and address information of the second audio file; and
    separately binding the address information of the first audio file and the address information of the second audio file with the character identifier of the target object.
  4. The method according to claim 3, wherein the method further comprises:
    receiving a state change request of the target object, wherein the state change request carries the character identifier;
    obtaining the address information of the first audio file and the second audio file that correspond to the character identifier; and
    obtaining the first audio file and the second audio file according to the address information, so as to synchronously play the first audio file and the second audio file.
  5. The method according to claim 1, wherein the character type identifier is a species identifier.
  6. An audio control apparatus, comprising:
    a determining module, configured to determine a character identifier of a target object;
    a first configuring module, configured to obtain a character type identifier corresponding to the character identifier, and configure a first audio file for the target object according to the character type identifier;
    a second configuring module, configured to obtain currently used resource data corresponding to the character identifier, and configure a second audio file for the target object according to the currently used resource data; and
    a first binding module, configured to separately bind the first audio file and the second audio file with the character identifier of the target object.
  7. The apparatus according to claim 6, wherein the apparatus further comprises:
    a first receiving module, configured to receive a state change request of the target object, wherein the state change request carries the character identifier; and
    a first obtaining module, configured to obtain the first audio file and the second audio file that correspond to the character identifier, so as to synchronously play the first audio file and the second audio file.
  8. The apparatus according to claim 6, wherein the first binding module comprises:
    a first obtaining submodule, configured to separately obtain address information of the first audio file and address information of the second audio file; and
    a binding submodule, configured to separately bind the address information of the first audio file and the address information of the second audio file with the character identifier of the target object.
  9. The apparatus according to claim 8, wherein the apparatus further comprises:
    a second receiving module, configured to receive a state change request of the target object, wherein the state change request carries the character identifier;
    a second obtaining module, configured to obtain the address information of the first audio file and the second audio file that correspond to the character identifier; and
    a third obtaining module, configured to obtain the first audio file and the second audio file  according to the address information, so as to synchronously play the first audio file and the second audio file.
  10. An audio control system, comprising:
    a client, configured to send a state change request to a server; and
    the server, configured to receive the state change request from the client, the state change request carrying a character identifier; and obtain a first audio file and a second audio file that correspond to the character identifier, and transfer the first audio file and the second audio file to the client;
    the client being configured to synchronously play the first audio file and the second audio file.
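The flow recited in claims 6-10 can be illustrated with a minimal sketch: the server separately binds the address information of two audio files with a character identifier; a state change request carrying that identifier then retrieves both addresses so the client can fetch and synchronously play the files. All names below (the class, method names, and example addresses) are illustrative assumptions, not taken from the patent text.

```python
class AudioControlServer:
    """Sketch of the server side of the audio control system in claim 10."""

    def __init__(self):
        # character identifier -> (address of first audio file,
        #                          address of second audio file)
        self._bindings = {}

    def bind(self, character_id, first_audio_addr, second_audio_addr):
        # Separately bind the address information of the first and second
        # audio files with the character identifier of the target object
        # (claims 6 and 8).
        self._bindings[character_id] = (first_audio_addr, second_audio_addr)

    def handle_state_change(self, request):
        # Receive a state change request carrying the character identifier,
        # and obtain the address information of both audio files that
        # correspond to it (claims 7 and 9), returning it so the client can
        # fetch the files and play them synchronously.
        character_id = request["character_id"]
        if character_id not in self._bindings:
            raise KeyError("no audio bound for character %r" % character_id)
        first_addr, second_addr = self._bindings[character_id]
        return {"first_audio": first_addr, "second_audio": second_addr}


# Hypothetical usage: a client reports that character "cid-42" changed
# state; the server replies with both bound audio addresses.
server = AudioControlServer()
server.bind("cid-42", "/audio/footsteps.ogg", "/audio/armor_clank.ogg")
reply = server.handle_state_change({"character_id": "cid-42"})
```

This mirrors the division of labor in claim 10: the binding and lookup live on the server, while synchronous playback of the two returned files is the client's responsibility.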
PCT/CN2015/072986 2014-02-18 2015-02-13 Audio control method and apparatus WO2015124087A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410055378.1A CN104841131B (en) 2014-02-18 2014-02-18 A kind of audio control method and device
CN201410055378.1 2014-02-18

Publications (1)

Publication Number Publication Date
WO2015124087A1 true WO2015124087A1 (en) 2015-08-27

Family

ID=53841353

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/072986 WO2015124087A1 (en) 2014-02-18 2015-02-13 Audio control method and apparatus

Country Status (2)

Country Link
CN (1) CN104841131B (en)
WO (1) WO2015124087A1 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107115672A (en) * 2016-02-24 2017-09-01 网易(杭州)网络有限公司 Gaming audio resource player method, device and games system
CN106621331B (en) * 2016-10-14 2017-12-19 福州市马尾区朱雀网络信息技术有限公司 A kind of game object performance objective switching method and apparatus
CN109529335B (en) * 2018-11-06 2022-05-20 Oppo广东移动通信有限公司 Game role sound effect processing method and device, mobile terminal and storage medium
CN109857363B (en) * 2019-01-17 2021-10-22 腾讯科技(深圳)有限公司 Sound effect playing method and related device
CN110234036B (en) * 2019-06-19 2022-02-11 广州酷狗计算机科技有限公司 Method, device and system for playing multimedia file
CN110704863B (en) * 2019-08-23 2021-11-26 深圳市铭数信息有限公司 Configuration information processing method and device, computer equipment and storage medium
CN110534131A (en) * 2019-08-30 2019-12-03 广州华多网络科技有限公司 A kind of audio frequency playing method and system

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101414948A (en) * 2008-12-02 2009-04-22 腾讯科技(深圳)有限公司 Method for using sound prop, server and client terminal
US20100120533A1 (en) * 2008-11-07 2010-05-13 Bracken Andrew E Customizing player-generated audio in electronic games
CN102314917A (en) * 2010-07-01 2012-01-11 北京中星微电子有限公司 Method and device for playing video and audio files
CN102527039A (en) * 2010-12-30 2012-07-04 德信互动科技(北京)有限公司 Sound effect control device and method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP2001009157A (en) * 1999-06-30 2001-01-16 Konami Co Ltd Control method for video game, video game device and medium recording program of video game allowing reading by computer
CN101009577A (en) * 2006-12-28 2007-08-01 北京金山数字娱乐科技有限公司 Method and device for playing audio


Also Published As

Publication number Publication date
CN104841131B (en) 2019-09-17
CN104841131A (en) 2015-08-19

Similar Documents

Publication Publication Date Title
WO2015124087A1 (en) Audio control method and apparatus
JP7018517B2 (en) Verification methods, devices, storage media, and programs for smart contracts on the blockchain
JP6559254B2 (en) Virtual assistant for communication sessions
CN111404990B (en) File transmission method, device, client and storage medium
CN105653630B (en) Data migration method and device for distributed database
JP2020513267A (en) Method and associated device for performing user matching
JP2020523700A (en) Distributed search and index update method, system, server, and computer device
CN106161392A (en) A kind of auth method and equipment
GB2550451A (en) Offline peer-assisted notification delivery
US10673931B2 (en) Synchronizing method, terminal, and server
US8321617B1 (en) Method and apparatus of server I/O migration management
CN101876921A (en) Method, device and system for migration decision-making of virtual machine
JP2008040858A5 (en)
WO2017045450A1 (en) Resource operation processing method and device
US20190155505A1 (en) Data Processing Method and Related Storage Device
CN110830234A (en) User traffic distribution method and device
CN106170010A (en) The data processing method of a kind of cross-server cluster and device
CN105429929A (en) Information processing method, client, server and information processing system
CN112675543A (en) Role attribute configuration method and device, storage medium and electronic device
CN105553714B (en) A kind of method and system of business configuration
US20170039259A1 (en) Method and Apparatus for Implementing Storage of File in IP Disk
US20150050999A1 (en) Method for providing an online game enabling the user to change the shape of an item and a system thereof
CN105991791A (en) Message forwarding method and device
US11881996B2 (en) Input and output for target device communication
CN109213924B (en) Popularization task allocation method and device and computer equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15751715

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 28.04.2017)

122 Ep: pct application non-entry in european phase

Ref document number: 15751715

Country of ref document: EP

Kind code of ref document: A1